Final Exam Study Guide
CS 396
Final Exam: things you should review for this exam
The final exam is comprehensive, meaning it covers everything we've learned in this class. That means I'll expect (a) a somewhat less specific "take-away knowledge" level of expertise on all topics we've covered, and (b) I'll test recent (untested) material in a bit more detail. Here are a few specific sorts of things you might want to know about. This is not meant to be a comprehensive or complete list...just an idea of what sorts of issues could appear.
- Principles of language design: as presented in lecture
- Do you know how to rationally critique a programming language design?
- Can you define all of the principles we laid out...and explain each...and recognize or give examples of each in action?
- Remember that we defined our OWN framework of principles, i.e., rather than using Sebesta's rather sparse two principles. Know and use the ones we presented in lecture!
-
- Language history: knowing where we come from
- What is the relationship between languages and programming domains? What domains can we recognize? How do they tend to influence the languages that fall in them?
- Know the landmark languages we discussed, some of the key features of each. Assembler, Fortran, Algol, Simula, Smalltalk, Lisp, Prolog, Haskell, C/C++, Java, Ada, etc.
- How are different languages related? Which ones evolved from or influenced which? (See Sebesta, p. 41.)
- I don't care about dates (beyond rough order), I care about influences: what, why, how did it work out?
- What aspects of various languages succeeded and were adopted? What aspects failed? What was the fate of various languages? Why?
-
- Syntax: specifying static structure
- How does syntactic analysis fit as a step in the compilation process? The interpretation process? Do you understand the steps in the interpretation/compilation processes well?
- How can I describe the syntactic structure of a language? Grammars, syntax graphs. Can you really USE them in an example?
-
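One way to convince yourself you can really USE a grammar is to turn each rule into code. Below is a sketch of a recursive-descent evaluator in Python, where each nonterminal becomes one function. The toy grammar here is made up for illustration, not one from lecture:

```python
# Toy grammar (each rule below becomes one Python function):
#   expr   -> term   { ("+" | "-") term }
#   term   -> factor { ("*" | "/") factor }
#   factor -> NUMBER | "(" expr ")"
import re

def tokenize(src):
    return re.findall(r"\d+|[-+*/()]", src)

def parse_expr(tokens):
    """expr -> term { (+|-) term }; returns (value, remaining tokens)."""
    value, tokens = parse_term(tokens)
    while tokens and tokens[0] in "+-":
        op, rest = tokens[0], tokens[1:]
        rhs, tokens = parse_term(rest)
        value = value + rhs if op == "+" else value - rhs
    return value, tokens

def parse_term(tokens):
    """term -> factor { (*|/) factor }."""
    value, tokens = parse_factor(tokens)
    while tokens and tokens[0] in "*/":
        op, rest = tokens[0], tokens[1:]
        rhs, tokens = parse_factor(rest)
        value = value * rhs if op == "*" else value // rhs  # integer division
    return value, tokens

def parse_factor(tokens):
    """factor -> NUMBER | ( expr )."""
    if tokens[0] == "(":
        value, tokens = parse_expr(tokens[1:])
        assert tokens[0] == ")", "expected ')'"
        return value, tokens[1:]
    return int(tokens[0]), tokens[1:]

def evaluate(src):
    value, leftover = parse_expr(tokenize(src))
    assert not leftover, "trailing tokens"
    return value

print(evaluate("2+3*(4-1)"))  # -> 11; precedence falls out of the rule nesting
```

Notice how operator precedence is encoded purely by which rule calls which: expr calls term, term calls factor, so * binds tighter than + with no special-case code.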
- Semantics: Specifying computational behavior. The three approaches we discussed.
- What is the basic approach/philosophy of each?
- What are the pros/cons/issues with each approach? Which approach is useful and how/where?
-
- Object Oriented Paradigm: procedural + data abstraction in one
- How does OOP fit into the development of programming languages, i.e., how does it augment abstractions that have been evolving?
- What are key features of the object oriented paradigm?
- What languages introduced this concept? How? How did it evolve over time to its current state?
- What are problems/issues in the design of OOP languages?
-
- Functional Languages
- What is the motivating concept of "functional programming"? How does it fit into the big picture of "the grand programming enterprise"?
- What theoretical basis does this approach have? What's new and different with respect to other paradigms?
- What have been the most important languages in this area?
- What particular or unusual features did some of the key languages in this area explore? How do those features work? Were they successful?
-
- Names, Types and Scope
- What characteristics are associated with the concept of a variable? Can you explain each of them?
- How does storage allocation work? What's the relationship between storage and its name? Aliasing?
- What the heck is "binding"? Managing/binding of type and storage are probably among the most interesting discussions here, but we also talked about "binding time" of other decisions, e.g., the set of legal names that could be used. You should understand the concept of binding, the various binding times that are possible, and how these concepts apply to the making (binding) of various decisions in the design of a programming language.
- We keep coming back to "typing" in languages: when determined, strong or weak, consequences.
- We had a fun discussion of scoping and some quick examples...but do you really get it? Could you analyze the behavior of a code example under dynamic/static scoping?
-
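For scoping practice, here is one way to see the static/dynamic difference concretely: a sketch in Python (which is statically scoped), plus a hand-simulation of dynamic scoping using an explicit stack of call frames. The program and names are illustrative, not lecture examples:

```python
# --- Static scoping: a free variable resolves in the DEFINING environment.
x = "global"

def show():
    return x              # free variable x

def caller():
    x = "caller"          # a local x; irrelevant to show() under static scope
    return show()

print(caller())           # -> "global"

# --- Dynamic scoping, simulated with an explicit stack of call frames:
frames = [{"x": "global"}]

def lookup(name):
    for frame in reversed(frames):   # search most recent caller first
        if name in frame:
            return frame[name]
    raise NameError(name)

def show_dyn():
    return lookup("x")

def caller_dyn():
    frames.append({"x": "caller"})   # the caller's binding of x
    try:
        return show_dyn()
    finally:
        frames.pop()                 # frame discarded on return

print(caller_dyn())                  # -> "caller"
```

Same program shape, two answers: static scoping asks "where was the function written?", dynamic scoping asks "who called it?". Being able to predict both outputs is exactly the kind of analysis the exam question would ask for.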
- Logic languages
- How do logic languages work...compared to other paradigms. What is their vision/goal? How do they approach it? How successfully?
- Should know a bit of background on FOPL...predicates, propositions, inference concepts.
- How does Prolog work? That whole "flow of satisfaction" discussion. Can you trace out the process of Prolog proving a goal?
-
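To practice tracing the "flow of satisfaction," here is a deliberately tiny sketch in Python of Prolog-style goal proving over ground atoms only: no variables or unification, just goal, subgoals, clause order, and backtracking. The facts and rules are made up for illustration:

```python
# Knowledge base: facts are atoms; each rule head has a list of
# alternative bodies (clauses), tried in order, just as Prolog does.
facts = {"rainy", "cold"}
rules = {
    "wet":       [["rainy"], ["sprinkler"]],   # wet :- rainy.  wet :- sprinkler.
    "miserable": [["wet", "cold"]],            # miserable :- wet, cold.
}

def solve(goal, depth=0):
    print("  " * depth + "trying", goal)       # trace the flow of satisfaction
    if goal in facts:
        return True                            # goal satisfied by a fact
    for body in rules.get(goal, []):           # try each clause in order...
        if all(solve(sub, depth + 1) for sub in body):
            return True                        # ...every subgoal succeeded
    return False                               # no clause worked: backtrack

print(solve("miserable"))   # wet succeeds via rainy, then cold; -> True
```

Run it and read the indented trace: that top-down, left-to-right, first-clause-first search is the same discipline you would follow tracing a real Prolog goal by hand (real Prolog adds unification of variables on top of this skeleton).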
- Subprograms
- What is the invocation/return process? Can you sketch the stack at various stages of program invocation?
- Know the different param passing models we looked at. How do they work? Could you simulate what happens under different passing modes in some language?
- Could you sketch out memory and/or the system stack (as done in lecture) for various param passing modes?
-
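One way to check your understanding of passing modes is to hand-simulate them with an explicit "memory," much like the stack sketches from lecture. This Python sketch (names and values are illustrative) contrasts pass-by-value, pass-by-reference, and pass-by-value-result:

```python
# A caller's storage, modeled as named cells:
memory = {"a": 1}

def by_value(v):
    v = v + 10              # callee works on a COPY; caller's cell untouched
    return v

def by_reference(name):
    memory[name] += 10      # callee shares the caller's storage cell

def by_value_result(name):
    local = memory[name]    # copy IN at call...
    local += 10
    memory[name] = local    # ...copy back OUT at return

by_value(memory["a"])
print(memory["a"])          # -> 1: caller unaffected

by_reference("a")
print(memory["a"])          # -> 11: caller's cell updated immediately

memory["a"] = 1
by_value_result("a")
print(memory["a"])          # -> 11: update visible only at return
```

By-reference and by-value-result give the same final value here; the exam-relevant difference shows up with aliasing, where by-reference updates are visible during the call and copy-in/copy-out updates are not.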
- Programming: Can you write a basic function in Scheme/Prolog?
- Learning to program/think in Scheme (a novel way of thinking about programming) was a major objective of this class (meeting an accreditation requirement too).
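If Scheme isn't fresh, start by practicing the recursive shape. This sketch shows the base-case/recursive-case pattern in Python so it can be run anywhere, with the corresponding Scheme expressions in the comments (an illustration of the style, not a lecture example):

```python
# Scheme-flavored recursion: no loops, no mutation, just
# a base case and a recursive case on the rest of the list.
#   (define (my-len lst)
#     (if (null? lst)
#         0
#         (+ 1 (my-len (cdr lst)))))
def my_len(lst):
    if not lst:                   # (null? lst) -> base case
        return 0
    return 1 + my_len(lst[1:])    # (+ 1 (my-len (cdr lst)))

print(my_len([3, 1, 4, 1, 5]))    # -> 5
```

If you can write this shape fluently (list empty? then base case; otherwise combine the car with a recursive call on the cdr), most basic Scheme exam functions fall out of the same template.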