CS470/570 Intro to Artificial Intelligence

Chapter 2: Written Homework

Instructions: Answers to the questions below must be presented in hardcopy, on the due date noted on the course website. All submissions must present solutions in the order listed here, clearly labeled. A stapled submission with a neatly printed cover sheet is required. Typed answers are preferred, but clear handwritten work is acceptable.

 

The Problems, from the end of the chapter:

2.3 a, b, c, h, i: The true/false part is worthless without a clear explanation in support of your answer, i.e., explain carefully why it is true or false, describe a task environment that illustrates your point, etc.

 

2.4 a, e, g (with 'a' being the top bullet): For the PEAS description, give a detailed description including both features/attributes and short sentences explaining what they do/why you are including them. For the task environment characterization, briefly explain your choice for each of the categories. As we have seen in class, how you decide to classify an environment along a given dimension sometimes depends on the assumptions you are making, or how you're viewing it.

 

2.10, all parts. Assume that the vacuum world is now a 2-D space, with up/down/left/right movements possible. Also add a small modification to put a little more teeth into the proposed performance measure: the agent will also get some number of points (details irrelevant) when it sucks up some dirt. So it's not just minimizing movement, but trying to get the most cleaning for the least movement. Reminder: the agent has only the same percepts as in the book: its current location and whether there is dirt there.
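
To make the scoring concrete, here is a minimal sketch in Python; DIRT_POINTS and MOVE_COST are assumed constants standing in for whatever values you pick, not numbers given in the problem:

    # Hypothetical net score for one run. The constants are assumptions,
    # not part of the problem statement.
    DIRT_POINTS = 10   # points per square cleaned (assumed value)
    MOVE_COST = 1      # penalty per up/down/left/right move (assumed value)

    def score(squares_cleaned, moves_made):
        # Reward cleaning, penalize movement: most cleaning for least movement.
        return DIRT_POINTS * squares_cleaned - MOVE_COST * moves_made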

For part (b), when it says "design an agent", what this means is to provide a description of it, i.e., exactly what "state" the agent would minimally need and how it would use this to compute its actions. This would minimally mean a clear indication of the data structures/state it would use, plus a clear description of how an action is computed based on <state> + P (the current percept).
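
As a rough illustration of the level of detail expected (a sketch, not a solution), a state-holding agent could be organized as below; the visited set and the choose_move placeholder are assumptions, and filling in choose_move is exactly the design work the problem asks for:

    # Sketch only: the percept is (location, dirt_here), as in the book.
    class StatefulVacuumAgent:
        def __init__(self):
            self.visited = set()   # internal state: squares already seen

        def act(self, location, dirt_here):
            self.visited.add(location)   # update state from the percept
            if dirt_here:
                return "Suck"
            # How the next move is computed from <state> + P is the part
            # your write-up must specify.
            return self.choose_move(location)

        def choose_move(self, location):
            raise NotImplementedError  # your design goes here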

 

2.DrD-1: Let's think some more about the world in exercise 2.10. The agent has the same percepts as it had in Problem 2.10(b): its current location and whether there is dirt there, plus whatever internal record of where it has been your design maintains.

  1. First, write out our general definition of rational action. Then refine it to apply to this context: how is rational action defined in this world? How would the agent go about computing it?
  2. Now let's make the fairly realistic assumption that dirt tends to accumulate in some regions of a room more than others. To translate into geek: assume the underlying world model actually has a weighting on each square, defined by a 2-D continuous function, that shapes where dirt is placed each time the world is reset for another run. This just means that dirt can appear anywhere...but has a higher likelihood of appearing in some areas than others. Of course, the poor vacuum cleaner agent you designed in 2.10(b) has no clue here...it is just running the same program every time. Re-do your design, adding a learning component on top of your previous agent. You'll need to describe how it will function overall, and then give details about what each of the key modules of a learning agent is doing in this particular case. The idea is that your agent should learn to become more efficient over time (one minimal illustration of such a component follows this list).
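
To make "learning component" concrete, here is one minimal sketch of a learning element (an assumed design, not the required one): keep per-square counts of how often dirt has shown up there across runs, and let the performance element steer toward squares with high estimated dirt probability.

    from collections import defaultdict

    # Illustrative learning element: all names here are hypothetical.
    class DirtModel:
        def __init__(self):
            self.dirt_seen = defaultdict(int)   # square -> times dirt found
            self.visits = defaultdict(int)      # square -> times visited

        def update(self, location, dirt_here):
            # Called on every percept; this is the feedback from experience.
            self.visits[location] += 1
            if dirt_here:
                self.dirt_seen[location] += 1

        def dirt_probability(self, location):
            # Estimated chance the square is dirty; 0.5 prior when unvisited.
            if self.visits[location] == 0:
                return 0.5
            return self.dirt_seen[location] / self.visits[location]

Over many runs the estimates come to mirror the hidden 2-D weighting, so an agent that routes itself toward high-probability squares first gets more cleaning per move, which is the "more efficient over time" behavior being asked for.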