Commit 9fe0696

dmeoli authored and antmarakis committed

fixed typos (#1118)
* changed queue to set in AC3 (as in the pseudocode of the original algorithm) to reduce the number of consistency checks caused by redundant copies of the same arcs in the queue. For example, on the harder1 configuration of the Sudoku CSP, the number of consistency checks dropped from 40464 to 12562!
* re-added a test commented out by mistake
* added the mentioned AC4 algorithm for constraint propagation: AC3 has non-optimal worst-case time complexity O(cd^3), while AC4 runs in O(cd^2) worst-case time
* added a doctest in Sudoku for AC4 and the possibility of choosing the constraint propagation algorithm in mac inference
* removed the useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py
* added map coloring SAT problems
* fixed typo errors and removed unnecessary brackets
* reformulated the map coloring problem
* Revert "reformulated the map coloring problem". This reverts commit 20ab0e5.
* Revert "fixed typo errors and removed unnecessary brackets". This reverts commit f743146.
* Revert "added map coloring SAT problems". This reverts commit 9e0fa55.
* Revert "removed useless doctest for AC4 in Sudoku because AC4's tests are already present in test_csp.py". This reverts commit b3cd24c.
* Revert "added doctest in Sudoku for AC4 and and the possibility of choosing the constant propagation algorithm in mac inference". This reverts commit 6986247.
* Revert "added the mentioned AC4 algorithm for constraint propagation". This reverts commit 03551fb.
* added map coloring SAT problem
* fixed build error
* Revert "added map coloring SAT problem". This reverts commit 93af259.
* Revert "fixed build error". This reverts commit 6641c2c.
* added map coloring SAT problem
* removed redundant parentheses
* added Viterbi algorithm
* added monkey & bananas planning problem
* simplified condition in search.py
* added tests for monkey & bananas planning problem
* removed monkey & bananas planning problem
* Revert "removed monkey & bananas planning problem". This reverts commit 9d37ae0.
* Revert "added tests for monkey & bananas planning problem". This reverts commit 24041e9.
* Revert "simplified condition in search.py". This reverts commit 6d229ce.
* Revert "added monkey & bananas planning problem". This reverts commit c74933a.
* defined the PlanningProblem as a specialization of a search.Problem and fixed typo errors
* fixed doctest in logic.py
* fixed doctest for cascade_distribution
* added ForwardPlanner and tests
* added __lt__ implementation for Expr
* added more tests
* renamed forward planner
* Revert "renamed forward planner". This reverts commit c4139e5.
* renamed forward planner class and added doc
* added backward planner and tests
* fixed mdp4e.py doctests
* removed ignore_delete_lists_heuristic flag
* fixed heuristic for forward and backward planners
* added SATPlan and tests
* fixed ignore delete lists heuristic in forward and backward planners
* fixed backward planner and added tests
* updated doc
* added n-ary CSP definition and examples
* added CSPlan and tests
* fixed CSPlan
* added the book's cryptarithmetic puzzle example
* fixed typo errors in test_csp
* fixed #1111
* added sortedcontainers to yml and doc to CSPlan
* added tests for n-ary CSP
* fixed utils.extend
* updated test_probability.py
* converted static methods to functions
* added AC3b and AC4 with heuristic and tests
* added conflict-driven clause learning SAT solver
* added tests for cdcl and heuristics
* fixed probability.py
* fixed import
* fixed kakuro
* added Martelli and Montanari rule-based unification algorithm
* removed duplicate standardize_variables
* renamed variables shadowing built-in functions
* fixed typos in learning.py
* renamed some files and fixed typos
* fixed typos
* fixed typos
* fixed tests
* removed unify_mm
* removed unnecessary brackets
* fixed tests
* moved utility functions to utils.py
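The first bullet credits the speed-up to keeping the pending arcs in a set rather than a queue, so an arc that is already awaiting revision is never enqueued a second time. A minimal sketch of that idea (hypothetical helper names and call signatures, not the repository's actual csp.py code):

```python
def revise(domains, constraint, xi, xj):
    """Delete values of xi that have no supporting value in xj."""
    revised = False
    for x in list(domains[xi]):
        if not any(constraint(xi, x, xj, y) for y in domains[xj]):
            domains[xi].discard(x)
            revised = True
    return revised


def ac3(domains, neighbors, constraint):
    """Return False iff some domain is wiped out.
    Using a set as the worklist means an arc that is already
    pending is not added again, avoiding redundant revisions."""
    worklist = {(xi, xj) for xi in domains for xj in neighbors[xi]}
    while worklist:
        xi, xj = worklist.pop()
        if revise(domains, constraint, xi, xj):
            if not domains[xi]:
                return False
            # only arcs into xi (other than from xj) need re-checking
            worklist |= {(xk, xi) for xk in neighbors[xi] if xk != xj}
    return True
```

Because arc consistency has a unique fixpoint, the set's arbitrary pop order does not change the final domains, only the number of revisions performed.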
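The AC4 bullet cites the O(cd^2) worst-case bound: AC4 precomputes, for every arc-value pair, a counter of supporting values plus an inverted support index, then propagates only actual deletions instead of re-revising whole arcs. A rough sketch under the same hypothetical interface as above (again, not the repository's implementation):

```python
def ac4(domains, neighbors, constraint):
    """AC-4 sketch: count supports once, then propagate deletions.
    Returns False iff some domain is wiped out."""
    support_counter = {}  # (xi, x, xj) -> number of supports of x in xj
    supported_by = {}     # (xj, y) -> set of (xi, x) pairs that y supports
    deleted = []          # values removed, pending propagation

    # Initialization: one pass over every arc-value pair, O(c*d^2) total.
    for xi in domains:
        for xj in neighbors[xi]:
            for x in list(domains[xi]):
                count = 0
                for y in domains[xj]:
                    if constraint(xi, x, xj, y):
                        count += 1
                        supported_by.setdefault((xj, y), set()).add((xi, x))
                support_counter[(xi, x, xj)] = count
                if count == 0 and x in domains[xi]:
                    domains[xi].discard(x)
                    deleted.append((xi, x))

    # Propagation: each deletion decrements the counters it supported.
    while deleted:
        xj, y = deleted.pop()
        for xi, x in supported_by.get((xj, y), ()):
            if x in domains[xi]:
                support_counter[(xi, x, xj)] -= 1
                if support_counter[(xi, x, xj)] == 0:
                    domains[xi].discard(x)
                    deleted.append((xi, x))
    return all(domains.values())
```

The counters are what buys the better bound: a value pair is examined once during initialization and each counter can only be decremented d times, whereas AC3 may re-scan an entire arc after every deletion.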
1 parent 255a160 commit 9fe0696


50 files changed: +1856 -1613 lines

agents.py (+41 -22)
@@ -113,9 +113,11 @@ def new_program(percept):
         action = old_program(percept)
         print('{} perceives {} and does {}'.format(agent, percept, action))
         return action
+
     agent.program = new_program
     return agent
 
+
 # ______________________________________________________________________________
 
 
@@ -130,6 +132,7 @@ def program(percept):
         percepts.append(percept)
         action = table.get(tuple(percepts))
         return action
+
     return program
 
 
@@ -146,26 +149,31 @@ def RandomAgentProgram(actions):
     """
     return lambda percept: random.choice(actions)
 
+
 # ______________________________________________________________________________
 
 
 def SimpleReflexAgentProgram(rules, interpret_input):
     """This agent takes action based solely on the percept. [Figure 2.10]"""
+
     def program(percept):
         state = interpret_input(percept)
         rule = rule_match(state, rules)
         action = rule.action
         return action
+
     return program
 
 
 def ModelBasedReflexAgentProgram(rules, update_state, model):
     """This agent takes action based on the percept and state. [Figure 2.12]"""
+
     def program(percept):
         program.state = update_state(program.state, program.action, percept, model)
         rule = rule_match(program.state, rules)
         action = rule.action
         return action
+
     program.state = program.action = None
     return program
 
@@ -176,6 +184,7 @@ def rule_match(state, rules):
         if rule.matches(state):
             return rule
 
+
 # ______________________________________________________________________________
 
 
@@ -205,8 +214,7 @@ def TableDrivenVacuumAgent():
              ((loc_B, 'Clean'), (loc_A, 'Dirty')): 'Suck',
              ((loc_B, 'Dirty'), (loc_B, 'Clean')): 'Left',
              ((loc_A, 'Dirty'), (loc_A, 'Clean'), (loc_B, 'Dirty')): 'Suck',
-             ((loc_B, 'Dirty'), (loc_B, 'Clean'), (loc_A, 'Dirty')): 'Suck'
-             }
+             ((loc_B, 'Dirty'), (loc_B, 'Clean'), (loc_A, 'Dirty')): 'Suck'}
     return Agent(TableDrivenAgentProgram(table))
 
 
@@ -219,6 +227,7 @@ def ReflexVacuumAgent():
     >>> environment.status == {(1,0):'Clean' , (0,0) : 'Clean'}
     True
     """
+
     def program(percept):
         location, status = percept
         if status == 'Dirty':
@@ -227,6 +236,7 @@ def program(percept):
             return 'Right'
         elif location == loc_B:
             return 'Left'
+
     return Agent(program)
 
 
@@ -253,8 +263,10 @@ def program(percept):
             return 'Right'
         elif location == loc_B:
             return 'Left'
+
     return Agent(program)
 
+
 # ______________________________________________________________________________
 
 
@@ -392,22 +404,22 @@ def __add__(self, heading):
         True
         """
         if self.direction == self.R:
-            return{
+            return {
                 self.R: Direction(self.D),
                 self.L: Direction(self.U),
             }.get(heading, None)
         elif self.direction == self.L:
-            return{
+            return {
                 self.R: Direction(self.U),
                 self.L: Direction(self.D),
             }.get(heading, None)
         elif self.direction == self.U:
-            return{
+            return {
                 self.R: Direction(self.R),
                 self.L: Direction(self.L),
             }.get(heading, None)
        elif self.direction == self.D:
-            return{
+            return {
                 self.R: Direction(self.L),
                 self.L: Direction(self.R),
             }.get(heading, None)
@@ -462,7 +474,7 @@ def things_near(self, location, radius=None):
         radius2 = radius * radius
         return [(thing, radius2 - distance_squared(location, thing.location))
                 for thing in self.things if distance_squared(
-            location, thing.location) <= radius2]
+                location, thing.location) <= radius2]
 
     def percept(self, agent):
         """By default, agent perceives things within a default radius."""
@@ -476,11 +488,11 @@ def execute_action(self, agent, action):
             agent.direction += Direction.L
         elif action == 'Forward':
             agent.bump = self.move_to(agent, agent.direction.move_forward(agent.location))
-        # elif action == 'Grab':
-        #     things = [thing for thing in self.list_things_at(agent.location)
-        #               if agent.can_grab(thing)]
-        #     if things:
-        #         agent.holding.append(things[0])
+        # elif action == 'Grab':
+        #     things = [thing for thing in self.list_things_at(agent.location)
+        #               if agent.can_grab(thing)]
+        #     if things:
+        #         agent.holding.append(things[0])
         elif action == 'Release':
             if agent.holding:
                 agent.holding.pop()
@@ -505,7 +517,7 @@ def move_to(self, thing, destination):
     def add_thing(self, thing, location=(1, 1), exclude_duplicate_class_items=False):
         """Add things to the world. If (exclude_duplicate_class_items) then the item won't be
         added if the location has at least one item of the same class."""
-        if (self.is_inbounds(location)):
+        if self.is_inbounds(location):
             if (exclude_duplicate_class_items and
                     any(isinstance(t, thing.__class__) for t in self.list_things_at(location))):
                 return
@@ -521,7 +533,7 @@ def random_location_inbounds(self, exclude=None):
         location = (random.randint(self.x_start, self.x_end),
                     random.randint(self.y_start, self.y_end))
         if exclude is not None:
-            while(location == exclude):
+            while location == exclude:
                 location = (random.randint(self.x_start, self.x_end),
                             random.randint(self.y_start, self.y_end))
         return location
@@ -543,7 +555,7 @@ def add_walls(self):
         for x in range(self.width):
             self.add_thing(Wall(), (x, 0))
             self.add_thing(Wall(), (x, self.height - 1))
-        for y in range(1, self.height-1):
+        for y in range(1, self.height - 1):
             self.add_thing(Wall(), (0, y))
             self.add_thing(Wall(), (self.width - 1, y))
 
@@ -574,6 +586,7 @@ class Obstacle(Thing):
 class Wall(Obstacle):
     pass
 
+
 # ______________________________________________________________________________
 
 
@@ -682,6 +695,7 @@ def __init__(self, coordinates):
         super().__init__()
         self.coordinates = coordinates
 
+
 # ______________________________________________________________________________
 # Vacuum environment
 
@@ -691,7 +705,6 @@ class Dirt(Thing):
 
 
 class VacuumEnvironment(XYEnvironment):
-
     """The environment of [Ex. 2.12]. Agent perceives dirty or clean,
     and bump (into obstacle) or not; 2D discrete world of unknown size;
     performance measure is 100 for each dirt cleaned, and -1 for
@@ -710,7 +723,7 @@ def percept(self, agent):
         Unlike the TrivialVacuumEnvironment, location is NOT perceived."""
         status = ('Dirty' if self.some_things_at(
             agent.location, Dirt) else 'Clean')
-        bump = ('Bump' if agent.bump else'None')
+        bump = ('Bump' if agent.bump else 'None')
         return (status, bump)
 
     def execute_action(self, agent, action):
@@ -729,7 +742,6 @@ def execute_action(self, agent, action):
 
 
 class TrivialVacuumEnvironment(Environment):
-
     """This environment has two locations, A and B. Each can be Dirty
     or Clean. The agent perceives its location and the location's
     status. This serves as an example of how to implement a simple
@@ -766,6 +778,7 @@ def default_location(self, thing):
         """Agents start in either location at random."""
         return random.choice([loc_A, loc_B])
 
+
 # ______________________________________________________________________________
 # The Wumpus World
 
@@ -775,6 +788,7 @@ class Gold(Thing):
     def __eq__(self, rhs):
         """All Gold are equal"""
         return rhs.__class__ == Gold
+
     pass
 
 
@@ -824,6 +838,7 @@ def can_grab(self, thing):
 
 class WumpusEnvironment(XYEnvironment):
     pit_probability = 0.2  # Probability to spawn a pit in a location. (From Chapter 7.2)
+
     # Room should be 4x4 grid of rooms. The extra 2 for walls
 
     def __init__(self, agent_program, width=6, height=6):
@@ -949,7 +964,7 @@ def execute_action(self, agent, action):
         """The arrow travels straight down the path the agent is facing"""
         if agent.has_arrow:
             arrow_travel = agent.direction.move_forward(agent.location)
-            while(self.is_inbounds(arrow_travel)):
+            while self.is_inbounds(arrow_travel):
                 wumpus = [thing for thing in self.list_things_at(arrow_travel)
                           if isinstance(thing, Wumpus)]
                 if len(wumpus):
@@ -979,12 +994,13 @@ def is_done(self):
             print("Death by {} [-1000].".format(explorer[0].killed_by))
         else:
             print("Explorer climbed out {}."
-                  .format(
-                  "with Gold [+1000]!" if Gold() not in self.things else "without Gold [+0]"))
+                  .format(
+                      "with Gold [+1000]!" if Gold() not in self.things else "without Gold [+0]"))
         return True
 
-
 # TODO: Arrow needs to be implemented
+
+
 # ______________________________________________________________________________
 
 
@@ -1016,13 +1032,16 @@ def test_agent(AgentFactory, steps, envs):
     >>> result == 5
     True
     """
+
     def score(env):
         agent = AgentFactory()
         env.add_thing(agent)
         env.run(steps)
         return agent.performance
+
     return mean(map(score, envs))
 
+
 # _________________________________________________________________________
 
 