
Commit 6e12df4

antmarakis authored and norvig committed
Pseudocode In Notebooks (aimacode#616)
* Update knowledge.ipynb
* Update notebook.py
* Update knowledge.ipynb
* bringing it all together in notebook
1 parent 9974841 commit 6e12df4

2 files changed: +127 −75 lines changed


knowledge.ipynb (+103 −72)
@@ -19,7 +19,9 @@
 },
 "outputs": [],
 "source": [
-"from knowledge import *"
+"from knowledge import *\n",
+"\n",
+"from notebook import pseudocode, psource"
 ]
 },
 {
@@ -70,7 +72,7 @@
 "collapsed": true
 },
 "source": [
-"## [CURRENT-BEST LEARNING](https://github.com/aimacode/aima-pseudocode/blob/master/md/Current-Best-Learning.md)\n",
+"## CURRENT-BEST LEARNING\n",
 "\n",
 "### Overview\n",
 "\n",
@@ -89,46 +91,70 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Implementation\n",
-"\n",
-"As mentioned previously, examples are dictionaries (with keys the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
-"\n",
-"We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples.\n",
-"\n",
-"You can read the source by running the cells below:"
+"### Pseudocode"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 2,
-"metadata": {
-"collapsed": true
-},
-"outputs": [],
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/markdown": [
+"### AIMA3e\n",
+"__function__ Current-Best-Learning(_examples_, _h_) __returns__ a hypothesis or fail \n",
+" __if__ _examples_ is empty __then__ \n",
+"   __return__ _h_ \n",
+" _e_ ← First(_examples_) \n",
+" __if__ _e_ is consistent with _h_ __then__ \n",
+"   __return__ Current-Best-Learning(Rest(_examples_), _h_) \n",
+" __else if__ _e_ is a false positive for _h_ __then__ \n",
+"   __for each__ _h'_ __in__ specializations of _h_ consistent with _examples_ seen so far __do__ \n",
+"     _h''_ ← Current-Best-Learning(Rest(_examples_), _h'_) \n",
+"     __if__ _h''_ ≠ _fail_ __then return__ _h''_ \n",
+" __else if__ _e_ is a false negative for _h_ __then__ \n",
+"   __for each__ _h'_ __in__ generalizations of _h_ consistent with _examples_ seen so far __do__ \n",
+"     _h''_ ← Current-Best-Learning(Rest(_examples_), _h'_) \n",
+"     __if__ _h''_ ≠ _fail_ __then return__ _h''_ \n",
+" __return__ _fail_ \n",
+"\n",
+"---\n",
+"__Figure ??__ The current-best-hypothesis learning algorithm. It searches for a consistent hypothesis that fits all the examples and backtracks when no consistent specialization/generalization can be found. To start the algorithm, any hypothesis can be passed in; it will be specialized or generalized as needed."
+],
+"text/plain": [
+"<IPython.core.display.Markdown object>"
+]
+},
+"execution_count": 2,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
 "source": [
-"%psource current_best_learning"
+"pseudocode('Current-Best-Learning')"
 ]
 },
 {
-"cell_type": "code",
-"execution_count": 3,
-"metadata": {
-"collapsed": true
-},
-"outputs": [],
+"cell_type": "markdown",
+"metadata": {},
 "source": [
-"%psource specializations"
+"### Implementation\n",
+"\n",
+"As mentioned previously, examples are dictionaries (with keys the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
+"\n",
+"We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples.\n",
+"\n",
+"You can read the source by running the cell below:"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 4,
-"metadata": {
-"collapsed": true
-},
+"execution_count": null,
+"metadata": {},
 "outputs": [],
 "source": [
-"%psource generalizations"
+"psource(current_best_learning, specializations, generalizations)"
 ]
 },
 {
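For readers skimming the diff, here is a minimal sketch (not part of the commit) of the representation that the Implementation cell above describes: each example is a dict keyed by attribute name, a hypothesis is a list of dicts (one dict per disjunct), and a value prefixed with ! means NOT. The attribute names and the 'GOAL' target key below are illustrative assumptions, not values taken from this commit.

# Illustrative only: attribute names and the 'GOAL' target key are assumptions.
example = {'Pizza': 'Yes', 'Soda': 'No', 'GOAL': True}   # one training example

# A hypothesis with two disjuncts:
#   (Pizza = Yes AND Soda = Yes)  OR  (Pizza is NOT No)
hypothesis = [
    {'Pizza': 'Yes', 'Soda': 'Yes'},
    {'Pizza': '!No'},
]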
@@ -432,7 +458,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## [VERSION-SPACE LEARNING](https://github.com/aimacode/aima-pseudocode/blob/master/md/Version-Space-Learning.md)\n",
+"## VERSION-SPACE LEARNING\n",
 "\n",
 "### Overview\n",
 "\n",
@@ -443,83 +469,88 @@
 },
 {
 "cell_type": "markdown",
-"metadata": {
-"collapsed": true
-},
-"source": [
-"### Implementation\n",
-"\n",
-"The set of hypotheses is represented by a list and each hypothesis is represented by a list of dictionaries, each dictionary a disjunction. For each example in the given examples we update the version space with the function `version_space_update`. In the end, we return the version-space.\n",
-"\n",
-"Before we can start updating the version space, we need to generate it. We do that with the `all_hypotheses` function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). The function works like this: first it finds the possible values for each attribute (using `values_table`), then it builds all the attribute combinations (and adds them to the hypotheses set) and finally it builds the combinations of all the disjunctions (which in this case are the hypotheses build by the attribute combinations).\n",
-"\n",
-"You can read the code for all the functions by running the cells below:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 2,
-"metadata": {
-"collapsed": true
-},
-"outputs": [],
+"metadata": {},
 "source": [
-"%psource version_space_learning"
+"### Pseudocode"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 3,
-"metadata": {
-"collapsed": true
-},
-"outputs": [],
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/markdown": [
+"### AIMA3e\n",
+"__function__ Version-Space-Learning(_examples_) __returns__ a version space \n",
+"&emsp;__local variables__: _V_, the version space: the set of all hypotheses \n",
+"\n",
+"&emsp;_V_ &larr; the set of all hypotheses \n",
+"&emsp;__for each__ example _e_ in _examples_ __do__ \n",
+"&emsp;&emsp;&emsp;__if__ _V_ is not empty __then__ _V_ &larr; Version-Space-Update(_V_, _e_) \n",
+"&emsp;__return__ _V_ \n",
+"\n",
+"---\n",
+"__function__ Version-Space-Update(_V_, _e_) __returns__ an updated version space \n",
+"&emsp;_V_ &larr; \\{_h_ &isin; _V_ : _h_ is consistent with _e_\\} \n",
+"\n",
+"---\n",
+"__Figure ??__ The version space learning algorithm. It finds a subset of _V_ that is consistent with all the _examples_."
+],
+"text/plain": [
+"<IPython.core.display.Markdown object>"
+]
+},
+"execution_count": 3,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
 "source": [
-"%psource version_space_update"
+"pseudocode('Version-Space-Learning')"
 ]
 },
 {
-"cell_type": "code",
-"execution_count": 4,
+"cell_type": "markdown",
 "metadata": {
 "collapsed": true
 },
-"outputs": [],
 "source": [
-"%psource all_hypotheses"
+"### Implementation\n",
+"\n",
+"The set of hypotheses is represented by a list and each hypothesis is represented by a list of dictionaries, each dictionary a disjunction. For each example in the given examples we update the version space with the function `version_space_update`. In the end, we return the version-space.\n",
+"\n",
+"Before we can start updating the version space, we need to generate it. We do that with the `all_hypotheses` function, which builds a list of all the possible hypotheses (including hypotheses with disjunctions). The function works like this: first it finds the possible values for each attribute (using `values_table`), then it builds all the attribute combinations (and adds them to the hypotheses set) and finally it builds the combinations of all the disjunctions (which in this case are the hypotheses build by the attribute combinations).\n",
+"\n",
+"You can read the code for all the functions by running the cells below:"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 5,
-"metadata": {
-"collapsed": true
-},
+"execution_count": null,
+"metadata": {},
 "outputs": [],
 "source": [
-"%psource values_table"
+"psource(version_space_learning, version_space_update)"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 6,
-"metadata": {
-"collapsed": true
-},
+"execution_count": null,
+"metadata": {},
 "outputs": [],
 "source": [
-"%psource build_attr_combinations"
+"psource(all_hypotheses, values_table)"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 7,
-"metadata": {
-"collapsed": true
-},
+"execution_count": null,
+"metadata": {},
 "outputs": [],
 "source": [
-"%psource build_h_combinations"
+"psource(build_attr_combinations, build_h_combinations)"
 ]
 },
 {
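As a usage sketch (again, not part of the commit), the version-space workflow described in the Implementation cell amounts to passing a list of such example dicts to version_space_learning, which generates every hypothesis and then filters with version_space_update. The toy example set, the 'GOAL' key, and the call signature below are assumptions for illustration.

from knowledge import version_space_learning

# Assumed toy example set; 'GOAL' marks the target classification.
party = [
    {'Pizza': 'Yes', 'Soda': 'No', 'GOAL': True},
    {'Pizza': 'Yes', 'Soda': 'Yes', 'GOAL': True},
    {'Pizza': 'No', 'Soda': 'No', 'GOAL': False},
]

V = version_space_learning(party)   # hypotheses consistent with every example
print(len(V), 'hypotheses remain in the version space')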

notebook.py (+24 −3)
@@ -4,7 +4,7 @@
 from games import TicTacToe, alphabeta_player, random_player, Fig52Extended, infinity
 from logic import parse_definite_clause, standardize_variables, unify, subst
 from learning import DataSet
-from IPython.display import HTML, Markdown, display
+from IPython.display import HTML, display
 from collections import Counter
 
 import matplotlib.pyplot as plt
@@ -17,11 +17,32 @@
 #______________________________________________________________________________
 
 
+def pseudocode(algorithm):
+    """Print the pseudocode for the given algorithm."""
+    from urllib.request import urlopen
+    from IPython.display import Markdown
+
+    url = "https://raw.githubusercontent.com/aimacode/aima-pseudocode/master/md/{}.md".format(algorithm)
+    f = urlopen(url)
+    md = f.read().decode('utf-8')
+    md = md.split('\n', 1)[-1].strip()
+    md = '#' + md
+    return Markdown(md)
+
+
 def psource(*functions):
     """Print the source code for the given function(s)."""
-    import inspect
+    source_code = '\n\n'.join(getsource(fn) for fn in functions)
+    try:
+        from pygments.formatters import HtmlFormatter
+        from pygments.lexers import PythonLexer
+        from pygments import highlight
+
+        display(HTML(highlight(source_code, PythonLexer(), HtmlFormatter(full=True))))
+
+    except ImportError:
+        print(source_code)
 
-    print('\n\n'.join(inspect.getsource(fn) for fn in functions))
 
 # ______________________________________________________________________________
 
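Taken together, the two helpers are meant to be called from notebook cells. The sketch below mirrors the calls added to knowledge.ipynb in this commit; it assumes a running Jupyter kernel with this repository on the path. pseudocode returns an IPython Markdown object, so in the notebook it sits as the last line of its cell in order to render, while psource displays highlighted HTML directly (or falls back to plain print when Pygments is not installed).

from notebook import pseudocode, psource
from knowledge import current_best_learning, specializations, generalizations

# Cell 1: fetch and render the AIMA pseudocode from aimacode/aima-pseudocode.
pseudocode('Current-Best-Learning')

# Cell 2: show syntax-highlighted source for the implementation functions.
psource(current_best_learning, specializations, generalizations)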