This is an in-memory implementation of a proposed API for deriving copies in a federated OER Repository.
It also serves as an example of communicating with services that "Transform" the content into other formats (like PDF or EPUB).
To start it up you'll need nodejs and the node package manager (npm).
To download the dependencies:
npm install --dev . # --dev will install the dev dependencies like doc generation and test runners
And, to start it up:
node bin/server.js --debug-user # So you don't need to authenticate via OpenID
Or, for more command line options (like specifying a PDF generation script):
node bin/server.js -h
Then, point your browser to the admin interface at http://localhost:3000/index.html
If you are running this on an Internet-facing webserver then you will also need to specify the OpenID Domain (localhost
is the default) as a command-line argument.
node bin/server.js -u "http://example.com:3000"
You can debug the server by installing node-inspector and running:
npm install -g node-inspector # installs node-inspector
node --debug bin/server.js &
node-inspector &
# Point your browser to http://localhost:8080
Check out the documentation
Or, make it yourself by running:
./node_modules/.bin/docco src/*.coffee
A `SET` can be one of:
- content
- resources
- drafts (unpublished content)
- pdfs
- epubs
- zips
- users (Member profile pages)
Every `SET` has the following operations:
- `POST /{SET}` - The parameters depend on the set, but this always returns a `href` to a Promise
- `GET /{SET}` - Returns a list of ids. This can be filtered using query parameters
- `GET /{SET}/{id}` - Depending on the `SET`, `id` is either a `UUID` or a `UUID@ver`. This returns either a 202, 404, or 200 status:
  - 202: The response is a JSON Promise
  - 200: The response is the actual content
- `GET /{SET}/{id}.promise` - Always returns the promise (so you can see when it completed and the messages)
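As a rough illustration of this contract, a client could POST to a `SET` and then poll until the promise resolves into real content. This is only a sketch: the exact request/response shapes (e.g. an `href` field on the POST response) are assumptions, not part of the spec above.

```js
// Hypothetical client sketch: POST to a SET, then poll until the content is ready.
// Assumes a fetch-capable environment and that the POST response includes an
// `href` field pointing at the created item (shape not confirmed by the spec).
async function createAndWait(set, params) {
  const created = await fetch(`/${set}`, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(params)
  }).then(res => res.json());

  for (;;) {
    const res = await fetch(created.href);
    if (res.status === 200) return res;   // the actual content
    if (res.status === 202) {             // still a JSON Promise; wait and retry
      await new Promise(resolve => setTimeout(resolve, 1000));
      continue;
    }
    throw new Error('Unexpected status ' + res.status);
  }
}

// Example: kick off a PDF build and wait for it
// createAndWait('pdfs', {href: '/content/m9003@12'});
```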
With these common URLs we can monitor all the services using the same JavaScript code (just point it to the `SET`). These match up with the Backbone.js `sync()` call on models (each `Backbone.Model` is a `SET`).
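Since each `SET` lines up with Backbone's REST conventions, the client-side mapping could be as thin as the sketch below. The model/collection names are illustrative, and it assumes Backbone (with its jQuery dependency) is already loaded on the admin page.

```js
// Sketch: each SET maps directly onto a Backbone Collection/Model.
// Backbone derives the REST calls (GET /content, GET/PUT /content/{id},
// POST /content) from `url`/`urlRoot`, which is exactly the contract above.
var Content = Backbone.Model.extend({
  urlRoot: '/content'      // sync() hits /content/{id}
});

var ContentSet = Backbone.Collection.extend({
  model: Content,
  url: '/content'          // sync() hits /content
});
```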
Every operation can have an optional `api-key` parameter that specifies which key the user is using.
Each `SET` may have additional operations.
`{id}` is a string of the form `{UUID}@{ver}`.
- `GET /content/{id}.json` returns a JSON document of the metadata (keywords, roles, language, etc)
- `GET /content/{UUID}` redirects to `/content/{id}`
- `GET /content/{UUID}/latest/` redirects to `/content/{id}`
- `GET /content/{UUID}/{ver}/` redirects to `/content/{UUID}@{ver}`
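The framework behind `bin/server.js` isn't specified here, but an Express-style sketch of those redirect rules might look like the following (`latestVersionOf` is a hypothetical lookup that resolves a UUID to its newest published version):

```js
// Sketch of the redirect rules above, assuming an Express-style router.
app.get('/content/:uuid/latest/', function (req, res) {
  res.redirect('/content/' + req.params.uuid + '@' + latestVersionOf(req.params.uuid));
});

app.get('/content/:uuid/:ver/', function (req, res) {
  res.redirect('/content/' + req.params.uuid + '@' + req.params.ver);
});
```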
See the Publishing section below for thoughts on publishing and how this layout fits nicely with EPUBs
I'm not married to `{id}.json`. It could be `{id}.metadata` but we might want to also provide autogenerated metadata like:
- a list of resources/content this piece links to
- a list of UUIDs that are derived from this content
- a list of UUIDs that use this content
- dictionary of URLs to all transforms
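For illustration only, a `{id}.json` response with those autogenerated pieces might look something like the object below; every field name here is hypothetical, not part of the proposal.

```js
// Hypothetical shape of GET /content/{id}.json -- field names are illustrative.
var exampleMetadata = {
  title: 'Example Module',
  language: 'en',
  keywords: ['physics', 'kinematics'],
  roles: {authors: ['user1'], editors: ['user2']},
  // Autogenerated pieces suggested above:
  links: ['/content/m9000@3', '/resources/1234'],   // resources/content this piece links to
  derivedBy: ['m9010'],                             // UUIDs derived from this content
  usedBy: ['col100'],                               // UUIDs that use this content
  transforms: {pdf: '/pdfs/m9003@12', epub: '/epubs/m9003@12'}
};
```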
To distinguish between unpublished and published content I chose the word "drafts".
- `PUT /drafts/{id}` allows changing the HTML
- `GET/PUT /drafts/{id}.json` allows changing the metadata
The `id` could be an autogenerated `UUID` if the content has not been published, or the `UUID` (excluding the version) of the published content.
Because of JSON's syntax we can use the same URL to update all the metadata (Backbone gives us this for free).
- If a key is not included in the JSON it is not updated
- If a value is an array (i.e. roles or keywords) the entire array is replaced
- If a value is `null` then that field is unset (not sure there's a use for this case)
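A minimal sketch of how a server might apply those merge rules to a PUT body (storage details are left out; this is just the rule set above expressed as code):

```js
// Merge rules: keys missing from the update are untouched, arrays (and scalars)
// are replaced wholesale, and null unsets the field entirely.
function applyMetadataUpdate(existing, update) {
  var result = Object.assign({}, existing);
  Object.keys(update).forEach(function (key) {
    if (update[key] === null) {
      delete result[key];           // null unsets the field
    } else {
      result[key] = update[key];    // arrays (e.g. roles, keywords) are replaced as-is
    }
  });
  return result;                    // keys absent from `update` stay untouched
}
```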
The POST contains the following parameters:
- a `href` to the content that is used to generate the PDF (i.e. `/content/m9003@12`)
- an optional `style` parameter that says which CSS to use
For simplicity, the POST response `id` could match up with the `id` specified in the POST (`m9003@12`).
To enable previewing, the `href` param could be a remote URL.
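As a sketch, a client requesting a PDF might send something like the request below. Only `href` and `style` come from the list above; the `textbook` value and the use of a JSON body are assumptions.

```js
// Sketch: request a PDF for a published module, then poll /pdfs/m9003@12 as usual.
fetch('/pdfs', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({
    href: '/content/m9003@12',  // the content used to generate the PDF
    style: 'textbook'           // optional: which CSS to use (value is illustrative)
  })
}).then(function (res) { return res.json(); })
  .then(function (promise) { console.log('PDF promise:', promise); });
```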
These are the profile pages and the id matches the ids used in the roles metadata.
- `PUT /users/{id}` allows a user to update their info (authenticated).
- Links to other modules or resources don't need funky '..' (the HTML for `/content/m9003` is the same as `/content/m9003/1.2` and the same for `/content/m9003/1.2/`)
- The EPUB/ZIP files clearly denote which version of a document is in the EPUB (to support editions we could allow the user to specify a version but use a number/hash by default)
- (dangling off the same spot and having a similar contract)
- The EPUB structure mimics the repo (less HTML rewriting)
- Monitoring just requires pointing to the set
A lens/filter could be defined as another set
Backbone.js already lets us know which attributes of a model have been updated. If the model is all the metadata then PUTting the set of changed attributes is all that's needed.
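As a sketch of what that looks like on the client (note that Backbone's built-in partial save uses HTTP PATCH rather than PUT, so either the server accepts PATCH as well or `sync()` gets a small override; that detail is glossed over here):

```js
// `draft` is assumed to be a Backbone.Model whose urlRoot is /drafts.
draft.set({title: 'New Title', keywords: ['physics']});

var changed = draft.changedAttributes();  // Backbone tracks what changed in the last set()
if (changed) {
  draft.save(changed, {patch: true});     // send only the changed attributes
}
```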
A publish event takes in a ZIP file. I propose the structure of the zip file line up with:
- the structure of the EPUB file we generate
- the structure of the published repository
By doing this:

- there is less HTML rewriting
- there are uniform expectations in our codebase
- the EPUB+ file becomes roundtrippable
- any EPUB file becomes publishable
Here's the combined Publish Zip and EPUB3 file structure:

- `grep "epub"` for the minimal set of files required in the EPUB
- `grep -v "epub"` for the minimal set of files required for the publish zip
And the ZIP file would look like this:
/toc.epub.html
/index.epub.html # other autogenerated files for EPUB
/autogen-000x.epub.html # start/end-of-chapter material (not a module)
# linked to by the EPUB spine file
/content/{ID_collection}@{VER}.html # source collection file
/content/{ID_collection}@{VER}.json # metadata on the collection
/content/{ID_1}@{VER}.html # source code for the module
/content/{ID_1}@{VER}.json # JSON metadata for the module
/content/{ID_1}@{VER}.epub.html # The EPUB3 version of the module
/content/{ID_2}@{VER}.html
/content/{ID_2}@{VER}.epub.html
/.../blah1.html # A new piece of content (could be linked from {ID_1}.html)
/.../blah1.json # JSON metadata for that content
/resources/{ID}.svg # source of the resource (linked from {ID_2}.html)
/resources/{ID}.json # metadata (if we decide to have this for resources)
/resources/{ID}.epub.jpg # EPUB3 version (resized, rasterized, etc)
# (linked from {ID_2}.epub.html)
To convert a module source to the EPUB, the transform would:
- rename all local links from `<a href="{ID}">` to `<a href="{ID}.epub.html">`
- rename all links to resources from `<img src="../resources/{ID}">` to `<img src="../resources/{ID}.epub.png">`
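A rough sketch of that rewrite (plain regexes for brevity; a real transform would likely use an HTML parser, and the selectors here are assumptions about how the source markup looks):

```js
// Point local content links at their .epub.html variants and resource links
// at their EPUB renditions. Illustration only -- not robust HTML rewriting.
function toEpubHtml(source) {
  return source
    // <a href="{ID}">               -> <a href="{ID}.epub.html">
    .replace(/(<a\s+href=")([^"./]+)(")/g, '$1$2.epub.html$3')
    // <img src="../resources/{ID}"> -> <img src="../resources/{ID}.epub.png">
    .replace(/(<img\s+src="\.\.\/resources\/)([^"./]+)(")/g, '$1$2.epub.png$3');
}
```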
If we drop the `.html` extension on source files the only case for rewriting links would be for new content (if new content used an unused UUID then we wouldn't need to rewrite at all; just verify the user can create/update that UUID).
The logic for importing/publishing would be:
- All `.epub.*` files are ignored
- All `.html` files whose name is not a valid ID in the repo are treated as new content
- All `{ID}@{VER}.html/.json` files create a new version of the content (if you have permissions and the checksum has changed)
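A sketch of that classification pass over the uploaded ZIP entries; helpers such as `isValidId`, `createNewContent`, `publishNewVersion`, and `checksumChanged` are placeholders, and permission checks are left out.

```js
// Apply the import/publish rules above to each entry in the uploaded ZIP.
function importZipEntries(entries) {
  entries.forEach(function (entry) {
    if (/\.epub\./.test(entry.name)) return;   // all .epub.* files are ignored

    var base = entry.name.replace(/\.(html|json)$/, '');
    if (!isValidId(base)) {
      createNewContent(entry);                 // unknown name => treated as new content
    } else if (checksumChanged(base, entry)) {
      publishNewVersion(base, entry);          // {ID}@{VER} => new version of the content
    }
  });
}
```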