Now that #53 is more formally publishing a website, additional automated measures would be a nice addition to achieve (and keep) high-quality text, and to reduce review time.
I don't know whether these would need their own JEP, or are just nuts-and-bolts that fall out of #29 (and would need documenting anyway).
Notionally these would be run on every PR push.
spelling
tool: hunspell
There are a bunch of other options, but hunspell can be installed via most system package managers or conda; Windows is less rosy.
I've found it works best when run against the generated HTML, where the semantics are clearer than in markdown, though I'd be happy to hear if anyone has something better.
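To illustrate why generated HTML is the easier target, here is a minimal pure-Python sketch (not the actual CI setup, which would shell out to hunspell): extract the visible text from a page and compare its tokens against a project word list. The sample word list and HTML are hypothetical.

```python
# Minimal sketch of spell-checking generated HTML against a project
# word list. A real setup would shell out to hunspell (e.g.
# `hunspell -H -l -p dictionary.txt page.html`); this pure-Python
# version just shows why HTML is easier: tags, scripts, and styles
# can be skipped cleanly, which markdown makes harder.
import re
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, ignoring <script> and <style> content."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chunks.append(data)


def unknown_words(html, known):
    """Return words in the HTML's visible text not in `known` (case-insensitive)."""
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"[A-Za-z]+", " ".join(parser.chunks))
    return sorted({w for w in words if w.lower() not in known})


# Hypothetical inputs for illustration only
known = {"jupyterhub", "is", "a", "multi", "user", "server"}
html = "<p>JupyterHub is a multi-user servr</p><script>var x = 1;</script>"
print(unknown_words(html, known))  # ['servr']
```

Note that the `<script>` contents never reach the word list, which is exactly the kind of false positive a markdown-based check has to work around.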
While somewhat odious, maintaining a single dictionary.txt (just a sorted list of words, or a cleverer but less portable stem-aware confection) across a corpus is good for keeping jargon and names consistent, e.g. JupyterHub vs Jupyterhub.
This also interestingly adds some more punch to the "new terms" part of the template.
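A simple guard keeps such a word list honest across PRs. Below is a hypothetical sketch of a check that dictionary.txt stays sorted and duplicate-free; the one-word-per-line format follows the description above, but the check itself is an assumption, not an existing tool.

```python
# Sketch of a CI guard that a dictionary.txt-style word list stays
# sorted and free of duplicates, so diffs stay minimal and merge
# conflicts stay rare. Hypothetical: the file format (one word per
# line) follows the issue text, but this check is illustrative.
def check_word_list(lines):
    """Return a list of problems found in a word list; empty means OK."""
    words = [line.strip() for line in lines if line.strip()]
    problems = []
    if words != sorted(words):
        problems.append("not sorted")
    if len(words) != len(set(words)):
        problems.append("contains duplicates")
    return problems


# Hypothetical word lists (note: sorting here is case-sensitive,
# so uppercase letters sort before lowercase ones)
print(check_word_list(["JupyterHub\n", "kernel\n", "nbconvert\n"]))  # []
print(check_word_list(["kernel\n", "JupyterHub\n", "kernel\n"]))
# ['not sorted', 'contains duplicates']
```

Run on every PR push, a check like this turns "please alphabetize the dictionary" review comments into an automatic failure.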
links
tool: https://github.com/jupyterlab/pytest-check-links
Used on other jupyter projects, though other options exist. We recently added some caching to reduce the rate limiting that was occurring for some public sites. The in-document anchor checking doesn't quite work for sphinx output, but a fix has been merged and could probably be released.
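For readers unfamiliar with what in-document anchor checking involves, here is an illustrative sketch (this is not pytest-check-links' implementation): collect every `id=` in a page, then flag `href="#..."` fragments that point at nothing.

```python
# Illustrative sketch of in-document anchor checking. The real tool in
# this context is pytest-check-links; this is NOT its implementation,
# just the core idea: collect every id= in the page, then flag
# href="#..." fragments with no matching target.
from html.parser import HTMLParser


class AnchorAudit(HTMLParser):
    """Record anchor targets and in-document fragment links."""

    def __init__(self):
        super().__init__()
        self.ids = set()
        self.fragments = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids.add(attrs["id"])
        # name= on <a> is a legacy anchor form still seen in generated docs
        if tag == "a" and "name" in attrs:
            self.ids.add(attrs["name"])
        href = attrs.get("href", "")
        if href.startswith("#"):
            self.fragments.append(href[1:])


def broken_anchors(html):
    """Return in-document fragment targets that don't exist in the page."""
    audit = AnchorAudit()
    audit.feed(html)
    return sorted(set(audit.fragments) - audit.ids)


# Hypothetical page for illustration
html = '<h2 id="spelling">spelling</h2><a href="#spelling">ok</a><a href="#links">broken</a>'
print(broken_anchors(html))  # ['links']
```

Sphinx's generated pages complicate this picture (targets can live on wrapper elements, hence the fix mentioned above), but the single-page case is this simple.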
My 2 cents about scope: IMO these are all in scope for #29, as implementations of the phrasing that there should be a public website where JEPs are displayed.