Switch docs to jupyter-execute sphinx extension for HTML reprs #10383


Merged
merged 23 commits into from
Jun 9, 2025
cf78cf8
switch user-guide from ipython sphinx extension to jupyter-execute
scottyhq Jun 1, 2025
b889477
switch internals to jupyter-execute
scottyhq Jun 1, 2025
beacea8
switch remaining doc files to jupyter-execute
scottyhq Jun 1, 2025
3b111d9
manual review of data model section
scottyhq Jun 2, 2025
58a5406
manual review core-operations
scottyhq Jun 2, 2025
8765673
manual review of IO
scottyhq Jun 2, 2025
0478087
manual review of plotting
scottyhq Jun 2, 2025
7ec6b56
manual review of interoperability
scottyhq Jun 2, 2025
00aad0b
review domain-specific and testing
scottyhq Jun 2, 2025
e380019
review outputs in internals section
scottyhq Jun 3, 2025
84a7976
fully remove ipython directive
scottyhq Jun 3, 2025
cd35a8d
handle execution warnings in time-coding
scottyhq Jun 3, 2025
420e7fb
use zarr v2 and consolidated=False to silence execution warnings
scottyhq Jun 3, 2025
0f36da2
cleanup, handle more warnings for RTD build
scottyhq Jun 3, 2025
07886df
catch cartopy coastline download warning
scottyhq Jun 3, 2025
6e56985
catch downloading 50m coastline warning too
scottyhq Jun 3, 2025
3de4b5e
silence xmode minimal printouts, more compact numpy printout
scottyhq Jun 3, 2025
22edd95
dont execute code in whatsnew
scottyhq Jun 3, 2025
aeaeff6
fix dark mode for datatrees
scottyhq Jun 4, 2025
673ce44
fix mermaid diagram, kerchunk, ncdata, and zarr sections
scottyhq Jun 4, 2025
fe4ed54
use tree command to check local zarrs
scottyhq Jun 4, 2025
8ca7819
address time-coding review
scottyhq Jun 4, 2025
dea0c23
Merge branch 'main' into jupyter-sphinx
dcherian Jun 9, 2025
1 change: 0 additions & 1 deletion .gitignore
@@ -10,7 +10,6 @@ __pycache__
doc/*.nc
doc/auto_gallery
doc/rasm.zarr
doc/savefig

# C extensions
*.so
4 changes: 1 addition & 3 deletions doc/conf.py
@@ -61,8 +61,6 @@
"sphinx.ext.extlinks",
"sphinx.ext.mathjax",
"sphinx.ext.napoleon",
"IPython.sphinxext.ipython_directive",
"IPython.sphinxext.ipython_console_highlighting",
"jupyter_sphinx",
"nbsphinx",
"sphinx_autosummary_accessors",
@@ -213,7 +211,7 @@

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build", "**.ipynb_checkpoints"]
exclude_patterns = ["_build", "debug.ipynb", "**.ipynb_checkpoints"]


# The name of the Pygments (syntax highlighting) style to use.
16 changes: 8 additions & 8 deletions doc/contribute/contributing.rst
@@ -387,24 +387,24 @@ Some other important things to know about the docs:
for a detailed explanation, or look at some of the existing functions to
extend it in a similar manner.

- The tutorials make heavy use of the `ipython directive
<https://matplotlib.org/sampledoc/ipython_directive.html>`_ sphinx extension.
This directive lets you put code in the documentation which will be run
- The documentation makes heavy use of the `jupyter-sphinx extension
<https://jupyter-sphinx.readthedocs.io>`_.
The ``jupyter-execute`` directive lets you put code in the documentation which will be run
during the doc build. For example:

.. code:: rst

.. ipython:: python
.. jupyter-execute::

x = 2
x**3

will be rendered as::
will be rendered as:

In [1]: x = 2
.. jupyter-execute::

In [2]: x**3
Out[2]: 8
x = 2
x**3

Almost all code examples in the docs are run (and the output saved) during the
doc build. This approach means that code examples will always be up to date,
10 changes: 5 additions & 5 deletions doc/get-help/faq.rst
@@ -3,8 +3,8 @@
Frequently Asked Questions
==========================

.. ipython:: python
:suppress:
.. jupyter-execute::
:hide-code:

import numpy as np
import pandas as pd
@@ -101,22 +101,22 @@ Unfortunately, this means we sometimes have to explicitly cast our results from
xarray when using them in other libraries. As an illustration, the following
code fragment

.. ipython:: python
.. jupyter-execute::

arr = xr.DataArray([1, 2, 3])
pd.Series({"x": arr[0], "mean": arr.mean(), "std": arr.std()})

does not yield the numeric pandas Series we expected. We need to specify the type
conversion ourselves:

.. ipython:: python
.. jupyter-execute::

pd.Series({"x": arr[0], "mean": arr.mean(), "std": arr.std()}, dtype=float)

Alternatively, we could use the ``item`` method or the ``float`` constructor to
convert values one at a time

.. ipython:: python
.. jupyter-execute::

pd.Series({"x": arr[0].item(), "mean": float(arr.mean())})
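
To see the dtype difference without building the docs, here is a standalone sketch of the same conversion. It uses only NumPy and pandas; the 0-d NumPy array is an illustrative stand-in for the 0-d ``DataArray`` values returned by ``arr[0]`` and ``arr.mean()``:

```python
import numpy as np
import pandas as pd

# A 0-d NumPy array plays the role of the 0-d DataArray that arr[0]
# or arr.mean() returns (illustrative stand-in; no xarray required).
scalar = np.array(3.5)

# Unwrapping with .item() (or float()) yields a plain Python float,
# so pandas can infer a numeric dtype for the resulting Series.
s = pd.Series({"x": scalar.item(), "doubled": float(scalar) * 2})
```

Without the explicit ``.item()``/``float()`` calls, pandas would be handed array objects instead of plain scalars.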

9 changes: 8 additions & 1 deletion doc/internals/duck-arrays-integration.rst
@@ -70,18 +70,25 @@ To avoid duplicated information, this method must omit information about the shape and
:term:`dtype`. For example, the string representation of a ``dask`` array or a
``sparse`` matrix would be:

.. ipython:: python
.. jupyter-execute::

import dask.array as da
import xarray as xr
import numpy as np
import sparse

.. jupyter-execute::

a = da.linspace(0, 1, 20, chunks=2)
a

.. jupyter-execute::

b = np.eye(10)
b[[5, 7, 3, 0], [6, 8, 2, 9]] = 2
b = sparse.COO.from_numpy(b)
b

.. jupyter-execute::

xr.Dataset(dict(a=("x", a), b=(("y", "z"), b)))
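
The contract is easiest to see on a toy duck array. Below is a minimal pure-Python sketch (``MyDuckArray`` and its ``chunks`` field are hypothetical, not an xarray API): the inline repr omits shape and dtype, because xarray prints those itself, and respects the ``max_width`` budget:

```python
import numpy as np

class MyDuckArray:
    """Toy duck array; only the inline-repr contract is shown here."""

    def __init__(self, data, chunks):
        self._data = np.asarray(data)
        self.chunks = chunks

    @property
    def shape(self):
        return self._data.shape

    @property
    def dtype(self):
        return self._data.dtype

    def _repr_inline_(self, max_width):
        # xarray prints shape and dtype itself, so omit them here and
        # keep the summary within max_width characters.
        summary = f"MyDuckArray(chunks={self.chunks})"
        if len(summary) <= max_width:
            return summary
        return summary[: max_width - 3] + "..."

arr = MyDuckArray(np.zeros((4, 4)), chunks=(2, 2))
```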
14 changes: 9 additions & 5 deletions doc/internals/extending-xarray.rst
@@ -4,10 +4,11 @@
Extending xarray using accessors
================================

.. ipython:: python
:suppress:
.. jupyter-execute::
:hide-code:

import xarray as xr
import numpy as np


Xarray is designed as a general purpose library and hence tries to avoid
@@ -89,15 +90,18 @@ reasons:

Back in an interactive IPython session, we can use these properties:

.. ipython:: python
:suppress:
.. jupyter-execute::
:hide-code:

exec(open("examples/_code/accessor_example.py").read())

.. ipython:: python
.. jupyter-execute::

ds = xr.Dataset({"longitude": np.linspace(0, 10), "latitude": np.linspace(0, 20)})
ds.geo.center

.. jupyter-execute::

ds.geo.plot()

The intent here is that libraries that extend xarray could add such an accessor
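
Under the hood, accessor registration is essentially a caching non-data descriptor. Here is a simplified pure-Python sketch of that mechanism (``register_accessor``, the toy ``Dataset``, and ``GeoAccessor`` are illustrative names, not xarray's actual internals):

```python
class _CachedAccessor:
    """Descriptor that builds the accessor on first access and caches it."""

    def __init__(self, name, accessor_cls):
        self._name = name
        self._accessor_cls = accessor_cls

    def __get__(self, obj, cls):
        if obj is None:  # accessed on the class, not an instance
            return self._accessor_cls
        accessor = self._accessor_cls(obj)
        # Cache in the instance dict; later lookups bypass the descriptor,
        # so accessor state survives between attribute accesses.
        obj.__dict__[self._name] = accessor
        return accessor


def register_accessor(name, cls):
    def decorator(accessor_cls):
        setattr(cls, name, _CachedAccessor(name, accessor_cls))
        return accessor_cls
    return decorator


class Dataset:  # toy stand-in for xr.Dataset
    pass


@register_accessor("geo", Dataset)
class GeoAccessor:
    def __init__(self, obj):
        self._obj = obj
```

Caching on the instance is why accessor state persists between attribute accesses on the same object.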
65 changes: 45 additions & 20 deletions doc/internals/how-to-add-new-backend.rst
@@ -221,21 +221,27 @@ performs the inverse transformation.

The following example shows how to use the coder's ``decode`` method:

.. ipython:: python
:suppress:
.. jupyter-execute::
:hide-code:

import xarray as xr
import numpy as np

.. ipython:: python
.. jupyter-execute::

var = xr.Variable(
dims=("x",), data=np.arange(10.0), attrs={"scale_factor": 10, "add_offset": 2}
)
var

.. jupyter-execute::

coder = xr.coding.variables.CFScaleOffsetCoder()
decoded_var = coder.decode(var)
decoded_var

.. jupyter-execute::

decoded_var.encoding
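
Conceptually, the decode step applies the CF unpacking formula ``decoded = data * scale_factor + add_offset`` and moves the packing attributes into ``encoding`` so the variable can be re-packed on write. A hedged NumPy-only sketch (``decode_scale_offset`` is an illustrative helper, not the real ``CFScaleOffsetCoder``):

```python
import numpy as np

def decode_scale_offset(data, attrs):
    """Toy CF scale/offset decode: unpack values, move attrs to encoding."""
    attrs = dict(attrs)
    encoding = {}
    scale = attrs.pop("scale_factor", 1)
    offset = attrs.pop("add_offset", 0)
    if scale != 1 or offset != 0:
        # Remember the packing parameters so the variable can be
        # re-encoded when written back to disk.
        encoding["scale_factor"] = scale
        encoding["add_offset"] = offset
        data = data * scale + offset
    return data, attrs, encoding

decoded, attrs, encoding = decode_scale_offset(
    np.arange(10.0), {"scale_factor": 10, "add_offset": 2}
)
```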

Some of the transformations can be common to multiple backends, so before
@@ -432,36 +438,55 @@ In ``BASIC`` indexing, numbers and slices are supported.

Example:

.. ipython::
:verbatim:
.. jupyter-input::

# () shall return the full array
backend_array._raw_indexing_method(())

.. jupyter-output::

array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])

.. jupyter-input::

# shall support integers
backend_array._raw_indexing_method(1, 1)

In [1]: # () shall return the full array
...: backend_array._raw_indexing_method(())
Out[1]: array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])
.. jupyter-output::

In [2]: # shall support integers
...: backend_array._raw_indexing_method(1, 1)
Out[2]: 5
5

In [3]: # shall support slices
...: backend_array._raw_indexing_method(slice(0, 3), slice(2, 4))
Out[3]: array([[2, 3], [6, 7], [10, 11]])
.. jupyter-input::

# shall support slices
backend_array._raw_indexing_method(slice(0, 3), slice(2, 4))

.. jupyter-output::

array([[2, 3], [6, 7], [10, 11]])

**OUTER**

The ``OUTER`` indexing shall support numbers and slices and, in addition,
lists of integers. Outer indexing is equivalent to combining multiple
input lists with ``itertools.product()``:

.. ipython::
:verbatim:
.. jupyter-input::

backend_array._raw_indexing_method([0, 1], [0, 1, 2])

In [1]: backend_array._raw_indexing_method([0, 1], [0, 1, 2])
Out[1]: array([[0, 1, 2], [4, 5, 6]])
.. jupyter-output::

array([[0, 1, 2], [4, 5, 6]])

.. jupyter-input::

# shall support integers
In [2]: backend_array._raw_indexing_method(1, 1)
Out[2]: 5
backend_array._raw_indexing_method(1, 1)

.. jupyter-output::

5
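
A NumPy-only sketch of a ``_raw_indexing_method`` with ``OUTER`` support may help clarify the contract (the function name and in-memory ``data`` array are illustrative; a real backend would read from its file format instead):

```python
import numpy as np

data = np.arange(12).reshape(3, 4)  # stands in for the on-disk array

def raw_indexing_method(*key):
    """Toy OUTER indexing: integers, slices, and lists of integers."""
    expanded = []
    for k, size in zip(key, data.shape):
        if isinstance(k, slice):
            expanded.append(np.arange(size)[k])
        elif isinstance(k, int):
            expanded.append(np.array([k]))
        else:
            expanded.append(np.asarray(k))
    # np.ix_ combines the per-dimension indexers like itertools.product
    result = data[np.ix_(*expanded)] if key else data
    # Integer indexers drop their dimension, as plain NumPy indexing does
    int_axes = tuple(i for i, k in enumerate(key) if isinstance(k, int))
    return result.squeeze(axis=int_axes) if int_axes else result
```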


**OUTER_1VECTOR**
20 changes: 10 additions & 10 deletions doc/internals/internal-design.rst
@@ -1,12 +1,12 @@
.. ipython:: python
:suppress:
.. jupyter-execute::
:hide-code:

import numpy as np
import pandas as pd
import xarray as xr

np.random.seed(123456)
np.set_printoptions(threshold=20)
np.set_printoptions(threshold=10, edgeitems=2)

.. _internal design:

@@ -150,7 +150,7 @@ Lazy Loading
If we open a ``Variable`` object from disk using :py:func:`~xarray.open_dataset` we can see that the actual values of
the array wrapped by the data variable are not displayed.

.. ipython:: python
.. jupyter-execute::

da = xr.tutorial.open_dataset("air_temperature")["air"]
var = da.variable
@@ -162,7 +162,7 @@ This is because the values have not yet been loaded.
If we look at the private attribute :py:meth:`~xarray.Variable._data` containing the underlying array object, we see
something interesting:

.. ipython:: python
.. jupyter-execute::

var._data

@@ -171,13 +171,13 @@ but provide important functionality.

Calling the public :py:attr:`~xarray.Variable.data` property loads the underlying array into memory.

.. ipython:: python
.. jupyter-execute::

var.data

This array is now cached, which we can see by accessing the private attribute again:

.. ipython:: python
.. jupyter-execute::

var._data

@@ -189,22 +189,22 @@ subsequent analysis, by deferring loading data until after indexing is performed

Let's open the data from disk again.

.. ipython:: python
.. jupyter-execute::

da = xr.tutorial.open_dataset("air_temperature")["air"]
var = da.variable

Now, notice how even after subsetting, the data still does not get loaded:

.. ipython:: python
.. jupyter-execute::

var.isel(time=0)

The shape has changed, but the values are still not shown.

Looking at the private attribute again shows how this indexing information was propagated via the hidden lazy indexing classes:

.. ipython:: python
.. jupyter-execute::

var.isel(time=0)._data
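
The record-now, read-later mechanism can be sketched in a few lines of pure Python (``LazilyIndexedArray`` here is a toy, far simpler than xarray's real lazy indexing classes):

```python
import numpy as np

class LazilyIndexedArray:
    """Toy lazy wrapper: record indexers now, read data only on demand."""

    def __init__(self, load, keys=()):
        self._load = load  # callable that reads the full array "from disk"
        self._keys = keys  # indexing operations recorded so far

    def __getitem__(self, key):
        # No I/O here: just remember the indexer
        return LazilyIndexedArray(self._load, self._keys + (key,))

    def get_values(self):
        # Only now is the data read; then the recorded indexers are replayed
        values = self._load()
        for key in self._keys:
            values = values[key]
        return values

reads = []

def load():
    reads.append(1)  # count how often the "file" is actually read
    return np.arange(12).reshape(3, 4)

lazy = LazilyIndexedArray(load)
subset = lazy[0]  # indexing triggers no read yet
```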
