SparseZoo is a constantly-growing repository of highly sparse and sparse-quantized models with matching sparsification recipes for neural networks.
It simplifies and accelerates your time-to-value in building performant deep learning models with a collection of inference-optimized models and recipes to prototype from.
Available via API and hosted in the cloud, the SparseZoo contains both baseline models and models sparsified to different degrees of inference performance vs. baseline loss recovery.
Recipe-driven approaches built around sparsification algorithms allow you to take the models as given, transfer-learn from the models onto private datasets, or transfer the recipes to your architectures.
The [GitHub repository](https://github.com/neuralmagic/sparsezoo) contains the Python API code to handle the connection and authentication to the cloud.

## Sparsification

Sparsification is the process of taking a trained deep learning model and removing redundant information from the over-precise and over-parameterized network, resulting in a faster and smaller model.
Techniques for sparsification are all-encompassing, including everything from inducing sparsity using [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to enabling naturally occurring sparsity using [activation sparsity](http://proceedings.mlr.press/v119/kurtz20a.html) or [winograd/FFT](https://arxiv.org/abs/1509.09308).
When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics.
For example, pruning plus quantization can give noticeable improvements in performance while recovering to nearly the same baseline accuracy.
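To make these ideas concrete, the short NumPy sketch below is illustrative only (it is not SparseZoo or SparseML code): it applies unstructured magnitude pruning and a simple symmetric int8 quantization to a toy weight matrix, whereas real workflows drive these steps through recipes.

```python
# Illustrative sketch only -- not SparseZoo/SparseML code. It shows the core idea
# behind pruning (zeroing low-magnitude weights) and quantization (storing the
# remaining weights in int8) on a toy weight matrix.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)

# Unstructured magnitude pruning: zero out the ~85% of weights closest to zero.
sparsity = 0.85
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)
print(f"sparsity: {np.mean(pruned == 0):.2%}")

# Simple symmetric int8 quantization: map the remaining float weights to int8.
scale = np.max(np.abs(pruned)) / 127.0
quantized = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale
print(f"max quantization error: {np.max(np.abs(dequantized - pruned)):.4f}")
```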
The Deep Sparse product suite builds on top of sparsification, enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches. Recipes encode the directions for how to sparsify a model into a simple, easily editable format.
A number of pre-trained baseline and sparsified models across domains and sub-domains are available and constantly being added.
For an up-to-date list, please consult the [available models listing](https://github.com/neuralmagic/sparsezoo/blob/main/docs/source/models.md).
- Download a sparsification recipe and sparsified model from the [SparseZoo](https://github.com/neuralmagic/sparsezoo); a minimal Python sketch of this step follows the list.
- Alternatively, create a recipe for your model using [Sparsify](https://github.com/neuralmagic/sparsify).
- Apply your recipe with only a few lines of code using [SparseML](https://github.com/neuralmagic/sparseml).
- Finally, for GPU-level performance on CPUs, deploy your sparse-quantized model with the [DeepSparse Engine](https://github.com/neuralmagic/deepsparse).
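
As a rough illustration of the first step above, the sketch below fetches a sparsified model and its recipe through the SparseZoo Python API. The `Zoo` helper, its method names, and the model stub shown are assumptions about this release's interface rather than a guaranteed API; consult the [SparseZoo documentation](https://docs.neuralmagic.com/sparsezoo/) for the authoritative calls.

```python
# Hypothetical usage sketch -- the Zoo helper, the method names, and the model
# stub below are assumptions about this release's API, not a guaranteed interface.
from sparsezoo import Zoo

# A stub identifies a model by domain, architecture, framework, dataset,
# and sparsification level.
stub = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-moderate"

model = Zoo.load_model_from_stub(stub)
model.download()  # fetch the model files along with the matching sparsification recipe
```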
## Installation
This repository is tested on Python 3.6+ and Linux/Debian systems.
It is recommended to install in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep your system in order.
Official builds are hosted on PyPI. Install with pip using:

```bash
pip install sparsezoo
```

Then, if you would like to explore any of the [scripts](https://github.com/neuralmagic/sparsezoo/blob/main/scripts/) or [notebooks](https://github.com/neuralmagic/sparsezoo/blob/main/notebooks/),
clone the repository and install any additional dependencies as required.
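Once installed, the package can also be used to browse what the zoo holds. The snippet below is a hedged sketch: `Zoo.search_models` and its keyword arguments are assumptions about this release's API, so check the documentation linked under Learning More for the current interface.

```python
# Hypothetical sketch -- the search helper and its arguments are assumptions
# about this release's API, not a guaranteed interface.
from sparsezoo import Zoo

# Search the cloud-hosted zoo for image-classification models and print them.
models = Zoo.search_models(domain="cv", sub_domain="classification")
for model in models:
    print(model)
```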
## Resources
### Learning More
For a more in-depth read, check out [SparseZoo documentation](https://docs.neuralmagic.com/sparsezoo/).
Additionally, more information can be found via [GitHub Releases](https://github.com/neuralmagic/sparsezoo/releases).
### License
The project is licensed under the [Apache License Version 2.0](https://github.com/neuralmagic/sparsezoo/blob/main/LICENSE).
## Community
### Contribute
We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](https://github.com/neuralmagic/sparsezoo/blob/main/CONTRIBUTING.md).
### Join
For user help or questions about SparseZoo, sign up or log in to our **Deep Sparse Community** [Discourse Forum](https://discuss.neuralmagic.com/) and/or [Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
We are growing the community member by member and happy to see you there.
You can get the latest news, webinar and event invites, research papers, and other ML Performance tidbits by [subscribing](https://neuralmagic.com/subscribe/) to the Neural Magic community.
For more general questions about Neural Magic, please fill out this [form](http://neuralmagic.com/contact/).