
Commit bb299a7 (parent: 10b773a)

Bump to 5.5.1 and update CHANGELOG [run doc]

19 files changed (+138, -105 lines)


CHANGELOG

Lines changed: 33 additions & 0 deletions
@@ -1,3 +1,36 @@
+========
+5.5.1
+========
+----------------
+New Features & Enhancements
+----------------
+* `BertForMultipleChoice` Transformer Added. Enhanced BERT’s capabilities to handle multiple-choice tasks such as standardized test questions and survey or quiz automation.
+* Integrated New Tasks and Documentation:
+  * Added support and documentation for the following tasks:
+    * Automatic Speech Recognition
+    * Dependency Parsing
+    * Image Captioning
+    * Image Classification
+    * Landing Page
+    * Question Answering
+    * Summarization
+    * Table Question Answering
+    * Text Classification
+    * Text Generation
+    * Text Preprocessing
+    * Token Classification
+    * Translation
+    * Zero-Shot Classification
+    * Zero-Shot Image Classification
+* `PromptAssembler` Annotator Introduced. Introduced a new annotator that constructs prompts for LLMs using a chat template and a sequence of messages. Accepts an array of tuples with roles (“system”, “user”, “assistant”) and message texts. Utilizes llama.cpp as a backend for template parsing, supporting basic template applications.
+
+----------------
+Bug Fixes
+----------------
+* Resolved Pretrained Model Loading Issue on DBFS Systems.
+  * Fixed a bug where pretrained models were not found when running AutoGGUF model pipelines on Databricks due to incorrect path handling of gguf files.
+
+
 ========
 5.5.0
 ========
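The new `BertForMultipleChoice` annotator called out in the changelog scores candidate answers against a question. Below is a minimal usage sketch in the style of the library's other `XxxForYyy` annotators; the default pretrained model, the column names, and the convention of passing choices as one comma-delimited string are assumptions here, not confirmed by this commit:

```python
import sparknlp
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import BertForMultipleChoice
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Question in one column, comma-separated candidate choices in the other (assumed input format).
data = spark.createDataFrame(
    [("In Italy, pizza is usually eaten with:",
      "a fork and knife, the hands, chopsticks")],
    ["question", "context"])

document_assembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

multiple_choice = BertForMultipleChoice.pretrained() \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline(stages=[document_assembler, multiple_choice])
pipeline.fit(data).transform(data).select("answer.result").show(truncate=False)
```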

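Likewise, the `PromptAssembler` annotator described above turns a column of (role, text) message tuples into a single prompt string via a chat template. The following is a rough sketch under stated assumptions: the import location, the `setChatTemplate` parameter name, and the message-column schema are inferred from the changelog wording, and the template is a hypothetical Jinja-style one rather than a template shipped with a real model:

```python
import sparknlp
from sparknlp.base import PromptAssembler  # assumed import location
from pyspark.sql import Row

spark = sparknlp.start()

# Hypothetical llama.cpp/Jinja-style chat template; real GGUF models embed their own.
template = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['text'] }}\n"
    "{% endfor %}"
    "assistant: "
)

# One row holding an array of (role, text) messages, as the changelog describes.
messages = [Row(role="system", text="You are a helpful assistant."),
            Row(role="user", text="What is Spark NLP?")]
df = spark.createDataFrame([(messages,)], ["messages"])

prompt_assembler = PromptAssembler() \
    .setInputCol("messages") \
    .setOutputCol("prompt") \
    .setChatTemplate(template)  # assumed parameter name

prompt_assembler.transform(df).select("prompt").show(truncate=False)
```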
README.md

Lines changed: 7 additions & 7 deletions
@@ -63,7 +63,7 @@ $ java -version
 $ conda create -n sparknlp python=3.7 -y
 $ conda activate sparknlp
 # spark-nlp by default is based on pyspark 3.x
-$ pip install spark-nlp==5.5.0 pyspark==3.3.1
+$ pip install spark-nlp==5.5.1 pyspark==3.3.1
 ```
 
 In Python console or Jupyter `Python3` kernel:
@@ -129,7 +129,7 @@ For a quick example of using pipelines and models take a look at our official [d
 
 ### Apache Spark Support
 
-Spark NLP *5.5.0* has been built on top of Apache Spark 3.4 while fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, and 3.5.x
+Spark NLP *5.5.1* has been built on top of Apache Spark 3.4 while fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, and 3.5.x
 
 | Spark NLP | Apache Spark 3.5.x | Apache Spark 3.4.x | Apache Spark 3.3.x | Apache Spark 3.2.x | Apache Spark 3.1.x | Apache Spark 3.0.x | Apache Spark 2.4.x | Apache Spark 2.3.x |
 |-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
@@ -157,7 +157,7 @@ Find out more about 4.x `SparkNLP` versions in our official [documentation](http
 
 ### Databricks Support
 
-Spark NLP 5.5.0 has been tested and is compatible with the following runtimes:
+Spark NLP 5.5.1 has been tested and is compatible with the following runtimes:
 
 | **CPU** | **GPU** |
 |--------------------|--------------------|
@@ -174,7 +174,7 @@ We are compatible with older runtimes. For a full list check databricks support
 
 ### EMR Support
 
-Spark NLP 5.5.0 has been tested and is compatible with the following EMR releases:
+Spark NLP 5.5.1 has been tested and is compatible with the following EMR releases:
 
 | **EMR Release** |
 |--------------------|
@@ -205,7 +205,7 @@ deployed to Maven central. To add any of our packages as a dependency in your ap
 from our official documentation.
 
 If you are interested, there is a simple SBT project for Spark NLP to guide you on how to use it in your
-projects [Spark NLP SBT S5.5.0r](https://github.com/maziyarpanahi/spark-nlp-starter)
+projects [Spark NLP SBT S5.5.1r](https://github.com/maziyarpanahi/spark-nlp-starter)
 
 ### Python
 
@@ -250,7 +250,7 @@ In Spark NLP we can define S3 locations to:
 
 Please check [these instructions](https://sparknlp.org/docs/en/install#s3-integration) from our official documentation.
 
-## Document5.5.0
+## Document5.5.1
 
 ### Examples
 
@@ -283,7 +283,7 @@ the Spark NLP library:
 keywords = {Spark, Natural language processing, Deep learning, Tensorflow, Cluster},
 abstract = {Spark NLP is a Natural Language Processing (NLP) library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment. Spark NLP comes with 1100+ pretrained pipelines and models in more than 192+ languages. It supports nearly all the NLP tasks and modules that can be used seamlessly in a cluster. Downloaded more than 2.7 million times and experiencing 9x growth since January 2020, Spark NLP is used by 54% of healthcare organizations as the world’s most widely used NLP library in the enterprise.}
 }
-}5.5.0
+}5.5.1
 ```
 
 ## Community support
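After installing the bumped package with the pip lines shown in this diff, the new version can be smoke-tested from Python. A minimal check using the library's standard entry points (`sparknlp.start()` and `sparknlp.version()` are existing public APIs; the printed value is what this bump should produce):

```python
import sparknlp

# Starts a local Spark session and pulls the matching spark-nlp jar from Maven.
spark = sparknlp.start()
print(sparknlp.version())  # should print 5.5.1 after this bump
```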

build.sbt

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ name := getPackageName(is_silicon, is_gpu, is_aarch64)
 
 organization := "com.johnsnowlabs.nlp"
 
-version := "5.5.0"
+version := "5.5.1"
 
 (ThisBuild / scalaVersion) := scalaVer
 
docs/_layouts/landing.html

Lines changed: 1 addition & 1 deletion
@@ -201,7 +201,7 @@ <h3 class="grey h3_title">{{ _section.title }}</h3>
 <div class="highlight-box">
 {% highlight bash %}
 # Using PyPI
-$ pip install spark-nlp==5.5.0
+$ pip install spark-nlp==5.5.1
 
 # Using Anaconda/Conda
 $ conda install -c johnsnowlabs spark-nlp

docs/en/advanced_settings.md

Lines changed: 3 additions & 3 deletions
@@ -52,7 +52,7 @@ spark = SparkSession.builder
     .config("spark.kryoserializer.buffer.max", "2000m")
     .config("spark.jsl.settings.pretrained.cache_folder", "sample_data/pretrained")
     .config("spark.jsl.settings.storage.cluster_tmp_dir", "sample_data/storage")
-    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.0")
+    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.1")
     .getOrCreate()
 ```
 
@@ -66,7 +66,7 @@ spark-shell \
   --conf spark.kryoserializer.buffer.max=2000M \
   --conf spark.jsl.settings.pretrained.cache_folder="sample_data/pretrained" \
   --conf spark.jsl.settings.storage.cluster_tmp_dir="sample_data/storage" \
-  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.0
+  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.1
 ```
 
 **pyspark:**
@@ -79,7 +79,7 @@ pyspark \
   --conf spark.kryoserializer.buffer.max=2000M \
   --conf spark.jsl.settings.pretrained.cache_folder="sample_data/pretrained" \
   --conf spark.jsl.settings.storage.cluster_tmp_dir="sample_data/storage" \
-  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.0
+  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.1
 ```
 
 **Databricks:**

docs/en/concepts.md

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ $ java -version
 $ conda create -n sparknlp python=3.7 -y
 $ conda activate sparknlp
 # spark-nlp by default is based on pyspark 3.x
-$ pip install spark-nlp==5.5.0 pyspark==3.3.1 jupyter
+$ pip install spark-nlp==5.5.1 pyspark==3.3.1 jupyter
 $ jupyter notebook
 ```
 
docs/en/examples.md

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ $ java -version
 # should be Java 8 (Oracle or OpenJDK)
 $ conda create -n sparknlp python=3.7 -y
 $ conda activate sparknlp
-$ pip install spark-nlp==5.5.0 pyspark==3.3.1
+$ pip install spark-nlp==5.5.1 pyspark==3.3.1
 ```
 
 </div><div class="h3-box" markdown="1">
@@ -40,7 +40,7 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi
 # -p is for pyspark
 # -s is for spark-nlp
 # by default they are set to the latest
-!bash colab.sh -p 3.2.3 -s 5.5.0
+!bash colab.sh -p 3.2.3 -s 5.5.1
 ```
 
 [Spark NLP quick start on Google Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp/blob/master/examples/python/quick_start_google_colab.ipynb) is a live demo on Google Colab that performs named entity recognitions and sentiment analysis by using Spark NLP pretrained pipelines.

docs/en/hardware_acceleration.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ Since the new Transformer models such as BERT for Word and Sentence embeddings a
 | DeBERTa Large | +477%(5.8x) |
 | Longformer Base | +52%(1.5x) |
 
-Spark NLP 5.5.0 is built with TensorFlow 2.7.1 and the following NVIDIA® software are only required for GPU support:
+Spark NLP 5.5.1 is built with TensorFlow 2.7.1 and the following NVIDIA® software are only required for GPU support:
 
 - NVIDIA® GPU drivers version 450.80.02 or higher
 - CUDA® Toolkit 11.2
