
misc: fix spelling in json,md,rst files #1146


Merged: 1 commit merged on Apr 7, 2025
4 changes: 2 additions & 2 deletions README-development.md
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@
The Oracle Accelerated Data Science (ADS) SDK used by data scientists and analysts for
data exploration and experimental machine learning to democratize machine learning and
analytics by providing easy-to-use,
performant, and user friendly tools that
performant, and user-friendly tools that
bring together the best of data science practices.

The ADS SDK helps you connect to different data sources, perform exploratory data analysis,
@@ -176,7 +176,7 @@ pip install -r test-requirements.txt
```

### Step 2: Create local .env files
Running the local JuypterLab server requires setting OCI authentication, proxy, and OCI namespace parameters. Adapt this .env file with your specific OCI profile and OCIDs to set these variables.
Running the local JupyterLab server requires setting OCI authentication, proxy, and OCI namespace parameters. Adapt this .env file with your specific OCI profile and OCIDs to set these variables.

```
CONDA_BUCKET_NS="your_conda_bucket"
6 changes: 3 additions & 3 deletions docs/source/user_guide/configuration/configuration.rst
@@ -296,7 +296,7 @@ encryption keys.

Master encryption keys can be generated internally by the Vault service
or imported to the service from an external source. Once a master
encryption key has been created, the Oracle Cloud Infrastruture API can
encryption key has been created, the Oracle Cloud Infrastructure API can
be used to generate data encryption keys that the Vault service returns
to you. By default, a wrapping key is included with each vault. A
wrapping key is a 4096-bit asymmetric encryption key pair based on the
@@ -673,7 +673,7 @@ prints it. This shows that the password was actually updated.
wait_for_states=[oci.vault.models.Secret.LIFECYCLE_STATE_ACTIVE]).data

# The secret OCID does not change.
print("Orginal Secret OCID: {}".format(secret_id))
print("Original Secret OCID: {}".format(secret_id))
print("Updated Secret OCID: {}".format(secret_update.id))

### Read a secret's value.
@@ -685,7 +685,7 @@ prints it. This shows that the password was actually updated.

.. parsed-literal::

Orginal Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
Original Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
Updated Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
{'database': 'datamart', 'username': 'admin', 'password': 'UpdatedPassword'}

4 changes: 2 additions & 2 deletions docs/source/user_guide/configuration/vault.rst
@@ -239,7 +239,7 @@ also retrieves the updated secret, converts it into a dictionary, and prints it.
wait_for_states=[oci.vault.models.Secret.LIFECYCLE_STATE_ACTIVE]).data

# The secret OCID does not change.
print("Orginal Secret OCID: {}".format(secret_id))
print("Original Secret OCID: {}".format(secret_id))
print("Updated Secret OCID: {}".format(secret_update.id))

### Read a secret's value.
@@ -251,7 +251,7 @@ also retrieves the updated secret, converts it into a dictionary, and prints it.

.. parsed-literal::

Orginal Secret OCID: ocid1.vaultsecret..<unique_ID>
Original Secret OCID: ocid1.vaultsecret..<unique_ID>
Updated Secret OCID: ocid1.vaultsecret..<unique_ID>
{'database': 'datamart', 'username': 'admin', 'password': 'UpdatedPassword'}
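
The "read a secret's value" step shown above works because Vault stores secret content base64-encoded. A minimal sketch of that encode/decode round trip, using only the standard library; the OCI service calls themselves are omitted, and `stored_content` is a stand-in for what the secret bundle's `content` field would hold:

```python
import base64
import json

# A credentials dict like the one in the docs' output above.
credential = {"database": "datamart", "username": "admin", "password": "UpdatedPassword"}

# On create/update, the dict is serialized to JSON and base64-encoded
# before being sent to the Vault service.
stored_content = base64.b64encode(json.dumps(credential).encode("ascii")).decode("ascii")

# On read, the retrieved content is decoded back into the original dict.
recovered = json.loads(base64.b64decode(stored_content.encode("ascii")).decode("ascii"))
print(recovered)
```

Running this prints the same dict shown in the parsed-literal output above, illustrating why the retrieved secret round-trips unchanged.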

2 changes: 1 addition & 1 deletion docs/source/user_guide/data_flow/dataflow.rst
@@ -63,7 +63,7 @@ In the preparation stage, you prepare the configuration object necessary to crea
* ``pyspark_file_path``: The local path to your ``PySpark`` script.
* ``script_bucket``: The bucket used to read/write the ``PySpark`` script in Object Storage.

ADS checks that the bucket exists, and that you can write to it from your notebook sesssion. Optionally, you can change values for these parameters:
ADS checks that the bucket exists, and that you can write to it from your notebook session. Optionally, you can change values for these parameters:

* ``compartment_id``: The OCID of the compartment to create a Data Flow application. If it's not provided, the same compartment as your dataflow object is used.
* ``driver_shape``: The driver shape used to create the application. The default value is ``"VM.Standard2.4"``.
2 changes: 1 addition & 1 deletion docs/source/user_guide/data_flow/legacy_dataflow.rst
@@ -68,7 +68,7 @@ In the preparation stage, you prepare the configuration object necessary to crea
* ``pyspark_file_path``: The local path to your ``PySpark`` script.
* ``script_bucket``: The bucket used to read/write the ``PySpark`` script in Object Storage.

ADS checks that the bucket exists, and that you can write to it from your notebook sesssion. Optionally, you can change values for these parameters:
ADS checks that the bucket exists, and that you can write to it from your notebook session. Optionally, you can change values for these parameters:

* ``compartment_id``: The OCID of the compartment to create an application. If it's not provided, the same compartment as your dataflow object is used.
* ``driver_shape``: The driver shape used to create the application. The default value is ``"VM.Standard2.4"``.
@@ -14,7 +14,7 @@ As a low-code extensible framework, operators enable a wide range of use cases.
**Which Model is Right for You?**

* Autots is a very comprehensive framework for time series data, winning the M6 benchmark. Parameters can be sent directly to AutoTS' AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
* AutoMLX is a propreitary modeling framework developed by Oracle's Labs team and distributed through OCI Data Science. Parameters can be sent directly to AutoMLX's AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
* AutoMLX is a proprietary modeling framework developed by Oracle's Labs team and distributed through OCI Data Science. Parameters can be sent directly to AutoMLX's AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
* Together, these two frameworks train and tune more than 25 models and deliver the best results.


@@ -39,9 +39,9 @@ As a low-code extensible framework, operators enable a wide range of use cases.

**Feature Engineering**

* The Operator will perform most feature engineering on your behalf, such as infering holidays, day of week,
* The Operator will perform most feature engineering on your behalf, such as inferring holidays, day of week,


**Latency**

* The Operator is effectively a container distributed through the OCI Data Science platform. When deployed through Jobs or Model Deployment, customers can scale up the compute shape, memory size, and load balancer to make the prediciton progressively faster. Please consult an OCI Data Science Platform expert for more specifc advice.
* The Operator is effectively a container distributed through the OCI Data Science platform. When deployed through Jobs or Model Deployment, customers can scale up the compute shape, memory size, and load balancer to make the prediction progressively faster. Please consult an OCI Data Science Platform expert for more specific advice.
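
The ``model_kwargs`` passthrough described above can be sketched as follows. ``AnomalyDetector`` here is a hypothetical stand-in, since the real AutoTS and AutoMLX class signatures depend on the installed version, and the keys shown are illustrative rather than documented parameters:

```python
# Sketch of how an operator might forward the yaml's model_kwargs section
# to a framework's detector class. AnomalyDetector is a hypothetical
# stand-in for the version-dependent AutoTS / AutoMLX classes.

class AnomalyDetector:
    def __init__(self, **kwargs):
        # Real framework classes consume these as tuning parameters.
        self.params = kwargs

# Equivalent of a parsed yaml spec such as:
#   model: autots
#   model_kwargs:
#     method: IQR
#     contamination: 0.05
spec = {"model": "autots", "model_kwargs": {"method": "IQR", "contamination": 0.05}}

# The operator unpacks model_kwargs directly into the detector,
# so any key the framework accepts can be set from the yaml file.
detector = AnomalyDetector(**spec["model_kwargs"])
print(detector.params)
```

This is why the docs above say parameters can be "sent directly" to the framework classes: the yaml section is unpacked as keyword arguments without the Operator interpreting them.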
2 changes: 1 addition & 1 deletion docs/source/user_guide/quick_start/quick_start.rst
@@ -10,5 +10,5 @@ Quick Start
* :doc:`Evaluate Trained Models<../model_training/model_evaluation/quick_start>`
* :doc:`Register, Manage, and Deploy Models<../model_registration/quick_start>`
* :doc:`Store and Retrieve your data source credentials<../secrets/quick_start>`
* :doc:`Conect to existing OCI Big Data Service<../big_data_service/quick_start>`
* :doc:`Connect to existing OCI Big Data Service<../big_data_service/quick_start>`

6 changes: 3 additions & 3 deletions tests/unitary/with_extras/model/index.json
@@ -289,7 +289,7 @@
{
"arch_type": "CPU",
"create_date": "Sat, Feb 12, 2022, 05:04:46 UTC",
"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the notebook example **getting-started.ipynb** in the **Notebook Examples launcher button**.\n",
"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the notebook example **getting-started.ipynb** in the **Notebook Examples launcher button**.\n",
"libraries": [
"onnx (v1.10.2)",
"onnxconverter-common (v1.9.0)",
@@ -315,7 +315,7 @@
{
"arch_type": "CPU",
"create_date": "Mon, Jun 06, 2022, 20:51:19 UTC",
"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
"libraries": [
"onnx (v1.10.2)",
"onnxconverter-common (v1.9.0)",
@@ -341,7 +341,7 @@
{
"arch_type": "CPU",
"create_date": "Mon, Jun 06, 2022, 20:52:30 UTC",
"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
"libraries": [
"onnx (v1.10.2)",
"onnxconverter-common (v1.9.0)",