diff --git a/README-development.md b/README-development.md
index fdecd7596..dd8b5d339 100644
--- a/README-development.md
+++ b/README-development.md
@@ -4,7 +4,7 @@
 The Oracle Accelerated Data Science (ADS) SDK used by data scientists and
 analysts for data exploration and experimental machine learning to
 democratize machine learning and analytics by providing easy-to-use,
-performant, and user friendly tools that
+performant, and user-friendly tools that
 brings together the best of data science practices.
 
 The ADS SDK helps you connect to different data sources, perform exploratory data analysis,
@@ -176,7 +176,7 @@ pip install -r test-requirements.txt
 ```
 
 ### Step 2: Create local .env files
-Running the local JuypterLab server requires setting OCI authentication, proxy, and OCI namespace parameters. Adapt this .env file with your specific OCI profile and OCIDs to set these variables.
+Running the local JupyterLab server requires setting OCI authentication, proxy, and OCI namespace parameters. Adapt this .env file with your specific OCI profile and OCIDs to set these variables.
 
 ```
 CONDA_BUCKET_NS="your_conda_bucket"
diff --git a/docs/source/user_guide/configuration/configuration.rst b/docs/source/user_guide/configuration/configuration.rst
index e72deb398..be238287a 100644
--- a/docs/source/user_guide/configuration/configuration.rst
+++ b/docs/source/user_guide/configuration/configuration.rst
@@ -296,7 +296,7 @@
 encryption keys. Master encryption keys can be generated internally by
 the Vault service or imported to the service from an external source.
 Once a master
-encryption key has been created, the Oracle Cloud Infrastruture API can
+encryption key has been created, the Oracle Cloud Infrastructure API can
 be used to generate data encryption keys that the Vault service returns
 to you. by default, a wrapping key is included with each vault. A
 wrapping key is a 4096-bit asymmetric encryption key pair based on the
@@ -673,7 +673,7 @@ prints it. This shows that the password was actually updated.
         wait_for_states=[oci.vault.models.Secret.LIFECYCLE_STATE_ACTIVE]).data
 
     # The secret OCID does not change.
-    print("Orginal Secret OCID: {}".format(secret_id))
+    print("Original Secret OCID: {}".format(secret_id))
    print("Updated Secret OCID: {}".format(secret_update.id))
 
     ### Read a secret's value.
@@ -685,7 +685,7 @@ prints it. This shows that the password was actually updated.
 
 .. parsed-literal::
 
-    Orginal Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
+    Original Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
     Updated Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
     {'database': 'datamart', 'username': 'admin', 'password': 'UpdatedPassword'}
 
diff --git a/docs/source/user_guide/configuration/vault.rst b/docs/source/user_guide/configuration/vault.rst
index 3b21518e0..71549bc01 100644
--- a/docs/source/user_guide/configuration/vault.rst
+++ b/docs/source/user_guide/configuration/vault.rst
@@ -239,7 +239,7 @@ also retrieves the updated secret, converts it into a dictionary, and prints it.
         wait_for_states=[oci.vault.models.Secret.LIFECYCLE_STATE_ACTIVE]).data
 
     # The secret OCID does not change.
-    print("Orginal Secret OCID: {}".format(secret_id))
+    print("Original Secret OCID: {}".format(secret_id))
     print("Updated Secret OCID: {}".format(secret_update.id))
 
     ### Read a secret's value.
@@ -251,7 +251,7 @@ also retrieves the updated secret, converts it into a dictionary, and prints it.
 
 .. parsed-literal::
 
-    Orginal Secret OCID: ocid1.vaultsecret..
+    Original Secret OCID: ocid1.vaultsecret..
     Updated Secret OCID: ocid1.vaultsecret..
     {'database': 'datamart', 'username': 'admin', 'password': 'UpdatedPassword'}
 
diff --git a/docs/source/user_guide/data_flow/dataflow.rst b/docs/source/user_guide/data_flow/dataflow.rst
index 0ba868e49..6861f7f14 100644
--- a/docs/source/user_guide/data_flow/dataflow.rst
+++ b/docs/source/user_guide/data_flow/dataflow.rst
@@ -63,7 +63,7 @@ In the preparation stage, you prepare the configuration object necessary to crea
 * ``pyspark_file_path``: The local path to your ``PySpark`` script.
 * ``script_bucket``: The bucket used to read/write the ``PySpark`` script in Object Storage.
 
-ADS checks that the bucket exists, and that you can write to it from your notebook sesssion. Optionally, you can change values for these parameters:
+ADS checks that the bucket exists, and that you can write to it from your notebook session. Optionally, you can change values for these parameters:
 
 * ``compartment_id``: The OCID of the compartment to create a Data Flow application. If it's not provided, the same compartment as your dataflow object is used.
 * ``driver_shape``: The driver shape used to create the application. The default value is ``"VM.Standard2.4"``.
diff --git a/docs/source/user_guide/data_flow/legacy_dataflow.rst b/docs/source/user_guide/data_flow/legacy_dataflow.rst
index eb6b3c8e6..0521deddf 100644
--- a/docs/source/user_guide/data_flow/legacy_dataflow.rst
+++ b/docs/source/user_guide/data_flow/legacy_dataflow.rst
@@ -68,7 +68,7 @@ In the preparation stage, you prepare the configuration object necessary to crea
 * ``pyspark_file_path``: The local path to your ``PySpark`` script.
 * ``script_bucket``: The bucket used to read/write the ``PySpark`` script in Object Storage.
 
-ADS checks that the bucket exists, and that you can write to it from your notebook sesssion. Optionally, you can change values for these parameters:
+ADS checks that the bucket exists, and that you can write to it from your notebook session. Optionally, you can change values for these parameters:
 
 * ``compartment_id``: The OCID of the compartment to create a application. If it's not provided, the same compartment as your dataflow object is used.
 * ``driver_shape``: The driver shape used to create the application. The default value is ``"VM.Standard2.4"``.
diff --git a/docs/source/user_guide/operators/anomaly_detection_operator/use_cases.rst b/docs/source/user_guide/operators/anomaly_detection_operator/use_cases.rst
index eaaa0199d..d54555a26 100644
--- a/docs/source/user_guide/operators/anomaly_detection_operator/use_cases.rst
+++ b/docs/source/user_guide/operators/anomaly_detection_operator/use_cases.rst
@@ -14,7 +14,7 @@ As a low-code extensible framework, operators enable a wide range of use cases.
 **Which Model is Right for You?**
 
 * Autots is a very comprehensive framework for time series data, winning the M6 benchmark. Parameters can be sent directly to AutoTS' AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
-* AutoMLX is a propreitary modeling framework developed by Oracle's Labs team and distributed through OCI Data Science. Parameters can be sent directly to AutoMLX's AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
+* AutoMLX is a proprietary modeling framework developed by the Oracle Labs team and distributed through OCI Data Science. Parameters can be sent directly to AutoMLX's AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
 * Together these 2 frameworks train and tune more than 25 models, and deliver the est results.
 
 
@@ -39,9 +39,9 @@ As a low-code extensible framework, operators enable a wide range of use cases.
 
 **Feature Engineering**
 
-* The Operator will perform most feature engineering on your behalf, such as infering holidays, day of week,
+* The Operator will perform most feature engineering on your behalf, such as inferring holidays, day of week,
 
 **Latency**
 
-* The Operator is effectively a container distributed through the OCI Data Science platform. When deployed through Jobs or Model Deployment, customers can scale up the compute shape, memory size, and load balancer to make the prediciton progressively faster. Please consult an OCI Data Science Platform expert for more specifc advice.
+* The Operator is effectively a container distributed through the OCI Data Science platform. When deployed through Jobs or Model Deployment, customers can scale up the compute shape, memory size, and load balancer to make the prediction progressively faster. Please consult an OCI Data Science Platform expert for more specific advice.
 
diff --git a/docs/source/user_guide/quick_start/quick_start.rst b/docs/source/user_guide/quick_start/quick_start.rst
index e1ac2f8a7..30755db78 100644
--- a/docs/source/user_guide/quick_start/quick_start.rst
+++ b/docs/source/user_guide/quick_start/quick_start.rst
@@ -10,5 +10,5 @@ Quick Start
 * :doc:`Evaluate Trained Models<../model_training/model_evaluation/quick_start>`
 * :doc:`Register, Manage, and Deploy Models<../model_registration/quick_start>`
 * :doc:`Store and Retrieve your data source credentials<../secrets/quick_start>`
-* :doc:`Conect to existing OCI Big Data Service<../big_data_service/quick_start>`
+* :doc:`Connect to existing OCI Big Data Service<../big_data_service/quick_start>`
 
diff --git a/tests/unitary/with_extras/model/index.json b/tests/unitary/with_extras/model/index.json
index 82f8c7062..013caa9e6 100644
--- a/tests/unitary/with_extras/model/index.json
+++ b/tests/unitary/with_extras/model/index.json
@@ -289,7 +289,7 @@
   {
     "arch_type": "CPU",
     "create_date": "Sat, Feb 12, 2022, 05:04:46 UTC",
-    "description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the notebook example **getting-started.ipynb** in the **Notebook Examples launcher button**.\n",
+    "description": "This environment is designed to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the notebook example **getting-started.ipynb** in the **Notebook Examples launcher button**.\n",
     "libraries": [
       "onnx (v1.10.2)",
       "onnxconverter-common (v1.9.0)",
@@ -315,7 +315,7 @@
   {
     "arch_type": "CPU",
     "create_date": "Mon, Jun 06, 2022, 20:51:19 UTC",
-    "description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
+    "description": "This environment is designed to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
     "libraries": [
      "onnx (v1.10.2)",
       "onnxconverter-common (v1.9.0)",
@@ -341,7 +341,7 @@
   {
     "arch_type": "CPU",
     "create_date": "Mon, Jun 06, 2022, 20:52:30 UTC",
-    "description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
+    "description": "This environment is designed to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
     "libraries": [
       "onnx (v1.10.2)",
       "onnxconverter-common (v1.9.0)",