diff --git a/docs/manage/content-sharing/admin-mode.md b/docs/manage/content-sharing/admin-mode.md index 25fe7dc745..9ee6bf268a 100644 --- a/docs/manage/content-sharing/admin-mode.md +++ b/docs/manage/content-sharing/admin-mode.md @@ -7,37 +7,37 @@ description: Admin mode allows you to control the content for your organization import useBaseUrl from '@docusaurus/useBaseUrl'; -As a Content Administrator, you can assume a super user role within Sumo. When you need to manage the content for your organization, you can choose the Content Administrator role which will turn off your personal content in the Library and allow you to see the entire Sumo file tree.  +As a content administrator with the [Manage Content](/docs/manage/users-roles/roles/role-capabilities/#data-management) role capability, you can assume a super user role within Sumo Logic. When you need to manage the content for your organization, you can choose the content administrator role, which hides your personal content in the library and lets you see the entire Sumo Logic file tree. In this mode, you can migrate content from one location to another, as well as highlight important content in the Admin Recommended folder.  -## Switch to Admin Mode +## Switch to admin mode -As a Content Administrator,  you can switch to Admin mode at any time in order to move content from one folder to another for anyone in your organization. +As a content administrator, you can switch to admin mode at any time to move content from one folder to another for anyone in your organization. -To switch to Admin Mode: +To switch to admin mode: 1. Go to the Library. 1. Select **View as:** > **Content Administrator.**
Admin Mode -You will now see the whole file tree for your organization, as well as the **Admin Recommended** folder. +You will now see the whole file tree for your organization, as well as the Admin Recommended folder. ## Move important content to Admin Recommended -Important content can be dashboards that help new users get started, or common searches that your organization needs often. You can draw attention to this content by putting it into **Admin Recommended**, which appears at the top of the Library in the Left-Nav. +Important content can be dashboards that help new users get started, or common searches that your organization needs often. You can draw attention to this content by putting it into Admin Recommended, which appears at the top of the Library in the left navigation bar. -For example, you can content share an Audit dashboard at the top of the Library on the Left-nav with a particular role such as Administrators and move it into Admin Recommended. All Sumo Administrators will be able to see it there, but any user without that role, will not see the dashboard. +For example, you can share an audit dashboard with a particular role, such as Administrators, and move it into Admin Recommended at the top of the Library in the left navigation bar. All Sumo Logic users with the Administrators role will be able to see it there, but any user without that role will not see the dashboard. Admin Recommended To add a dashboard or search to Admin Recommended: -1. Select the Library Tab from the UI. -1. Toggle to Content Administrator mode. -1. A note loads on the Left-nav that says **Viewing as Content Administrator**. This is to help you remember why your Personal folder doesn't appear. -1. Make sure you've [shared the search](/docs/manage/content-sharing), dashboard, or folder with the role or users that you want to be able to access it. +1. Select the **Library** tab from the UI. +1. Toggle to **Content Administrator** mode. +1. A note appears in the left navigation bar that says **Viewing as Content Administrator**. This is to help you remember why your Personal folder doesn't appear. +1. Make sure you've [shared](/docs/manage/content-sharing) the search, dashboard, or folder with the role or users that you want to be able to access it. 1. Select the options menu for the item you want to move, and choose **Move.** -1. From the Move dialog, choose the **Admin Recommended** folder and click **Move**. +1. From the **Move** dialog, choose the **Admin Recommended** folder and click **Move**. :::note Remember to switch out of Content Administrator viewing when you are done. @@ -45,4 +45,4 @@ Remember to switch out of Content Administrator viewing when you are done. ## Track content changes in your org -If you need to track what content has been shared in your organization, or recently changed by another Content Administrator, you can find dashboards to help you track that information in the [Audit App](/docs/integrations/sumo-apps/audit). +If you need to track what content has been shared in your organization, or recently changed by another content administrator, you can find dashboards to help you track that information in the [Sumo Logic Audit app](/docs/integrations/sumo-apps/audit).
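The Audit app dashboards are built on the Sumo Logic audit index, so you can also spot-check recent content activity with an ad hoc search. A minimal sketch, assuming the legacy `_index=sumologic_audit` index is enabled for your org (the keyword filter and grouping are illustrative, since audit message formats vary by event type):

```
_index=sumologic_audit "content"
| count by _sourceCategory
| sort by _count
```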
diff --git a/docs/manage/content-sharing/changing-alerts.md b/docs/manage/content-sharing/changing-alerts.md index 254cfeb459..96231d99d8 100644 --- a/docs/manage/content-sharing/changing-alerts.md +++ b/docs/manage/content-sharing/changing-alerts.md @@ -6,13 +6,13 @@ description: You can modify or cancel alerts that are no longer valuable to your import useBaseUrl from '@docusaurus/useBaseUrl'; -The ability to modify or turn off alerts created by another user is now possible with Content Sharing. Sometimes the need or frequency of a log alert changes while the creator is unavailable and with Content Sharing you can give that ability to another Role or user within your Organization. +The ability to modify or turn off alerts created by another user is possible with content sharing. Sometimes the need or frequency of a log alert changes while the creator is unavailable, and with content sharing you can give that ability to another role or user within your organization. -We strongly recommend sharing your scheduled searches with at least one Role or a user you trust to allow you more flexibility with alerts. +We strongly recommend sharing your scheduled searches with at least one role or a user you trust, to give you more flexibility with alerts. ## Edit an alert -If you or your role has Edit permissions on a scheduled search you can modify the frequency and type of alert as well as the query if you need to make any adjustments such as the threshold or timeslice. +If you or your role has edit permissions on a scheduled search, you can modify the frequency and type of the alert, as well as the query, if you need to make any adjustments such as the threshold or timeslice. :::note If you're using a search template with your saved search, you cannot modify the query from the alert. @@ -29,7 +29,7 @@ To edit an alert: ## Cancel alerts on a shared search -If you have Edit permissions on the shared search, you can stop recipients from receiving alerts by setting the run frequency to **Never**. We recommend doing this when a search is no longer relevant rather than deleting the search so that it can be available to you later if you need it. Deleting the shared search is possible, if you have Manage permissions, but does not allow you the ability to restore a scheduled search later if you need it. +If you have edit permissions on the shared search, you can stop recipients from receiving alerts by setting the run frequency to **Never**. We recommend doing this when a search is no longer relevant, rather than deleting the search, so that it remains available to you later if you need it. Deleting the shared search is possible if you have manage permissions, but you will not be able to restore the scheduled search later if you need it. 1. Navigate to the scheduled search you want to edit, as described above in [Edit an alert](#edit-an-alert). 1. Select the edit icon in the library for the scheduled search.
Select the edit icon in the library diff --git a/docs/manage/content-sharing/content-sharing-faq.md b/docs/manage/content-sharing/content-sharing-faq.md index 48395e7eaf..eacb6deb67 100644 --- a/docs/manage/content-sharing/content-sharing-faq.md +++ b/docs/manage/content-sharing/content-sharing-faq.md @@ -5,63 +5,60 @@ sidebar_label: FAQ description: This FAQ answers your basic questions around content sharing. --- -Welcome to Content Sharing. We've provided this FAQ to answer your basic questions around the interface changes that come with Content Sharing. +This FAQ answers your basic questions about Sumo Logic content sharing. ## When I share something with someone, how will they see it? -When you share something directly with a user (or to their role) they will receive an email notification that they can click on to guide them to the item in Sumo. +When you share something directly with a user (or with their role), they will receive an email notification that they can click to go to the item in Sumo Logic. * The object will also be available in their Library view. -* They can also look in **Recent** in the left-nav to see what has been recently shared. +* They can also look in **Recent** in the left navigation bar to see what has been recently shared. * The share dialog associated with the item will reflect who the item is shared with and what level of access they have. ## Can I delete someone else's alerts? -Yes, IF you have Edit permissions on that shared search. You need Edit permissions at a minimum on the shared search to make any changes to the associated alert. For details, see [Changing Alerts](changing-alerts.md). +Yes, if you have edit permissions on that shared search. You need edit permissions at a minimum on the shared search to make any changes to the associated alert. For details, see [Changing Other Alerts](changing-alerts.md). -## I shared something with my coworker but they can’t see it in their Library view? +## I shared something with my coworker, but why can't they see it in their Library view? -It is possible that you shared an item that is nested within a folder. The Library view is designed to roll-up to the highest level parent folder. Have them check their **Recently Shared with Me** dialog. Or, send your co-worker the name of the item and they can also search for it in the Library. +It is possible that you shared an item that is nested within a folder. The Library view is designed to roll up to the highest-level parent folder. Have them check their **Recently Shared with Me** dialog. Or, send your co-worker the name of the item and they can also search for it in the Library. -## Can shared dashboards always run with viewer's role search filter instead of the creator's role search filter? +## Can shared dashboards always run with the viewer's role search filter instead of the creator's role search filter? Yes, for an individual dashboard, when you share the dashboard you can choose to share it with the "Viewer’s data access level", so that viewers will see it with their own role search filter. For more information, see [Set the Data Access Level for a Dashboard](/docs/dashboards/set-data-access-level). In addition, it is possible to set a security policy that ensures that all new dashboards will run with the viewer’s role search filter when shared. For more information, see [Data Access Level for Shared Dashboards](../security/data-access-level-shared-dashboards.md).  ## Can I share a folder with someone? 
-Yes, you can share a folder you manage with anyone or any role in your Org. Keep in mind that when you share a folder, that person will have access to all items within that folder as well as any nested sub-folders. Try to limit sharing at the folder level and grant permissions to sub-folders or individual items. +Yes, you can share a folder you manage with anyone or any role in your organization. Keep in mind that when you share a folder, that person will have access to all items within that folder as well as any nested sub-folders. Try to limit sharing at the folder level and grant permissions to sub-folders or individual items. ## What if I shared a folder with someone with edit access but then gave them view access on a specific dashboard in that folder? -We also allow the most permissive set of permissions. In this case, the highest permission the user has on the dashboard is edit - so that’s how they can access it. +We always apply the most permissive set of permissions. In this case, the highest permission the user has on the dashboard is edit, so that’s how they can access it. -## Can I share a folder with a role with edit permissions the limit items in folder from editing? +## Can I share a folder with a role with edit permissions, but block that role from editing specific items in the folder? No. We do not support the concept of negative permissions. Users will always get the highest level of permissions available to them on an item. We recommend a strategy of providing the lowest level of access (view) to the broad group of users and limiting higher level of access to only a trusted few. -## Can I control what objects a specific role can access using the Roles page? +## Can I control what objects a specific role can access using the [Roles](/docs/manage/users-roles/roles/create-manage-roles/) page? No. Access control is managed at the object level, in the Library. ## I want to create a hierarchy of folders for my company so that each team knows exactly where to put their content. How can I do this? -This can be achieved by a feature available to administrators called -**Admin View**. Users who are in a role that has the capability Manage -Content set, can see this view. +This can be achieved by a feature available to administrators called *Admin View*. Users who are in a role with the [Manage Content](/docs/manage/users-roles/roles/role-capabilities/#data-management) role capability can see this view. -* When in this view, Administrators have manage access on all objects in the org. -* They also have access to a special folder called **Admin Recommended**. Anything that is placed in this folder and shared out to a user or role, is displayed at the top of the library, in the Admin recommended section. +* When in this view, administrators have manage access on all objects in the org. +* They also have access to a special folder called Admin Recommended. Anything that is placed in this folder, and shared out to a user or role, is displayed at the top of the library in the Admin Recommended section. * Administrators can create a folder hierarchy within this view and share it out to the org with view permissions. Certain roles can have edit or manage access to the folder that is specific to the team, so they can move their content in. ## What is this Data Access setting that pops up whenever I try to edit a dashboard? -Dashboards run with the data access level of a particular user. We wanted to prevent users from making edits to a dashboard that would enable them to see more data than they were allowed to. 
When a user attempts to edit a query, we compare the editor’s data access to the current Run-as user of the dashboard. If the access level is lower, we ask the editor to change the Run-as user to themselves before they can save their change. +Dashboards run with the data access level of a particular user. We wanted to prevent users from making edits to a dashboard that would enable them to see more data than they were allowed to. When a user attempts to edit a query, we compare the editor’s data access level to that of the current run-as user of the dashboard. If the editor's access level is lower, we ask the editor to change the run-as user to themselves before they can save their change. -## I'm an Admin, how do I monitor content sharing activity in my org? +## As an admin, how do I monitor content sharing activity in my org? -All permission updates, move, copy and delete actions in the content library are audited. All actions performed by the user while in Admin -mode are also audited. The [Audit App](/docs/integrations/sumo-apps/audit) has been updated with several new dashboards that visualize this activity for you. +All permission updates and move, copy, and delete actions in the content library are audited. All actions performed by the user while in admin mode are also audited. The [Sumo Logic Audit app](/docs/integrations/sumo-apps/audit) has been updated with several new dashboards that visualize this activity for you. diff --git a/docs/manage/content-sharing/index.md b/docs/manage/content-sharing/index.md index 469f05f8bd..d8fdadaa51 100644 --- a/docs/manage/content-sharing/index.md +++ b/docs/manage/content-sharing/index.md @@ -108,7 +108,7 @@ In this section, we'll introduce the following concepts:
icon

FAQ

-

Get to know the answers to basic questions around Content Sharing.

+

Get to know the answers to basic questions around content sharing.

diff --git a/docs/manage/data-archiving/archive-otel.md b/docs/manage/data-archiving/archive-otel.md index 8c6bb83ec5..476d3f7d72 100644 --- a/docs/manage/data-archiving/archive-otel.md +++ b/docs/manage/data-archiving/archive-otel.md @@ -1,12 +1,12 @@ --- id: archive-otel title: Archive Log Data to S3 using OpenTelemetry Collectors -description: Learn how to archive log data to Amazon S3 using OpenTelemetry Collectors and ingest it on demand. +description: Learn how to archive log data to Amazon S3 using OpenTelemetry collectors and ingest it on demand. --- import useBaseUrl from '@docusaurus/useBaseUrl'; -This article describes how to archive log data to Amazon S3 using OpenTelemetry Collectors. Archiving allows you to store log data cost-effectively in S3 and ingest it later on demand, while retaining full enrichment and searchability when the data is re-ingested. +This article describes how to archive log data to Amazon S3 using OpenTelemetry collectors. Archiving allows you to store log data cost-effectively in S3 and ingest it later on demand, while retaining full enrichment and searchability when the data is re-ingested. :::important Do not change the name or location of the archived files in your S3 bucket. Doing so will prevent proper ingestion later. @@ -14,15 +14,15 @@ Do not change the name or location of the archived files in your S3 bucket. Doin ## Overview -With the OpenTelemetry-based approach, log data is sent to S3 using an OpenTelemetry Collector pipeline: +With the OpenTelemetry-based approach, log data is sent to S3 using an OpenTelemetry collector pipeline: -**Sources** > **OpenTelemetry Collector** > **awss3exporter** > **Amazon S3** +**Sources** > **OpenTelemetry collector** > **awss3exporter** > **Amazon S3** For S3 archiving, we use: -- The `awss3exporter` component to upload data to S3. [Learn more](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/awss3exporter). +- The `awss3exporter` component to upload data to S3. [Learn more](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/awss3exporter/README.md). - The `sumo_ic marshaller`, which formats the archived files so they are compatible with Sumo Logic’s ingestion process. -### Why use OpenTelemetry Collector for archiving logs to S3 +### Why use OpenTelemetry collector for archiving logs to S3 Compared to legacy Installed Collector-based archiving, the OpenTelemetry approach provides: - Full control over metadata enrichment @@ -46,7 +46,7 @@ These attributes can be set statically in configuration, or populated dynamicall ## Archive format :::important -Only the `v2` archive format is supported when using OpenTelemetry Collector. The legacy `v1` format is deprecated for OpenTelemetry Collector and must not be used. [Learn more](/docs/manage/data-archiving/archive/#archive-format) +Only the `v2` archive format is supported when using OpenTelemetry collector. The legacy `v1` format is deprecated for OpenTelemetry collector and must not be used. [Learn more](/docs/manage/data-archiving/archive/#archive-format). ::: Archived files use the format: @@ -61,7 +61,7 @@ The identifier values do not need to be real IDs. Dummy values are allowed and i In many environments, the `collectorID` can be a dummy value. The `bladeID` (source template ID) is particularly more useful for identifying log types. ::: -Below is a sample OpenTelemetry Collector configuration that archives logs from files into S3 using the supported Sumo Logic archive format. 
+Below is a sample OpenTelemetry collector configuration that archives logs from files into S3 using the supported Sumo Logic archive format. ``` receivers: @@ -108,7 +108,7 @@ service: ## Ingestion filtering using path patterns -When configuring an AWS S3 Archive Source on a Hosted Collector, specify a file path pattern to control what gets ingested. +When configuring an AWS S3 archive source on a Hosted Collector, specify a file path pattern to control what gets ingested. For example, to ingest only Docker logs: @@ -133,80 +133,80 @@ Example: | hour1/minute37 | hour1/minute05 to hour1/minute30 | hour1/minute22 | | hour1/minute52 | hour1/minute05 to hour1/minute30 | hour1/minute37 | -## Ingest data from Archive +## Ingest data from archive -You can ingest a specific time range of data from your Archive at any time with an **AWS S3 Archive Source**. First, [create an AWS S3 Archive Source](#create-an-aws-s3-archivesource), then [create an ingestion job](#create-an-ingestion-job). +You can ingest a specific time range of data from your archive at any time with an AWS S3 archive source. First, [create an AWS S3 archive source](#create-an-aws-s3-archive-source), then [create an ingestion job](#create-an-ingestion-job). ### Rules * A maximum of 2 concurrent ingestion jobs is supported. If more jobs are needed contact your Sumo Logic account representative. * An ingestion job has a maximum time range of 12 hours. If a longer time range is needed, contact your Sumo Logic account representative. * Filenames or object key names must be in either of the following formats: - * Sumo Logic [Archive format](#archive-format) + * Sumo Logic [archive format](#archive-format) * `prefix/dt=YYYYMMDD/hour=HH/fileName.json.gz` -* If the logs from Archive do not have timestamps, they are only searchable by receipt time. -* If a Field is tagged to an archived log message and the ingesting Collector or Source has a different value for the Field, the field values already tagged to the archived log take precedence. -* If the Collector or Source that Archived the data is deleted, the ingesting Collector and Source metadata Fields are tagged to your data. +* If the logs from archive do not have timestamps, they are only searchable by receipt time. +* If a field is tagged to an archived log message and the ingesting collector or source has a different value for the field, the field values already tagged to the archived log take precedence. +* If the collector or source that archived the data is deleted, the ingesting collector and source metadata fields are tagged to your data. * You can create ingestion jobs for the same time range, however, jobs maintain a 10 day history of ingested data and any data resubmitted for ingestion within 10 days of its last ingestion will be automatically filtered so it's not ingested. -### Create an AWS S3 Archive Source +### Create an AWS S3 archive source :::note -You need the **Manage Collectors** role capability to create an AWS S3 Archive Source. +You need the [Manage Collectors](/docs/manage/users-roles/roles/role-capabilities/#data-management) role capability to create an AWS S3 archive source. ::: -An AWS S3 Archive Source allows you to ingest your Archived data. Configure it to access the AWS S3 bucket that has your Archived data. +An AWS S3 archive source allows you to ingest your archived data. Configure it to access the AWS S3 bucket that has your archived data. 
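If you create the source through the Collector Management API rather than the UI, the JSON definition follows the AWS Log source parameters, as the note below describes. A minimal sketch, in which every value is a placeholder and the `serviceType` mirroring `contentType` is an assumption to verify against the source parameters reference:

```
{
  "api.version": "v1",
  "source": {
    "name": "aws-s3-archive-restore",
    "category": "archive/restored",
    "contentType": "AwsS3ArchiveBucket",
    "sourceType": "Polling",
    "scanInterval": 300000,
    "paused": false,
    "thirdPartyRef": {
      "resources": [
        {
          "serviceType": "AwsS3ArchiveBucket",
          "path": {
            "type": "S3BucketPathExpression",
            "bucketName": "my-archive-bucket",
            "pathExpression": "prefix*"
          },
          "authentication": {
            "type": "AWSRoleBasedAuthentication",
            "roleARN": "arn:aws:iam::123456789012:role/sumo-archive-read"
          }
        }
      ]
    }
  }
}
```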
:::note -To use JSON to create an AWS S3 Archive Source reference our AWS Log Source parameters and use `AwsS3ArchiveBucket` as the value for `contentType`. +To use JSON to create an AWS S3 archive source, reference our AWS Log source parameters and use `AwsS3ArchiveBucket` as the value for `contentType`. ::: 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Collection**. You can also click the **Go To...** menu at the top of the screen and select **Collection**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Collection**. 1. On the **Collectors** page, click **Add Source** next to a Hosted Collector, either an existing Hosted Collector or one you have created for this purpose. 1. Select **AWS S3 Archive**.
Archive icon -1. Enter a name for the new Source. A description is optional. +1. Enter a name for the new source. A description is optional. 1. Select an **S3 region** or keep the default value of **Others**. The S3 region must match the appropriate S3 bucket created in your Amazon account. 1. For **Bucket Name**, enter the exact name of your organization's S3 bucket. Be sure to double-check the name as it appears in AWS. -1. For **Path Expression**, enter the wildcard pattern that matches the Archive files you'd like to collect. The pattern: +1. For **Path Expression**, enter the wildcard pattern that matches the archive files you'd like to collect. The pattern: * Can use one wildcard (\*). * Can specify a [prefix](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-keys) so only certain files from your bucket are ingested. For example, if your filename is `prefix/dt=/hour=/minute=///v2/.txt.gzip`, you could use `prefix*` to only ingest from those matching files. * Cannot use a leading forward slash. * Cannot have the S3 bucket name. -1. For **Source Category**, enter any string to tag to the data collected from this Source. Category metadata is stored in a searchable field called `_sourceCategory`. -1. **Fields**. Click the **+Add Field** link to add custom metadata Fields. Define the fields you want to associate, each field needs a name (key) and value. +1. For **Source Category**, enter any string to tag to the data collected from this source. Category metadata is stored in a searchable field called `_sourceCategory`. +1. **Fields**. Click the **+Add Field** link to add custom metadata fields. Define the fields you want to associate; each field needs a name (key) and value. :::note - Fields specified on an AWS S3 Archive Source take precedence if the archived data has the same fields. + Fields specified on an AWS S3 archive source take precedence if the archived data has the same fields. ::: - * green check circle.png A green circle with a check mark is shown when the field exists and is enabled in the Fields table schema. - * orange exclamation point.png An orange triangle with an exclamation point is shown when the field doesn't exist, or is disabled, in the Fields table schema. In this case, an option to automatically add or enable the nonexistent fields to the Fields table schema is provided. If a field is sent to Sumo that does not exist in the Fields schema or is disabled it is ignored, known as dropped. -1. For **AWS Access** you have two **Access Method** options. Select **Role-based access** or **Key access** based on the AWS authentication you are providing. Role-based access is preferred, this was completed in the prerequisite step Grant Sumo Logic access to an AWS Product. + * green check circle.png A green circle with a check mark is shown when the field exists and is enabled in the fields table schema. + * orange exclamation point.png An orange triangle with an exclamation point is shown when the field doesn't exist, or is disabled, in the fields table schema. In this case, an option to automatically add or enable the nonexistent fields to the fields table schema is provided. If a field is sent to Sumo Logic that does not exist in the fields schema, or is disabled, it is ignored (known as dropped). +1. For **AWS Access**, you have two **Access Method** options. Select **Role-based access** or **Key access** based on the AWS authentication you are providing. 
Role-based access is preferred; this was completed in the prerequisite step [Grant Access to an AWS Product](/docs/send-data/hosted-collectors/amazon-aws/grant-access-aws-product/). * For **Role-based access**, enter the Role ARN that was provided by AWS after creating the role.  - * For **Key access** enter the **Access Key ID **and** Secret Access Key.** See [AWS Access Key ID](https://docs.aws.amazon.com/STS/latest/UsingSTS/UsingTokens.html#RequestWithSTS) and [AWS Secret Access Key](https://aws.amazon.com/iam/) for details. -1. Create any Processing Rules you'd like for the AWS Source. -1. When you are finished configuring the Source, click **Save**. + * For **Key access**, enter the **Access Key ID** and **Secret Access Key**. See [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for details. +1. Create any processing rules you'd like for the AWS source. +1. When you are finished configuring the source, click **Save**. ## Archive page :::important -You need the Manage Collectors or View Collectors [role capability](/docs/manage/users-roles/roles/role-capabilities/) to manage or view an archive. +You need the [Manage Collectors or View Collectors](/docs/manage/users-roles/roles/role-capabilities/#data-management) role capability to manage or view an archive. ::: -The Archive page provides a table of all the existing [AWS S3 Archive Sources](#create-an-aws-s3-archivesource) in your account and ingestion jobs. +The archive page provides a table of all the existing AWS S3 archive sources in your account, along with their ingestion jobs. -[**New UI**](/docs/get-started/sumo-logic-ui/). To access the Archive page, in the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Archive**. You can also click the **Go To...** menu at the top of the screen and select **Archive**. +[**New UI**](/docs/get-started/sumo-logic-ui/). To access the archive page, in the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Archive**. You can also click the **Go To...** menu at the top of the screen and select **Archive**. -[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To access the Archive page, in the main Sumo Logic menu select **Manage Data > Collection > Archive**. +[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To access the archive page, in the main Sumo Logic menu select **Manage Data > Collection > Archive**. Archive page ### Details pane -Click on a table row to view the Source details. This includes: +Click on a table row to view the source details. This includes: * **Name** * **Description** * **AWS S3 bucket** -* All **Ingestion jobs** that are and have been created on the Source. +* All **Ingestion jobs** that are and have been created on the source. * Each ingestion job shows the name, time window, and volume of data processed by the job. Click the icon Open in search icon to the right of the job name to start a search against the data that was ingested by the job. * Hover your mouse over the information icon to view who created the job and when.
Archive details pane @@ -216,14 +216,14 @@ Click on a table row to view the Source details. This includes: A maximum of 2 concurrent jobs is supported. ::: -An ingestion job is a request to pull data from your S3 bucket. The job begins immediately and provides statistics on its progress. To ingest from your Archive you need an AWS S3 Archive Source configured to access your AWS S3 bucket with the archived data. +An ingestion job is a request to pull data from your S3 bucket. The job begins immediately and provides statistics on its progress. To ingest from your archive you need an AWS S3 archive source configured to access your AWS S3 bucket with the archived data. 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Archive**. You can also click the **Go To...** menu at the top of the screen and select **Archive**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Archive**. -1. On the **Archive** page search and select the AWS S3 Archive Source that has access to your archived data. +1. On the **Archive** page search and select the AWS S3 archive source that has access to your archived data. 1. Click **New Ingestion Job** and a window appears where you: 1. Define a mandatory job name that is unique to your account. 1. Select the date and time range of archived data to ingest. A maximum of 12 hours is supported.
Archive ingest job -1. Click **Ingest Data** to begin ingestion. The status of the job is visible in the Details pane of the Source in the Archive page. +1. Click **Ingest Data** to begin ingestion. The status of the job is visible in the details pane of the source in the archive page. ### Job status @@ -235,19 +235,19 @@ An ingestion job will have one of the following statuses: * **Failed**. The job has failed to complete. Partial data may have been ingested and is searchable. * **Succeeded** The job completed ingesting and your data is searchable. -## Search ingested Archive data +## Search ingested archive data -Once your Archive data is ingested with an ingestion job you can search for it as you would any other data ingested into Sumo Logic. On the Archive page find and select the Archive S3 Source that ran the ingestion job to ingest your Archive data. In the [Details pane](#details-pane), you can click the **Open in Search** link to view the data in a Search that was ingested by the job. +Once your archive data is ingested with an ingestion job you can search for it as you would any other data ingested into Sumo Logic. On the archive page find and select the archive S3 source that ran the ingestion job to ingest your archive data. In the [details pane](#details-pane), you can click the **Open in Search** link to view the data in a search that was ingested by the job. :::note When you search for data in the Frequent or Infrequent Tier, you must explicitly reference the partition. ::: -The metadata field `_archiveJob` is automatically created in your account and assigned to ingested Archive data. This field does not count against your Fields limit. Ingested Archive data has the following metadata assignments: +The metadata field `_archiveJob` is automatically created in your account and assigned to ingested archive data. This field does not count against your fields limit. Ingested archive data has the following metadata assignments: | Field | Description | |:----------------|:-------------------------------------| -| `_archiveJob` | The name of the ingestion job assigned to ingest your Archive data. | +| `_archiveJob` | The name of the ingestion job assigned to ingest your archive data. | | `_archiveJobId` | The unique identifier of the ingestion job. | ## Audit ingestion job requests diff --git a/docs/manage/data-archiving/archive.md b/docs/manage/data-archiving/archive.md index b43c8da91a..f9008c2a5e 100644 --- a/docs/manage/data-archiving/archive.md +++ b/docs/manage/data-archiving/archive.md @@ -1,73 +1,73 @@ --- id: archive title: Archive Log Data to S3 using Installed Collectors -description: Send data to an Archive that you can ingest from later. +description: Send data to an archive that you can ingest from later. --- import useBaseUrl from '@docusaurus/useBaseUrl'; -Archive allows you to forward log data from Installed Collectors to AWS S3 buckets to collect at a later time. If you have logs that you do not need to search immediately you can archive them for later use. You can ingest from your Archive on-demand with five-minute granularity. +Archive allows you to forward log data from Installed Collectors to AWS S3 buckets to collect at a later time. If you have logs that you do not need to search immediately you can archive them for later use. You can ingest from your archive on-demand with five-minute granularity. :::important -Do not change the name and location of the archived files in your S3 bucket, otherwise ingesting them later will not work properly. 
+Do not change the name and location of the archived files in your S3 bucket. Otherwise, ingesting them later will not work properly. ::: -To archive your data you need a Processing Rule configured to send to an AWS Archive Destination. First, [create an AWS Archive Destination](#create-an-aws-archive-destination), then [create Archive processing rules](#create-a-processing-rule) to start archiving. Any data that matches the filter expression of an Archive processing rule is not sent to Sumo Logic, instead, it is sent to your AWS Archive Destination. +To archive your data you need a processing rule configured to send to an AWS archive destination. First, [create an AWS archive destination](#create-an-aws-archive-destination), then [create archive processing rules](#create-a-processing-rule) to start archiving. Any data that matches the filter expression of an archive processing rule is not sent to Sumo Logic. Instead, it is sent to your AWS archive destination. :::note -Every archived log message is tagged with the metadata Fields specified by the Collector and Source. +Every archived log message is tagged with the metadata fields specified by the collector and source. ::: -## Create an AWS Archive Destination +## Create an AWS archive destination :::note -You need the **Manage S3 data forwarding** role capability to create an AWS Archive Destination. +You need the [Manage S3 Data Forwarding](/docs/manage/users-roles/roles/role-capabilities/#data-management) role capability to create an AWS archive destination. ::: -1. Follow the instructions on Grant Access to an AWS Product to grant Sumo permission to send data to the destination S3 bucket. +1. Follow the instructions in [Grant Access to an AWS Product](/docs/send-data/hosted-collectors/amazon-aws/grant-access-aws-product/) to grant Sumo Logic permission to send data to the destination S3 bucket. 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Data Archiving**. You can also click the **Go To...** menu at the top of the screen and select **Data Archiving**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Data Archiving**. 1. Click **+** to add a new destination. 1. Select **AWS Archive bucket** for **Destination Type**.
Create a New Destination dialog 1. Configure the following: * **Destination Name**. Enter a name to identify the destination. * **Bucket Name**. Enter the exact name of the S3 bucket. - :::note - You can create only one destination with a particular bucket name.  If you try to create a new destination with the bucket name of an existing destination, the new destination replaces the old one. - ::: + :::note + You can create only one destination with a particular bucket name.  If you try to create a new destination with the bucket name of an existing destination, the new destination replaces the old one. + ::: * **Description**. You can provide a meaningful description of the connection. * **Access Method**. Select **Role-based access** or **Key access** based on the AWS authentication you are providing. Role-based access is preferred. This was completed in step 1, [Grant Sumo Logic access to an AWS Product](/docs/send-data/hosted-collectors/amazon-aws/grant-access-aws-product). * For **Role-based access** enter the Role ARN that was provided by AWS after creating the role. - * For **Key access** enter the **Access Key ID** and **Secret Access Key.** See [AWS Access Key ID](https://docs.aws.amazon.com/STS/latest/UsingSTS/UsingTokens.html#RequestWithSTS) and [AWS Secret Access Key](https://aws.amazon.com/iam/) for details. - * For **AWS EC2 Credentials** instance profile credentials on an EC2 instance where an installed collector will be used to archive log data to S3, see https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-roles.html. + * For **Key access**, enter the **Access Key ID** and **Secret Access Key**. See [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for details. + * For **AWS EC2 Credentials**, to use instance profile credentials on an EC2 instance where an Installed Collector will archive log data to S3, see [Using IAM Roles to Grant Access to AWS Resources on Amazon EC2](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-roles.html). * **S3 Region**. Select the S3 region or keep the default value of Others. The S3 region must match the appropriate S3 bucket created in your Amazon account. 1. Click **Save**. If Sumo Logic is able to verify the S3 credentials, the destination will be added to the list of destinations and you can start archiving to the destination via processing rules. ## Create a processing rule -A new processing rule type named **Archive messages that match** allows you to archive log data at the Source level on Installed Collectors. +A new processing rule type named **Archive messages that match** allows you to archive log data at the source level on Installed Collectors. :::note -An Archive processing rule acts like an exclude filter, functioning as a denylist filter where the matching data is not sent to Sumo Logic, and instead sends the excluded data to your AWS Archive bucket. +An archive processing rule acts like an exclude filter, functioning as a denylist: the matching data is not sent to Sumo Logic and is instead sent to your AWS archive bucket. ::: Archive and forwarding rules are processed after all other processing rule types. When there are archive and forwarding rules they are processed in the order that they are specified in the UI, top to bottom. 
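Because a processing rule is just a filter object on a source definition, it can help to see the shape of one before walking through the UI steps. A minimal JSON sketch of an archive rule, using the `Forward` filterType described in the note that follows; the `sinkId` field shown here as the pointer to the archive destination is an assumption to verify against the data forwarding rule example:

```
{
  "source": {
    "name": "my-log-source",
    "filters": [
      {
        "filterType": "Forward",
        "name": "Archive debug messages",
        "regexp": "(?s).*\\bDEBUG\\b(?s).*",
        "sinkId": 1234
      }
    ]
  }
}
```

The regex matches the whole message and is wrapped with `(?s).*` on both sides, which mirrors the multi-line guidance in the steps below.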
-To configure processing rules for Archive using the web application follow these steps: +To configure processing rules for archiving using the web application, follow these steps: :::note -You can use JSON to configure a processing rule, use the **Forward** filterType. See an example data forwarding rule. +To use JSON to configure a processing rule, use the `Forward` filterType. See an example data forwarding rule. ::: 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Collection**. You can also click the **Go To...** menu at the top of the screen and select **Collection**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Collection**. -1. Search for the Source that you want to configure, and click the **Edit** link for the Source. The Source must be associated with an Installed Collector. +1. Search for the source that you want to configure, and click the **Edit** link for the source. The source must be associated with an Installed Collector. 1. Scroll down to the **Processing Rules** section and click the arrow to expand the section. 1. Click **Add Rule**. 1. Type a **Name** for this rule. (Names have a maximum of 32 characters.) 1. For **Filter**, type a regular expression that defines the messages you want to filter. The rule must match the whole message. For multi-line log messages, to get the lines before and after the line containing your text, wrap the segment with `(?s)` such as: `(?s).*matching text(?s).*` Your regex must be [RE2 compliant.](https://github.com/google/re2/wiki/Syntax) -1. Select **Archive messages that match** as the rule type. This option is visible only if you have defined at least one [**AWS Archive bucket** destination](#create-an-aws-archive-destination), as described in the previous section.  -1. Select the Destination from the dropdown menu.
Archive rule +1. Select **Archive messages that match** as the rule type. This option is visible only if you have defined at least one [AWS archive bucket destination](#create-an-aws-archive-destination), as described in the previous section.  +1. Select the destination from the dropdown menu.
Archive rule 1. (Optional) Enter a **Prefix** that matches the location to store data in the S3 bucket. The prefix has the following requirements: * It can not start with a forward slash `/`. * It needs to end with a forward slash `/`. @@ -80,7 +80,7 @@ You can use JSON to configure a processing rule, use the **Forward** filterTy ## Archive format -Forwarded Archive files are prepended with a filename prefix based on the receipt time of your data with the following format: +Forwarded archive files are prepended with a filename prefix based on the receipt time of your data with the following format: ``` dt=/hour=/minute=////v1/.txt.gzip @@ -92,7 +92,7 @@ Collector version 19.361-3+ provides the ability to archive files with five-minu v2/dt=/hour=/minute=////.txt.gzip ``` -Example format of an Archived log message: +Example format of an archived log message: ``` {"_id":"763a9b55-d545-4564-8f4f-92fe9db9acea","date":"2019-11-15T13:26:41.293Z","sourceName":"/Users/sumo/Downloads/Logs/ingest.log","sourceHost":"sumo","sourceCategory":"logfile","message":"a log line"} @@ -100,7 +100,7 @@ Example format of an Archived log message: ## Batching -By default, the Collector will complete writing logs to an archive file once the uncompressed size of the file reaches 5 GB in size. You can configure the buffer size with the following [collector.properties](/docs/send-data/installed-collectors/collector-installation-reference/collector-properties.md) parameter. +By default, the collector will complete writing logs to an archive file once the uncompressed size of the file reaches 5 GB in size. You can configure the buffer size with the following [collector.properties](/docs/send-data/installed-collectors/collector-installation-reference/collector-properties.md) parameter. ### collector.properties buffer parameter @@ -108,83 +108,83 @@ By default, the Collector will complete writing logs to an archive file once th |:--|:--|:--|:--| | buffer.max.disk.bytes | The maximum size in bytes of the on-disk buffer per archive destination.
When the maximum is reached the oldest modified file(s) are deleted. | Integer | 5368709120 | -## Ingest data from Archive +## Ingest data from archive -You can ingest a specific time range of data from your Archive at any time with an **AWS S3 Archive Source**. First, [create an AWS S3 Archive Source](#create-an-aws-s3-archivesource), then [create an ingestion job](#create-an-ingestion-job). +You can ingest a specific time range of data from your archive at any time with an AWS S3 archive source. First, [create an AWS S3 archive source](#create-an-aws-s3-archivesource), then [create an ingestion job](#create-an-ingestion-job). ### Rules * A maximum of 2 concurrent ingestion jobs is supported. If more jobs are needed contact your Sumo Logic account representative. * An ingestion job has a maximum time range of 12 hours. If a longer time range is needed contact your Sumo Logic account representative. * Filenames or object key names must be in either of the following formats: - * Sumo Logic [Archive format](#archive-format) + * Sumo Logic [archive format](#archive-format) * `prefix/dt=YYYYMMDD/hour=HH/fileName.json.gz` -* If the logs from Archive do not have timestamps they are only searchable by receipt time. -* If a Field is tagged to an archived log message and the ingesting Collector or Source has a different value for the Field, the field values already tagged to the archived log take precedence. -* If the Collector or Source that Archived the data is deleted the ingesting Collector and Source metadata Fields are tagged to your data. -* You can create ingestion jobs for the same time range, however, jobs maintain a 10 day history of ingested data and any data resubmitted for ingestion within 10 days of its last ingestion will be automatically filtered so it's not ingested. +* If the logs from archive do not have timestamps they are only searchable by receipt time. +* If a field is tagged to an archived log message and the ingesting collector or source has a different value for the field, the field values already tagged to the archived log take precedence. +* If the collector or source that archived the data is deleted the ingesting collector and source metadata fields are tagged to your data. +* You can create ingestion jobs for the same time range. However, jobs maintain a 10 day history of ingested data, and any data resubmitted for ingestion within 10 days of its last ingestion will be automatically filtered so it's not ingested. -### Create an AWS S3 Archive Source +### Create an AWS S3 archive source :::note -You need the **Manage Collectors** role capability to create an AWS S3 Archive Source. +You need the [Manage Collectors](/docs/manage/users-roles/roles/role-capabilities/#data-management) role capability to create an AWS S3 archive source. ::: -An AWS S3 Archive Source allows you to ingest your Archived data. Configure it to access the AWS S3 bucket that has your Archived data. +An AWS S3 archive source allows you to ingest your archived data. Configure it to access the AWS S3 bucket that has your archived data. :::note -To use JSON to create an AWS S3 Archive Source reference our AWS Log Source parameters and use `AwsS3ArchiveBucket` as the value for `contentType`. +To use JSON to create an AWS S3 archive source, reference our AWS Log source parameters and use `AwsS3ArchiveBucket` as the value for `contentType`. ::: 1. [**New UI**](/docs/get-started/sumo-logic-ui). 
In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Collection**. You can also click the **Go To...** menu at the top of the screen and select **Collection**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Collection**. 1. On the **Collectors** page, click **Add Source** next to a Hosted Collector, either an existing Hosted Collector or one you have created for this purpose. 1. Select **AWS S3 Archive**.
Archive icon -1. Enter a name for the new Source. A description is optional. +1. Enter a name for the new source. A description is optional. 1. Select an **S3 region** or keep the default value of **Others**. The S3 region must match the appropriate S3 bucket created in your Amazon account. 1. For **Bucket Name**, enter the exact name of your organization's S3 bucket. Be sure to double-check the name as it appears in AWS. -1. For **Path Expression**, enter the wildcard pattern that matches the Archive files you'd like to collect. The pattern: +1. For **Path Expression**, enter the wildcard pattern that matches the archive files you'd like to collect. The pattern: * can use one wildcard (\*). * can specify a [prefix](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-keys) so only certain files from your bucket are ingested. For example, if your filename is `prefix/dt=/hour=/minute=///v1/.txt.gzip`, you could use `prefix*` to only ingest from those matching files. * can **NOT** use a leading forward slash. * can **NOT** have the S3 bucket name. 1. For **Source Category**, enter any string to tag to the - data collected from this Source. Category metadata is stored in a - searchable field called _sourceCategory. -1. **Fields**. Click the **+Add Field** link to add custom metadata Fields. Define the fields you want to associate, each field needs a name (key) and value. + data collected from this source. Category metadata is stored in a + searchable field called `_sourceCategory`. +1. **Fields**. Click the **+Add Field** link to add custom metadata fields. Define the fields you want to associate; each field needs a name (key) and value. :::note - Fields specified on an AWS S3 Archive Source take precedence if the archived data has the same fields. + Fields specified on an AWS S3 archive source take precedence if the archived data has the same fields. ::: - * green check circle.png A green circle with a check mark is shown when the field exists and is enabled in the Fields table schema. - * orange exclamation point.png An orange triangle with an exclamation point is shown when the field doesn't exist, or is disabled, in the Fields table schema. In this case, an option to automatically add or enable the nonexistent fields to the Fields table schema is provided. If a field is sent to Sumo that does not exist in the Fields schema or is disabled it is ignored, known as dropped. -1. For **AWS Access** you have two **Access Method** options. Select **Role-based access** or **Key access** based on the AWS authentication you are providing. Role-based access is preferred, this was completed in the prerequisite step Grant Sumo Logic access to an AWS Product. + * green check circle.png A green circle with a check mark is shown when the field exists and is enabled in the fields table schema. + * orange exclamation point.png An orange triangle with an exclamation point is shown when the field doesn't exist, or is disabled, in the fields table schema. In this case, an option to automatically add or enable the nonexistent fields to the fields table schema is provided. If a field is sent to Sumo Logic that does not exist in the fields schema, or is disabled, it is ignored (known as dropped). +1. For **AWS Access**, you have two **Access Method** options. Select **Role-based access** or **Key access** based on the AWS authentication you are providing. 
Role-based access is preferred; this was completed in the prerequisite step [Grant Sumo Logic access to an AWS Product](/docs/send-data/hosted-collectors/amazon-aws/grant-access-aws-product/). * For **Role-based access**, enter the Role ARN that was provided by AWS after creating the role.  - * For **Key access** enter the **Access Key ID **and** Secret Access Key.** See [AWS Access Key ID](https://docs.aws.amazon.com/STS/latest/UsingSTS/UsingTokens.html#RequestWithSTS) and [AWS Secret Access Key](https://aws.amazon.com/iam/) for details. -1. Create any Processing Rules you'd like for the AWS Source. -1. When you are finished configuring the Source, click **Save**. + * For **Key access**, enter the **Access Key ID** and **Secret Access Key**. See [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for details. +1. Create any processing rules you'd like for the AWS source. +1. When you are finished configuring the source, click **Save**. ## Archive page :::important -You need the Manage or View Collectors role capability to manage or view Archive. +You need the [Manage Collectors or View Collectors](/docs/manage/users-roles/roles/role-capabilities/#data-management) role capability to manage or view an archive. ::: -The Archive page provides a table of all the existing [AWS S3 Archive Sources](#create-an-aws-s3-archivesource) in your account and ingestion jobs. +The archive page provides a table of all the existing [AWS S3 archive sources](#create-an-aws-s3-archivesource) in your account, along with their ingestion jobs. -[**New UI**](/docs/get-started/sumo-logic-ui/). To access the Archive page, in the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Archive**. You can also click the **Go To...** menu at the top of the screen and select **Archive**. +[**New UI**](/docs/get-started/sumo-logic-ui/). To access the archive page, in the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Archive**. You can also click the **Go To...** menu at the top of the screen and select **Archive**. -[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To access the Archive page, in the main Sumo Logic menu select **Manage Data > Collection > Archive**. +[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To access the archive page, in the main Sumo Logic menu select **Manage Data > Collection > Archive**. Archive page ### Details pane -Click on a table row to view the Source details. This includes: +Click on a table row to view the source details. This includes: * **Name** * **Description** * **AWS S3 bucket** -* All **Ingestion jobs** that are and have been created on the Source. - * Each ingestion job shows the name, time window, and volume of data processed by the job. Click the icon Open in search icon to the right of the job name to start a Search against the data that was ingested by the job. +* All **Ingestion jobs** that are and have been created on the source. + * Each ingestion job shows the name, time window, and volume of data processed by the job. Click the icon Open in search icon to the right of the job name to start a search against the data that was ingested by the job. * Hover your mouse over the information icon to view who created the job and when.
Archive details pane

## Create an ingestion job

@@ -193,14 +193,14 @@ Click on a table row to view the Source details. This includes:
A maximum of 2 concurrent jobs is supported.
:::

-An ingestion job is a request to pull data from your S3 bucket. The job begins immediately and provides statistics on its progress. To ingest from your Archive you need an AWS S3 Archive Source configured to access your AWS S3 bucket with the archived data.
+An ingestion job is a request to pull data from your S3 bucket. The job begins immediately and provides statistics on its progress. To ingest from your archive, you need an AWS S3 archive source configured to access the AWS S3 bucket that holds the archived data.

1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Archive**. You can also click the **Go To...** menu at the top of the screen and select **Archive**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Archive**.
-1. On the **Archive** page search and select the AWS S3 Archive Source that has access to your archived data.
+1. On the **Archive** page, search for and select the AWS S3 archive source that has access to your archived data.
1. Click **New Ingestion Job** and a window appears where you:
   1. Define a mandatory job name that is unique to your account.
   1. Select the date and time range of archived data to ingest. A maximum of 12 hours is supported.
Archive ingest job
-1. Click **Ingest Data** to begin ingestion. The status of the job is visible in the Details pane of the Source in the Archive page.
+1. Click **Ingest Data** to begin ingestion. The status of the job is visible in the details pane of the source on the archive page.

### Job status

@@ -208,23 +208,23 @@ An ingestion job will have one of the following statuses:

* **Pending**. The job is queued before scanning has started.
* **Scanning**. The job is actively scanning for objects from your S3 bucket. Your objects could be ingesting in parallel.
-* **Ingesting* The job has completed scanning for objects and is still ingesting your objects.
+* **Ingesting**. The job has completed scanning for objects and is still ingesting your objects.
* **Failed**. The job has failed to complete. Partial data may have been ingested and is searchable.
* **Succeeded**. The job completed ingesting and your data is searchable.

-## Search ingested Archive data
+## Search ingested archive data

-Once your Archive data is ingested with an ingestion job you can search for it as you would any other data ingested into Sumo Logic. On the Archive page find and select the Archive S3 Source that ran the ingestion job to ingest your Archive data. In the [Details pane](#details-pane), you can click the **Open in Search** link to view the data in a Search that was ingested by the job.
+Once your archive data is ingested with an ingestion job, you can search for it as you would any other data ingested into Sumo Logic. On the archive page, find and select the AWS S3 archive source that ran the ingestion job. In the [details pane](#details-pane), you can click the **Open in Search** link to open a search against the data that was ingested by the job.

:::note
When you search for data in the Frequent or Infrequent Tier, you must explicitly reference the partition.
:::

-The metadata field `_archiveJob` is automatically created in your account and assigned to ingested Archive data. This field does not count against your Fields limit. Ingested Archive data has the following metadata assignments:
+The metadata field `_archiveJob` is automatically created in your account and assigned to ingested archive data. This field does not count against your fields limit. Ingested archive data has the following metadata assignments:

| Field | Description |
|:----------------|:-------------------------------------|
-| `_archiveJob` | The name of the ingestion job assigned to ingest your Archive data. |
+| `_archiveJob` | The name of the ingestion job assigned to ingest your archive data. |
| `_archiveJobId` | The unique identifier of the ingestion job. |

## Audit ingestion job requests

diff --git a/docs/manage/data-archiving/index.md b/docs/manage/data-archiving/index.md
index 17fd248740..c0b3f3af44 100644
--- a/docs/manage/data-archiving/index.md
+++ b/docs/manage/data-archiving/index.md
@@ -8,7 +8,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';

icon

-Archive allows you to forward log data from Installed Collectors to AWS S3 buckets to collect later. If you have logs that you do not need to search immediately, you can archive them for later use. You can also ingest them from your Archive on-demand with five-minute granularity.
+Archive allows you to forward log data from Installed Collectors to AWS S3 buckets for later collection. If you have logs that you do not need to search immediately, you can archive them for later use. You can also ingest them from your archive on-demand with five-minute granularity.
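As a quick illustration, once an ingestion job completes you can scope a search to exactly the records it brought in by filtering on the `_archiveJob` metadata field assigned at ingest; the job name below is hypothetical:

```sql
_archiveJob="nginx-backfill-january"
| count by _sourceCategory
```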
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';

diff --git a/docs/manage/data-forwarding/forward-data-from-sumologic.md b/docs/manage/data-forwarding/forward-data-from-sumologic.md
index 52d594970c..826920c6c6 100644
--- a/docs/manage/data-forwarding/forward-data-from-sumologic.md
+++ b/docs/manage/data-forwarding/forward-data-from-sumologic.md
@@ -10,13 +10,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
Data forwarding to Google Cloud Storage (GCS) is available and is an on-demand feature. To enable access, contact your Sumo Logic executive or [Support](https://support.sumologic.com/support/s).
:::

-This document outlines the instructions that needs to be followed to forward log data from a [partition](/docs/manage/partitions) or [Scheduled View](/docs/manage/scheduled-views) to an S3 or Google Cloud Storage (GCS) bucket. Only new data is forwarded from a partition or Scheduled View once it is set to forward data. 
+This document explains how to forward log data from a [partition](/docs/manage/partitions) or [scheduled view](/docs/manage/scheduled-views) to an S3 or Google Cloud Storage (GCS) bucket. Only new data is forwarded from a partition or scheduled view once it is set to forward data. 

To forward data to a storage bucket:

1. [Configure forwarding destination](#configure-data-forwarding-destination).
1. [Forward data to destination](#forward-datato-forwarding-destination) from a partition or scheduled view.

-After data forwarding is configured, you should start to see file objects posted within your configured bucket. If your Scheduled View conducts aggregation, which is a best practice, your aggregate fields are automatically appended to the forwarded objects.
+After data forwarding is configured, you should start to see file objects posted within your configured bucket. If your scheduled view conducts aggregation, which is a best practice, your aggregate fields are automatically appended to the forwarded objects.

:::note
Data forwarding is not currently supported for data assigned to the Infrequent Tier. 
@@ -26,11 +26,11 @@ Data forwarding is not currently supported for data assigned to the Infrequent T

* An administrator role on the partition where you want to set up forwarding.
* Follow the instructions on [Grant Access to an AWS Product](/docs/send-data/hosted-collectors/amazon-aws/grant-access-aws-product) to grant Sumo Logic permission to send data to the destination S3 bucket.
-* A partition or Scheduled View to push to Amazon S3 or Google Cloud Storage (GCS).
+* A partition or scheduled view to push to Amazon S3 or Google Cloud Storage (GCS).

## Forwarding interval 

-Messages are buffered during data ingest for either **approximately** five minutes or until 100MB of data is received, whichever is first. Then the buffered data is written to a new CSV file and forwarded after compression. 
+Messages are buffered during data ingest for either approximately five minutes or until 100MB of data is received, whichever comes first. Then the buffered data is written to a new CSV file and forwarded after compression. 

The limits mentioned here are upper limits. Actual file size may vary depending on the ingestion volume in scheduled views or partitions of an account. 

@@ -54,13 +54,13 @@ These file objects will contain the messages received as well as the system met
* **sourceCategory**: Is returned blank. 
* **messageTime**: The parsed message time from the log message, as epoch.
* **receiptTime**: The time the service originally received the message, as epoch.
-* **sourceID**: The unique ID of the Source configured to send the message to the service.
-* **collectorId**: The unique ID of the Collector configured to send the message to the service.
-* **count**: The message number from the specific log Source Name. These should be sequential for a specific Source file.
+* **sourceID**: The unique ID of the source configured to send the message to the service.
+* **collectorId**: The unique ID of the collector configured to send the message to the service.
+* **count**: The message number from the specific log source. These should be sequential for a specific source file.
* **format**: The timestamp format used to parse the message time from the log message.
* **view**: The scheduled view or partition that the message is forwarded from.
* **encoding**: The encoding of the original file contents.
-* **message**: The raw log message as read from the original Source.
+* **message**: The raw log message as read from the original source.
* **field**: Aggregate fields are added based on your query.

### Ordering of fields in forwarded file

@@ -94,7 +94,7 @@ Where:

## Configure data forwarding destination

-Before you can [forward data](#forward-datato-forwarding-destination) from a partition or Scheduled View, you must create a destination that indicates the S3 or GCS bucket where you want to send the forwarded data.
+Before you can [forward data](#forward-datato-forwarding-destination) from a partition or scheduled view, you must create a destination that indicates the S3 or GCS bucket where you want to send the forwarded data.

1. [**New UI**](/docs/get-started/sumo-logic-ui/). In the main Sumo Logic menu select **Manage Data**, and then under **Logs** select **Data Forwarding**. You can also click the **Go To...** menu at the top of the screen and select **Data Forwarding**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data > Logs > Data Forwarding**. 1. Click **+ Add Destination** to add a new destination. @@ -106,42 +106,42 @@ Before you can [forward data](#forward-datato-forwarding-destination) from a par :::note You can create only one destination with a particular bucket name.  If you try to create a new destination with the bucket name of an existing destination, the new destination replaces the old one. ::: - 1. (Optional)**Description**. You can provide a meaningful description of the connection. + 1. (Optional) **Description**. You can provide a meaningful description of the connection. 1. **Access Method**. Select **Role-based access** or **Key access** based on the AWS authentication you are providing. Role-based access is preferred. This was completed in the prerequisite step [Grant Access to an AWS Product](/docs/send-data/hosted-collectors/amazon-aws/grant-access-aws-product). * For **Role-based access** enter the Role ARN that was provided by AWS after creating the role. - * For **Key access** enter the **Access Key ID** and **Secret Access Key**. See [Managing access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for details. + * For **Key access** enter the **Access Key ID** and **Secret Access Key**. See [Manage access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for details. 1. **S3 Region**. Select the S3 region or keep the default value of Others. The S3 region must match the appropriate S3 bucket created in your Amazon account. 1. **Enable S3 server-side encryption**. Select the check box if you want the forwarded data to be encrypted. For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) in AWS help. - For **Google Cloud Storage** as the destination type, follow the below steps:
Create S3 Destination popup
   1. **Destination Name**. Enter a name to identify the destination.
   1. **Bucket Name**. Enter the [exact name of the GCS bucket](https://cloud.google.com/storage/docs/buckets).
   :::note
   You can create only one destination with a particular bucket name. If you try to create a new destination with the bucket name of an existing destination, the new destination replaces the old one.
   :::
   1. (Optional) **Description**. Provide a meaningful description of the connection.
-   1. For **HMAC Access Key** and **HMAC Secret Key** enter the values collected from the Google platform service account. See [Manage HMAC keys for service account](https://cloud.google.com/storage/docs/authentication/managing-hmackeys#console_1) for details.
+   1. For **HMAC Access Key** and **HMAC Secret Key**, enter the values collected from the Google platform service account. See [Manage HMAC keys for service accounts](https://cloud.google.com/storage/docs/authentication/managing-hmackeys) for details.
   1. **Active**. Select this check box to enable data forwarding for the entire bucket. To start forwarding data, you will also need to enable forwarding for the desired indexes, as described below.
   1. Click **Save**.
If Sumo Logic is able to verify the credentials, the destination will be added to the list of destinations. If the destination is not added successfully, see [Error and alert conditions](#error-and-alert-conditions) for examples of errors that can occur.

-Once the destination is created, you can start data forwarding for specific partitions or Scheduled Views as described in [Forward data to an forwarding destination](#forward-datato-forwarding-destination) below.
+Once the destination is created, you can start data forwarding for specific partitions or scheduled views as described in [Forward data to forwarding destination](#forward-datato-forwarding-destination) below.

## Forward data to forwarding destination

-Once you [configure date forwarding destination](#configure-data-forwarding-destination) that indicates the bucket to receive the data, you can forward data to the destination from partitions and Scheduled Views.
+Once you [configure the data forwarding destination](#configure-data-forwarding-destination) that points to the receiving bucket, you can forward data to it from partitions and scheduled views.

-1. Depending on whether you want to forward data from a partition or a Scheduled View:
+1. Depending on whether you want to forward data from a partition or a scheduled view:
   * **Partition**:
[**New UI**](/docs/get-started/sumo-logic-ui/). In the main Sumo Logic menu select **Manage Data**, and then under **Logs** select **Partitions**. You can also click the **Go To...** menu at the top of the screen and select **Partitions**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data > Logs > Partitions**. - * **Scheduled View**:
[**New UI**](/docs/get-started/sumo-logic-ui/). In the main Sumo Logic menu select **Manage Data**, and then under **Logs** select **Scheduled Views**. You can also click the **Go To...** menu at the top of the screen and select **Scheduled Views**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data > Logs > Scheduled Views**. -1. Select the partition or Scheduled View for which you want to enable data forwarding and click the **Edit** button. The edit dialog for the partition or Scheduled View displays. Following is the edit dialog for a partition.
Enable Data Forwarding checkbox + * **Scheduled view**:
[**New UI**](/docs/get-started/sumo-logic-ui/). In the main Sumo Logic menu select **Manage Data**, and then under **Logs** select **Scheduled Views**. You can also click the **Go To...** menu at the top of the screen and select **Scheduled Views**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data > Logs > Scheduled Views**. +1. Select the partition or scheduled view for which you want to enable data forwarding and click the **Edit** button. The edit dialog for the partition or scheduled view displays. Following is the edit dialog for a partition.
Enable Data Forwarding checkbox :::tip - In addition to forwarding data from existing partitions and Scheduled Views, you can also enable data forwarding by selecting the **Enable Data Forwarding** check box when you first [create a partition](/docs/manage/partitions/flex/create-edit-partition-flex/) or [create a Scheduled View](/docs/manage/scheduled-views/add-scheduled-view/). + In addition to forwarding data from existing partitions and scheduled views, you can also enable data forwarding by selecting the **Enable Data Forwarding** check box when you first [create a partition](/docs/manage/partitions/flex/create-edit-partition-flex/) or [create a scheduled view](/docs/manage/scheduled-views/add-scheduled-view/). ::: 1. Click the **Enable Data Forwarding** checkbox. More options appear. 1. **Destination Type**. You can either select **Amazon S3** or **Google Cloud Storage** as your destination type. - For **Amazon S3** as the destination type, follow the below steps:
Forwarding destination options 1. **Forwarding Destination**. Choose one of the following: * **Existing Amazon S3 Destination**. If you select this option, select the destination in the **Amazon S3 Destination** field below. - * **New Amazon S3 Destination**. Follow the instructions in [Configure data forwarding destination](#forward-datato-forwarding-destination) above to create a new S3 destination. + * **New Amazon S3 Destination**. Follow the instructions in [Configure data forwarding destination](#configure-data-forwarding-destination) above to create a new S3 destination. 1. **Amazon S3 Destination**. If you chose **Existing Amazon S3 Destination** for the forwarding destination, select the destination here. - For **Google Cloud Storage** as the destination type, follow the below steps:
Forwarding destination options
   1. **Forwarding Destination**. Choose one of the following:
@@ -178,13 +178,13 @@ Let's say you want to take data from Sumo Logic and run additional analysis on i
Let's suppose you have an S3 or GCS bucket named `demo-bucket1` where you want to forward your Sumo Logic data. Do the following:

1. [Create a destination](/docs/manage/data-forwarding/forward-data-from-sumologic/#configure-data-forwarding-destination) that points to the `demo-bucket1` bucket. For example, name it **Test destination**.
-1. Open the partition or Scheduled View whose data you want to [forward data to the new destination](/docs/manage/data-forwarding/forward-data-from-sumologic/#configure-data-forwarding-destination).
-1. In the partition or Scheduled View, select **Enable Data Forwarding**, and fill out the fields that appear:
+1. Open the partition or scheduled view whose data you want to [forward to the new destination](/docs/manage/data-forwarding/forward-data-from-sumologic/#configure-data-forwarding-destination).
+1. In the partition or scheduled view, select **Enable Data Forwarding**, and fill out the fields that appear:
   1. In **Destination Type** select **Amazon S3** or **Google Cloud Storage** depending on your requirement.
   1. In **Forwarding Destination** select any **Existing Destination**.
   1. In **Destination** select the name of the destination you created earlier, for example, **Test destination**.
1. Use the **Data Forwarding Configuration** section to specify whether to forward only log data, log data with metadata, or log data with metadata and enriched fields.
-1. Click **Save** on the partition or Scheduled View. The data will start forwarding to the selected destination bucket specified in the destination.
+1. Click **Save** on the partition or scheduled view. The data will start forwarding to the bucket specified in the destination.

## Error and alert conditions

diff --git a/docs/manage/data-forwarding/index.md b/docs/manage/data-forwarding/index.md
index 852079bb5d..5f989cce0a 100644
--- a/docs/manage/data-forwarding/index.md
+++ b/docs/manage/data-forwarding/index.md
@@ -8,7 +8,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';

Document with a forward symbol icon

-Data Forwarding allows you to forward log data to an external server or supported storage service. You can forward log data to an AWS S3 bucket or Google Cloud Storage (GCS) through [Partitions](/docs/manage/partitions) or [Scheduled Views](/docs/manage/scheduled-views).
+Data Forwarding allows you to forward log data to an external server or supported storage service. You can forward log data to an AWS S3 bucket or Google Cloud Storage (GCS) through [partitions](/docs/manage/partitions) or [scheduled views](/docs/manage/scheduled-views).

## Guide contents

diff --git a/docs/manage/data-forwarding/installed-collectors.md b/docs/manage/data-forwarding/installed-collectors.md
index d71dde9a78..9f098520e8 100644
--- a/docs/manage/data-forwarding/installed-collectors.md
+++ b/docs/manage/data-forwarding/installed-collectors.md
@@ -1,13 +1,13 @@
---
id: installed-collectors
title: Forward Data from an Installed Collector
-description: Learn how to set up Data Forwarding destinations for Installed Collectors.
+description: Learn how to set up data forwarding destinations for Installed Collectors.
--- import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; import useBaseUrl from '@docusaurus/useBaseUrl'; -You can set up one or more data forwarding destinations and configure an Installed Collector to send raw log data from specified Sources to those destinations. The Collector will send the raw data to external destinations at the same time it sends data to Sumo. +You can set up one or more data forwarding destinations and configure an Installed Collector to send raw log data from specified sources to those destinations. The collector will send the raw data to external destinations at the same time it sends data to Sumo Logic. You can forward raw log data using the following protocols. @@ -15,9 +15,9 @@ You can forward raw log data using the following protocols. * Generic REST API—Send log data to a web services endpoint. * Hitachi Data Systems HTTP REST API—Send log data to Hitachi Content Platform (HCP). -Follow the steps below to set up a Collector to forward raw log data to an external destination. +Follow the steps below to set up a collector to forward raw log data to an external destination. -You can set up Installed Collector data forwarding when you first configure Sources or at a later time. If you apply rules at a later time, keep in mind that they are not applied retroactively. +You can set up Installed Collector data forwarding when you first configure sources or at a later time. If you apply rules at a later time, keep in mind that they are not applied retroactively. :::note Data forwarding processing rules are processed after all other [processing rules](/docs/send-data/collection/processing-rules). @@ -25,7 +25,7 @@ Data forwarding processing rules are processed after all other [processing rules ## Step 1: Configure data forwarding destination -You need the [Manage Collectors role capability](../users-roles/roles/role-capabilities.md) to create a data forwarding destination. +You need the [Manage Collectors role capability](/docs/manage/users-roles/roles/role-capabilities/#data-management) to create a data forwarding destination. To set up a data forwarding destination: @@ -60,14 +60,14 @@ Follow the instructions for the destination type you chose. * `{second}` Replace with second in hour. * `{uuid}` Replace with a unique, randomly generated identifier (UUID) - * **Username and Password**. Enter the credentials to access the destination. These are placed in a Basic Auth header in the HTTP request from the Collector. If you're sending to a Sumo Logic HTTP Source this header is simply ignored and your data is ingested. You must have administrator privileges for the Collector. + * **Username and Password**. Enter the credentials to access the destination. These are placed in a Basic Auth header in the HTTP request from the collector. If you're sending to a Sumo Logic HTTP source this header is simply ignored and your data is ingested. You must have administrator privileges for the collector. * **Protocol**. Select the protocol (TCP or UDP) for sending the syslog messages. -* **Host**. Fully qualified hostname of the target Syslog server. +* **Host**. Fully qualified hostname of the target syslog server. * **Port**. Enter the port for sending the syslog messages. * **Token**. Enter the token to prepend when forwarding a message via syslog. The token uses the following special variables: * `{file}` Maps to the name of the originating file, when applicable. @@ -78,15 +78,15 @@ Follow the instructions for the destination type you chose. 
:::note
-Data forwarding to S3 Archive locations will forward log data from Installed Collectors to AWS S3 buckets to collect at a later time. Data **will not** be forked to both Sumo Logic and AWS S3. In that case, you will want to send the data to Sumo Logic first and then configure [Forwarding Data from Sumo Logic to S3](/docs/manage/data-forwarding/forward-data-from-sumologic/).
+Data forwarding to S3 Archive locations forwards log data from Installed Collectors to AWS S3 buckets for collection at a later time. Data **will not** be forked to both Sumo Logic and AWS S3. If you need the data in both places, send the data to Sumo Logic first and then configure [forwarding data from Sumo Logic to S3](/docs/manage/data-forwarding/forward-data-from-sumologic/).
:::

-* **Bucket Name**. Enter the exact name of the S3 bucket.You can create only one destination with a particular bucket name. If you try to create a new destination with the bucket name of an existing destination, the new destination replaces the old one.
+* **Bucket Name**. Enter the exact name of the S3 bucket. You can create only one destination with a particular bucket name. If you try to create a new destination with the bucket name of an existing destination, the new destination replaces the old one.
* **Description**. (Optional)
-* **S3 Region**. Select the S3 region or keep the default value of Others. The S3 region must match the appropriate S3 bucket created in your Amazon account.
-* **AWS Access**. Select Role-based access or Key access based on the AWS authentication you are providing. Role-based access is preferred, this was completed in the prerequisite step Grant Sumo Logic access to an AWS Product.
-   * For Role-based access, enter the Role ARN that was provided by AWS after creating the role.
-   * For Key access enter the Access Key ID and Secret Access Key. See [AWS Access Key ID](https://docs.aws.amazon.com/STS/latest/UsingSTS/UsingTokens.html#RequestWithSTS) and [AWS Secret Access Key](https://aws.amazon.com/iam/) for details.
+* **S3 Region**. Select the S3 region or keep the default value of **Others**. The S3 region must match the appropriate S3 bucket created in your Amazon account.
+* **AWS Access**. Select **Role-based access** or **Key access** based on the AWS authentication you are providing. Role-based access is preferred; this was completed in the prerequisite step [Grant Access to an AWS Product](/docs/send-data/hosted-collectors/amazon-aws/grant-access-aws-product).
+   * For **Role-based access**, enter the Role ARN that was provided by AWS after creating the role.
+   * For **Key access** enter the Access Key ID and Secret Access Key. See [Manage access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for details.

@@ -94,14 +94,14 @@ Data forwarding to S3 Archive locations will forward log data from Installed Col

## Step 2: Configure processing rules for data forwarding

-In this procedure, you define one or more processing rules that define the raw log data from a Source that you want to send to the external destination. Data forwarding processing rules are processed after all other processing rules.
+In this procedure, you create one or more processing rules that specify the raw log data from a source that you want to send to the external destination. Data forwarding processing rules are processed after all other processing rules.

There are several methods you can use to configure processing rules: 

* In Sumo Logic - See the instructions below.
-* With JSON - See [Creating Processing Rules Using a JSON File](/docs/send-data/use-json-configure-sources).  -* Collector Management API - See [Collector Management API](/docs/api/collector-management) for instructions on using the API to configure sources for Data Forwarding. -* Local Source configuration files -  See [Local File Configuration Management](/docs/send-data/use-json-configure-sources/local-configuration-file-management) for general information on managing sources using local file configuration. +* With JSON - See [Use JSON to Configure Sources](/docs/send-data/use-json-configure-sources).  +* Collector Management API - See [Collector Management APIs](/docs/api/collector-management) for instructions on using the API to configure sources for data forwarding. +* Local source configuration files -  See [Local Configuration File Management](/docs/send-data/use-json-configure-sources/local-configuration-file-management) for general information on managing sources using local file configuration. **To configure processing rules for data forwarding using the web application** @@ -110,7 +110,7 @@ There are several methods you can use to configure processing rules:  1. Scroll down to the **Processing Rules** section and click the arrow to expand the section. 1. Click **Add Rule**. 1. Enter a name to define the rule. -1. In the Filter field, enter the regular expression that defines the messages you want to forward. The regular expression must be [RE2 compliant](https://github.com/google/re2/wiki/Syntax). For example, the regular expression `.*ERROR.*` matches all messages that contain ERROR. For more information about creating processing rules, see [Create a Processing Rule](/docs/send-data/collection/processing-rules/create-processing-rule). +1. In the **Filter** field, enter the regular expression that defines the messages you want to forward. The regular expression must be [RE2 compliant](https://github.com/google/re2/wiki/Syntax). For example, the regular expression `.*ERROR.*` matches all messages that contain ERROR. For more information about creating processing rules, see [Create a Processing Rule](/docs/send-data/collection/processing-rules/create-processing-rule). 1. Select **Forward messages that match** as the rule type. This option is visible only if you have defined at least one data forwarding destination, as described in the previous section.  1. Select the Destination from the dropdown menu. If a **Syslog Destination Type** is selected, an option to select **Transparent Forwarding** is provided. Syslog forwarding by default prepends a timestamp and hostname to messages to ensure they comply with RFC 3164. If your syslog messages already comply, you can enable **Transparent Forwarding** to disable the default prepending behavior.
Transparent Forwarding toggle 1. Click **Apply**. The new rule is listed along with any other previously defined processing rules. @@ -119,7 +119,7 @@ There are several methods you can use to configure processing rules:  ## Configuring the size of forwarded syslog messages -In accordance with RFC 3164, by default the Collector forwards syslog messages in 1024-byte segments, sending each segment as a separate message. To change the segment size, add the `forwarding.syslog.maxMessageSize` property to the Collector's `collector.properties` file (in the Collector's config directory) and restart the Collector. Specify the desired size in bytes. For example: +In accordance with RFC 3164, by default the collector forwards syslog messages in 1024-byte segments, sending each segment as a separate message. To change the segment size, add the `forwarding.syslog.maxMessageSize` property to the collector's `collector.properties` file (in the collector's config directory) and restart the collector. Specify the desired size in bytes. For example: ``` forwarding.syslog.maxMessageSize = 2048 @@ -127,24 +127,23 @@ forwarding.syslog.maxMessageSize = 2048 ## Configure data forwarding queue size -In Collector version 19.216-22 and later, in-memory storage of an Installed Collector’s data forwarding queue is backed by disk storage. When the in-memory queue reaches a given size, the Collector extends the queue on disk. +In collector version 19.216-22 and later, in-memory storage of an Installed Collector’s data forwarding queue is backed by disk storage. When the in-memory queue reaches a given size, the collector extends the queue on disk. -Sumo allocates memory and disk storage for data to be forwarded to REST and TCP syslog destinations. By default, Sumo allocates: +Sumo Logic allocates memory and disk storage for data to be forwarded to REST and TCP syslog destinations. By default, Sumo Logic allocates: * 8MB of memory and 500MB of disk storage for each syslog destination.
-
- **Note** Data forwarding using UDP isn't queued.
-
-
+ :::note
+ Data forwarding using UDP isn't queued.
+ :::
* 8MB of memory and 500MB of disk storage for each REST endpoint.  

-You can add properties to the collector.properties file, in the Collector's /config directory, to specify how much memory and disk the data forwarding queue can consume. The limits you specify for a destination type will apply to each destination of that type.
+You can add properties to the `collector.properties` file, in the collector's `/config` directory, to specify how much memory and disk the data forwarding queue can consume. The limits you specify for a destination type will apply to each destination of that type.

-After the memory and disk limits are reached, data will be dropped, so the limits should not be set too low
+After the memory and disk limits are reached, data will be dropped, so the limits should not be set too low.

| Property | Description |
| :-- | :-- |
| `queue.rest.max.memory.mb` | Specifies the amount of memory allocated to the data forwarding queue for each REST destination.<br/><br/>Default: 8MB |
| `queue.rest.max.disk.mb` | Specifies the amount of disk space allocated to the data forwarding queue for each REST destination.<br/><br/>Default: 500MB |
-| `queue.syslog.max.memory.mb` | Specifies the amount of memory allocated to the data forwarding queue for each Syslog destination.<br/><br/>Default: 8MB |
-| `queue.syslog.max.disk.mb` | Specifies the amount of disk space allocated to the data forwarding queue for each Syslog destination.<br/><br/>Default: 500MB |
+| `queue.syslog.max.memory.mb` | Specifies the amount of memory allocated to the data forwarding queue for each syslog destination.<br/><br/>Default: 8MB |
+| `queue.syslog.max.disk.mb` | Specifies the amount of disk space allocated to the data forwarding queue for each syslog destination.<br/><br/>Default: 500MB |

diff --git a/docs/manage/data-forwarding/manage.md b/docs/manage/data-forwarding/manage.md
index 096a5bcce1..8aa77b9d07 100644
--- a/docs/manage/data-forwarding/manage.md
+++ b/docs/manage/data-forwarding/manage.md
@@ -24,7 +24,7 @@ The following actions are available on the **Data Forwarding** page. Hover over
* Click the **Edit** icon to make changes to the configuration.
* Click the **Delete** icon to delete the destination. If the destination is currently active, you must deactivate it before deleting it, as described in the following section.

-## Activate or Deactivate Data Forwarding
+## Activate or deactivate data forwarding

If you’d like to start or stop forwarding data, you can activate or deactivate the S3 bucket. 

diff --git a/docs/manage/data-forwarding/view-list-data-forwarding.md b/docs/manage/data-forwarding/view-list-data-forwarding.md
index 76f357a3eb..4c27972d1f 100644
--- a/docs/manage/data-forwarding/view-list-data-forwarding.md
+++ b/docs/manage/data-forwarding/view-list-data-forwarding.md
@@ -13,9 +13,9 @@ The page talks about viewing information about the data forwarding configured fo
   :::note
   You can see the suggestions only if there are two or more responses for the same column or section.
   :::
-   * **Status**. Indicates whether the data forwarding is currently Active or Inactive.
+   * **Status**. Indicates whether the data forwarding is currently active or inactive.
   * **Destination Name**. The name used to identify the destination.
-   * **Data Sources**. Indicates the number of indexes (Partition or Scheduled View) the destination is linked to.
+   * **Data Sources**. Indicates the number of indexes (partition or scheduled view) the destination is linked to.
   * **Description**. A meaningful description of the connection.
1. To view details of a data forwarding configuration, click any destination name of your choice.
1. A pane pops up on the right side of the page with the following information.
@@ -23,14 +23,14 @@ The page talks about viewing information about the data forwarding configured fo
   * **Destination Name**. The name used to identify the destination.
   * **Description**. A meaningful description of the connection.
   * **Bucket Name**. The exact name of the S3 bucket.
-   * **Status**. Indicates whether the data forwarding is currently Active or Inactive.
+   * **Status**. Indicates whether the data forwarding is currently active or inactive.
   * **Access Method**. Indicates the type of access based on the AWS authentication provided to access the S3 bucket.
-   * **AWS Access key**. Access key details gathered from the AWS used for the Key access method.
-   * **AWS Secret key**. Secret key details gathered from the AWS used for the Key access method.
-   * **Role ARN**. Role ARN provided by AWS after creating the role, used for Role-based access method.
+   * **AWS Access key**. Access key details gathered from AWS, used for the key access method.
+   * **AWS Secret key**. Secret key details gathered from AWS, used for the key access method.
+   * **Role ARN**. Role ARN provided by AWS after creating the role, used for the role-based access method.
   * **S3 Region**. Indicates the AWS S3 region in which the bucket is hosted.
   * **S3 Server-Side Encryption**. Provides the encryption details of the forwarded data.
Data forwarding pane * **Details** - * **Data Sources**. Indicates the list of sources (Partition or Scheduled View) from which the log data is forwarded to an S3 bucket. + * **Data Sources**. Indicates the list of sources (partition or scheduled view) from which the log data is forwarded to an S3 bucket. * **Data forwarded**. Provides the breakdown information about the data forwarded and indicates the total data forwarded to the given S3 bucket for the selected time. * **Query/Routing Expression**. Indicates the query for scheduled views and routing expression for partitions for which the data is forwarded.
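For reference, a partition's routing expression is simply the scope query that routes data into the index, so the value shown here typically looks like a short expression such as the following (the source category is hypothetical):

```sql
_sourceCategory=prod/payment*
```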
Data forwarding details

diff --git a/docs/manage/field-extractions/create-field-extraction-rule.md b/docs/manage/field-extractions/create-field-extraction-rule.md
index 69e768df89..e66110e9bf 100644
--- a/docs/manage/field-extractions/create-field-extraction-rule.md
+++ b/docs/manage/field-extractions/create-field-extraction-rule.md
@@ -61,17 +61,17 @@ To create a Field Extraction Rule:
   * Rule limit - none
   * Time - During a search when using **Auto Parse Mode** from [Dynamic Parsing](../../search/get-started-with-search/build-search/dynamic-parsing.md).
* **Scope**. Select either **All Data** or **Specific Data**. When specifying data, the options for the scope differ depending on when the rule is applied.
-   * For an **Ingest Time** rule, type a [keyword search expression](/docs/search/get-started-with-search/build-search/keyword-search-expressions.md) that points to the subset of logs you'd like to parse. Think of the Scope as the first portion of an ad hoc search, before the first pipe (`|`). You'll use the Scope to run a search against the rule. Custom metadata fields are not supported here, they have not been indexed to your data yet at this point in collection.
-   * For a **Run Time** rule, define the scope of your JSON data. You can define your JSON data source as a [Partition](/docs/manage/partitions) Name(index), sourceCategory, Host Name, Collector Name, or any other [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) that describes your JSON data. Think of the Scope as the first portion of an ad hoc search, before the first pipe (`|`). You'll use the Scope to run a search against the rule. You cannot use keywords like “info” or “error” in your scope.
+   * For an **Ingest Time** rule, type a [keyword search expression](/docs/search/get-started-with-search/build-search/keyword-search-expressions.md) that points to the subset of logs you'd like to parse. Think of the scope as the first portion of an ad hoc search, before the first pipe (`|`). You'll use the scope to run a search against the rule. Custom metadata fields are not supported here; they have not yet been indexed to your data at this point in collection.
+   * For a **Run Time** rule, define the scope of your JSON data. You can define your JSON data source as a [partition](/docs/manage/partitions) name (index), sourceCategory, Host Name, Collector Name, or any other [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) that describes your JSON data. Think of the scope as the first portion of an ad hoc search, before the first pipe (`|`). You'll use the scope to run a search against the rule. You cannot use keywords like “info” or “error” in your scope.
   :::note
-   Always set up JSON auto extraction (Run Time field extraction) on a specific Partition name (recommended) or a particular Source. Failing to do so might cause the auto parsing logic to run on data sources where it is not applicable and will add additional overhead that might deteriorate the performance of your queries.
+   Always set up JSON auto extraction (Run Time field extraction) on a specific partition name (recommended) or a particular source. Failing to do so might cause the auto parsing logic to run on data sources where it is not applicable, adding overhead that can degrade the performance of your queries.
:::

   :::sumo Best Practices
-   If you are not using Partitions we recommend using [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) fields like `_sourceCategory`, `_sourceHost` or `_collector` to define the scope.
+   If you are not using partitions, we recommend using [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) fields like `_sourceCategory`, `_sourceHost`, or `_collector` to define the scope.

-   We recommend creating a separate Partition for your JSON dataset and use that Partition as the scope for run time field extraction. For example, let's say you have AWS CloudTrail logs, and they are stored in `_view=cloudtrail` Partition in Sumo. You can create a Run Time FER with the scope `_view=cloudtrail`. Creating a separate Partition and using it as scope for a run time field extraction ensures that auto parsing logic only applies to necessary Partitions.
+   We recommend creating a separate partition for your JSON dataset and using that partition as the scope for run time field extraction. For example, let's say you have AWS CloudTrail logs, and they are stored in the `_view=cloudtrail` partition in Sumo Logic. You can create a Run Time FER with the scope `_view=cloudtrail`. Creating a separate partition and using it as the scope for a run time field extraction ensures that the auto parsing logic only applies to the necessary partitions.
   :::
* **Parsed template** (Optional for Ingest Time rules).

@@ -111,7 +111,7 @@ parse "user=\"*\" action=\"*\" sessionId=\"*\"" as user, action, sessionid
| action | Action performed by the user | Delete |
| sessionId | Session ID for user action | 145623 |

-## Best practices for designing Rules
+## Best practices for designing rules

**Include the most accurate keywords to identify the subset of data from which you want to extract data.** Lock down the scope as tightly as possible to make sure it's extracting just the data you want, nothing more. Using a broader scope means that Sumo Logic will inspect more data for the fields you'd like to parse, which may mean that fields are extracted when you do not actually need them.

@@ -123,7 +123,7 @@ parse "user=\"*\" action=\"*\" sessionId=\"*\"" as user, action, sessionid

**Test the scope before creating the rule.** Make sure that you can extract fields from all messages you need to be returned in search results. Test them by running a potential rule as a search.

-**Make sure all fields appear in the Scope you define.** When Field Extraction is applied to data, all fields must be present to have any fields indexed; even if one field isn't found in a message, that message is dropped from the results. In other words, it's all or nothing. For multiple sets of fields that are somewhat independent, make two rules.
+**Make sure all fields appear in the scope you define.** When Field Extraction is applied to data, all fields must be present to have any fields indexed; even if one field isn't found in a message, that message is dropped from the results. In other words, it's all or nothing. For multiple sets of fields that are somewhat independent, make two rules.

**Reuse field names in multiple FERs if scope is distinct and separate and not matching same messages.** To save space and allow for more FERs within your 200 field limit, you can reuse the field names as long as they are used in non-overlapping FERs. 
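To make these practices concrete, here is a sketch of how the scope and parse expression of a tightly scoped Ingest Time rule might pair up, reusing the parse expression from the example above; the source category is hypothetical:

```sql
// Scope: a tight keyword expression, evaluated before the first pipe
_sourceCategory=prod/checkout "user="

// Parse expression: all three fields must match, or the message is skipped
parse "user=\"*\" action=\"*\" sessionId=\"*\"" as user, action, sessionid
```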
diff --git a/docs/manage/field-extractions/edit-field-extraction-rules.md b/docs/manage/field-extractions/edit-field-extraction-rules.md index 476f7f7b57..94e7a2cdf6 100644 --- a/docs/manage/field-extractions/edit-field-extraction-rules.md +++ b/docs/manage/field-extractions/edit-field-extraction-rules.md @@ -1,15 +1,15 @@ --- id: edit-field-extraction-rules title: Edit Field Extraction Rules -description: You can change Field Extraction Rules. +description: You can change field extraction rules. --- :::important You need the **Manage field extraction rules** [role capability](../users-roles/roles/role-capabilities.md) to edit a field extraction rule.  ::: -Changes to Field Extraction Rules are implemented immediately. Additionally, you can save a copy of a rule and make edits to the new version of the rule without making any changes to the original rule. +Changes to field extraction rules are implemented immediately. Additionally, you can save a copy of a rule and make edits to the new version of the rule without making any changes to the original rule. -1. [**New UI**](/docs/get-started/sumo-logic-ui). To access the Field Extraction Rules page, in the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**. +1. [**New UI**](/docs/get-started/sumo-logic-ui). To access the field extraction rules page, in the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Field Extraction Rules**. You can also click the **Go To...** menu at the top of the screen and select **Field Extraction Rules**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Field Extraction Rules**.
1. Find the rule in the table and click it. A window appears on the right of the table; click the **Edit** button.
1. Make changes as needed and click **Save** when done. 

diff --git a/docs/manage/field-extractions/fer-templates/index.md b/docs/manage/field-extractions/fer-templates/index.md
index 5be207b3fa..f15a331238 100644
--- a/docs/manage/field-extractions/fer-templates/index.md
+++ b/docs/manage/field-extractions/fer-templates/index.md
@@ -5,7 +5,7 @@ description: Instead of creating a parse expression, you can select a Template f
---
import useBaseUrl from '@docusaurus/useBaseUrl';

-FER Templates are provided for common applications such as Apache Access, Akamai Cloud Monitor, AWS ELB, and Microsoft IIS logs. Rather than creating a parse expression from scratch, you can select a Template from the list, preview it, and then click to apply it.
+FER templates are provided for common applications such as Apache Access, Akamai Cloud Monitor, AWS ELB, and Microsoft IIS logs. Rather than creating a parse expression from scratch, you can select a template from the list, preview it, and then click to apply it.

FER Templates

diff --git a/docs/manage/field-extractions/field-naming-convention.md b/docs/manage/field-extractions/field-naming-convention.md
index 99887b0620..e911ef0715 100644
--- a/docs/manage/field-extractions/field-naming-convention.md
+++ b/docs/manage/field-extractions/field-naming-convention.md
@@ -5,26 +5,26 @@ description: Sumo Logic recommends using the following naming convention for sta
---


-Sumo Logic recommends using the following naming convention for standard fields. This best practice creates standardization across your deployment for use with Field Extraction Rules (FER), Searches and Dashboards, makes it easier for users to recognize fields by their names, and can even improve search performance.
+Sumo Logic recommends using the following naming convention for standard fields. This best practice creates standardization across your deployment for use with Field Extraction Rules (FERs), searches, and dashboards; makes it easier for users to recognize fields by their names; and can even improve search performance.

-For example, if you create your own FER for Source IP, and at some point you want to count by Source IPs across multiple Sources, you can easily do so because you've used the same name for the field across all Sources. In your query, simply use:
+For example, if you create your own FER for source IP, and at some point you want to count by source IPs across multiple sources, you can easily do so because you've used the same name for the field across all sources. In your query, simply use:

```sql
| count by src_ip
```

-Another benefit of using the standard field naming convention is that [Sumo Logic Apps](/docs/get-started/apps-integrations) are created using this naming convention. So if you use it too, your queries will match those of the Sumo Logic Apps’ pre-configured searches and Dashboards.
+Another benefit of using the standard field naming convention is that [Sumo Logic apps](/docs/get-started/apps-integrations) are created using this naming convention. So if you use it too, your queries will match those of the Sumo Logic apps’ pre-configured searches and dashboards.
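Because the convention keeps the key identical everywhere, a single query can then span several data sources at once; the source categories below are hypothetical:

```sql
(_sourceCategory=prod/firewall or _sourceCategory=prod/web)
| count by src_ip
| sort by _count
```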
If you cannot use all the naming conventions for standard fields, we recommend that you at least use the field name conventions for the following: * Source Hosts * Destination Hosts * IP address -* user +* User -## Source Information +## Source information -| Field Name | Description | +| Field name | Description | |:--|:--| | src_host | Source Host (name or IP) | | src_interface | Source Interface | diff --git a/docs/manage/field-extractions/index.md b/docs/manage/field-extractions/index.md index 9bf5a08c4a..71fe250775 100644 --- a/docs/manage/field-extractions/index.md +++ b/docs/manage/field-extractions/index.md @@ -33,7 +33,7 @@ The Field Extraction Rules page displays the following information:  When hovering over a row in the table there are icons that appear on the far right for editing, disabling and deleting the rule. -* **Status** shows a checkmark in a green circle Check in green circle to indicate if the Rule is actively being applied or an exclamation mark in a red circle Exclamation in red circl to indicate if the Rule is disabled. +* **Status** shows a checkmark in a green circle Check in green circle to indicate if the rule is actively being applied or an exclamation mark in a red circle Exclamation in red circl to indicate if the rule is disabled. * **Rule Name** * **Applied At** indicates when the field extraction process occurs, either at Ingest or Run time. * **Scope**  diff --git a/docs/manage/fields.md b/docs/manage/fields.md index effa6fe6ca..51d27fb275 100644 --- a/docs/manage/fields.md +++ b/docs/manage/fields.md @@ -7,9 +7,9 @@ description: Manage fields in Sumo Logic to control how log data is parsed and o import useBaseUrl from '@docusaurus/useBaseUrl'; -Fields allow you to reference log data based on meaningful associations. They act as metadata tags that are assigned to your logs so you can search with them. Each field contains a key-value pair, where the field name is the key. Fields may be referred to as Log Metadata Fields. +Fields allow you to reference log data based on meaningful associations. They act as metadata tags that are assigned to your logs so you can search with them. Each field contains a key-value pair, where the field name is the key. Fields may be referred to as *log metadata fields*. -In addition to defining fields through [Field Extraction Rules](/docs/manage/field-extractions), you can define fields on data sent to Sumo by manually defining them on Sources and Collectors, as well as dynamically through HTTP headers and tags from Amazon EC2. +In addition to defining fields through [Field Extraction Rules](/docs/manage/field-extractions), you can define fields on data sent to Sumo Logic by manually defining them on sources and collectors, as well as dynamically through HTTP headers and tags from Amazon EC2. The order of precedence for field assignment from highest to lowest is: @@ -20,13 +20,13 @@ The order of precedence for field assignment from highest to lowest is: 1. Source 1. Collector -So, if you have a field defined at the Collector or Source level, and you create a FER against the same source of data with the same field name, the FER will win the field assignment. +So, if you have a field defined at the collector or source level, and you create a FER against the same source of data with the same field name, the FER will win the field assignment. -Any fields you want assigned to log data need to exist in a Fields schema. Each account has its own Fields schema that is available to manage in the Sumo web interface. 
When a field is defined and enabled in the Fields schema it is assigned to the appropriate log data as configured. If a field is sent to Sumo Logic but isn’t present or enabled in the schema, it’s ignored and marked as **Dropped**.
+Any fields you want assigned to log data need to exist in a fields schema. Each account has its own fields schema that is available to manage in the Sumo Logic web interface. When a field is defined and enabled in the fields schema, it is assigned to the appropriate log data as configured. If a field is sent to Sumo Logic but isn’t present or enabled in the schema, it’s ignored and marked as **Dropped**.

-Fields specified in field extraction rules are automatically added and enabled in your Fields schema.
+Fields specified in field extraction rules are automatically added and enabled in your fields schema.

-Field management is important to ensure search performance is maintained and you continue to have meaningful fields assigned to your data. You can manage fields defined through any of these methods at any time, to include deleting unneeded fields, see [manage fields](#manage-fields) for details.
+Field management is important to ensure search performance is maintained and you continue to have meaningful fields assigned to your data. You can manage fields defined through any of these methods at any time, including deleting unneeded fields; see [Manage fields](#manage-fields) for details.

import TerraformLink from '../reuse/terraform-link.md';

@@ -38,11 +38,11 @@ You can use Terraform to provide a field with the [`sumologic_field`](https://re

## About metrics sources, fields, and metadata

-Sumo Logic metrics sources also support tagging with fields defined in your Fields schema or other metadata that hasn’t been added to your schema. Here’s how it works:
+Sumo Logic metrics sources also support tagging with fields defined in your fields schema or other metadata that hasn’t been added to your schema.

-When creating or updating the configuration of an HTTP Source or a Collector that has an HTTP source, you assign it a field on the configuration page. If the field doesn’t exist in the schema, you are prompted whether or not you want to **Automatically activate all fields on save**. If you select that option, the field will be added to the schema and be applied to the logs collected by the Collector, and to metrics and logs collected by the HTTP Source. If you do not select **Automatically activate all fields on save**, the field will not be saved to your Fields schema, and the field will be applied only to the metrics collected by the HTTP Source.
+When creating or updating the configuration of an HTTP source or a collector that has an HTTP source, you assign it a field on the configuration page. If the field doesn’t exist in the schema, you are prompted whether or not you want to **Automatically activate all fields on save**. If you select that option, the field will be added to the schema and be applied to the logs collected by the collector, and to metrics and logs collected by the HTTP source. If you do not select **Automatically activate all fields on save**, the field will not be saved to your fields schema, and the field will be applied only to the metrics collected by the HTTP source.

-When creating or updating the configuration of a Streaming Metrics Source, a Host Metrics Source, or a Docker Source, you can assign it metadata on the source configuration page. 
Metadata fields you assign in this fashion to these metrics sources do not need to exist in your Fields schema and will not be added to the schema. +When creating or updating the configuration of a Streaming Metrics source, a Host Metrics source, or a Docker source, you can assign it metadata on the source configuration page. Metadata fields you assign in this fashion to these metrics sources do not need to exist in your fields schema and will not be added to the schema. ## Limitations @@ -52,55 +52,55 @@ When creating or updating the configuration of a Streaming Metrics Source, a Hos ::: fields-capacity * It can take up to 10 minutes for fields to start being assigned to your data. -* A Collector can have up to 10 fields. -* A Source can have up to 10 fields. +* A collector can have up to 10 fields. +* A source can have up to 10 fields. * An HTTP request is limited to 30 fields. * A field name (key) is limited to a maximum length of 255 characters. * A value is limited to a maximum length of 200 characters. * Fields cannot be used with [Live Tail](/docs/search/live-tail). -## Collector and Source fields +## Collector and source fields -Fields can be assigned to a Collector and Source using the **Fields** input table in the Sumo user interface when creating or editing a Collector or Source. +Fields can be assigned to a collector and source using the **Fields** input table in the Sumo Logic user interface when creating or editing a collector or source. 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Collection**. You can also click the **Go To...** menu at the top of the screen and select **Collection**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Collection**. -1. Create or find and select the Collector or Source you want to assign fields to. +1. Create or find and select the collector or source you want to assign fields to. 1. Click the **+Add Field** link in the **Fields** section. Define the fields you want to associate; each field needs a name (key) and a value. - * green check circle.png A green circle with a check mark is shown when the field exists and is enabled in the Fields table schema. - * orange exclamation point.png An orange triangle with an exclamation point is shown when the field doesn't exist in the Fields table schema. In this case, you'll see an option to automatically add or enable the nonexistent fields to the Fields table schema. If a field is sent to Sumo Logic but isn’t present or enabled in the schema, it’s ignored and marked as **Dropped**. + * green check circle.png A green circle with a check mark is shown when the field exists and is enabled in the fields table schema. + * orange exclamation point.png An orange triangle with an exclamation point is shown when the field doesn't exist in the fields table schema. In this case, you'll see an option to automatically add or enable the nonexistent fields to the fields table schema. If a field is sent to Sumo Logic but isn’t present or enabled in the schema, it’s ignored and marked as **Dropped**. 1. **Automatically activate all fields on save**.  If you click **Automatically activate all fields on save**: - * The field will be saved to your Fields schema. - * The field will be applied to logs collected by the Collector or Source. - * If you are adding the field to an HTTP Source, or to a Collector that has an HTTP Source, the field will be applied to the metrics collected by the source. + * The field will be saved to your fields schema. + * The field will be applied to logs collected by the collector or source. + * If you are adding the field to an HTTP source, or to a collector that has an HTTP source, the field will be applied to the metrics collected by the source. If you do not click **Automatically activate all fields on save**: - * The field will be *not* be saved to your Fields schema - * The field will be applied to logs collected by the Collector or Source, but because the field won’t be added to your Fields schema, it will be dropped by Sumo Logic when logs with that field are ingested. - * If you are adding the field to an HTTP Source, or to a Collector that has an HTTP Source, the field will be applied to the metrics collected by the source. + * The field will *not* be saved to your fields schema. + * The field will be applied to logs collected by the collector or source, but because the field won’t be added to your fields schema, it will be dropped by Sumo Logic when logs with that field are ingested. + * If you are adding the field to an HTTP source, or to a collector that has an HTTP source, the field will be applied to the metrics collected by the source. 1. Click **Save**. Edit collector fields name -In the above example, we have created a new field called `cluster` and set the value to `k8s.dev`. With this configuration, any logs sent to this Collector will now have this key-value pair associated with it. +In the above example, we have created a new field called `cluster` and set the value to `k8s.dev`. With this configuration, any logs sent to this collector will now have this key-value pair associated with it. 
With this association, you can search for `cluster=k8s.dev` to return your logs.
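A minimal sketch of such a search (only the `cluster=k8s.dev` expression comes from the example above; the aggregation clause is illustrative):

```sql
cluster=k8s.dev
| count by _sourceHost
```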
Collector field search results -### Using Collector API +### Using collector API -Use the `fields` parameter with the [Collector API](/docs/api/collector-management) to define fields on a Collector or Source. +Use the `fields` parameter with the [Collector API](/docs/api/collector-management) to define fields on a collector or source. | Parameter | Type | Required? | Description | Access | |:--|:--|:--|:--|:--| -| fields | JSON Object | No | JSON map of key-value fields (metadata) to apply to the Collector or Source. | Modifiable | +| fields | JSON Object | No | JSON map of key-value fields (metadata) to apply to the collector or source. | Modifiable | -The following JSON is an example configuration of a Hosted Collector with the fields parameter: +The following JSON is an example configuration of a Hosted collector with the fields parameter: ```json { @@ -116,15 +116,15 @@ The following JSON is an example configuration of a Hosted Collector with the fi } ``` -### Using Local Configuration +### Using local configuration -Installed Collectors can use JSON files to configure their Sources when using [Local Configuration File Management](/docs/send-data/use-json-configure-sources/local-configuration-file-management). Use the `fields` parameter in your JSON configuration to define fields on a Source. +Installed collectors can use JSON files to configure their sources when using [local configuration file management](/docs/send-data/use-json-configure-sources/local-configuration-file-management). Use the `fields` parameter in your JSON configuration to define fields on a source. | Parameter | Type | Required? | Description | Access | |:--|:--|:--|:--|:--| -| fields | JSON Object | No | JSON map of key-value fields (metadata) to apply to the Collector or Source. | Modifiable | +| fields | JSON Object | No | JSON map of key-value fields (metadata) to apply to the collector or source. | Modifiable | -The following JSON is an example configuration of a Local File Source with the fields parameter: +The following JSON is an example configuration of a Local File source with the fields parameter: ```json { @@ -153,19 +153,19 @@ The following JSON is an example configuration of a Local File Source with the f } ``` -### HTTP Source fields +### HTTP source fields -When uploading log data with HTTP Sources you can pass fields in two +When uploading log data with HTTP sources you can pass fields in two ways, * with the [X-Sumo-Fields HTTP header](#x-sumo-fields-http-header). -* enabling [Extended HTTP Metadata Collection](#extended-http-metadata-collection) on your Source. +* enabling [Extended HTTP Metadata Collection](#extended-http-metadata-collection) on your source. You can use both methods together. If there is a name collision between a given header and a value passed in X-Sumo-Fields, X-Sumo-Fields takes precedence. -Any fields passed with your data need to exist in your Fields schema defined in Sumo. Any fields not defined in Sumo that are passed through a header are dropped. See how to define fields in the [manage fields](#manage-fields) section. +Any fields passed with your data need to exist in your fields schema defined in Sumo Logic. Any fields not defined in Sumo Logic that are passed through a header are dropped. See how to define fields in the [Manage fields](#manage-fields) section. -See [how to upload logs to an HTTP Source](/docs/send-data/hosted-collectors/http-source/logs-metrics). +See [how to upload logs to an HTTP source](/docs/send-data/hosted-collectors/http-source/logs-metrics). 
#### X-Sumo-Fields HTTP header @@ -175,13 +175,13 @@ Your fields need to be in a comma separated list of key-value pairs. For example curl -v -X POST -H 'X-Sumo-Fields:environment=dev,cluster=k8s' -T /file.txt ``` -#### Extended HTTP Metadata Collection +#### Extended HTTP metadata collection -When creating or editing your HTTP Source that will receive log data add the field `_convertHeadersToFields` with a value of `true`. This field needs to be added to your Fields schema to work. +When creating or editing your HTTP source that will receive log data, add the field `_convertHeadersToFields` with a value of `true`. This field needs to be added to your fields schema to work. Convert headers to fields -With this field set on your Source, headers are processed as metadata fields. For example, a cURL command posting data with custom fields would look like: +With this field set on your source, headers are processed as metadata fields. For example, a cURL command posting data with custom fields would look like: ```bash curl -v -X POST -H 'environment: dev' -H 'cluster: k8s' -T /file.txt @@ -193,9 +193,9 @@ The following headers are reserved and can not be used: X-Sumo-Category, X-Sum ### Tags from EC2 -Create a Sumo Logic [AWS Metadata Source](/docs/send-data/hosted-collectors/amazon-aws/aws-metadata-tag-source.md) to collect custom tags from EC2 instances running on AWS. An Installed Collector automatically pulls [AWS instance identity documents](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html) (IMDSv2) from instances to get their accountID, availabilityZone, instanceId, instanceType, and region. +Create a Sumo Logic [AWS Metadata source](/docs/send-data/hosted-collectors/amazon-aws/aws-metadata-tag-source.md) to collect custom tags from EC2 instances running on AWS. An Installed collector automatically pulls [AWS instance identity documents](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html) (IMDSv2) from instances to get their accountID, availabilityZone, instanceId, instanceType, and region. -Logs ingested by Installed Collectors on EC2 instances will be tagged as long as the tag, including instance information tags, exists in the organization's Fields schema. See how to define fields in the [manage fields](#manage-fields) section. EC2 resource tags take precedence over EC2 instance information. Only one AWS Metadata Source is required to collect tags from multiple hosts. +Logs ingested by Installed collectors on EC2 instances will be tagged as long as the tag, including instance information tags, exists in the organization's fields schema. See how to define fields in the [Manage fields](#manage-fields) section. EC2 resource tags take precedence over EC2 instance information. Only one AWS Metadata source is required to collect tags from multiple hosts. Tags are returned in your search results and can be referenced in queries. For information about assigning tags to EC2 instances, see [Tagging Your Amazon EC2 Resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) in AWS help.  @@ -203,9 +203,9 @@ Tags are returned in your search results and can be referenced in queries. For i Fields can be used in the following ways: -* Log [Search page](/docs/search). Use the key-value pair as a keyword search expression (before the first pipe, \| ). +* [Log Search](/docs/search). Use the key-value pair as a keyword search expression (before the first pipe, \| ). 
* [Role Based Access Control](/docs/manage/users-roles/roles) (RBAC). Fields can be used in role search filters to control access to data. -* [Partitions](/docs/manage/partitions), [Scheduled Views](/docs/manage/scheduled-views), and [Field Extraction Rules](/docs/manage/field-extractions). Fields can be used in the scope of Partitions, Scheduled Views, and Field Extraction Rules. +* [Partitions](/docs/manage/partitions), [Scheduled Views](/docs/manage/scheduled-views), and [Field Extraction Rules](/docs/manage/field-extractions). Fields can be used in the scope of partitions, Scheduled Views, and Field Extraction Rules. :::note Fields cannot be used with [Live Tail](/docs/search/live-tail). ::: @@ -237,21 +237,18 @@ The Fields page displays the following information:  * **Field Name** is the name of the field, known as the key in the key-value pair. * **Data Type** shows the data type of the field. * **Field Extraction Rules** shows the number of Field Extraction Rules that reference the field. -* **Role Based Access** **Control** shows the number of Roles using a search filter that references the field. -* **Partitions** shows the number of Partitions that reference the field. -* **Collectors** shows the number of Collectors that reference the field. (Available when viewing custom fields.) -* **Sources** shows the number of Sources that reference the field. (Available when viewing custom fields.) +* **Role Based Access Control** shows the number of roles using a search filter that references the field. +* **Partitions** shows the number of partitions that reference the field. +* **Collectors** shows the number of collectors that reference the field. (Available when viewing custom fields.) +* **Sources** shows the number of sources that reference the field. (Available when viewing custom fields.) * **Fields Capacity** (bottom of table) shows how many fields your account is using, out of the total available for use. On the Fields page you can: - * Click **+ Add** to add fields. -* Search fields * The dropdown next to the add button lets you toggle between the following: - - * **Existing -** **Built-in Fields**. These are [metadata fields created by Sumo Logic](../search/get-started-with-search/search-basics/built-in-metadata.md) and cannot be modified. - * **Existing - Custom Fields**. These fields were either created by FERs or users. - * **Dropped Fields**. These fields are being dropped due to not existing in the fields table. - +* Search fields. The dropdown next to the add button lets you toggle between the following: + * **Existing - Built-in Fields**. These are [metadata fields created by Sumo Logic](../search/get-started-with-search/search-basics/built-in-metadata.md) and cannot be modified. + * **Existing - Custom Fields**. These fields were either created by FERs or users. + * **Dropped Fields**. These fields are being dropped because they do not exist in the fields table. * Disable fields * Delete fields  @@ -263,7 +260,7 @@ For the fields listed, select a row to view its details. A details pane appears #### Add field -Adding a field will define it in the Fields schema allowing it to be assigned as metadata to your logs. +Adding a field will define it in the fields schema, allowing it to be assigned as metadata to your logs. 1. Click the **+ Add** button on the top right of the table. A panel named **Add Field** appears to the right of the fields table. 1. Input a field name and click **Save**. 
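If you manage your fields schema as code, the **Add field** step above can also be scripted. A sketch using the Field Management API (the endpoint, payload shape, and base URL here are assumptions for illustration; check the API reference for your deployment):

```bash
# Hypothetical sketch: add a custom field named "cluster" to the fields schema.
# ACCESS_ID, ACCESS_KEY, and the api.sumologic.com base URL are illustrative assumptions.
curl -s -u "$ACCESS_ID:$ACCESS_KEY" \
  -X POST "https://api.sumologic.com/api/v1/fields" \
  -H "Content-Type: application/json" \
  -d '{"fieldName": "cluster"}'
```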
diff --git a/docs/manage/health-events.md index cc58b12bd4..4094cf7f5a 100644 --- a/docs/manage/health-events.md +++ b/docs/manage/health-events.md @@ -74,7 +74,7 @@ Each health event log has common keys that categorize it to a product area and p | subsystem | The product area of the event. | String | | resourceIdentity | This includes any unique identifiers, names, and the type of the object associated with the event. | JSON object of Strings | -## Configure Scheduled Search +## Configure scheduled search Configuring a scheduled search for the selected health event sends timely alerts to all recipients each time the health event is triggered. To configure, follow these steps: @@ -101,7 +101,7 @@ _index=sumologic_system_events "0000000007063B25" | where eventType = "Health-Change" AND resourceId = "0000000007063B25" AND eventName="LookupsLimitApproaching" ``` -## View Health Events +## View health events The health events table allows you to easily view and investigate problems that occur while ingesting data to Sumo Logic. On the health events table, you can search, filter, and sort incidents by key aspects like severity, resource name, event name, resource type, and opened since date. @@ -111,7 +111,7 @@ It may take up to 15 minutes for a 90% usage breach for Lookup Tables, Partition 1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). Go to **Manage Data > Monitoring > Health Events**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Health Events**.
health-events-table 1. Click on the required row to view the details of a health event.
health-events-detail - - **Create Scheduled Search**. Click this button to get alerts for specific health events. The unique identifier of the resource type is used in the query. See [Schedule a Search](../alerts/scheduled-searches/schedule-search.md) for details. + - **Create Scheduled Search**. Click this button to get alerts for specific health events. The unique identifier of the resource type is used in the query. See [Create a Scheduled Search](../alerts/scheduled-searches/schedule-search.md) for details. - Under the **More Actions** menu you can select: * **Event History** to run a search against the **sumologic_system_events** partition to view all of the related event logs. * **View Object** to view the resource in detail related to the event. @@ -131,7 +131,7 @@ It may take up to 15 minutes for a 90% usage breach for Lookup Tables, Partition - **Error Info**. Detailed information about the event. This may include error context and suggested corrective actions. - **Minutes Since Last Heartbeat**. The number of minutes that have elapsed since the system last received a heartbeat signal from the resource. A higher number may indicate the resource is offline or unresponsive. This field is only available for the *Collector* resource type. -## View Health Events in Collection page +## View health events on the Collection page A **Health** column on the Collection page shows color-coded healthy, error, and warning states for Collectors and Sources to quickly determine the health of your Collectors and Sources.
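As an aside, the **Event History** action above runs a search against the `sumologic_system_events` partition. A sketch of a query you could adapt manually, using the `eventType` and `eventName` keys from the health event schema (the aggregation is illustrative):

```sql
_index=sumologic_system_events
| where eventType = "Health-Change"
| count by eventName
```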
Collection-health-column diff --git a/docs/manage/index.md index 6c422b449c..7a77e21206 100644 --- a/docs/manage/index.md +++ b/docs/manage/index.md @@ -1,7 +1,7 @@ --- slug: /manage title: Manage Account -description: Manage user accounts, Collectors and Sources, security, SEO, and other administrative details. +description: Manage user accounts, collectors and sources, security, SSO, and other administrative details. --- import useBaseUrl from '@docusaurus/useBaseUrl'; @@ -9,7 +9,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl'; icon :::tip -See [Collection](/docs/send-data/collection) to learn about Collectors, Sources, and processing rules. +See [Collection](/docs/send-data/collection) to learn about collectors, sources, and processing rules. ::: This topic describes features and options that give you administration over accounts, roles, collectors, content sharing, field extraction rules, and much more. @@ -30,7 +30,7 @@ This topic describes features and options that give you administration over acco
icon
Partitions
-Accelerate the search process by allowing Admins to filter a subset of the log messages in an index.
+Accelerate the search process by allowing admins to filter a subset of the log messages in an index.
@@ -42,7 +42,7 @@ This topic describes features and options that give you administration over acco
icon
Health Events
-Monitor the health of your Collectors and Sources.
+Monitor the health of your collectors and sources.
diff --git a/docs/manage/ingestion-volume/collection-status-page.md b/docs/manage/ingestion-volume/collection-status-page.md index 970552ca0c..b2c8296670 100644 --- a/docs/manage/ingestion-volume/collection-status-page.md +++ b/docs/manage/ingestion-volume/collection-status-page.md @@ -1,27 +1,27 @@ --- id: collection-status-page title: Collection Status Page -description: Provides a visual snapshot of the message history for your deployment, and a message volume histogram for each Collector. +description: Provides a visual snapshot of the message history for your deployment, and a message volume histogram for each collector. --- import useBaseUrl from '@docusaurus/useBaseUrl'; -The Status page provides a message volume history for your account, as well as a message volume histogram for each Collector, giving you immediate visual feedback about traffic spikes or collection issues. To see statistics for any bar in the histogram, hover your mouse pointer over the area of interest. +The Status page provides a message volume history for your account, as well as a message volume histogram for each collector, giving you immediate visual feedback about traffic spikes or collection issues. To see statistics for any bar in the histogram, hover your mouse pointer over the area of interest. -When you first install a Collector it is common to configure Sources to collect some historical data, rather than from the moment of installation. In this case, the status page shows a spike in message volume and then levels out as collection reaches a steady state. For example, a local log file can contain millions of log messages. When the Collector is initialized, it quickly gathers all those logs and sends them to Sumo Logic resulting in a traffic spike. After the initial collection, the Collector continues to tail the file, reading from the end of the file as new entries are created, and sends a smaller number of new log messages. +When you first install a collector it is common to configure sources to collect some historical data, rather than from the moment of installation. In this case, the status page shows a spike in message volume and then levels out as collection reaches a steady state. For example, a local log file can contain millions of log messages. When the collector is initialized, it quickly gathers all those logs and sends them to Sumo Logic resulting in a traffic spike. After the initial collection, the collector continues to tail the file, reading from the end of the file as new entries are created, and sends a smaller number of new log messages. Status tab -* **A.** Select to show all, running, or stopped Collectors. -* **B.** Select how many columns of Collectors are displayed. +* **A.** Select to show all, running, or stopped collectors. +* **B.** Select how many columns of collectors are displayed. * **C.** Select the time range of data volume to view. [**New UI**](/docs/get-started/sumo-logic-ui). To view the status page, In the Sumo Logic main menu select **Data Management**, and then under **Data Collection** select **Status**. You can also click the **Go To...** menu at the top of the screen and select **Status**. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). To view the status page, in the main Sumo Logic menu select **Manage Data > Collection > Status**. -## Change the scale or timeframe for a Collector +## Change the scale or timeframe for a collector -For each Collector, you can change the message volume scale so that variations in volume are easier to see. 
You can also change the time range for each Collector to investigate the stream volume for a single Collector. When a Collector x or y axis is not aligned with all others, the background color changes to blue. +For each collector, you can change the message volume scale so that variations in volume are easier to see. You can also change the time range for each collector to investigate the stream volume for a single collector. When a collector's x- or y-axis is not aligned with all the others, the background color changes to blue. -To return to an aligned scale across all Collectors, in the total message volume area, click the link to **Align all views below**. To return an individual view to the same scale as other Collectors, select the **Same scale across view** check box. +To return to an aligned scale across all collectors, in the total message volume area, click the link to **Align all views below**. To return an individual view to the same scale as other collectors, select the **Same scale across view** check box. diff --git a/docs/manage/ingestion-volume/data-volume-index/index.md index 3f3f2358ff..15b0d5eb1d 100644 --- a/docs/manage/ingestion-volume/data-volume-index/index.md +++ b/docs/manage/ingestion-volume/data-volume-index/index.md @@ -11,7 +11,7 @@ The Data Volume Index gives you visibility into how much data you are sending to The Data Volume Index provides data for logs and metrics: * **Logs and Tracing.** Ingest volume in bytes and the number of log messages. Tracing ingest volume in billed bytes and spans count. See [Log and Tracing Data Volume Index](log-tracing-data-volume-index.md) for details. -* **Metrics.** Ingest volume measured in data points. See [Metrics Ingest Data Volume Index](metrics-data-volume-index.md) for details. +* **Metrics.** Ingest volume measured in data points. See [Metrics Data Volume Index](metrics-data-volume-index.md) for details. If you are a user of Credits package accounts, the Data Volume Index should be manually enabled by an administrator by toggling the **Enable Granular Data Tracking** button. The index then begins populating. A set of messages within the index is created every five minutes. The data does not backfill and is provided to the index only when the option is enabled. @@ -28,14 +28,14 @@ _index=sumologic_volume ``` :::important -Creating an Index typically adds a nominal amount of data to your overall volume (approximately one to two percent) when pre-aggregated. Depending on your Sumo Logic account type and subscription, this data will count against your data volume quota. +Creating an index typically adds a nominal amount of data to your overall volume (approximately one to two percent) when pre-aggregated. Depending on your Sumo Logic account type and subscription, this data will count against your data volume quota. ::: -## Granular Data Tracking +## Granular data tracking -Granular Data Tracking is a part of usage management that allows you to proactively manage your systems’ behavior and to fine tune your data ingest with respect to the data plan for your Sumo Logic subscription. This should be manually enabled by an administrator if you are a user of Credits package accounts and this will be enabled by default for Flex package accounts. A set of messages within the index is created every five minutes. The data does not backfill and is provided to the index only when the option is enabled. 
+Granular data tracking is a part of usage management that allows you to proactively manage your systems’ behavior and fine-tune your data ingest against the data plan for your Sumo Logic subscription. An administrator must enable it manually for Credits package accounts; it is enabled by default for Flex package accounts. A set of messages within the index is created every five minutes. The data does not backfill and is provided to the index only when the option is enabled. -### Disable Granular Data Tracking +### Disable granular data tracking 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Administration**, and then under **Account** select **Account Overview**. You can also click the **Go To...** menu at the top of the screen and select **Account Overview**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Administration > Account**. 1. Click the gear icon gear-icon-accounts-page in the top left panel of the **Account Overview** page. diff --git a/docs/manage/ingestion-volume/data-volume-index/log-tracing-data-volume-index.md index 2ed4014f7d..1c4a681aab 100644 --- a/docs/manage/ingestion-volume/data-volume-index/log-tracing-data-volume-index.md +++ b/docs/manage/ingestion-volume/data-volume-index/log-tracing-data-volume-index.md @@ -6,7 +6,7 @@ description: The Data Volume Index is populated with a set of log messages that import useBaseUrl from '@docusaurus/useBaseUrl'; -The data volume index is populated with a set of log messages every five minutes. The messages contain information on how much data (by bytes and messages count) your account is ingesting. Your data volume is calculated based on when your logs were received, in Sumo this timestamp is stored with the `_receiptTime` [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) field. Each log message includes information based on one of the following index source categories. +The Data Volume Index is populated with a set of log messages every five minutes. The messages contain information on how much data (by bytes and messages count) your account is ingesting. Your data volume is calculated based on when your logs were received. In Sumo Logic this timestamp is stored with the `_receiptTime` [metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) field. Each log message includes information based on one of the following index source categories. | Index Log Type | Index Source Category | |:--------------------|:--------------------------------| @@ -24,15 +24,15 @@ The data volume index is populated with a set of log messages every five minutes | View | `view_volume` | | SourceCategory | `view_and_extractedAndCollectedFieldSize_volume` | -You can query the data volume index just like any other message using the Sumo Logic search page. To see the data created within the data volume index, when you search, specify the `_index` metadata field with a value of `sumologic_volume`. For more information, see [Search Metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata). +You can query the data volume index just like any other message using the Sumo Logic search page. To see the data created within the data volume index, when you search, specify the `_index` metadata field with a value of `sumologic_volume`. For more information, see [Built-in Metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata). -## Sumo Logic App for Data Volume +## Sumo Logic app for data volume -Sumo Logic provides an application that utilizes the data volume index to see your account's volume usage as a glance. For details, see [Data Volume app](/docs/integrations/sumo-apps/data-volume). +Sumo Logic provides an application that uses the data volume index to show your account's volume usage at a glance. For details, see [Sumo Logic Data Volume App](/docs/integrations/sumo-apps/data-volume). -## Known Issue +## Known issue -There is a known issue when searching against `_sourceCategory` values where scheduled views show up blank. This causes results to be returned with numbers as the _sourceCategory values. 
+There is a known issue when searching against `_sourceCategory` values where scheduled views show up blank. This causes results to be returned with numbers as the `_sourceCategory` values. For example, you would see: @@ -41,7 +41,7 @@ For example, you would see: "count":353325 ``` -In this case, the _sourceCategory returns `2862`, which is the actual size of the default index from the scheduled view. +In this case, the `_sourceCategory` returns `2862`, which is the actual size of the default index from the scheduled view. ## Query the Data Volume Index @@ -64,7 +64,7 @@ _index=sumologic_volume AND _sourceCategory=collector_and_tier_volume If the data volume index is not enabled, a search will not produce any results. ::: -## Data Volume Index Message Format +## Data Volume Index message format The data volume index messages are JSON formatted messages that contain parent objects for each source data point, and child objects that detail the message size and count for each parent. @@ -84,7 +84,7 @@ For example, a single message for the "Collector" volume data may look similar t ## Examples -**Volume for Each Category** +### Volume for each category This example query will return the volume for each Source Category by data tier. @@ -101,9 +101,9 @@ would produce results such as: Volume for each category -**Volume for Each Collector by Tier** +### Volume for each collector by tier -This example query will return the volume for each Collector. +This example query will return the volume for each collector. ```sql _index=sumologic_volume _sourceCategory = "collector_and_tier_volume" @@ -113,9 +113,9 @@ _index=sumologic_volume _sourceCategory = "collector_and_tier_volume" | sum(gbytes) as gbytes by collector,dataTier ``` -**Volume for a Specific Source** +### Volume for a specific source -The following query returns the message volume for a specific Source. The Source name and Data tier can be supplied within a JSON operation to get the data for that Source. +The following query returns the message volume for a specific source. The source name and data tier can be supplied within a JSON operation to get the data for that source. ```sql _index=sumologic_volume _sourceCategory = "source_and_tier_volume" @@ -127,9 +127,9 @@ _index=sumologic_volume _sourceCategory = "source_and_tier_volume" | fields gbytes ``` -**Volume for a Specific Collector** +### Volume for a specific collector -The following query returns the message volume for a specific Collector. The Collector name and Data tier can be supplied within a JSON operation to get the data for that Collector. +The following query returns the message volume for a specific collector. The collector name and data tier can be supplied within a JSON operation to get the data for that collector. ```sql _index=sumologic_volume _sourceCategory = "collector_and_tier_volume" @@ -141,9 +141,9 @@ _index=sumologic_volume _sourceCategory = "collector_and_tier_volume" | fields gbytes ``` -**Volume for Each Source Host** +### Volume for each source host -The following query returns the message volume for each Source Host. The sourcehost name and data tier can be supplied within a JSON operation to get the data for that sourcehost. +The following query returns the message volume for each source host. The source host name and data tier can be supplied within a JSON operation to get the data for that source host. 
```sql _index=sumologic_volume _sourceCategory = "sourcehost_and_tier_volume" @@ -155,9 +155,9 @@ _index=sumologic_volume _sourceCategory = "sourcehost_and_tier_volume" | fields gbytes ``` -**Volume for the Default Index** +### Volume for the default index -The following query returns the message volume for the Default Index. The data tier can be supplied with a JSON operation to filter results of that tier. +The following query returns the message volume for the default index. The data tier can be supplied with a JSON operation to filter results of that tier. ```sql _index=sumologic_volume _sourceCategory = "sourcehost_and_tier_volume" @@ -173,9 +173,7 @@ _index=sumologic_volume _sourceCategory = "sourcehost_and_tier_volume" Sumo Logic populates the Tracing Data Volume Index with a set of JSON-formatted messages every five minutes. The messages contain the volume of tracing billed bytes and span count of Tracing data that your account is ingesting.  -You can query the index to: - -* Get the total tracing data volume (billed bytes/spans count) ingested by collector, source, source name, source category, or source host. +You can query the index to get the total tracing data volume (billed bytes/spans count) ingested by collector, source, source name, source category, or source host. ### Message format @@ -244,7 +242,7 @@ This query produces results like these: #### Tracing volume for a specific collector -This query returns the tracing volume for a specific Collector. The Collector name can be supplied within a JSON operation to get the data for that Collector. +This query returns the tracing volume for a specific collector. The collector name can be supplied within a JSON operation to get the data for that collector. ```sql _index=sumologic_volume _sourceCategory="collector_tracing_volume" @@ -257,7 +255,7 @@ _index=sumologic_volume _sourceCategory="collector_tracing_volume" #### Query for tracing ingestion outliers -This query runs against the tracing volume index and uses the [*outlier*](/docs/search/search-query-language/search-operators/outlier) operator to find timeslices in which your tracing ingestion in billed bytes or span count was greater than the running average by a statistically significant amount. +This query runs against the tracing volume index and uses the [outlier](/docs/search/search-query-language/search-operators/outlier) operator to find timeslices in which your tracing ingestion in billed bytes or span count was greater than the running average by a statistically significant amount. ```sql _index=sumologic_volume _sourceCategory=sourcecategory_tracing_volume @@ -272,7 +270,7 @@ The suggested time range for this query is 7 days. Timeslices can always be redu #### Query for tracing ingestion prediction  -This query runs against the tracing volume index and uses the [*predict*](/docs/search/search-query-language/search-operators/predict) operator to predict future values. +This query runs against the tracing volume index and uses the [predict](/docs/search/search-query-language/search-operators/predict) operator to predict future values. ```sql _index=sumologic_volume _sourceCategory=sourcecategory_tracing_volume @@ -288,4 +286,4 @@ The suggested time range for this query is 7 days. Timeslices can always be redu ### Index retention period -By default, the retention period of the Data Volume index is the same as the retention period of your Default Partition. You can change the retention period by editing the partition that contains the index, `sumologic_volume`. 
For more information, see [Edit a Partition](/docs/manage/partitions/data-tiers/create-edit-partition). +By default, the retention period of the Data Volume index is the same as the retention period of your default partition. You can change the retention period by editing the partition that contains the index, `sumologic_volume`. For more information, see [Create and Edit a Partition](/docs/manage/partitions/data-tiers/create-edit-partition). diff --git a/docs/manage/ingestion-volume/data-volume-index/metrics-data-volume-index.md index 24a1714355..3f0fa940c9 100644 --- a/docs/manage/ingestion-volume/data-volume-index/metrics-data-volume-index.md +++ b/docs/manage/ingestion-volume/data-volume-index/metrics-data-volume-index.md @@ -8,9 +8,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl'; Sumo Logic populates the Metrics Data Volume Index with a set of JSON-formatted messages every five minutes. The messages contain the volume of metric data points your account is ingesting.  -You can query the index to: - -* Get the total metric data volume (data points) ingested by collector, source, source name, source category, or source host.  +You can query the index to get the total metric data volume (data points) ingested by collector, source, source name, source category, or source host.  :::note You cannot query the index to get storage credits. For information about storage credits, see [Sumo Logic Credits Accounts](/docs/manage/manage-subscription/sumo-logic-credits-accounts). ::: @@ -86,7 +84,7 @@ It returns results like these: ### Metric volume for a specific collector -This query returns the metric volume for a specific Collector. The Collector name can be supplied within using the where operator to get the ingest data for a specific Collector. +This query returns the metric volume for a specific collector. The collector name can be supplied using the `where` operator to get the ingest data for a specific collector. ```sql _index=sumologic_volume _sourceCategory="collector_metrics_volume" @@ -99,7 +97,7 @@ _index=sumologic_volume _sourceCategory="collector_metrics_volume" ### Query for metric ingestion outliers  -This query runs against the metrics volume index and uses the [outlier](/docs/search/search-query-language/search-operators/manually-cast-data-string-number) operator to find timeslices in which your metric ingestion in DPM was greater than the running average by a statistically significant amount.  +This query runs against the metrics volume index and uses the [outlier](/docs/search/search-query-language/search-operators/outlier/) operator to find timeslices in which your metric ingestion in DPM was greater than the running average by a statistically significant amount.  ```sql _index=sumologic_volume _sourceCategory=sourcecategory_metrics_volume diff --git a/docs/manage/ingestion-volume/index.md index 2ffdf1b756..e343eaab8c 100644 --- a/docs/manage/ingestion-volume/index.md +++ b/docs/manage/ingestion-volume/index.md @@ -14,7 +14,7 @@ In this section, we'll introduce the following concepts:
icon
Log Ingestion
-Learn how the logs will be ingested across all Collectors.
+Learn how the logs will be ingested across all collectors.
diff --git a/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md b/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md index c2530f25ff..5cd9ecf025 100644 --- a/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md +++ b/docs/manage/ingestion-volume/ingest-budgets/daily-volume.md @@ -1,7 +1,7 @@ --- id: daily-volume title: Daily Volume -description: Control the capacity of daily log ingestion volume sent to Sumo Logic from Collectors. +description: Control the capacity of daily log ingestion volume sent to Sumo Logic from collectors. --- import useBaseUrl from '@docusaurus/useBaseUrl'; @@ -35,36 +35,32 @@ An ingest budget's capacity usage is logged in the Audit Index when the audit th * There is a limit of 100 ingest budgets. * Bytes are calculated in base 2 (binary format, 1024 based). -* Ingest Budgets do not affect [throttling](/docs/manage/ingestion-volume/log-ingestion).  +* Ingest Budgets do not affect [throttling](/docs/manage/ingestion-volume/log-ingestion/#log-throttling).  * [Traces](/docs/apm/traces) are not calculated and are not supported. -* Ingest budgets require the **Manage Ingest Budgets** [role capability](/docs/manage/users-roles/roles/role-capabilities). -* Fields assigned with Field Extraction Rules are not supported in the **scope** of an Ingest Budget. -* **`_budget`** is a reserved keyword used by legacy ingest budgets, do not use this reserved field when creating a new V2 ingest budget. +* Ingest budgets require the Manage Ingest Budgets [role capability](/docs/manage/users-roles/roles/role-capabilities/#data-management). +* Fields assigned with Field Extraction Rules are not supported in the scope of an Ingest Budget. +* `_budget` is a reserved keyword used by legacy ingest budgets. Do not use this reserved field when creating a new V2 ingest budget. * Data is not automatically recovered or ingested later once the capacity tracking is reset. -* In the **scope**, do not wrap values in quotes, unless the value explicitly has quotes. For example, if you want to assign the scope with `_collector` and the name of the Collector is `CloudTrail`, you would assign the scope as `_collector=CloudTrail` instead of `_collector="CloudTrail"`. +* In the scope, do not wrap values in quotes, unless the value explicitly has quotes. For example, if you want to assign the scope with `_collector` and the name of the collector is `CloudTrail`, you would assign the scope as `_collector=CloudTrail` instead of `_collector="CloudTrail"`. ## Budget assignment -The **Scope** feature allows you to assign ingest budgets to your log data using one of the following options: +The scope feature allows you to assign ingest budgets to your log data using one of the following options: -* A field that is enabled in the [Fields](https://github.com/docs/manage/fields) table. Note that fields created by [Field Extraction Rules](/docs/manage/field-extractions/create-field-extraction-rule) are not included in this option. +* A field that is enabled in the [Fields](/docs/manage/fields/) table. Note that fields created by [Field Extraction Rules](/docs/manage/field-extractions/create-field-extraction-rule) are not included in this option. * One of the following built-in metadata fields: `_collector`, `_source`, `_sourceCategory`, `_sourceHost`, or `_sourceName`. -The value supports a single wildcard, such as `_sourceCategory=prod*payment`. 
- -For example, a **Scope** expression like `_sourceCategory=/dev/catalog/*` implies that all incoming logs ingested into Sumo Logic with a matching `_sourceCategory` will fall under the scope of the given budget. - -See more [budget assignment examples](#budget-assignment-examples) below and review the [rules](#rules) above. +The value supports a single wildcard, such as `_sourceCategory=prod*payment`. For example, a scope expression like `_sourceCategory=/dev/catalog/*` implies that all incoming logs ingested into Sumo Logic with a matching `_sourceCategory` will fall under the scope of the given budget. See more [budget assignment examples](#budget-assignment-examples) below and review the [rules](#rules) above. [V2 ingest budgets](/docs/api/ingest-budget-v2/) provide you the ability to assign budgets to your log data by either [fields](/docs/manage/fields) or the following [built in metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) fields, `_collector`, `_source`, `_sourceCategory`, `_sourceHost`, and `_sourceName`. ## Source type behavior -A few Sources on Hosted Collectors behave differently when instructed to stop collecting data. +A few sources on Hosted Collectors behave differently when instructed to stop collecting data. -* HTTP Sources will drop data requests, yet still return a `200 OK` response. -* AWS S3 based Sources will skip objects.  -* Cloud Syslog Sources will keep the connection open yet drop incoming syslog messages. +* HTTP sources will drop data requests, yet still return a `200 OK` response. +* AWS S3 based sources will skip objects.  +* Cloud Syslog sources will keep the connection open yet drop incoming syslog messages. ## Tools @@ -110,26 +106,19 @@ When hovering over a row in the Ingest Budgets table there are icons that appear 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Data Collection** select **Ingest Budget**. You can also click the **Go To...** menu at the top of the screen and select **Ingest Budget**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Ingest Budgets**. 1. Click the **+ Add Budget** button on the top right of the table. A panel named **Create Ingest Budget** appears to the right of the Ingest Budgets table. 1. Provide the following information; all fields are required except Description. - * **Display Name**. Enter the name you'd like to assign to the new ingest budget. * **Scope**. Define the log data to apply to the ingest budget. See [budget assignment](#budget-assignment) for details and review the [rules](#rules) above. * **Description** is optional. * **Allocated Capacity**. Set the maximum daily ingestion volume you want for the ingest budget. - * **Amount**. Enter a value up to 1023.999. * **Unit**. Select a unit of memory. Bytes are calculated in base 2 (binary format, 1024 based).\ - * **Reset every day at**. Ingest budgets automatically reset their capacity utilization tracking every 24 hours based on the time and time zone you specify. - * **Time**. Set the time of day to reset the capacity tracking. * **Time zone**. Set the time zone of the reset time. * **Action when capacity reached**. Select the action to take when the ingest budget's capacity is reached. All actions are [audited](#audit-index-queries). - - * **Stop** Collecting - Collection stops immediately. There are important differences depending on the [Source type](#source-type-behavior). + * **Stop Collecting** - Collection stops immediately. There are important differences depending on the [source type](#source-type-behavior). * **Keep Collecting** - Collection remains the same. - * **Audit Threshold**. The threshold, as a percentage, of when an ingest budget's capacity usage is logged in the Audit Index. - 1. When you're finished configuring the ingest budget, click **Add**. #### Reset ingest budget @@ -156,7 +145,7 @@ You can manually reset a budget at any time to set its capacity utilization tra #### Control ingest by team or service -You can assign Collectors and Sources with [fields](/docs/manage/fields) based on teams and services. For example, a field could be `team=` or `service=`. With these fields assigned, you can create a budget with the scope `team=` to achieve team based budgets. You can leverage Source fields for finer control over the scope of the budget. You can map a model of your deployment or organization to metadata fields and then create ingest budgets with a scope referencing them. +You can assign collectors and sources with [fields](/docs/manage/fields) based on teams and services. For example, a field could be `team=` or `service=`. With these fields assigned, you can create a budget with the scope `team=` to achieve team-based budgets. You can leverage source fields for finer control over the scope of the budget. You can map a model of your deployment or organization to metadata fields and then create ingest budgets with a scope referencing them. #### Match against multiple budgets @@ -238,7 +227,7 @@ Next reset: 2020-09-19T00:00:00.000 -0700 #### Audit Index queries -You can schedule the following searches to get alerts when needed, see [scheduled searches](/docs/alerts/scheduled-searches/schedule-search/) for details. +You can schedule the following searches to get alerts when needed; see [Create a Scheduled Search](/docs/alerts/scheduled-searches/schedule-search/) for details. 
Search for when approaching usage capacity (≥ 85%): @@ -266,9 +255,9 @@ _index=sumologic_audit _sourceName=VOLUME_QUOTA _sourceCategory=account_manageme ### Health events -Health events allow you to keep track of the health of your Collectors, Sources, and Ingest Budgets. You can use them to find and investigate common errors and warnings that are known to cause collection issues. See [Health events](/docs/manage/health-events.md) for details. +Health events allow you to keep track of the health of your collectors, sources, and ingest budgets. You can use them to find and investigate common errors and warnings that are known to cause collection issues. See [Health Events](/docs/manage/health-events.md) for details. -Ingest budgets that have exceeded their capacity are placed in an **Error** health state. The following are two common queries used to investigate the health of ingest budgets. +Ingest budgets that have exceeded their capacity are placed in an error health state. The following are two common queries used to investigate the health of ingest budgets. A query to search for all ingest budgets that are over capacity. diff --git a/docs/manage/ingestion-volume/ingest-budgets/minute-volume.md index a0a760baa2..b2a16fc493 100644 --- a/docs/manage/ingestion-volume/ingest-budgets/minute-volume.md +++ b/docs/manage/ingestion-volume/ingest-budgets/minute-volume.md @@ -18,14 +18,14 @@ An ingest budget's capacity usage is logged in the Audit Index when the audit th * Fields assigned with Field Extraction Rules are not supported in the scope of an ingest budget. * `_budget` is a reserved keyword used by legacy ingest budgets. Do not use this reserved field when creating a new ingest budget. * Data is not automatically recovered or ingested later once the capacity tracking is reset. -* Avoid creating multiple ingest budgets with the same scope. In such a scenario, Ingest budgets whose capacity is reached first is executed. +* Avoid creating multiple ingest budgets with the same scope. In such a scenario, the ingest budget whose capacity is reached first is enforced. * In the scope, do not wrap values in quotes, unless the value explicitly has quotes. For example, if you want to assign the scope with `_collector` and the name of the Collector is `CloudTrail`, you would assign the scope as `_collector=CloudTrail` instead of `_collector="CloudTrail"`. ## Budget assignment​ The **Scope** supports the option to assign ingest budgets to your log data by either: -* A Field that is enabled in the [Fields](/docs/manage/fields) table. +* A field that is enabled in the [fields](/docs/manage/fields) table. * One of the following built-in metadata fields: `_collector`, `_source`, `_sourceCategory`, `_sourceHost`, or `_sourceName`. The value supports a single wildcard, such as `_sourceCategory=prod*payment`. @@ -34,7 +34,7 @@ For example, a **Scope** expression like `_sourceCategory=/dev/catalog/*` im ## Source-type behavior​ -A few Sources on Hosted Collectors will behave differently when instructed to stop collecting data. +A few sources on Hosted Collectors will behave differently when instructed to stop collecting data. * HTTP sources will drop data requests, yet still return a `200 OK` response. * AWS S3-based sources will skip objects. @@ -49,7 +49,7 @@ A few Sources on Hosted Collectors will behave differently when instructed to st 1. Under **Create Ingest Budget**, provide the following information. * **Name**. 
Enter the name you'd like to assign to the new ingest budget. * **Description** is optional. - * **Scope**. Define the log data to apply to the ingest budget. See budget assignment for details and review the rules above. Once scope is defined, you can click on the hyperlink to view the ingest rate of your defined scope. Sumo Logic populates a query to run across all Data Tiers to find the right Ingestion trend. + * **Scope**. Define the log data to apply to the ingest budget. See budget assignment for details and review the rules above. Once scope is defined, you can click on the hyperlink to view the ingest rate of your defined scope. Sumo Logic populates a query to run across all data tiers to find the right Ingestion trend. * **Capacity**. This sets your budget capacity. 1. Select **Minute Volume**. 1. **Amount.** Enter a value up to 1023.999. diff --git a/docs/manage/ingestion-volume/log-ingestion.md b/docs/manage/ingestion-volume/log-ingestion.md index a568c1d845..c72974c0da 100644 --- a/docs/manage/ingestion-volume/log-ingestion.md +++ b/docs/manage/ingestion-volume/log-ingestion.md @@ -1,7 +1,7 @@ --- id: log-ingestion title: Log Ingestion -description: When designing your deployment, consider how logs will be ingested across Collectors in your account. +description: When designing your deployment, consider how logs will be ingested across collectors in your account. --- import useBaseUrl from '@docusaurus/useBaseUrl'; @@ -9,7 +9,7 @@ import Iframe from 'react-iframe'; The rate of data creation is rarely constant. Whether your organization sees seasonal spikes, or if a new feature or product line produces huge increases in activity, Sumo Logic meets the needs of your organization, known or unknown, while maintaining the search performance you rely on. -When designing your deployment, it’s important to consider how logs will be ingested across Collectors in your account. +When designing your deployment, it’s important to consider how logs will be ingested across collectors in your account. :::tip [Ingest budgets](/docs/manage/ingestion-volume/ingest-budgets) can limit your ingestion volume. @@ -20,13 +20,13 @@ When designing your deployment, it’s important to consider how logs will be in Sumo Logic imposes account caps on uploads to better protect your account from exceeding data limits. * Storage usage is calculated by taking the average of your total storage usage in the current billing cycle. For example, if your storage limit is 500TB, you will be charged for extra on-demand storage only if the average of your total storage for the month exceeds 500TB at the end of your billing cycle, or if there is an excessive spike in usage (see the next bullet item). -* Storage usage can exceed between 4 times to 10 times the daily maximum (depending on account size). Even if the cap is exceeded, log data is kept safely at the Collector level until quota is made available, at which time the data is ingested.  +* Storage usage can exceed between 4 times to 10 times the daily maximum (depending on account size). Even if the cap is exceeded, log data is kept safely at the collector level until quota is made available, at which time the data is ingested.  -Log data may not be kept when sent via HTTP Sources or Cloud Syslog Sources, as they may not have the caching and retry mechanisms that are built into Sumo Logic Collectors. 
+Log data may not be kept when sent via HTTP sources or Cloud Syslog sources, as they may not have the caching and retry mechanisms that are built into Sumo Logic collectors. * Ingestion rate is calculated by taking the average of your daily ingestion rate in the current billing cycle. For example, if your contracted daily ingestion rate is 100GB, you will be charged for on-demand usage only if average daily ingestion is more than 100GB at the end of your billing cycle. * Sumo Logic free accounts can expect slightly different behavior. If a Sumo Logic Free account regularly exceeds the cap, the account is temporarily disabled until quota becomes available (or until the account is upgraded). -* Sumo Logic accounts can be upgraded at any time to allow for additional quota. Contact [Sumo Logic Sales](mailto:sales@sumologic.com) to customize your account to meet your organization's needs. +* Sumo Logic accounts can be upgraded at any time to allow for additional quota. Contact [Sumo Logic Sales](https://support.sumologic.com/support/s/) to customize your account to meet your organization's needs. :::important [Compressed files](/docs/send-data/hosted-collectors/http-source/logs-metrics/#compressed-data) are decompressed before they are ingested, so they are ingested at the decompressed file size rate. ::: ## Log Throttling -Part of managing spikes in activity is properly slowing the rate of ingestion while the demand is at its peak, known as throttling. (This section pertains to logs. For metrics, see [Metrics Throttling](../../metrics/manage-metric-volume/metric-throttling.md)). +Part of managing spikes in activity is properly slowing the rate of ingestion while the demand is at its peak, known as throttling. (This section pertains to logs. For metrics, see [Metric Throttling](../../metrics/manage-metric-volume/metric-throttling.md)). :::note -All accounts are subject to throttling, regardless of plan type (Cloud Flex or Cloud Flex Credits) or [Data Tier](/docs/manage/partitions/data-tiers). +All accounts are subject to throttling, regardless of plan type (Cloud Flex or Cloud Flex Credits) or [data tier](/docs/manage/partitions/data-tiers). ::: :::sumo Micro Lesson Watch this micro lesson to learn more about throttling. ::: -Throttling is enabled across all Collectors in an account. Sumo Logic measures the amount of data already committed to uploading against the number of previous requests and available resources (quota) in an account. In other words, Sumo Logic compares the current ingestion with the rate of ingest using a per minute rate that can be derived from the contracted Daily GB/day rate. +Throttling is enabled across all collectors in an account. Sumo Logic measures the amount of data already committed to uploading against the number of previous requests and available resources (quota) in an account. In other words, Sumo Logic compares the current rate of ingestion with a per-minute rate that can be derived from the contracted GB/day rate. :::important Throttling is not related to the monthly quota for an account. An account can be throttled when it exceeds the multiplier of the per-minute ingestion rate while being well within the monthly ingestion quota. ::: To provide an example with a 10GB per day account, the average per minute rate ## How does throttling affect ingestion? 
-In the case of Installed Collectors with a Local File Source and S3 Hosted Collectors, Sumo Logic instructs the Collector (Installed or Hosted) on the quota limit, and tells it to delay ingestion until the quota is available. As a result, users will be unable to search for current data when throttling is happening, since the rate of uploads may be slowed from local or S3 hosted collectors but there is no dropping of ingested data. Unfortunately, we do not have the same ability with the sending of data for HTTP sources and endpoints. Any HTTP sources will get a response to any post requests with a "429 - Too Many Requests" message. When this occurs, the sending client would then be responsible for retrying to send that data as quota becomes available. +In the case of Installed Collectors with a Local File source and S3 Hosted Collectors, Sumo Logic instructs the collector (Installed or Hosted) on the quota limit and tells it to delay ingestion until quota is available. As a result, users will be unable to search for current data while throttling is happening: the rate of uploads from local or S3 hosted collectors is slowed, but no ingested data is dropped. Sumo Logic does not have the same ability with HTTP sources and endpoints. HTTP sources respond to POST requests with a "429 - Too Many Requests" message, and the sending client is then responsible for retrying to send that data as quota becomes available. -In the case of [Cloud Syslog Sources](/docs/send-data/hosted-collectors/cloud-syslog-source), similar to HTTP sources, incoming data will be dropped since the Cloud Syslog functions as a listener and cannot even return the 429 error. +In the case of [Cloud Syslog sources](/docs/send-data/hosted-collectors/cloud-syslog-source), incoming data will be dropped just as it is for HTTP sources, but because a Cloud Syslog source functions as a listener, it cannot even return the 429 error. -Throttling also prevents one Collector from uploading more data than others to the point where all data is being ingested from one Collector. +Throttling also prevents one collector from uploading more data than others to the point where all data is being ingested from one collector. When a collector is experiencing throttling, the throttling slows the rate at which the collector uploads data. If the upload rate is slower than the rate at which data is generated, then the collector will automatically queue the excess data on disk. When the quota becomes available, the queued data will be uploaded. -## How do I know which Collector is contributing to excess ingestion? +## How do I know which collector is contributing to excess ingestion? -You can use the [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) and the [Data Volume App](/docs/integrations/sumo-apps/data-volume) to help determine the ingestion per Collector, Source, Source Category, View, or Partition. +You can use the [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) and the [Data Volume App](/docs/integrations/sumo-apps/data-volume) to help determine the ingestion per collector, source, source category, view, or partition. ## How can I be alerted when throttling takes place? 
@@ -97,8 +97,8 @@ If the audit index is enabled you can set up a scheduled search to send an alert ## Ingestion with file changes -When a file is updated, the way it is ingested depends on the type of Collector: +When a file is updated, the way it is ingested depends on the type of collector: * For Installed Collectors, Sumo Logic can ingest only the new data. For example, if Sumo Logic ingests a log file with 25 lines, and then additional messages are added to the file, the next ingestion will start at line 26. -* For Hosted Collectors with S3 Sources, an ingested file is treated as a single object and is not expected to be updated or appended with new data. If an existing file is updated in any way, it is considered to be a new object and is ingested again in full. Updating existing objects in S3 Sources can result in duplicate messages, depending on the nature of the update. -* Treatment of Hosted Collectors with other Source types is based on customer configuration. +* For Hosted Collectors with S3 sources, an ingested file is treated as a single object and is not expected to be updated or appended with new data. If an existing file is updated in any way, it is considered to be a new object and is ingested again in full. Updating existing objects in S3 sources can result in duplicate messages, depending on the nature of the update. +* Treatment of Hosted Collectors with other source types is based on customer configuration. diff --git a/docs/manage/ingestion-volume/monitor-ingestion-receive-alerts.md b/docs/manage/ingestion-volume/monitor-ingestion-receive-alerts.md index 668da0f8a0..fd8fa0639a 100644 --- a/docs/manage/ingestion-volume/monitor-ingestion-receive-alerts.md +++ b/docs/manage/ingestion-volume/monitor-ingestion-receive-alerts.md @@ -6,9 +6,9 @@ description: Add scheduled searches that monitor log ingestion and send alerts. import useBaseUrl from '@docusaurus/useBaseUrl'; -This article describes how to configure ingest alerts that you can schedule to get timely information about ingestion usage or throttling. The information in this article applies to [Cloud Flex Legacy accounts](/docs/manage/manage-subscription/cloud-flex-legacy-accounts/). To monitor ingestion for Sumo Logic Credits accounts, see [Sumo Logic Credits Account Overview](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#account-overview). +This article describes how to configure ingest alerts that you can schedule to get timely information about ingestion usage or throttling. The information in this article applies to [Cloud Flex Legacy accounts](/docs/manage/manage-subscription/cloud-flex-legacy-accounts/). To monitor ingestion for Sumo Logic Credits accounts, see Sumo Logic Credits [Account Overview](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#account-overview). -With the exception of the [Throttling alert](#throttling-alert) described below, these alerts apply to logs, not metrics. For metrics volume queries, use the [Metrics Data Volume Index](data-volume-index/metrics-data-volume-index.md). +With the exception of the [throttling alert](#throttling-alert) described below, these alerts apply to logs, not metrics. For metrics volume queries, use the [Metrics Data Volume Index](data-volume-index/metrics-data-volume-index.md). Some of the alerts are based on your billing period or ingest plan limit. You must make the appropriate changes for the alert to function and return valid results. 
The alerts approximate ingest rates and might not precisely match the actual ingest volume used for invoicing purposes. @@ -22,7 +22,7 @@ You must update all of the indicated fields for the search to save successfully #### Setup -1. Enable the Data Volume Index. See [Enable and Manage the Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. +1. Enable the Data Volume Index. See [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. 2. Substitute the correct values of `X` for the following parameters in the search query. For the billing start and end values, use the day of the month. For example, in the screenshot below, the value for `billing_start` is 17, so the updated line from the search becomes `17 as billing_start`. ``` X as billing_start @@ -79,7 +79,7 @@ _index=sumologic_volume and sizeInBytes and _sourceCategory="sourcename_volume" After completing the setup, schedule the search to run:  -1. Schedule Query you created in Setup. For details, see [Schedule a Search](../../alerts/scheduled-searches/schedule-search.md). +1. Schedule the query you created in Setup. For details, see [Create a Scheduled Search](../../alerts/scheduled-searches/schedule-search.md). 1. Set the **Run frequency** to **Daily**. 1. Enter **-32d** for the time range.
Time range monthly plan 1. Make sure Alert Condition is set to **Send Notification** if the **Alert Condition** is met: **Number of results** greater than **0.** @@ -95,7 +95,7 @@ You must update the indicated field for the search to be successfully saved. #### Setup -1. Enable the Data Volume Index. See [Enable and Manage the Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. +1. Enable the Data Volume Index. See [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. 1. Substitute the correct value of `X` for the following parameter in the search query (see entry in yellow in the query below). ```sql X as daily_plan_size ``` @@ -124,7 +124,7 @@ _index=sumologic_volume sizeInBytes After completing the setup steps above, schedule the search to run, as follows.   -1. Schedule the query you created in the previous step (**Query**). For details, see [Schedule a Search](../../alerts/scheduled-searches/schedule-search.md). +1. Schedule the query you created in the previous step (**Query**). For details, see [Create a Scheduled Search](../../alerts/scheduled-searches/schedule-search.md). 1. Set the **Run frequency** to **Daily**. 1. Set the time range value to **Last 24 Hours**.
Time range daily plan limit 1. Make sure Alert Condition is set to **Send Notification** if the **Alert Condition** is met: **Number of results** greater than **0.** @@ -133,12 +133,12 @@ After completing the setup steps above, schedule the search to run, as follows. This hourly alert is generated when both of the following occur: -* Ingest for any `_sourceCategory` in your account has a 50% spike compared with the maximum log ingest for the same `_sourceCategory` over the **last four weeks** (comparison is with the same hour and day of week). +* Ingest for any `_sourceCategory` in your account has a 50% spike compared with the maximum log ingest for the same `_sourceCategory` over the last four weeks (comparison is with the same hour and day of week). * The log volume ingested by the `_sourceCategory` represents at least 25% of the total data ingested within the hour. #### Setup -1. Enable the Data Volume Index. See [Enable and Manage the Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. +1. Enable the Data Volume Index. See [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. 1. (Optional) To adjust the sensitivity of this alert, change either of the values from the following line of the query: ```sql | where pct_increase > 30 and ingest_weight > 30 ``` ... ```sql | where pct_increase > 50 and ingest_weight > 30 ``` -1. (Optional) To change the alert to evaluate a spike in a Collector or Source, do either of the following:  * To generate an alert on a spike in ingest for a Collector, change the first line of the query replacing `_sourceCategory="sourcecategory_volume"` with `_sourceCategory="collector_volume"` * To generate an alert on a spike in ingest for a Source, change the first line of the query replacing `_sourceCategory="sourcecategory_volume"` with `_sourceCategory="source_volume"` +1. (Optional) To change the alert to evaluate a spike in a collector or source, do either of the following:  * To generate an alert on a spike in ingest for a collector, change the first line of the query, replacing `_sourceCategory="sourcecategory_volume"` with `_sourceCategory="collector_volume"` * To generate an alert on a spike in ingest for a source, change the first line of the query, replacing `_sourceCategory="sourcecategory_volume"` with `_sourceCategory="source_volume"` #### Query @@ -173,7 +173,7 @@ _index=sumologic_volume sizeInBytes _sourceCategory="sourcecategory_volume" After completing the setup steps above, schedule the search to run, as follows.   -1. Schedule the query you just created in Setup. For details, see [Schedule a Search](../../alerts/scheduled-searches/schedule-search.md). +1. Schedule the query you just created in Setup. For details, see [Create a Scheduled Search](../../alerts/scheduled-searches/schedule-search.md). 1. Set the **Run frequency** to **Hourly**. 1. Enter **-65m -5m** for the time range.
Time range usage spike 1. Make sure Alert Condition is set to **Send Notification** if the **Alert Condition** is met: **Number of results** greater than **0.** @@ -182,7 +182,7 @@ After completing the setup steps above, schedule the search to run, as follows. ## Data not sent alert -This hourly alert will notify you if any of your Collectors have not sent log data for the last 24 hours (-24h). Because this alert will trigger if **any** Collectors do not send data in the specified time range, we recommend that you verify that all your Collectors are sending data before you set this alert and that you extend the time range if 24 hours is not long enough for your data to collect. +This hourly alert will notify you if any of your collectors have not sent log data for the last 24 hours (-24h). Because this alert will trigger if *any* collectors do not send data in the specified time range, we recommend that you verify that all your collectors are sending data before you set this alert and that you extend the time range if 24 hours is not long enough for your data to collect. :::note This type of alert isn't suitable for ephemeral environments and can send false positives. ::: #### Setup -**Prerequisite**. All collectors must be sending data **before** you set this alert. This alert will trigger if *any* collectors do not send data in the specified time range. If you want to identify collectors that are not ingesting for a long time or have not ingested at all, you can use the [Collector API](/docs/api/collector-management/collector-api-methods-examples) +**Prerequisite**. All collectors must be sending data *before* you set this alert. This alert will trigger if *any* collectors do not send data in the specified time range. If you want to identify collectors that are not ingesting for a long time or have not ingested at all, you can use the [collector API](/docs/api/collector-management/collector-api-methods-examples) attributes `alive` and `LastSeenAlive`. -1. Enable the Data Volume Index.  See [Enable and Manage the Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. +1. Enable the Data Volume Index.  See [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) for instructions. 1. (Optional) Depending on how busy your collectors are, you can modify the following alert threshold: ```sql | where mins_since_last_logs >= 60 ``` @@ -219,15 +219,15 @@ _index=sumologic_volume sizeInBytes _sourceCategory="collector_volume" | format ("%s Has not collected data in the past 60 minutes", collector) as message ``` -You can run a similar query across Sources, sourceHosts, sourceNames, source categories, or views, by changing the entry for `"collector_volume"` in the search scope keyword line to:`"source_volume"` for Sources, `"sourcehost_volume"`for sourceHosts, `"sourcename_volume"` for sourceNames, `"sourceCategory_volume"` for sourceCategories, or `"view_volume"` for views.  +You can run a similar query across sources, sourceHosts, sourceNames, source categories, or views by changing the entry for `"collector_volume"` in the search scope keyword line to: `"source_volume"` for sources, `"sourcehost_volume"` for sourceHosts, `"sourcename_volume"` for sourceNames, `"sourceCategory_volume"` for sourceCategories, or `"view_volume"` for views. 
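+For example, a minimal sketch of the source-level variant, assuming the parse and aggregation steps are carried over unchanged from the collector query above (they are elided here as comments), with the output field renamed to `source`:
+```sql
+_index=sumologic_volume sizeInBytes _sourceCategory="source_volume"
+// ...same parse and aggregation steps as the collector query above,
+// with "source" substituted for "collector" in each step...
+| where mins_since_last_logs >= 60
+| format ("%s Has not collected data in the past 60 minutes", source) as message
+```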
-If you do not want the results of the query across Sources or source categories to be called "collector", you can replace all three instances of "collector" with a different field name. +If you do not want the results of the query across sources or source categories to be called "collector", you can replace all three instances of "collector" with a different field name. #### Scheduling After completing the setup steps, you'll need to create a monitor.  -1. Create a monitor corresponding to the query you've created above ([learn more](/docs/alerts/monitors/create-monitor)). +1. [Create a monitor](/docs/alerts/monitors/create-monitor) corresponding to the query you've created above. 1. Set the **Run frequency** to **Hourly**. 1. Set a time range. The default is **Last 24 hours**. If you need to allow for more time because some collectors do not typically ingest data that often, specify a longer time range. For example, seven days.
Alert 1. Make sure Alert Condition is set to **Send Notification** if the **Alert Condition** is met: **Number of results** greater than **0**. @@ -247,12 +247,12 @@ After completing the setup steps, you'll need to create a monitor.  This alert is automatically generated when your account has entered a throttled state (induced by metrics or logs) in the last 15 minutes. The alert runs every 15 minutes and covers a 15-minute period. :::note -All accounts are subject to throttling, regardless of plan type (Cloud Flex or Cloud Flex Credits) or [Data Tier](/docs/manage/partitions/data-tiers). +All accounts are subject to throttling, regardless of plan type (Cloud Flex or Cloud Flex Credits) or [data tier](/docs/manage/partitions/data-tiers). ::: #### Setup -Enable the Audit Index. See [Enable and Manage the Audit Index](/docs/manage/security/audit-indexes/audit-index#enable-the-audit-index) for instructions. +Enable the Audit Index. See [Enable the Audit Index](/docs/manage/security/audit-indexes/audit-index#enable-the-audit-index) for instructions. #### Query @@ -264,7 +264,7 @@ _index=sumologic_audit _sourceCategory=account_management _sourceName=VOLUME_QUO After completing the setup steps above, schedule the search to run, as follows.   -1. Schedule the query you just created in Setup. For details, see [Schedule a Search](../../alerts/scheduled-searches/schedule-search.md). +1. Schedule the query you just created in Setup. For details, see [Create a Scheduled Search](../../alerts/scheduled-searches/schedule-search.md). 1. Set the **Run frequency** to **Every 15 Minutes**. 1. Set the time range to the **Last 15 Minutes**.
Time range throttling alert 1. Make sure Alert Condition is set to **Send Notification** if the **Alert Condition** is met: **Number of results** greater than **0**. diff --git a/docs/manage/manage-subscription/cloud-flex-legacy-accounts.md index 326e69bc03..8ea5fc8c44 100644 --- a/docs/manage/manage-subscription/cloud-flex-legacy-accounts.md +++ b/docs/manage/manage-subscription/cloud-flex-legacy-accounts.md @@ -25,7 +25,7 @@ Sumo Logic provides flexible account types within its Cloud Flex Legacy packagi * **Professional** accounts scale to meet your growing needs for user licenses, data retention, and volume options based on subscription. You can [upgrade](/docs/manage/manage-subscription/upgrade-account/upgrade-cloud-flex-legacy-account) from a Professional to an Enterprise account at any time. * **Enterprise** accounts, the premier Sumo Logic log management solution, are built to fit your organization's needs for data volume, data retention, and user management requirements. Enterprise accounts include [Ingest Budgets](/docs/manage/ingestion-volume/ingest-budgets) and [SAML-based SSO](/docs/manage/security/saml/set-up-saml).  :::note - [Ingest Budgets](/docs/manage/ingestion-volume/ingest-budgets/) are only available for Enterprise accounts. Ingest budgets control the capacity of daily log ingestion volume sent to Sumo Logic from Collectors. + [Ingest Budgets](/docs/manage/ingestion-volume/ingest-budgets/) are only available for Enterprise accounts. Ingest budgets control the capacity of daily log ingestion volume sent to Sumo Logic from collectors. ::: The following table provides a summary list of key features by package accounts.  @@ -44,7 +44,7 @@ The following table provides a summary list of key features by package accounts. | Log Data storage (Cloud Flex Accounts) | 4GB | 30GB | ✓ | ✓ | | Log Data volume | 500MB per day | 1GB per day* | ✓ | ✓ | | [LogReduce](/docs/search/behavior-insights/logreduce) | ✓ | ✓ | ✓ | ✓ | -| [Lookup Tables](/docs/search/lookup-tables) | none | Varies by the account type being trialed | 10 tables per org | 100 tables per org | +| [Lookup tables](/docs/search/lookup-tables) | none | Varies by the account type being trialed | 10 tables per org | 100 tables per org | | Metrics | | ✓ | ✓ | ✓ | | Metrics data retention | | ✓ | ✓ | ✓ | @@ -57,10 +57,10 @@ The following table provides a summary list of key features by package accounts. | Users (Classic Accounts) | Three users | 20 users* | ✓ | ✓ | | User and Role APIs | ✓ | ✓ | ✓ | ✓ | -\* Contact [Sumo Logic Sales](mailto:sales@sumologic.com) to customize your account to meet your organization's needs. +\* Contact [Sumo Logic Sales](https://support.sumologic.com/support/s/) to customize your account to meet your organization's needs. :::important -It's important to keep track of your daily usage. For tips on how to monitor and limit the data you're sending to Sumo Logic, see [Manage Ingestion.](../ingestion-volume/log-ingestion.md) +It's important to keep track of your daily usage. For tips on how to monitor and limit the data you're sending to Sumo Logic, see [Log Ingestion](/docs/manage/ingestion-volume/log-ingestion/). ::: ## Account Limitations and Guidelines @@ -73,47 +73,47 @@ An account that is within its limits is defined as using **Reserved Capacity.** ### Collection Limitations -* The maximum number of Collectors allowed per organization is 10,000. 
-* The maximum number of Sources allowed on a Collector is 1,000. +* The maximum number of collectors allowed per organization is 10,000. +* The maximum number of sources allowed on a collector is 1,000. * The maximum number of Processing Rules allowed on a Source is 100. ### Continuous Query Limitations -For all Sumo Logic account types (except for Sumo Logic Free) there is an overall limit of 200 continuous queries per Sumo Logic organization that can be run at one time. This includes Dashboard Panels, Alerts, and all other types of queries.  +For all Sumo Logic account types (except for Sumo Logic Free) there is an overall limit of 200 continuous queries per Sumo Logic organization that can be run at one time. This includes dashboard panels, alerts, and all other types of queries.  ### Data Limits for Metrics -For billing and reporting purposes, data volume for metrics is measured in Data Points per Minute (DPM). When the DPM limit is exceeded, data is cached on the host and the Source is throttled. The calculation of DPM varies according to the type of metric Source. For details, see [Data Limits for Metrics](../../metrics/manage-metric-volume/data-limits-for-metrics.md).  +For billing and reporting purposes, data volume for metrics is measured in data points per minute (DPM). When the DPM limit is exceeded, data is cached on the host and the source is throttled. The calculation of DPM varies according to the type of metric source. For details, see [Data Limits for Metrics](/docs/metrics/manage-metric-volume/data-limits-for-metrics/).  ## Important notes on Sumo Logic Free accounts Using a Free account is a great way to get to know Sumo Logic. While you're trying the Sumo Logic service, here are important points to be aware of: * Free accounts run on seven-day intervals. This means that over the course of seven days, you cannot ingest more than a total of 3.5 GB of log data. -* If you begin to reach the 500 MB daily limit, Sumo Logic sends an email to let you know. You can take action to [reduce the amount of data](../partitions/manage-indexes-variable-retention.md) you're uploading in order to stay below the limit. +* If you begin to reach the 500 MB daily limit, Sumo Logic sends an email to let you know. You can take action to [reduce the amount of data](/docs/manage/partitions/manage-indexes-variable-retention/) you're uploading in order to stay below the limit. * If the 500 MB limit is surpassed, you'll receive an email letting you know that data in the Sumo Logic Cloud can no longer be searched (but additional data is still collected). However, if the data limit is fully exceeded, data collection stops (in addition to search being disabled). Disabled features will be available after your usage falls below 4 GB when averaged over seven days (this could take one day, or up to seven days, depending on the amount of data you've uploaded and where you've uploaded it). * In extreme situations, Free accounts may be disabled if the data volume continues to exceed the limits. -* Free accounts are limited to 20 continuous queries, including [Dashboard Panels](/docs/dashboards/about). +* Free accounts are limited to 20 continuous queries, including [dashboard panels](/docs/dashboards/about). -* Because Free accounts run on seven-day intervals, [Dashboard Panel](/docs/dashboards/about) queries may not use a time range longer than seven days. +* Because Free accounts run on seven-day intervals, [dashboard panel](/docs/dashboards/about) queries may not use a time range longer than seven days. 
* For Sumo Logic Apps, Free accounts are limited to installing the [Log Analysis QuickStart app](/docs/integrations/sumo-apps/log-analysis-quickstart). * The limitations of a Free account cannot be changed, but you can upgrade to a Professional account at any time. -* For information on throttling and account caps, see [Manage Ingestion](../ingestion-volume/log-ingestion.md). +* For information on throttling and account caps, see [Log Ingestion](/docs/manage/ingestion-volume/log-ingestion/). ### Important notes on Sumo Logic Trial accounts Using a Trial account is a great way to learn about the advanced features of Sumo Logic. While you're trying the Sumo Logic service, there are a few points that are important to be aware of: * Trial accounts are allowed to burst up to 5 GB a day for short periods. -* For information on throttling and account caps, see [Manage Ingestion](../ingestion-volume/log-ingestion.md). +* For information on throttling and account caps, see [Log Ingestion](/docs/manage/ingestion-volume/log-ingestion/). ## Account Page The **Account** page displays information about your Sumo Logic organization, account type, billing period, and the number of users. It also allows the account owner to reassign the role of the Account Owner. -[Data Tiers](/docs/manage/partitions/data-tiers) provide economic flexibility by aligning your analytics to the value of your data. By using the Continuous and Frequent tiers, you can appropriately segment your data by use case and analytics needs, thus enabling you to optimize your analytics investments. +[Data tiers](/docs/manage/partitions/data-tiers) provide economic flexibility by aligning your analytics to the value of your data. By using the Continuous and Frequent tiers, you can appropriately segment your data by use case and analytics needs, thus enabling you to optimize your analytics investments. :::note -[Data Tiers](/docs/manage/partitions/data-tiers) must be enabled on your plan to be able to access this functionality. For more information, contact your Sumo Logic Account Representative. +[Data tiers](/docs/manage/partitions/data-tiers) must be enabled on your plan to be able to access this functionality. For more information, contact your Sumo Logic Account Representative. ::: The top panel of the Account Overview page provides an at-a-glance view of your account information: @@ -125,7 +125,7 @@ The top panel of the Account Overview page provides an at-a-glance view of your * **Continuous Ingest.** Shows your daily capacity for log ingest to the Continuous Data Tier, and your average daily usage. If the daily ingest average over the billing cycle is above your capacity, you will be charged the on-demand rate for the difference. * **Frequent Ingest**. Shows your daily capacity for log ingest to the Frequent Data Tier, and your average daily usage. If the daily ingest average over the billing cycle is above your capacity, you will be charged the on-demand rate for the difference. * **Metrics Ingest**. Shows your daily capacity for metrics ingest, and your average daily usage, both in DPM. If the daily ingest average over the billing cycle is above your capacity, you will be charged the on-demand rate for the difference. -* **Storage.** Shows your daily storage capacity and average daily storage usage. You can adjust capacity use by modifying your [retention periods](../partitions/manage-indexes-variable-retention.md). 
+* **Storage.** Shows your daily storage capacity and average daily storage usage. You can adjust capacity use by modifying your [retention periods](/docs/manage/partitions/manage-indexes-variable-retention/). * **Auto Refresh Dashboard Panels.** Shows the number of auto refresh dashboard panels you have set up. Compares the number allowed to the number already in use. For example, out of 200, 174 have been used. To view the Account page, do the following: @@ -134,13 +134,13 @@ To view the Account page, do the following: 1. [**New UI**](/docs/get-started/sumo-logic-ui/). In the main Sumo Logic menu, select **Administration**, and then under **Account** select **Account Overview**. You can also click the **Go To...** menu at the top of the screen and select **Account Overview**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Administration > Account > Account Overview**.
The Account Overview tab of the Account page is shown by default. :::note -You must have a role that grants you the [Account Overview capability](/docs/manage/users-roles/roles/role-capabilities/) to view the Account Overview tab. +You must have a role that grants you the [Account Overview capability](/docs/manage/users-roles/roles/role-capabilities/#data-management) to view the Account Overview tab. ::: Cloud Flex account page :::note -If you are your Sumo Logic account owner, your Account page also displays a **Manage Organization** section. For information on these options, see [Manage Organization](/docs/manage/manage-subscription/create-and-manage-orgs/manage-org-settings). +If you are the Sumo Logic account owner, your Account page also displays a **Manage Organization** section. For information on these options, see [Manage Organization Settings](/docs/manage/manage-subscription/create-and-manage-orgs/manage-org-settings). ::: Manage org links diff --git a/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs-flex.md index 80158818a2..e9b09c2da0 100644 --- a/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs-flex.md +++ b/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs-flex.md @@ -17,13 +17,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl'; This feature is not enabled by default. If you’d like to have it enabled, contact your Sumo Logic account executive. ::: -This topic has information about Sumo Logic’s Organizations (“Sumo Orgs”) feature for Flex licensing, which you can use to create and manage orgs. The term *parent org* refers to the organization from which you create a new organization, while *child orgs* are the organizations you create. +This topic has information about Sumo Logic’s organizations feature for Flex licensing, which you can use to create and manage orgs. The term *parent org* refers to the organization from which you create a new organization, while *child orgs* are the organizations you create. -Sumo Orgs allows you to logically group, provision, and centrally manage and monitor the credits usage of multiple orgs. +Sumo Logic organizations allow you to logically group, provision, and centrally manage and monitor the credits usage of multiple orgs. -When you create a child org, you provision it with credits, based on the ingest volume you estimate for the org. When you provision a child org you use a Credits Calculator to estimate and allocate required credits for each product variable.  +When you create a child org, you provision it with credits, based on the ingest volume you estimate for the org. When you provision a child org, you use a credits calculator to estimate and allocate required credits for each product variable.  -We refer to your estimates of ingest capacity required for each product variable as *baselines*. Sumo Logic’s throttling multipliers for logs and metrics are based on these estimates. For example, if you estimate 1GB usage for logs and specify that as the baseline when you create the org, Sumo Logic will start [throttling](/docs/manage/ingestion-volume/log-ingestion.md) when ingestion to the org reaches 4 to 10 times the baseline. The multiplier depends on your account size. +We refer to your estimates of ingest capacity required for each product variable as *baselines*. Sumo Logic’s throttling multipliers for logs and metrics are based on these estimates. 
For example, if you estimate 1GB usage for logs and specify that as the baseline when you create the org, Sumo Logic will start [throttling](/docs/manage/ingestion-volume/log-ingestion/#log-throttling) when ingestion to the org reaches 4 to 10 times the baseline. The multiplier depends on your account size. Users that have the required [role capabilities](/docs/manage/manage-subscription/create-and-manage-orgs#requirements-for-creating-and-managing-orgs) can create child orgs under a parent org, and manage and monitor the allocation and consumption of Sumo Logic credits across orgs, and for each child org. This functionality is available in the Sumo Logic UI in the **Organizations** tab and also in the [Organizations Management API](https://organizations.sumologic.com/docs/). @@ -39,7 +39,7 @@ You cannot delete a new child org once it is created. 1. Click **+ Add Organization**.
add-org 1. The **Create New Organization** pane appears.
create-new-org.png -### Allocate Credits +### Allocate credits 1. **Plan Type**. Select your organization's plan type.  1. **Deployment**. Select a Sumo Logic deployment from the list. @@ -61,7 +61,7 @@ You cannot delete a new child org once it is created. * **Metrics**. Enter estimated daily metric data points per minute (DPM) ingestion.
calculator 1. **Cloud SIEM Enterprise**. Click the checkbox to enable Cloud SIEM. When the **Cloud Log Ingest** field appears, enter a value in GB. :::note - Provisioning Cloud SIEM can take up to 24 hours. See [Monitor Cloud SIEM Provisioning](#monitor-cloud-siem-provisioning), below. + Provisioning Cloud SIEM can take up to 24 hours. See [Monitor Cloud SIEM provisioning](#monitor-cloud-siem-provisioning), below. ::: 1. As you enter the ingestion estimates, the number of credits required for the specified ingestion levels will be incremented. 1. The calculator now shows the recommended credit allocation, which suggests how many credits you would need for the child org. This is calculated based on the baseline added, the burndowns in your contract, and the days remaining in your contract. @@ -70,7 +70,7 @@ You cannot delete a new child org once it is created. 1. **Credits to be allocated**. The recommended credits for this child org will be displayed once you set the baseline. You can add or reduce the credits based on your requirements. 1. **Remaining Credits (Parent)**. The total credits remaining in the parent org after allocating credits to the child org. -### Basic Details +### Basic details 1. **Organization Name**. Enter a unique name for the org. 1. **Account Owner Email**. Enter the email address of the account owner. @@ -98,7 +98,7 @@ To change an org's credits allocation: **Example 2**: Suppose you need to increase credits for your child org. The image below shows that you have used 35 out of 31,026 credits allocated to your child org. To add more credits, select **Credits to be Added** and enter the additional credits required.
modify-allocation-recommendatio-example-2 1. If you want to modify the baseline, click **View Baseline**. The credits calculator appears. - 1. Click **Edit** and follow the [step 4 in Allocate Credits](#allocate-credits) to update the credits allocation.
edit-baseline + 1. Click **Edit** and follow step 4 in [Allocate credits](#allocate-credits) to update the credits allocation.
edit-baseline 1. Once you save the new baseline, you can view the recommended value in the **Credits to be Added/Reduced** section.
baseline-credits-to-add 1. (Optional) Click **View Details** to view a detailed breakdown of the additional credits required.
baseline-view-details 1. Click **Save** once you finish editing the credit values. @@ -127,9 +127,9 @@ You can view the aggregate usage for all child orgs across usage categories in the * **Metrics Ingest**. Credits used for metrics ingested. * **Data Forwarding**. For more information, see [Data Forwarding](/docs/manage/data-forwarding/). * **Storage**. Credits used for log storage in the Continuous and Frequent Tiers. -* **Promotional categories**. For more information, see [Promotional Credits](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#promotional-credits). +* **Promotional categories**. For more information, see [Promotional credits](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#promotional-credits). -By clicking the **Download Report** button, you can download the org usage data in csv format for further analysis and reporting. You can also download the detailed child org usage data in csv format by clicking **Download Detailed Credit Usages** from the kebab icon next to the Download Report button. +By clicking the **Download Report** button, you can download the org usage data in csv format for further analysis and reporting. You can also download the detailed child org usage data in csv format by clicking **Download Detailed Credit Usages** from the kebab icon next to the **Download Report** button. your description @@ -162,7 +162,7 @@ By clicking the **Download Report** button, you can download the selected child ## Audit logging for organizations -This section has examples of the messages Sumo Logic writes to the Audit Event Index when you create or update an org. +This section has examples of the messages Sumo Logic writes to the [Audit Event Index](/docs/manage/security/audit-indexes/audit-event-index/) when you create or update an org. ### OrganizationCreated diff --git a/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs-service-providers.md index 6af750f3c7..ee34f25a71 100644 --- a/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs-service-providers.md +++ b/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs-service-providers.md @@ -2,7 +2,7 @@ id: create-manage-orgs-service-providers title: Create and Manage Organizations (Service Providers) sidebar_label: Service Providers -description: For Sumo Logic Service Providers, Sumo Orgs eases the process of provisioning and managing POV Trial orgs in multiple Sumo Logic deployments. +description: For Sumo Logic service providers, using Sumo Logic organizations eases the process of provisioning and managing POV Trial orgs in multiple Sumo Logic deployments. --- import useBaseUrl from '@docusaurus/useBaseUrl'; @@ -15,27 +15,27 @@ import Iframe from 'react-iframe'; | Credits | Enterprise Operations, Enterprise Security, Enterprise Suite | :::note -This feature is not enabled by default. If you’d like to have it enabled, contact your Sumo Logic Account Executive. +This feature is not enabled by default. If you’d like to have it enabled, contact your Sumo Logic account executive. ::: -This topic has information about Sumo Logic’s Organizations (“Sumo Orgs”) feature for Sumo Logic Service Providers. Sumo Orgs allows you to logically group, provision, and centrally manage and monitor the credits usage of multiple orgs. 
We use the term *parent org* to refer to the org from which you create and manage orgs, and *child orgs* to refer to the orgs you create.  +This topic has information about Sumo Logic’s organizations feature for Sumo Logic service providers. Sumo Logic organizations allow you to logically group, provision, and centrally manage and monitor the credits usage of multiple orgs. We use the term *parent org* to refer to the org from which you create and manage orgs, and *child orgs* to refer to the orgs you create.  -As a Service Provider, you can create two types of child orgs: +As a service provider, you can create two types of child orgs: * You can create POV (Proof of Value) Trial orgs for your prospects to access in order to evaluate Sumo Logic. For more information, see [About POV Trial orgs](#about-pov-trial-orgs). * You can create child orgs, either for use within your own company or for customers who are not going to trial Sumo Logic before subscribing. In either case, the child orgs you create will be the same plan type as the parent org. For example, if you have an Enterprise Suite plan, the child orgs you create will also be Enterprise Suite accounts.  -When you create a child org, you provision it with credits, based on the ingest volume you estimate for the org. We refer to the different flavors of ingest—Continuous Log Ingest, Frequent Log Ingest, and so on—as *product variables*. When you provision a child org you use a Credits Calculator to estimate and allocate required credits for each product variable. +When you create a child org, you provision it with credits, based on the ingest volume you estimate for the org. We refer to the different flavors of ingest—Continuous Log Ingest, Frequent Log Ingest, and so on—as *product variables*. When you provision a child org, you use a credits calculator to estimate and allocate required credits for each product variable. -We refer to your estimates of ingest capacity required for each product variable as *baselines*. Sumo Logic’s throttling multipliers for logs and metrics are based on these estimates. (For example, if you estimate 1GB usage for logs and specify that as the "baseline" when you create the org, Sumo Logic will start [throttling](/docs/manage/ingestion-volume/log-ingestion.md) when ingestion to the org reaches 4 to 10 times the baseline. The multiplier depends on your account size.) +We refer to your estimates of ingest capacity required for each product variable as *baselines*. Sumo Logic’s throttling multipliers for logs and metrics are based on these estimates. (For example, if you estimate 1GB usage for logs and specify that as the "baseline" when you create the org, Sumo Logic will start [throttling](/docs/manage/ingestion-volume/log-ingestion/#log-throttling) when ingestion to the org reaches 4 to 10 times the baseline. The multiplier depends on your account size.) Users that have the required role capabilities can create child orgs under a parent org, and manage and monitor the allocation and consumption of Sumo Logic credits across orgs, and for each child org. This functionality is available in the Sumo Logic UI in the **Organizations** tab and also in the [Organizations Management API](https://organizations.sumologic.com/docs/). ## About POV Trial orgs -POV Trial orgs you create will have a 45 day trial period. POV Trial orgs will be provisioned with the following ingestion limits. +POV Trial orgs you create will have a 45-day trial period. 
POV Trial orgs will be provisioned with the following ingestion limits: * 5 GB Continuous Tier ingest * 5 GB Frequent Tier ingest @@ -48,12 +48,12 @@ POV Trial orgs you create will have a 45 day trial period. POV Trial orgs will b The credits associated with the trial org allocations don’t impact the parent org’s credits allocation. ::: -You can upgrade a trial org by editing the org and changing the Plan Type. When you upgrade a POV Trial org, if the org is in a different Sumo Logic deployment from the parent org, the Credits calculator may add a deployment factor, which is a percentage upcharge that varies by deployment. +You can upgrade a trial org by editing the org and changing the Plan Type. When you upgrade a POV Trial org, if the org is in a different Sumo Logic deployment from the parent org, the credits calculator may add a deployment factor, which is a percentage upcharge that varies by deployment. The plan gets downgraded to [**Free** plan with limitations](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#credits---account-types) once the 45-day POV trial period is over. To prevent this from happening, Sumo Logic provides indicators about the expiration date in two different places on the platform. -* **View details for the selected organizations**. You can see the **Plan Expiry** date with information about the downgrading of your plan in the view details side panel for the selected organizations. This tooltip is different for CSV and Non-CSV provisioned child org. -* **Child org table**. If the number of days left for expiry is less than 14, a warning icon with the expiry date will be shown against the respective organizations, and the information about downgrading your plan will turn into a warning with the same message. +* **View details for the selected organizations**. You can see the **Plan Expiry** date with information about the downgrading of your plan in the view details side panel for the selected organizations. This tooltip is different for CSV and non-CSV provisioned child orgs. +* **Child org table**. If the number of days left for expiration is less than 14, a warning icon with the expiration date will be shown against the respective organizations, and the information about downgrading your plan will turn into a warning with the same message. :::info If your CSE POV trial plan is downgraded to the free plan, your CSE access will be disabled and your data will be erased after seven days. @@ -73,7 +73,7 @@ After you create a child org, you can’t delete it. 1. Click **+ Add Organization**.
orgs-page 1. The **Create New Organization** pane appears.
orgs-page -### Allocate Credits +### Allocate credits 1. **Plan Type**. Select your organization's plan type. By default, “POV Trial (45 days)” is selected. Leave it selected. 1. **Deployment**. Select a Sumo Logic deployment from the list. @@ -91,7 +91,7 @@ After you create a child org, you can’t delete it. * **CSE Log Ingest**. Estimated daily Cloud SIEM ingestion. 1. Click **Done** to go back. -### Basic Details +### Basic details 1. **Organization Name**. Enter a unique name for the org. 1. **Account Owner Email**. Enter the email address of the account owner. @@ -107,7 +107,7 @@ After you create a child org, you can’t delete it. 1. In the edit pane, choose the **Enterprise plan** that appears as an option in the **Plan Type** dropdown.
modify 1. A warning message is shown that says you won’t be able to downgrade the org once you upgrade it.
modify 1. Click **Set Baseline**. -1. The Credits Calculator appears.
calculator +1. The credits calculator appears.
calculator * **Continuous Log Ingest**. Enter estimated daily ingestion to the Continuous Tier. * **Frequent Log Ingest**. Enter estimated daily ingestion to the Frequent Tier. * **Infrequent Log Ingest**. Enter estimated daily ingestion to the Infrequent Tier. @@ -115,7 +115,7 @@ After you create a child org, you can’t delete it. * **Tracing**. Enter estimated daily ingestion of traces. 1. **Cloud SIEM Enterprise**. Click the checkbox to enable Cloud SIEM. When the **Cloud Log Ingest** field appears, enter a value in GB. :::note - Provisioning Cloud SIEM can take up to 24 hours. See [Monitor Cloud SIEM Provisioning](#monitor-cloud-siem-provisioning), below. + Provisioning Cloud SIEM can take up to 24 hours. See [Monitor Cloud SIEM provisioning](#monitor-cloud-siem-provisioning), below. ::: 1. As you enter the ingestion estimates, the number of credits required for the ingestion levels is incremented. 1. The calculator now shows the recommended credit allocation, which suggests how many credits you would need for the child org. This is calculated based on the baseline added, the burndowns in your contract, and the days remaining in your contract. @@ -138,7 +138,7 @@ If a POV trial org is not upgraded to Enterprise plan after 45 days, the org will * **Tracing**. Enter estimated daily ingestion of traces. 1. **Cloud SIEM Enterprise**. Click the checkbox to enable Cloud SIEM. When the **Cloud Log Ingest** field appears, enter a value in GB. :::note - Provisioning Cloud SIEM can take up to 24 hours. See [Monitor Cloud SIEM Provisioning](#monitor-cloud-siem-provisioning), below. + Provisioning Cloud SIEM can take up to 24 hours. See [Monitor Cloud SIEM provisioning](#monitor-cloud-siem-provisioning), below. ::: 1. As you enter the ingestion estimates, the number of credits required for the specified ingestion levels will be incremented. 1. The calculator now shows the recommended credit allocation, which suggests how many credits you would need for the child org. This is calculated based on the baseline added, the burndowns in your contract, and the days remaining in your contract. @@ -183,7 +183,7 @@ To change an org's credits allocation: **Example 2**: Suppose you need to increase credits for your child org. The image below shows that you have used 35 out of 31,026 credits allocated to your child org. To add more credits, select **Credits to be Added** and enter the additional credits required.
modify-allocation-recommendatio-example-2 1. If you want to modify the baseline, click **View Baseline**. The **Credits Calculator** appears. - 1. Click **Edit** and follow the steps in [Allocate Credits](#allocate-credits) to update the credits allocation.
edit-baseline + 1. Click **Edit** and follow the steps in [Allocate credits](#allocate-credits) to update the credits allocation.
edit-baseline 1. Once you save the new baseline, you can view the recommended value in the **Credits to be Added/Reduced** section.
baseline-credits-to-add 1. (Optional) Click **View Details** to view a detailed breakdown of the additional credits required.
baseline-view-details 1. Click **Save** once you finish editing the credit values. @@ -223,9 +223,9 @@ You can view the aggregate usage for all child orgs across usage categories in the * **Storage**. Credits used for log storage in the Continuous and Frequent Tiers. * **Cloud SIEM Ingest**. Credits used for logs ingested into Cloud SIEM. * **Infrequent Storage**. Credits used for log storage in the Infrequent Tier. -* **Promotional categories**. For more information, see [Promotional Credits](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#promotional-credits). +* **Promotional categories**. For more information, see [Promotional credits](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#promotional-credits). -By clicking the **Download Report** button, you can download the org usage data in csv format for further analysis and reporting. You can also download the detailed child org usage data in csv format by clicking **Download Detailed Credit Usages** from the kebab icon next to the Download Report button. +By clicking the **Download Report** button, you can download the org usage data in csv format for further analysis and reporting. You can also download the detailed child org usage data in csv format by clicking **Download Detailed Credit Usages** from the kebab icon next to the **Download Report** button. your description @@ -261,7 +261,7 @@ By clicking the **Download Report** button, you can download the selected child ## Audit logging for organizations -This section has examples of the messages Sumo Logic writes to the Audit Event Index when you create, deactivate, and update an org.   +This section has examples of the messages Sumo Logic writes to the [Audit Event Index](/docs/manage/security/audit-indexes/audit-event-index/) when you create, deactivate, and update an org.   ### OrganizationCreated diff --git a/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs.md index 852c377fde..d9515cf894 100644 --- a/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs.md +++ b/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs.md @@ -9,7 +9,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl'; import Iframe from 'react-iframe'; :::note -If you are a Sumo Logic Service Provider, see [Create and Manage Orgs (Service Providers)](create-manage-orgs-service-providers.md). +If you are a Sumo Logic service provider, see [Create and Manage Orgs (Service Providers)](/docs/manage/manage-subscription/create-and-manage-orgs/create-manage-orgs-service-providers/). ::: ## Availability @@ -19,16 +19,16 @@ If you are a Sumo Logic Service Provider, see [Create and Manage Orgs (Service P | Credits | Enterprise Operations, Enterprise Security, Enterprise Suite | :::note -This feature is not enabled by default. If you’d like to have it enabled, contact your Sumo Logic Account Executive. +This feature is not enabled by default. If you’d like to have it enabled, contact your Sumo Logic account executive. ::: -This topic has information about Sumo Logic’s Organizations (“Sumo Orgs”) feature, which you can use to create and manage orgs. We use the term *parent org* to refer to the org from which you create a new org, and *child orgs* to refer to the orgs you create.  +This topic has information about Sumo Logic’s organizations feature, which you can use to create and manage orgs. 
We use the term *parent org* to refer to the org from which you create a new org, and *child orgs* to refer to the orgs you create. 
-Sumo Orgs allows you to logically group, provision, and centrally manage and monitor the credits usage of multiple orgs.
+Sumo Logic organizations allow you to logically group, provision, and centrally manage and monitor the credit usage of multiple orgs.
When you create a child org, you provision it with credits, based on the ingest volume you estimate for the org. We refer to the different flavors of ingest—Continuous Log Ingest, Frequent Log Ingest, and so on—as *product variables*. When you provision a child org, you use a Credits Calculator to estimate and allocate required credits for each product variable. 
-We refer to your estimates of ingest capacity required for each product variable as *baselines*. Sumo Logic’s throttling multipliers for logs and metrics are based on these estimates. For example, if you estimate 1GB usage for logs and specify that as the baseline when you create the org, Sumo Logic will start [throttling](/docs/manage/ingestion-volume/log-ingestion.md) when ingestion to the org reaches 4 to 10 times the baseline. The multiplier depends on your account size.
+We refer to your estimates of ingest capacity required for each product variable as *baselines*. Sumo Logic’s throttling multipliers for logs and metrics are based on these estimates. For example, if you estimate 1 GB usage for logs and specify that as the baseline when you create the org, Sumo Logic will start [throttling](/docs/manage/ingestion-volume/log-ingestion/#log-throttling) when ingestion to the org reaches 4 to 10 times the baseline. The multiplier depends on your account size.
Users who have the required role capabilities (described in the following section) can create child orgs under a parent org, and manage and monitor the allocation and consumption of Sumo Logic credits across orgs, and for each child org. This functionality is available in the Sumo Logic UI in the **Organizations** tab and also in the [Organizations Management API](https://organizations.sumologic.com/docs/).
@@ -167,9 +167,9 @@ You can view the aggregate usage for all child orgs across usage categories in the
* **Storage**. Credits used for log storage in the Continuous and Frequent Tiers.
* **Cloud SIEM Ingest**. Credits used for logs ingested into Cloud SIEM.
* **Infrequent Storage**. Credits used for log storage in the Infrequent Tier.
-* **Promotional categories**. For more information, see [Promotional Credits](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#promotional-credits).
+* **Promotional categories**. For more information, see [Promotional credits](/docs/manage/manage-subscription/sumo-logic-credits-accounts/#promotional-credits).
-By clicking the **Download Report** button, you can download the org usage data in csv format for further analysis and reporting. You can also download the detailed child org usage data in csv format by clicking **Download Detailed Credit Usages** from the kebab icon next to the Download Report button.
+By clicking the **Download Report** button, you can download the org usage data in CSV format for further analysis and reporting. You can also download the detailed child org usage data in CSV format by clicking **Download Detailed Credit Usages** from the kebab icon next to the **Download Report** button.
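For a finer-grained look at the ingest behind those usage categories, you can also query the Data Volume Index from within an org. The following is a hedged sketch rather than an official report query: it assumes the `sumologic_volume` internal partition and the `sizeInBytes`/`count` JSON fields described in the [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index) documentation, so verify the field names against your own data before relying on it.

```sql
// Sketch: approximate daily log ingest in GB, broken down by source category.
// Assumes the Data Volume Index is enabled and uses its documented JSON shape.
_index=sumologic_volume _sourceCategory=sourcecategory_volume
| parse regex "\"(?<sourcecategory>[^\"]+)\"\:\{\"sizeInBytes\"\:(?<bytes>\d+),\"count\"\:(?<messages>\d+)\}" multi
| timeslice 1d
| bytes / 1024 / 1024 / 1024 as gbytes
| sum(gbytes) as ingest_gb by _timeslice, sourcecategory
| sort by _timeslice asc
```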
your description @@ -205,7 +205,7 @@ By clicking the **Download Report** button, you can download the selected child ## Audit logging for organizations -This section has examples of the messages Sumo Logic writes to the Audit Event Index when you create or update an org. +This section has examples of the messages Sumo Logic writes to the [Audit Event Index](/docs/manage/security/audit-indexes/audit-event-index/) when you create or update an org. ### OrganizationCreated diff --git a/docs/manage/manage-subscription/create-and-manage-orgs/index.md b/docs/manage/manage-subscription/create-and-manage-orgs/index.md index d634b66385..e540bcddf4 100644 --- a/docs/manage/manage-subscription/create-and-manage-orgs/index.md +++ b/docs/manage/manage-subscription/create-and-manage-orgs/index.md @@ -10,7 +10,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl'; ## Requirements for creating and managing orgs -There are several [role capabilities](/docs/manage/users-roles/roles/role-capabilities) that are required to work with orgs: +There are several [role capabilities](/docs/manage/users-roles/roles/role-capabilities/#organizations) that are required to work with orgs: * **View Organizations**. This capability is required to view the Organizations UI. * **Create Organizations**. This capability is required to create or provision child organizations. @@ -26,7 +26,7 @@ In this section, we'll introduce the following concepts:
icon

Create and Manage Orgs

-

Learn how to create and manage multiple Sumo Logic Orgs.

+

Learn how to create and manage multiple Sumo Logic orgs.

@@ -38,7 +38,7 @@ In this section, we'll introduce the following concepts:
icon

Create and Manage Orgs (Flex)

-

Learn how to create and manage multiple Sumo Logic Orgs with Flex data.

+

Learn how to create and manage multiple Sumo Logic orgs with Flex data.

@@ -151,7 +151,3 @@ Follow the below steps to delink the deactivated child org:
- Only **parent-org** users with the Manage Child Orgs capability can initiate a deletion workflow.
- Compatible with Enterprise, Trial/PoV, and Free-Forever child orgs.
- - - -
diff --git a/docs/manage/manage-subscription/create-and-manage-orgs/manage-org-settings.md b/docs/manage/manage-subscription/create-and-manage-orgs/manage-org-settings.md
index d7bb11857c..1ee807cac1 100644
--- a/docs/manage/manage-subscription/create-and-manage-orgs/manage-org-settings.md
+++ b/docs/manage/manage-subscription/create-and-manage-orgs/manage-org-settings.md
@@ -52,7 +52,7 @@ After you make this change, you will not be able to edit the account owner.
1. Click **Change Account Owner**.
Change_account_owner_prompt.png :::note
-If the account owner leaves your organization and you cannot transfer the account ownership, please [submit a support ticket](https://support.sumologic.com/support/s) to transfer the account ownership.
+If the account owner leaves your organization and you cannot transfer the account ownership, [submit a support ticket](https://support.sumologic.com/support/s) to have the ownership transferred.
:::
### Delete an organization
@@ -74,7 +74,7 @@ By default, your Sumo Logic account has a "service" subdomain. For example, `se
If you have multiple Sumo Logic accounts, you may find it useful to configure a custom subdomain for each of your Sumo Logic accounts.
-Custom subdomains can help ensure that requests are authenticated to the right account when links are received. Once configured by your account owner, your custom subdomain will be used in the links Sumo generates when you share queries or dashboards, or the links in alerts and other emails you may receive from your account. These subdomain-enabled links will direct the user to the specified account for authentication.
+Custom subdomains can help ensure that requests are authenticated to the right account when links are received. Once configured by your account owner, your custom subdomain will be used in the links Sumo Logic generates when you share queries or dashboards, or the links in alerts and other emails you may receive from your account. These subdomain-enabled links will direct the user to the specified account for authentication.
When you use custom subdomains in combination with SAML integrations [configured with SP initiated login](/docs/manage/security/saml/set-up-saml), your SAML authentication options will be provided within your subdomain-enabled Sumo Logic login page.
diff --git a/docs/manage/manage-subscription/create-and-manage-orgs/manage-orgs-for-mssps.md b/docs/manage/manage-subscription/create-and-manage-orgs/manage-orgs-for-mssps.md
index b2781cc883..78da9106bc 100644
--- a/docs/manage/manage-subscription/create-and-manage-orgs/manage-orgs-for-mssps.md
+++ b/docs/manage/manage-subscription/create-and-manage-orgs/manage-orgs-for-mssps.md
@@ -36,7 +36,7 @@ You can push the following:
* Cloud SIEM [rule tuning expressions](/docs/cse/rules/rule-tuning-expressions/)
* [Library](/docs/get-started/library)
* [Monitors](/docs/alerts/monitors/)
-* [Source Template](/docs/send-data/opentelemetry-collector/remote-management/source-templates/)
+* [Source templates](/docs/send-data/opentelemetry-collector/remote-management/source-templates/)
1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu, select **Organizations**. You can also click the **Go To...** menu at the top of the screen and select **Organizations**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Administration > Organizations**.
1. Select the **Manage Content** tab.
@@ -53,7 +53,7 @@ You can push the following:
1. Click **Push**. A **Pushing in progress** dialog is displayed.
1. (Optional) To stop the push, follow the steps below:
1. Click the **Stop Push** button on the dialog box.
stop-push-button - 1. Click **Stop Push** on the confirmation pop-up. To view the results table, refer to [View Results](#view-results).
stop-push-confirmation + 1. Click **Stop Push** on the confirmation pop-up. To view the results table, refer to [View results](#view-results).
stop-push-confirmation ### Tips
diff --git a/docs/manage/manage-subscription/manage-billing-information.md b/docs/manage/manage-subscription/manage-billing-information.md
index f2f1069c76..d1a7675a56 100644
--- a/docs/manage/manage-subscription/manage-billing-information.md
+++ b/docs/manage/manage-subscription/manage-billing-information.md
@@ -9,7 +9,7 @@ The **Billing** page allows admin users to add or update the credit card informa
All users who have Admin role privileges can manage the billing information for Sumo Logic. This includes the credit card number on file (monthly or annual payment), as well as the billing address and contact information.
-Once changes are submitted, Sumo Logic will begin applies the new credit card for the next billing cycle. To reassign the account owner role to another admin user, see Account Page for your account type.
+Once changes are submitted, Sumo Logic will begin to apply the new credit card for the next billing cycle. To reassign the account owner role to another admin user, see the Account page for your account type.
To modify your billing information:
diff --git a/docs/manage/manage-subscription/organization-usage-limits.md b/docs/manage/manage-subscription/organization-usage-limits.md
index 3b5b0d9c4a..1ae1a7bc6f 100644
--- a/docs/manage/manage-subscription/organization-usage-limits.md
+++ b/docs/manage/manage-subscription/organization-usage-limits.md
@@ -11,19 +11,19 @@ This page provides information about the query budget usage limits, which allows
## Ingestion - Throttling Limits
:::info
-Only **Administrator** have the access to view the **Ingestion - Throttling Limits** section.
+Only administrators have access to view the **Ingestion - Throttling Limits** section.
:::
-This section provides information about the baseline and throttling limits set. Click **View Usage and Throttling Limits** button to view the logs, metrics, and traces ingestion rate over the selected time range. With [View Recent Breaches](/docs/manage/security/audit-indexes/audit-index/#throttling-events) button you can view recent throttling limit breaches.
+This section provides information about the baseline and throttling limits that have been set. Click the **View Usage and Throttling Limits** button to view the logs, metrics, and traces ingestion rate over the selected time range. With the **View Recent Breaches** button, you can view recent [throttling limit breaches](/docs/manage/security/audit-indexes/audit-index/#throttling-events).
### Enable Ingestion Throttling Notifications
:::note
-Only users with **Administrator** access can enable this feature.
+Only users with administrator access can enable this feature.
:::
1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select your username and then **Preferences**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu, select the person silhouette icon and then **Preferences**.
Account Preferences -1. Access your [Preferences](/docs/get-started/account-settings-preferences/#my-preferences). +1. Access your [preferences](/docs/get-started/account-settings-preferences/#my-preferences). 1. Navigate to **My Preferences** and check the **Enable ingestion throttling notifications** checkbox.
enable-ingestion-throttling-notifications ## Availability
@@ -42,8 +42,8 @@ To manage the query size limit follow the below steps:
:::info
Sumo Logic defines two types of scan:
- - **Foreground interactive search**. Search page UI, Mobot, and Dashboards.
- - **Background search**. API, Scheduled Search, Monitor, Scheduled Views, and SLO.
+ - **Foreground interactive search**. Search page UI, Mobot, and dashboards.
+ - **Background search**. API, scheduled search, monitors, scheduled views, and SLO.
:::
:::note
diff --git a/docs/manage/manage-subscription/scan-budgets.md b/docs/manage/manage-subscription/scan-budgets.md
index ffec17c4aa..3d61d01d00 100644
--- a/docs/manage/manage-subscription/scan-budgets.md
+++ b/docs/manage/manage-subscription/scan-budgets.md
@@ -42,8 +42,8 @@ To create the query size limit using the **Advanced** configuration:
- **Only allow background query scans**. A warning message will be displayed if you run a query that exceeds the set budget. This will block foreground searches but will not impact any background searches or automated queries.
:::info
Sumo Logic defines two types of scan:
- - **Foreground interactive search**. Search page UI, Mobot, and Dashboards.
- - **Background search**. API, Scheduled Search, Monitors, Scheduled Views, and SLO.
+ - **Foreground interactive search**. Search page UI, Mobot, and dashboards.
+ - **Background search**. API, scheduled search, monitors, scheduled views, and SLO.
:::
1. **Details**. Enter the name for the scan budget.
create-scan-budget 1. Click **Save** to create the scan budget.
@@ -70,7 +70,7 @@ To view the selected scan budget:
- **Per Query Budget**. Limits the data (in GBs) that a single query can consume. If the query size exceeds the set limit, you will not be able to continue scanning until it is within the query size limit.
- **Time phased budgets**. Limits the data (in GBs) that a single user or a group can consume based on the time phase selected while creating the budget.
- **Status**. Describes if the scan budget is active or inactive.
- - **Usage Category**. Describes the type of scan. For Flex this is shown as **Flex Scan** and for Data tier this is shown as **Infrequent Scan**.
+ - **Usage Category**. Describes the type of scan. For Flex this is shown as **Flex Scan** and for data tier this is shown as **Infrequent Scan**.
- **Scope**. Displays the list of roles or users that the selected scan budget applies to or excludes.
- **Capacity (per user)**. Describes the budget set for an individual user's search.
- **Action when capacity reached**. Describes the type of action selected to notify you when the budget limit is reached.
diff --git a/docs/manage/manage-subscription/sumo-logic-credits-accounts.md b/docs/manage/manage-subscription/sumo-logic-credits-accounts.md
index 9b498be271..7a9238e5a5 100644
--- a/docs/manage/manage-subscription/sumo-logic-credits-accounts.md
+++ b/docs/manage/manage-subscription/sumo-logic-credits-accounts.md
@@ -8,11 +8,11 @@ description: View information on Sumo Logic Credits accounts and intuitively mon
import useBaseUrl from '@docusaurus/useBaseUrl';
import AccountCredit from '../../reuse/account-credit.md';
-Sumo Logic provides flexible account types within its Credits packaging for any size organization.
+Sumo Logic provides flexible account types within its credits packaging for any size organization.
-This page provides information on the Credits account types and how to monitor and manage your account.
+This page provides information on the credits account types and how to monitor and manage your account.
:::note
This plan was formerly called *Cloud Flex Credits*.
@@ -42,7 +42,7 @@ Trial accounts allow full access to all Sumo Logic features to test how Sumo Log
- **Retention**: 30 days for all data.
- **Users**: Up to 20 users.
-Trials are limited to 30 days. If you use up the credits allocated for the trial period before the period ends, Sumo Logic’s [standard throttling mechanism](../ingestion-volume/log-ingestion.md) will be applied to your log ingest. If you need to extend your trial period or request a Proof of Concept (PoC), contact our [sales team](mailto:sales@sumologic.com).
+Trials are limited to 30 days. If you use up the credits allocated for the trial period before the period ends, Sumo Logic’s [standard throttling mechanism](/docs/manage/ingestion-volume/log-ingestion/#log-throttling) will be applied to your log ingest. If you need to extend your trial period or request a Proof of Concept (PoC), contact our [sales team](https://support.sumologic.com/support/s/).
### Essentials
@@ -54,23 +54,21 @@ For details on upgrading to an Essentials plan or higher, see [Upgrade a Sumo Lo
Enterprise Operations accounts are optimized for best practice operational monitoring at any ingest volume.
-[Ingest Budgets](/docs/manage/ingestion-volume/ingest-budgets), an Enterprise plan feature, control the capacity of daily log ingestion volume sent to Sumo Logic from collectors. It's important to keep track of your daily data usage. 
For tips on how to monitor and limit the data you're sending to Sumo Logic, see [Log Ingestion](../ingestion-volume/log-ingestion.md). +:::tip +[Ingest Budgets](/docs/manage/ingestion-volume/ingest-budgets), a feature of all Enterprise plans, controls the capacity of daily log ingestion volume sent to Sumo Logic from collectors. It's important to keep track of your daily data usage. For tips on how to monitor and limit the data you're sending to Sumo Logic, see [Log Ingestion](/docs/manage/ingestion-volume/log-ingestion). +::: ### Enterprise Security Enterprise Security accounts include advanced security capabilities. Enterprise Security is ideal for security operation centers (SOCs). SOC teams can leverage the latest PCI compliance application frameworks and threat detection capabilities.   -[Ingest Budgets](/docs/manage/ingestion-volume/ingest-budgets), an Enterprise plan feature, control the capacity of daily log ingestion volume sent to Sumo Logic from collectors. It's important to keep track of your daily data usage. For tips on how to monitor and limit the data you're sending to Sumo Logic, see [Log Ingestion](../ingestion-volume/log-ingestion.md). - ### Enterprise Suite Enterprise Suite accounts are optimized to address the most advanced data insight challenges. Enterprise Suite accounts include all of Sumo Logic’s industry-leading capabilities including Sumo Logic’s Tiered Analytics. -[Ingest Budgets](/docs/manage/ingestion-volume/ingest-budgets), an Enterprise plan feature, control the capacity of daily log ingestion volume sent to Sumo Logic from collectors. It's important to keep track of your daily data usage. For tips on how to monitor and limit the data you're sending to Sumo Logic, see [Log Ingestion](../ingestion-volume/log-ingestion.md). - ## Features by plan type -The following table provides a summary list of key features by Credits package accounts. +The following table provides a summary list of key features by credits package accounts. | Feature | Free | Trial | Essentials | Enterprise Operations | Enterprise Security | Enterprise Suite | |:-- | :-- | :-- | :-- | :-- | :-- | :-- | @@ -162,7 +160,7 @@ This panel provides detailed analytics and comparisons for credit usage: * **Metrics Ingest**. Credits used for metrics. * **Storage**. Credits for log storage in Continuous and Frequent Tiers. * **Infrequent Storage**. Credits for log storage in the Infrequent Tier. - * **Promotional Credits**. See [Promotional Credits](#promotional-credits). + * **Promotional Credits**. See [Promotional credits](#promotional-credits). * **Usage % Change**. Highlights changes in usage over selected time intervals. * View data by time period (day, week, or month). * Visualize usage with line or column charts. @@ -175,14 +173,14 @@ To analyze usage trends: * Use the pan feature (magnifying glass icon) to scroll through data. * Hover over chart sections for detailed insights. -### Promotional Credits +### Promotional credits -There are times when Sumo Logic promotes services and consumables through the provision of Promotional Credits. Promotional Credits are non-transferrable and auto-expire at the end of the promotion period. In other words, if the Promotional Credits are not used within the promotion period, they do not carry over. They are of a "use it or lose it" nature. Promotional Credits are specific to a promotion and cannot be used for any service. The criteria, including promotion period, are listed in your contract. 
Promotional Credit consumption is calculated separately from the credits you paid for in your contract period. Promotional Credits are utilized as the priority credit for the specified credit variable.
+There are times when Sumo Logic promotes services and consumables through the provision of promotional credits. Promotional credits are non-transferable and auto-expire at the end of the promotion period. In other words, if the promotional credits are not used within the promotion period, they do not carry over. They are of a "use it or lose it" nature. Promotional credits are specific to a promotion and cannot be used for any other service. The criteria, including promotion period, are listed in your contract. Promotional credit consumption is calculated separately from the credits you paid for in your contract period. Promotional credits are utilized as the priority credit for the specified credit variable.
-If your contract includes 100,000 credits and 10,000 Promotional Credits for "Metrics," the first 10,000 credits used for Metrics will be from Promotional Credits. After depletion, contract credits will be used.
+If your contract includes 100,000 credits and 10,000 promotional credits for "Metrics," the first 10,000 credits used for metrics will be from promotional credits. After depletion, contract credits will be used.
-To monitor Promotional Credits:
-* Deselect all other usage categories in **Usage Categories** to isolate Promotional Credits.
+To monitor promotional credits:
+* Deselect all other usage categories in **Usage Categories** to isolate promotional credits.
* Refine further by deselecting specific credit types (e.g., Continuous Ingest, Storage).
-Promotional Credits graphs display the rate of consumption for allocated Promotional Credits.
+Promotional credits graphs display the rate of consumption for allocated promotional credits.
diff --git a/docs/manage/manage-subscription/sumo-logic-flex-accounts.md b/docs/manage/manage-subscription/sumo-logic-flex-accounts.md
index 499ed2a7de..7dec24b282 100644
--- a/docs/manage/manage-subscription/sumo-logic-flex-accounts.md
+++ b/docs/manage/manage-subscription/sumo-logic-flex-accounts.md
@@ -34,7 +34,7 @@ Free accounts give you access to most Sumo Logic features, with a credit allocat
-Trial accounts allow you to try all of Sumo Logic's advanced features to understand how Sumo Logic will fit within your organization before you buy. It includes a credit allocation to support a daily data volume limit of 1 GB per day providing approximately 500GB of search data volume daily or 15TB of search volume, 20 users, and 30 days of data retention. If you use up the credits allocated for the trial period before the period ends, Sumo Logic’s [standard throttling mechanism](/docs/manage/ingestion-volume/log-ingestion) will be applied to your log ingest.
+Trial accounts allow you to try all of Sumo Logic's advanced features to understand how Sumo Logic will fit within your organization before you buy. It includes a credit allocation that supports a daily data volume limit of 1 GB (approximately 500 GB of search data volume daily, or about 15 TB over the trial), plus 20 users and 30 days of data retention. If you use up the credits allocated for the trial period before the period ends, Sumo Logic’s [standard throttling mechanism](/docs/manage/ingestion-volume/log-ingestion/#log-throttling) will be applied to your log ingest.
Trials are limited to 30 days. 
If you need to extend your trial period, contact our sales team to inquire about a Proof of Concept (PoC).
@@ -192,7 +192,7 @@ This panel provides analytics to monitor and compare usage against contract capa
* **Tracing Ingest**. Credits used for tracing.
* **Cloud SIEM Ingest**. Credits used for logs in Cloud SIEM.
* **Metrics Ingest**. Credits used for metrics.
- * **Promotional Categories**. See [Promotional Credits](#promotional-credits).
+ * **Promotional Categories**. See [Promotional credits](#promotional-credits).
* **Usage % Change**. Highlights changes in usage over selected time intervals.
* Track credit consumption against the plan’s baseline usage.
* Drill into specific time intervals (day, week, or month) for granular insights.
* Hover over charts for detailed information.
* Download usage reports or credit usage data as CSV files.
-#### Promotional Credits
+#### Promotional credits
-There are times when Sumo Logic promotes services and consumables through the provision of Promotional Credits. Promotional Credits are non-transferrable and auto-expire at the end of the promotion period. In other words, if the Promotional Credits are not used within the promotion period, they do not carry over. They are of a "use it or lose it" nature. Promotional Credits are specific to a promotion and cannot be used for any service. The criteria, including promotion period, are listed in your contract. Promotional Credit consumption is calculated separately from the credits you paid for in your contract period. Promotional Credits are utilized as the priority credit for the specified credit variable.
+There are times when Sumo Logic promotes services and consumables through the provision of promotional credits. Promotional credits are non-transferable and auto-expire at the end of the promotion period. In other words, if the promotional credits are not used within the promotion period, they do not carry over. They are of a "use it or lose it" nature. Promotional credits are specific to a promotion and cannot be used for any other service. The criteria, including promotion period, are listed in your contract. Promotional credit consumption is calculated separately from the credits you paid for in your contract period. Promotional credits are utilized as the priority credit for the specified credit variable.
-For example: If you have 10,000 Promotional Credits for "Metrics" in a 100,000-credit contract, the first 10,000 credits for Metrics will use Promotional Credits before switching to paid credits.
+For example, if you have 10,000 promotional credits for "Metrics" in a 100,000-credit contract, the first 10,000 credits for metrics will use promotional credits before switching to paid credits.
-To filter and focus on Promotional Credits:
+To filter and focus on promotional credits:
* Deselect all other usage categories.
* Refine further by deselecting specific credit types (e.g., Continuous Ingest, Storage). 
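To watch total daily ingest as a single number, for example against the 1 GB daily volume limit of a trial described above, a similar hedged sketch against the Data Volume Index can help. The `sumologic_volume` partition and `sizeInBytes` field are assumptions based on the Data Volume Index documentation, and the 1 GB threshold is just the trial limit; substitute your own plan's baseline.

```sql
// Sketch: total ingest per day, flagged against an assumed 1 GB/day limit.
_index=sumologic_volume _sourceCategory=sourcecategory_volume
| parse regex "\"sizeInBytes\"\:(?<bytes>\d+)" multi
| timeslice 1d
| sum(bytes) as total_bytes by _timeslice
| total_bytes / 1024 / 1024 / 1024 as ingest_gb
| if(ingest_gb > 1, "over limit", "within limit") as status
| fields _timeslice, ingest_gb, status
```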
diff --git a/docs/manage/manage-subscription/upgrade-account/upgrade-cloud-flex-legacy-account.md b/docs/manage/manage-subscription/upgrade-account/upgrade-cloud-flex-legacy-account.md
index 98a6191a48..e355f20465 100644
--- a/docs/manage/manage-subscription/upgrade-account/upgrade-cloud-flex-legacy-account.md
+++ b/docs/manage/manage-subscription/upgrade-account/upgrade-cloud-flex-legacy-account.md
@@ -10,7 +10,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
We recommend transitioning to our newer [Flex Plan](/docs/manage/manage-subscription/sumo-logic-flex-accounts/) for the newest features and enhanced functionality.
:::
-This page has information about upgrading a Cloud Flex Legacy plan, which has the following [account types](/docs/manage/manage-subscription/upgrade-account/upgrade-cloud-flex-legacy-account): Free, Trial, Professional, and Enterprise.
+This page has information about upgrading a Cloud Flex Legacy plan, which has the following [account types](/docs/manage/manage-subscription/cloud-flex-legacy-accounts/#cloud-flex-legacy---account-types): Free, Trial, Professional, and Enterprise.
## Upgrade options for legacy accounts
@@ -40,14 +40,14 @@ It depends on your current account type:
* **Metrics**. Enter an estimate of the metrics to be ingested daily, in data points per minute (DPM).
1. **Billing Frequency.** Click the radio button next to **Annually** or **Monthly**. 
1. Click **Upgrade**.
-1. The page refreshes to display the **Payment Method** step.If you've previously upgraded you may choose to use the existing payment method and click **Next**.
+1. The page refreshes to display the **Payment Method** step. If you've previously upgraded, you may choose to use the existing payment method and click **Next**.
1. To add a new payment method, click **Use a New Credit Card**, enter the credit card information you'd like Sumo Logic to bill, and click **Submit**. 
New credit card information 1. The page refreshes to show the **Confirm Upgrade** step.
Order summary 1. Read the Service Level Agreements, then click **I have read and agree to the Service Level Agreements** to continue. 1. Click **Confirm** to complete the upgrade. After you click **Confirm**, the credit card you provided to Sumo Logic is charged. 1. The upgrade is processed, then a **Congratulations** screen appears. Click **Finish**. -If you have any issues, or if you do not see a charge on your credit card within 48 hours, contact [support@sumologic.com](mailto:support@sumologic.com). +If you have any issues, or if you do not see a charge on your credit card within 48 hours, [contact Support](https://support.sumologic.com/support/s/). :::note The price shown in the screenshots above may not reflect the actual current price. diff --git a/docs/manage/partitions/data-tiers/create-edit-partition.md b/docs/manage/partitions/data-tiers/create-edit-partition.md index 959a202d5f..942ebd2c56 100644 --- a/docs/manage/partitions/data-tiers/create-edit-partition.md +++ b/docs/manage/partitions/data-tiers/create-edit-partition.md @@ -1,7 +1,7 @@ --- id: create-edit-partition title: Create and Edit a Partition -description: Learn how to create and edit a Partition in an Index. +description: Learn how to create and edit a partition in an index. --- import useBaseUrl from '@docusaurus/useBaseUrl'; @@ -11,58 +11,58 @@ Partitions provide three primary functions: * Enhance searches * Enhance retention options -Partitions ingest your messages in real time, and differ from [Scheduled Views](/docs/manage/scheduled-views), which backfill with aggregate data. Partitions begin building a non-aggregate index from the time the Partition is created and only index data moving forward (from the time of creation). +Partitions ingest your messages in real time, and differ from [Scheduled Views](/docs/manage/scheduled-views), which backfill with aggregate data. Partitions begin building a non-aggregate index from the time the partition is created and only index data moving forward (from the time of creation). See [Partitions](/docs/manage/partitions) for limitations. ## Prerequisites -To create or edit a Partition, you must be an account Administrator or have the [Manage Partitions role capability](/docs/manage/users-roles/roles/role-capabilities). It's important to note that Partitions only affect data generated from the date of their creation onwards; any data predating their establishment is not included. +To create or edit a partition, you must be an account Administrator or have the [Manage Partitions role capability](/docs/manage/users-roles/roles/role-capabilities). It's important to note that partitions only affect data generated from the date of their creation onwards; any data predating their establishment is not included. -## Partitions and Data Tiers +## Partitions and data tiers -If you have a Sumo Logic Enterprise Suite account, you can take advantage of the [Data Tiers](/docs/manage/partitions/data-tiers/) feature, which allows you to choose the tier where the Partition will reside. You select the tier when you configure the Partition.  +If you have a Sumo Logic Enterprise Suite account, you can take advantage of the [data tiers](/docs/manage/partitions/data-tiers/) feature, which allows you to choose the tier where the partition will reside. You select the tier when you configure the partition.  -## Create a Partition +## Create a partition 1. [**New UI**](/docs/get-started/sumo-logic-ui). 
In the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Partitions**. You can also click the **Go To...** menu at the top of the screen and select **Partitions**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu select **Manage Data > Logs > Partitions**. 1. Click **+ Add Partition**. 1. The **Create New Partition** pane appears.
create-new-partition.png -1. **Name**. Enter a name for the Partition. Partitions must be named alphanumerically, with no special characters, with the exception of underscores (`_`) and hyphens (`-`). However, a Partition name cannot start with `sumologic_`, an underscore `_`, or a hyphen (`-`). -1. **Data Tier**. (Enterprise Suite accounts only) Click the radio button for the tier where you want the Partition to reside. -1. **Routing Expression**. Enter a [keyword search expression](/docs/search/get-started-with-search/build-search/keyword-search-expressions.md) that matches the data you want to have in the Partition, using [built-in metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) or [custom metadata fields](/docs/manage/fields). If you have an Enterprise Suite account, and are going to assign the Partition to the Infrequent Tier, see the information in the [Assigning Data to a Data Tier](/docs/manage/partitions/data-tiers#assigning-data-to-a-data-tier) section of the [Data Tiers](/docs/manage/partitions/data-tiers/) page. +1. **Name**. Enter a name for the partition. Partitions must be named alphanumerically, with no special characters, with the exception of underscores (`_`) and hyphens (`-`). However, a partition name cannot start with `sumologic_`, an underscore `_`, or a hyphen (`-`). +1. **Data Tier**. (Enterprise Suite accounts only) Click the radio button for the tier where you want the partition to reside. +1. **Routing Expression**. Enter a [keyword search expression](/docs/search/get-started-with-search/build-search/keyword-search-expressions.md) that matches the data you want to have in the partition, using [built-in metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) or [custom metadata fields](/docs/manage/fields). If you have an Enterprise Suite account, and are going to assign the partition to the Infrequent Tier, see the information in the [Assigning data to a data tier](/docs/manage/partitions/data-tiers#assigning-data-to-a-data-tier) section of the [Data Tiers](/docs/manage/partitions/data-tiers/) page. :::note - The [`_dataTier`](searching-data-tiers.md) search modifier is not supported in Partition routing expressions. + The [`_dataTier`](searching-data-tiers.md) search modifier is not supported in partition routing expressions. ::: -1. **Retention Period**. Enter the number of days you wish to retain the data in the Partition, or click **Apply the retention period of the Default Continuous Index**. -1. **Data Forwarding**. If you want to forward the data in the Partition to a cloud environment, click **Enable Data Forwarding** and specify the necessary information for the options that appear. For more information, see [Data Forwarding](/docs/manage/data-forwarding). +1. **Retention Period**. Enter the number of days you wish to retain the data in the partition, or click **Apply the retention period of the Default Continuous Index**. +1. **Data Forwarding**. If you want to forward the data in the partition to a cloud environment, click **Enable Data Forwarding** and specify the necessary information for the options that appear. For more information, see [Data Forwarding](/docs/manage/data-forwarding). ### Enhance search and retention -* To learn how to run a search against a Partition, see [Run a Search Against a Partition](/docs/manage/partitions/run-search-against-partition) and [Optimize Your Search with Partitions](/docs/search/optimize-search-partitions.md). 
+* To learn how to run a search against a partition, see [Run a Search Against a Partition](/docs/manage/partitions/run-search-against-partition) and [Optimize Your Search with Partitions](/docs/search/optimize-search-partitions.md). * To learn about data retention periods and how to modify them, see [Manage Indexes with Variable Retention](/docs/manage/partitions/manage-indexes-variable-retention). ### Best practices for optimum performance When designing partitions, keep the following in mind: -* **Avoid using queries that are subject to change**. In order to benefit from using Partitions, they should be used for long-term message organization. -* **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the Partition, which increases search performance. +* **Avoid using queries that are subject to change**. In order to benefit from using partitions, they should be used for long-term message organization. +* **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the partition, which increases search performance. * **Keep the query flexible**. Use a flexible query, such as `_sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query. -* **Group data together that is most often used together**. For example, create Partitions for categories such as web data, security data, or errors. +* **Group data together that is most often used together**. For example, create partitions for categories such as web data, security data, or errors. * **Group data together that is used by teams**. Partitions are an excellent way to organize messages by role and teams within your organization. -* **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a Partition. Including 90% of the data in your index in a Partition won’t improve search performance. -* **Don’t create overlapping partitions**. With multiple Partitions, messages could be duplicated if you create routing expressions that overlap. For example, if you have the following Partitions, messages for `_sourceCategory=prod/Apache` would be duplicated as they would be stored in both Partitions.  +* **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a partition. Including 90% of the data in your index in a partition won’t improve search performance. +* **Don’t create overlapping partitions**. With multiple partitions, messages could be duplicated if you create routing expressions that overlap. For example, if you have the following partitions, messages for `_sourceCategory=prod/Apache` would be duplicated as they would be stored in both partitions.  * Partition1: `_sourceCategory=prod` * Partition2: `_sourceCategory=*/Apache` -Overlapping data between two or more Partitions will count as additional ingest toward your account's quota. See [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index). +Overlapping data between two or more partitions will count as additional ingest toward your account's quota. See [Data Volume Index](/docs/manage/ingestion-volume/data-volume-index). ## Edit a partition This section has instructions for editing a partition.   -When you create a partition, you specify the Data Tier where the partition will reside, a routing expression that determines what data is stored in the partition, and a retention period. Optionally, you can enable data forwarding of the partition’s data to an S3 bucket.   
+
When you create a partition, you specify the data tier where the partition will reside, a routing expression that determines what data is stored in the partition, and a retention period. Optionally, you can enable data forwarding of the partition’s data to an S3 bucket.  
### About partition editability
@@ -74,8 +74,8 @@ You can make some changes to an existing partition:  
By default, Sumo Logic internal partitions like `sumologic_audit_events`, `sumologic_volume`, and so on, have the same retention period as the Default Continuous Index. You can change the retention period for any of these internal partitions as desired.
:::
* You can change the data forwarding configuration.
-* You cannot change the name of partition, reuse a partition name, or change the target Data Tier.  
-* Security partitions can’t be edited. Sumo Logic stores Cloud SIEM Records in seven partitions, one for each [Cloud SIEM Record type](/docs/cse/schema/cse-record-types). The names of the Sumo Logic partitions that contain Cloud SIEM Records begin with the string `sec_record_`. If you have a role that grants you the **View Partitions** capability, you can view the security partitions in the Sumo Logic UI. Note, however, that no user can edit or remove a security partition.
+* You cannot change the name of a partition, reuse a partition name, or change the target data tier.  
+* Security partitions can’t be edited. Sumo Logic stores Cloud SIEM records in seven partitions, one for each [Cloud SIEM record type](/docs/cse/schema/cse-record-types). The names of the Sumo Logic partitions that contain Cloud SIEM records begin with the string `sec_record_`. If you have a role that grants you the **View Partitions** capability, you can view the security partitions in the Sumo Logic UI. Note, however, that no user can edit or remove a security partition.
### Changing a partition's routing expression
@@ -87,9 +87,9 @@ Before changing the routing expression for a partition, consider the impact of t
1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Partitions**. You can also click the **Go To...** menu at the top of the screen and select **Partitions**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu select **Manage Data > Logs > Partitions**. 1. To refine the table results, use the **Add a filter** section located above the table. *AND* logic is applied when filtering between different sections, while *OR* logic is applied when filtering within the same section. - :::note - You can see the suggestions only if there are two or more responses for the same column or section. - ::: + :::note + You can see the suggestions only if there are two or more responses for the same column or section. + ::: 1. Click the row with the partition you want to edit. 1. The partition details are displayed on the right side of the page. 1. Click **Edit** to open the pane for editing.
edit-partition-pane.png diff --git a/docs/manage/partitions/data-tiers/index.md b/docs/manage/partitions/data-tiers/index.md index 240df92a21..be808dd005 100644 --- a/docs/manage/partitions/data-tiers/index.md +++ b/docs/manage/partitions/data-tiers/index.md @@ -1,11 +1,11 @@ --- slug: /manage/partitions/data-tiers title: Data Tiers -description: Data Tiers provide the ability to allocate data to different storage tiers based on frequency of access - Continuous, Frequent, and Infrequent. +description: Data tiers provide the ability to allocate data to different storage tiers based on frequency of access - Continuous, Frequent, and Infrequent. --- import useBaseUrl from '@docusaurus/useBaseUrl'; -This page describes Sumo Logic's Data Tiers feature. +This page describes Sumo Logic's data tiers feature. :::tip For related information, see [Data Tiers FAQ](faq.md). @@ -21,25 +21,25 @@ Some use cases require “high touch” data that you need to monitor and analyz Other use cases require much less frequent data analysis. Here, we’re talking about “low touch” data that can be very valuable when you want to mine your data for insights, provide periodic reports, or perform a root cause analysis. These use cases can require frequent or infrequent access to data like development, test, and pre-production logs; debug logs; CDN logs; and network logs. -Sumo Logic’s *Data Tiers* provide a comprehensive solution for all types of data that an organization has, low touch, high touch and everything in between, at an economical price. Data Tiers provide tier-based pricing based on your planned usage of the data you ingest.  +Sumo Logic’s *data tiers* provide a comprehensive solution for all types of data that an organization has, low touch, high touch and everything in between, at an economical price. Data tiers provide tier-based pricing based on your planned usage of the data you ingest.  :::note -Data Tiers must be enabled on your [Cloud Flex Legacy account](/docs/manage/manage-subscription/cloud-flex-legacy-accounts) or [Sumo Logic Credits account](/docs/manage/manage-subscription/sumo-logic-credits-accounts) plan to be able to access this functionality. Infrequent Tier, described below, is only available on Sumo Logic Credits. For more information, contact your Sumo Logic account representative. +Data tiers must be enabled on your [Cloud Flex Legacy account](/docs/manage/manage-subscription/cloud-flex-legacy-accounts) or [Sumo Logic Credits account](/docs/manage/manage-subscription/sumo-logic-credits-accounts) plan to be able to access this functionality. Infrequent Tier, described below, is only available on Sumo Logic Credits. For more information, contact your Sumo Logic account representative. ::: -## Types of Data Tiers  +## Types of data tiers  -Each Sumo Logic Data Tier supports a different use case and provides its own set of features and capabilities:  +Each Sumo Logic data tier supports a different use case and provides its own set of features and capabilities:  * The Continuous Tier is for the data you use to monitor and troubleshoot production applications and to ensure the security of your applications. * The Frequent Tier - available only for Sumo Logic Enterprise Suite plans - is for data you need to frequently access to troubleshoot and investigate issues. For example, you might use the Frequent Tier for development and test data that helps you investigate issues during development. Searching the Frequent Tier is free: it's included in the data ingestion price. 
* The Infrequent Tier - available only for Sumo Logic Enterprise Suite plans - is for data that is used to troubleshoot intermittent or hard-to-reproduce issues. For example, you might use the Infrequent Tier for debug logs, OS logs, thread dumps, and so on. The Infrequent Tier has a pay-per-search pricing model, and very low ingestion cost. -## Planning your use of Data Tiers  +## Planning your use of data tiers  If you do not specify a data tier, all data ingested into Sumo Logic will go to the Continuous Tier. Only data that goes to a partition can go to the Frequent or Infrequent Tiers. You'll need to configure the target tier for the data in a partition on the **Partition** page. -When planning your use of Data Tiers, it is important to remember the following guidelines: +When planning your use of data tiers, it is important to remember the following guidelines: * The General Index cannot be changed, and it is always in the Continuous Tier. * The tier where you assign your data governs how you can search and analyze the data. The table below shows capabilities that are available in each tier.  @@ -52,7 +52,7 @@ After a partition is created in a given tier, you cannot change its tier. If you ## Feature support by tier -How you can search and use your ingested data varies by the Data Tier it resides in, as described in the following table.  +How you can search and use your ingested data varies by the data tier it resides in, as described in the following table.  | Feature support | Continuous Tier | Frequent Tier | Infrequent Tier | | :-- | :-- | :-- | :-- | @@ -74,17 +74,17 @@ How you can search and use your ingested data varies by the Data Tier it resides * Feature activation is subject to minimum volume and service plan requirements, confirmed at time of transaction. -## Assigning data to a Data Tier +## Assigning data to a data tier -You assign data to a Data Tier at the partition level. When you create a partition, you define a routing expression and select the target tier for the data that matches the routing expression. For instructions, see [Create a Partition](/docs/manage/partitions/data-tiers/create-edit-partition). +You assign data to a data tier at the partition level. When you create a partition, you define a routing expression and select the target tier for the data that matches the routing expression. For instructions, see [Create and Edit a Partition](/docs/manage/partitions/data-tiers/create-edit-partition). -## Searching Data Tiers  +## Searching data tiers  For information about searching data tiers, see [Searching Data Tiers](searching-data-tiers.md). ## Common error messages -This section describes the most common error messages for Data Tiers. +This section describes the most common error messages for data tiers. * If you try to add a panel to a dashboard that uses data from the Frequent or Infrequent Tiers, you'll receive the following error message, because you can only use data from the Continuous Tier in a dashboard: `This query is not supported in Dashboards/Scheduled Searches because it is not in the Continuous Analytics tier. Please modify query and try again.`
Create panel * If you try to specify the scope of a Scheduled View or a Scheduled Search using a partition in the Frequent or Infrequent Data tiers, you'll receive this error message: `This query is not supported in Dashboards/Scheduled Searches because it is not in the Continuous Analytics tier. Please modify query and try again.` @@ -97,25 +97,25 @@ In this section, we'll introduce the following concepts:
icon

Create and Edit a Partition

-

Learn how to create and edit a Partition in an Index.

+

Learn how to create and edit a partition in an Index.

icon

View Details About a Partition

-

Learn how to view details about a Sumo Logic Partition.

+

Learn how to view details about a Sumo Logic partition.

- icon

Searching Data Tiers

-

Learn how to search specific Data Tiers.

+ icon

Searching data tiers

+

Learn how to search specific data tiers.

icon

Data Tiers FAQs

-

Get answers on various FAQs about Data Tiers.

+

Get answers on various FAQs about data tiers.

diff --git a/docs/manage/partitions/data-tiers/searching-data-tiers.md b/docs/manage/partitions/data-tiers/searching-data-tiers.md index c5c5905894..812c4022b6 100644 --- a/docs/manage/partitions/data-tiers/searching-data-tiers.md +++ b/docs/manage/partitions/data-tiers/searching-data-tiers.md @@ -1,11 +1,11 @@ --- id: searching-data-tiers title: Searching Data Tiers -description: Learn how to search specific Data Tiers. +description: Learn how to search specific data tiers. --- import useBaseUrl from '@docusaurus/useBaseUrl'; -This page has information about how to search different Data Tiers, and when you should use `_dataTier`, a *search modifier* that restricts your search to a single tier.  +This page has information about how to search different data tiers, and when you should use `_dataTier`, a *search modifier* that restricts your search to a single tier.  import Iframe from 'react-iframe'; @@ -27,7 +27,7 @@ import Iframe from 'react-iframe'; ## About the _dataTier search modifier -In Sumo Logic, a search modifier is a tag that gives the Sumo Logic backend information about how to process a query. The `_dataTier` modifier tells Sumo Logic which Data Tier a query should run against: Continuous, Frequent, or Infrequent. +In Sumo Logic, a search modifier is a tag that gives the Sumo Logic backend information about how to process a query. The `_dataTier` modifier tells Sumo Logic which data tier a query should run against: Continuous, Frequent, or Infrequent. :::note Search modifiers are different from Sumo Logic’s [built-in metadata fields](/docs/search/get-started-with-search/search-basics/built-in-metadata), which are key-value pairs that are tagged to incoming log data, and then can be used to find that data easily, later.  @@ -94,7 +94,7 @@ When you query scheduled views, the Sumo Logic Audit Index, or the Sumo Logic Au If you use `_dataTier` to specify a tier other than Continuous in a query of scheduled views or either of the audit indexes, Sumo Logic presents an error message. -### API Support with Rate Limiting +### API support with rate limiting The rate limits described in [Rate limit throttling](/docs/api/search-job/#rate-limit-throttling) apply to cross-tier searches with these concurrent active job limits:  diff --git a/docs/manage/partitions/data-tiers/view-partition-details.md b/docs/manage/partitions/data-tiers/view-partition-details.md index 6a38e7ccd5..79a7587ee9 100644 --- a/docs/manage/partitions/data-tiers/view-partition-details.md +++ b/docs/manage/partitions/data-tiers/view-partition-details.md @@ -6,13 +6,13 @@ description: Learn how to view details about a Sumo Logic partition. import useBaseUrl from '@docusaurus/useBaseUrl'; -To view details about a Partition: +To view details about a partition: 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Partitions**. You can also click the **Go To...** menu at the top of the screen and select **Partitions**.
[**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu select **Manage Data > Logs > Partitions**.
partitions-page -1. Click the row for a Partition to view its details.
view-edit-partition-pane.png +1. Click the row for a partition to view its details.
view-edit-partition-pane.png :::note
- The information displayed for partitions that contain Cloud SIEM Records varies from other partitions. You can tell if a partition contains Cloud SIEM Records from its name: The names of the Sumo Logic partitions that contain Cloud SIEM Records begin with the string `sec_record_`. The detailed view for security partitions does not display Data Tier or a routing expression. Note also that you can’t edit a security partition, or configure data forwarding for it. Cloud SIEM users can search security partitions, as described in [Searching for Cloud SIEM Records in Sumo Logic](/docs/cse/records-signals-entities-insights/search-cse-records-in-sumo).
+ The information displayed for partitions that contain Cloud SIEM records differs from that of other partitions. You can tell if a partition contains Cloud SIEM records from its name: The names of the Sumo Logic partitions that contain Cloud SIEM records begin with the string `sec_record_`. The detailed view for security partitions does not display Data Tier or a routing expression. Note also that you can’t edit a security partition, or configure data forwarding for it. Cloud SIEM users can search security partitions, as described in [Searching for Cloud SIEM Records in Sumo Logic](/docs/cse/records-signals-entities-insights/search-cse-records-in-sumo).
:::  
diff --git a/docs/manage/partitions/decommission-partition.md b/docs/manage/partitions/decommission-partition.md
index 0f51ab8209..9d4ef5116c 100644
--- a/docs/manage/partitions/decommission-partition.md
+++ b/docs/manage/partitions/decommission-partition.md
@@ -20,5 +20,5 @@ To decommission a partition:
:::
1. The partition details appear on the right side of the page.
decommision-button 1. Click **Decommission**. -1. In the Confirm dialog, click **OK**. +1. In the confirmation dialog, click **OK**. 1. The partition is decommissioned. diff --git a/docs/manage/partitions/edit-data-forwarding-destinations-partition.md b/docs/manage/partitions/edit-data-forwarding-destinations-partition.md index 76c08cd4a3..a58ae0465a 100644 --- a/docs/manage/partitions/edit-data-forwarding-destinations-partition.md +++ b/docs/manage/partitions/edit-data-forwarding-destinations-partition.md @@ -9,11 +9,11 @@ import useBaseUrl from '@docusaurus/useBaseUrl'; You can specify data forwarding settings for a partition so that the messages that were routed to an index can be forwarded to an existing or new Amazon S3 destination. 1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Partitions**. You can also click the **Go To...** menu at the top of the screen and select **Partitions**.
   [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu select **Manage Data > Logs > Partitions**.
-1. To refine the table results, use the **Add a filter** section located above the table. *AND* logic is applied when filtering between different sections, while *OR* logic is applied when filtering within the same section. Click the Partition you want to update.
+1. To refine the table results, use the **Add a filter** section located above the table. *AND* logic is applied when filtering between different sections, while *OR* logic is applied when filtering within the same section. Click the partition you want to update.
   :::note
   You can see the suggestions only if there are two or more responses for the same column or section.
   :::
   partitions-page
1. The partition details are displayed on the right side of the page.
   edit-partition-pane-search-icon
1. Click **Edit** to open the pane for editing.
   edit-partition-pane.png
-1. You can configure Data Forwarding, or if Data Forwarding is already configured, modify the configuration. For more information, see [Forward Data from Sumo Logic to S3 or GCS](../data-forwarding/forward-data-from-sumologic.md).
+1. You can configure data forwarding, or if data forwarding is already configured, modify the configuration. For more information, see [Forward Data from Sumo Logic to S3 or GCS](../data-forwarding/forward-data-from-sumologic.md).

diff --git a/docs/manage/partitions/faq.md b/docs/manage/partitions/faq.md
index 73fb941b2a..75afd19c62 100644
--- a/docs/manage/partitions/faq.md
+++ b/docs/manage/partitions/faq.md
@@ -21,7 +21,7 @@ For Flex customers:

## How does Sumo Logic decide on which partitions to scan?

1. For any query, the first step is determining the scope of the query. If your query does not explicitly mention the `index/view` clause in the source expression, Sumo Logic will consider all partitions in the default scope. You can override the scope of the query by mentioning the specific `index/view` in the source expression `(_index=partitionA)`, or by adding other tier partitions to the scope with the `_dataTier` modifier, such as `_dataTier=Infrequent or _dataTier=All`.
-2. Then apply a **[partition selection process](#what-happens-in-the-partition-selection-process)** as mentioned below that helps with the final list of partitions that will scan.
+2. Sumo Logic then applies the [partition selection process](#what-happens-in-the-partition-selection-process) described below to arrive at the final list of partitions to scan.
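A sketch of the three scoping cases described above, with `partitionA` standing in for one of your partitions:

```sql
// 1. No index clause: all partitions in the default scope are considered
error timeout

// 2. Explicit index clause: the search is pinned to partitionA
_index=partitionA error timeout

// 3. Widened scope: pull other tiers in with the _dataTier modifier
_dataTier=Infrequent or _dataTier=All
```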
## What happens in the partition selection process?

diff --git a/docs/manage/partitions/flex/create-edit-partition-flex.md b/docs/manage/partitions/flex/create-edit-partition-flex.md
index 65a181a39e..ba6c2facd7 100644
--- a/docs/manage/partitions/flex/create-edit-partition-flex.md
+++ b/docs/manage/partitions/flex/create-edit-partition-flex.md
@@ -1,7 +1,7 @@
---
id: create-edit-partition-flex
title: Create and Edit a Partition
-description: Learn how to create and edit a Partition in an Index.
+description: Learn how to create and edit a partition in an index.
---

import useBaseUrl from '@docusaurus/useBaseUrl';

@@ -11,15 +11,15 @@ Partitions provide three primary functions:

* Enhance searches
* Enhance retention options

-Partitions ingest your messages in real time, and differ from [Scheduled Views](/docs/manage/scheduled-views), which backfill with aggregate data. Partitions begin building a non-aggregate index from the time the Partition is created and only index data moving forward (from the time of creation).
+Partitions ingest your messages in real time, and differ from [Scheduled Views](/docs/manage/scheduled-views), which backfill with aggregate data. Partitions begin building a non-aggregate index from the time the partition is created and only index data moving forward (from the time of creation).

See [Partitions](/docs/manage/partitions) for limitations.

## Prerequisites

-To create or edit a Partition, you must be an account Administrator or have the [Manage Partitions role capability](/docs/manage/users-roles/roles/role-capabilities). It's important to note that Partitions only affect data generated from the date of their creation onwards; any data predating their establishment is not included.
+To create or edit a partition, you must be an account Administrator or have the [Manage Partitions role capability](/docs/manage/users-roles/roles/role-capabilities). It's important to note that partitions only affect data generated from the date of their creation onwards; any data predating their establishment is not included.

-## Create a Partition
+## Create a partition

:::important
The search modifier `_dataTier` is not supported for Flex queries.
:::

@@ -28,31 +28,31 @@ The search modifier `_dataTier` is not supported for Flex queries.

1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Partitions**. You can also click the **Go To...** menu at the top of the screen and select **Partitions**.
   [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu select **Manage Data > Logs > Partitions**.
1. Click **+ Add Partition**.
1. The **Create New Partition** pane appears.
   create-new-partition-flex.png
-1. **Name**. Enter a name for the Partition. Partitions must be named alphanumerically, with no special characters, with the exception of underscores (`_`) and hyphens (`-`). However, a Partition name cannot start with `sumologic_`, an underscore (`_`), or a hyphen (`-`).
+1. **Name**. Enter a name for the partition. Partitions must be named alphanumerically, with no special characters, with the exception of underscores (`_`) and hyphens (`-`). However, a partition name cannot start with `sumologic_`, an underscore (`_`), or a hyphen (`-`).
1. (Optional) **Include this partition in default scope**. By default, this checkbox is selected. Deselect this checkbox if you need to exclude this partition from the [default scope in your search](/docs/manage/partitions/flex/faq/#how-can-i-optimize-my-query-using-default-scope).
   :::note
   After changing the default scope of a partition, expect a delay of 2 to 3 minutes for the change to be reflected in the query scope.
   :::
-1. **Routing Expression**. Enter a [keyword search expression](/docs/search/get-started-with-search/build-search/keyword-search-expressions.md) that matches the data you want to have in the Partition, using [built-in metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) or [custom metadata fields](/docs/manage/fields).
-1. **Retention Period**. Enter the number of days you wish to retain the data in the Partition, or click **Apply the retention period of sumologic_default**.
+1. **Routing Expression**. Enter a [keyword search expression](/docs/search/get-started-with-search/build-search/keyword-search-expressions.md) that matches the data you want to have in the partition, using [built-in metadata](/docs/search/get-started-with-search/search-basics/built-in-metadata) or [custom metadata fields](/docs/manage/fields). An example sketch follows this list.
+1. **Retention Period**. Enter the number of days you wish to retain the data in the partition, or click **Apply the retention period of sumologic_default**.
1. **Compliance data**. Select **Mark as compliance data** to prevent changes to the partition's routing expression and retention period.
-1. **Data Forwarding**. If you want to forward the data in the Partition to a cloud environment, click **Enable Data Forwarding** and specify the necessary information for the options that appear. For more information, see [Data Forwarding](/docs/manage/data-forwarding).
+1. **Data Forwarding**. If you want to forward the data in the partition to a cloud environment, click **Enable Data Forwarding** and specify the necessary information for the options that appear. For more information, see [Data Forwarding](/docs/manage/data-forwarding).
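As an example of the **Routing Expression** step, the sketch below routes production web logs into a partition named `prod_web`; both the partition name and the source category are assumptions for illustration:

```sql
// Routing expression entered when creating the prod_web partition (hypothetical)
_sourceCategory=prod/web/*

// Once the partition exists, newly ingested matches can be searched by name
_index=prod_web error
| count by _sourceCategory
```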
### Enhance search and retention

-* To learn how to run a search against a Partition, see [Run a Search Against a Partition](/docs/manage/partitions/run-search-against-partition) and [Optimize Your Search with Partitions](/docs/search/optimize-search-partitions.md).
+* To learn how to run a search against a partition, see [Run a Search Against a Partition](/docs/manage/partitions/run-search-against-partition) and [Optimize Your Search with Partitions](/docs/search/optimize-search-partitions.md).
* To learn about data retention periods and how to modify them, see [Manage Indexes with Variable Retention](/docs/manage/partitions/manage-indexes-variable-retention).

### Best practices for optimum performance

When designing partitions, keep the following in mind:

-* **Avoid using queries that are subject to change**. In order to benefit from using Partitions, they should be used for long-term message organization.
-* **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the Partition, which increases search performance.
+* **Avoid using queries that are subject to change**. In order to benefit from using partitions, they should be used for long-term message organization.
+* **Make the query as specific as possible**. Making the query specific will reduce the amount of data in the partition, which increases search performance.
* **Keep the query flexible**. Use a flexible query, such as `_sourceCategory=*Apache*`, so that metadata can be adjusted without breaking the query.
-* **Group data together that is most often used together**. For example, create Partitions for categories such as web data, security data, or errors.
+* **Group data together that is most often used together**. For example, create partitions for categories such as web data, security data, or errors.
* **Group data together that is used by teams**. Partitions are an excellent way to organize messages by role and teams within your organization.
-* **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a Partition. Including 90% of the data in your index in a Partition won’t improve search performance.
+* **Avoid including too much data in your partition**. Send between 2% and 20% of your data to a partition. Including 90% of the data in your index in a partition won’t improve search performance.
-* **Don’t create overlapping partitions**. With multiple Partitions, messages could be duplicated if you create routing expressions that overlap. For example, if you have the following Partitions, messages for `_sourceCategory=prod/Apache` would be duplicated as they would be stored in both Partitions.
+* **Don’t create overlapping partitions**. With multiple partitions, messages could be duplicated if you create routing expressions that overlap. For example, if you have the following partitions, messages for `_sourceCategory=prod/Apache` would be duplicated as they would be stored in both partitions.
  * Partition1: `_sourceCategory=prod`
  * Partition2: `_sourceCategory=*/Apache`
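One way to check for the duplication described in the last bullet is to group a search by the built-in `_index` metadata field; with the overlapping expressions above, matching messages show up under both partitions. A sketch:

```sql
// Messages matching _sourceCategory=prod/Apache satisfy both routing
// expressions above, so they are stored in, and counted under, both partitions
_sourceCategory=prod/Apache
| count by _index
```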
@@ -76,15 +76,15 @@ You can make some changes to an existing partition:

* You can change the data forwarding configuration.
* You cannot change the name of a partition or reuse a partition name.
* You cannot edit the audit index partition to include it in the default scope.
-* Security partitions can’t be edited. Sumo Logic stores Cloud SIEM Records in seven partitions, one for each [Cloud SIEM Record type](/docs/cse/schema/cse-record-types). The names of the Sumo Logic partitions that contain Cloud SIEM Records begin with the string `sec_record_`. If you have a role that grants you the **View Partitions** capability, you can view the security partitions in the Sumo Logic UI. Note, however, that no user can edit or remove a security partition.
+* Security partitions can’t be edited. Sumo Logic stores Cloud SIEM records in seven partitions, one for each [Cloud SIEM record type](/docs/cse/schema/cse-record-types). The names of the Sumo Logic partitions that contain Cloud SIEM records begin with the string `sec_record_`. If you have a role that grants you the **View Partitions** capability, you can view the security partitions in the Sumo Logic UI. Note, however, that no user can edit or remove a security partition.

### How to edit a partition

1. [**New UI**](/docs/get-started/sumo-logic-ui). In the main Sumo Logic menu select **Data Management**, and then under **Logs** select **Partitions**. You can also click the **Go To...** menu at the top of the screen and select **Partitions**.
   [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu select **Manage Data > Logs > Partitions**.
1. To refine the table results, use the **Add a filter** section located above the table. *AND* logic is applied when filtering between different sections, while *OR* logic is applied when filtering within the same section.
-  :::note
-  You can see the suggestions only if there are two or more responses for the same column or section.
-  :::
+   :::note
+   You can see the suggestions only if there are two or more responses for the same column or section.
+   :::
1. Click the row with the partition you want to edit.
1. The partition details are displayed on the right side of the page.
1. Click **Edit** to open the pane for editing.
   edit-partition-pane-flex.png

diff --git a/docs/manage/partitions/flex/index.md b/docs/manage/partitions/flex/index.md
index fbbb5dcc95..f9fc7d1585 100644
--- a/docs/manage/partitions/flex/index.md
+++ b/docs/manage/partitions/flex/index.md
@@ -7,7 +7,7 @@ description: Learn about Sumo Logic Flex Pricing.

import useBaseUrl from '@docusaurus/useBaseUrl';
import Iframe from 'react-iframe';

-Flex Pricing delivers a new financial model for log management in which you can centralize, store, and analyze all application, infrastructure, and security data in one place. This drives collaboration and velocity while delivering a reliable and secure digital experience. Here's how it works:
+Flex Pricing delivers a new financial model for log management in which you can centralize, store, and analyze all application, infrastructure, and security data in one place. This drives collaboration and velocity while delivering a reliable and secure digital experience.