diff --git a/source/includes/images/industry-solutions/Writing Fig1.svg b/source/includes/images/industry-solutions/Writing Fig1.svg
new file mode 100644
index 00000000..621a8fe6
--- /dev/null
+++ b/source/includes/images/industry-solutions/Writing Fig1.svg
@@ -0,0 +1,171 @@
+
+
\ No newline at end of file
diff --git a/source/includes/images/industry-solutions/Writing Fig2.svg b/source/includes/images/industry-solutions/Writing Fig2.svg
new file mode 100644
index 00000000..cacd6c7b
--- /dev/null
+++ b/source/includes/images/industry-solutions/Writing Fig2.svg
@@ -0,0 +1,874 @@
+
+
diff --git a/source/includes/images/industry-solutions/Writing Fig3.svg b/source/includes/images/industry-solutions/Writing Fig3.svg
new file mode 100644
index 00000000..087af0be
--- /dev/null
+++ b/source/includes/images/industry-solutions/Writing Fig3.svg
@@ -0,0 +1,3 @@
+
\ No newline at end of file
diff --git a/source/includes/images/industry-solutions/Writing Fig4.svg b/source/includes/images/industry-solutions/Writing Fig4.svg
new file mode 100644
index 00000000..33eb08d8
--- /dev/null
+++ b/source/includes/images/industry-solutions/Writing Fig4.svg
@@ -0,0 +1,3 @@
+
\ No newline at end of file
diff --git a/source/solutions-library.txt b/source/solutions-library.txt
index 4581b3bb..b6e6c133 100644
--- a/source/solutions-library.txt
+++ b/source/solutions-library.txt
@@ -107,8 +107,21 @@ kick-start their projects.
capabilities by integrating MongoDB Atlas Vector Search,
Superduper.io, and LLMs.
- .. App-Driven Analytics
- .. --------------------
+ App-Driven Analytics
+ --------------------
+
+ .. card-group::
+ :columns: 2
+ :style: extra-compact
+
+ .. card::
+         :headline: Automating Digital Underwriting with Machine Learning
+         :url: https://deploy-preview-218--docs-atlas-architecture.netlify.app/solutions-library/automating-digital-underwriting/
+         :icon: mdb_vector_search
+         :icon-alt: Atlas Vector Search icon
+
+      Leverage machine learning with real-time data processing to
+      automate digital underwriting.
.. tab:: Media
:tabid: media
diff --git a/source/solutions-library/automating-digital-underwriting.txt b/source/solutions-library/automating-digital-underwriting.txt
new file mode 100644
index 00000000..3c811e12
--- /dev/null
+++ b/source/solutions-library/automating-digital-underwriting.txt
@@ -0,0 +1,212 @@
+.. _arch-center-is-digital-underwriting-machinelearning-solution:
+
+=====================================================
+Automating Digital Underwriting with Machine Learning
+=====================================================
+
+.. facet::
+ :name: genre
+ :values: tutorial
+
+.. contents:: On this page
+ :local:
+ :backlinks: none
+ :depth: 1
+ :class: singlecol
+
+Leverage machine learning with real-time data processing to automate digital underwriting.
+
+**Use cases:** `Gen AI `__,
+`Analytics `__
+
+**Industries:** `Insurance `__,
+`Financial Services `__,
+`Healthcare `__
+
+**Products and tools:** `Time Series `__,
+`Atlas App Services `__,
+`Atlas Triggers `__,
+`Atlas Functions `__,
+`Atlas Charts `__,
+`Spark Connector `__
+
+**Partners:** `Databricks `__
+
+Solution Overview
+-----------------
+
+Imagine being able to offer your customers personalized, usage-based
+premiums that take into account their driving habits and behavior. To do
+this, you'll need to gather data from connected vehicles, send it to a
+machine learning platform for analysis, and then use the results to
+create a personalized premium for your customers. You’ll also want to
+visualize the data to identify trends and gain insights. This unique,
+tailored approach will give your customers greater control over their
+insurance costs while helping you to provide more accurate and fair
+pricing.
+
+In the GitHub repo, you will find detailed, step-by-step instructions on
+how to build the data upload and transformation pipeline leveraging
+MongoDB Atlas platform features, as well as how to generate, send, and
+process events to and from Databricks.
+
+**By the end of this demo, you'll have created a data visualization
+with Atlas Charts that tracks changes in automated insurance premiums
+in near real time:**
+
+.. video:: https://www.youtube.com/watch?v=91WlXYEUEkk
+
+Other Applicable Industries and Use Cases
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**Financial Services**: Banks and financial institutions must be able to
+make sense of time-stamped financial transactions for trading, fraud
+detection, and more.
+
+**Retail**: Gain real-time insights into sales, inventory, and
+customer behavior as they happen.
+
+**Healthcare**: From the modes of transportation to the packages
+themselves, IoT sensors enable supply chain optimization both in
+transit and on site.
+
+Reference Architecture
+----------------------
+
+.. figure:: /includes/images/industry-solutions/Writing Fig1.svg
+ :figwidth: 1200px
+ :alt: An illustration shows a reference architecture
+
+ Figure 1: Reference Architecture With MongoDB
+
+Data Model Approach
+-------------------
+
+A basic example data model to support this use case would include
+customers, the trips they take, the policies they purchase, and the
+vehicles insured by those policies.
+
+This example builds out three MongoDB collections, as well as two
+materialized views. The full Hackloade data model, which defines all
+the MongoDB objects within this example, can be found on GitHub.
+
+.. figure:: /includes/images/industry-solutions/Writing Fig2.svg
+ :figwidth: 1200px
+ :alt: An illustration shows the MongoDB Data model approach
+
+   Figure 2: MongoDB data model approach
+
+Building the Solution
+---------------------
+
+A dataset that includes the total distance driven in each car journey
+is loaded into MongoDB. A cron job runs every day at midnight,
+summarizing the day's trips and compiling them into a document stored
+in a new collection called 'customerTripDaily'. A second cron job runs
+on the 25th day of each month, aggregating the daily documents into a
+new collection called 'customerTripMonthly'. Every time a new monthly
+summary is created, an Atlas Function posts the month's total distance
+and the baseline premium to Databricks for an ML prediction. The
+prediction is then sent back to MongoDB and added to the
+'customerTripMonthly' document. As a final step, you can visualize all
+of your data with Atlas Charts.
+
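The two scheduled jobs described above correspond to Atlas scheduled triggers driven by standard CRON expressions. A minimal sketch of what those trigger configurations might look like follows; the trigger and function names are illustrative assumptions, not the repo's exact settings:

```javascript
// Hypothetical Atlas scheduled-trigger configurations. Only the CRON
// schedules are taken from the walkthrough above; names are invented.
const dailyTrigger = {
  name: "dailyTripSummaryTrigger",
  type: "SCHEDULED",
  config: { schedule: "0 0 * * *" }, // every day at 00:00 UTC
  function_name: "buildCustomerTripDaily",
};

const monthlyTrigger = {
  name: "monthlyTripSummaryTrigger",
  type: "SCHEDULED",
  config: { schedule: "0 0 25 * *" }, // 00:00 UTC on the 25th of each month
  function_name: "buildCustomerTripMonthly",
};
```

Each trigger invokes an Atlas Function that runs the corresponding aggregation pipeline, as detailed in the steps below.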
+.. procedure::
+ :style: normal
+
+ .. step:: Creating a data processing pipeline with a materialized view
+
+ The data processing pipeline component of this example consists of
+ sample data, a daily materialized view, and a monthly materialized
+ view. A sample dataset of IoT vehicle telemetry data represents
+ the motor vehicle trips taken by customers. It’s loaded into the
+ collection named 'customerTripRaw'. The dataset can be found on
+      GitHub and can be loaded via mongoimport or other methods. To
+ create a materialized view, a scheduled trigger executes a
+ function that runs an aggregation pipeline, generating a daily
+ summary of the raw IoT data and placing it in a materialized view
+ collection named 'customerTripDaily'. Similarly, for a monthly
+ materialized view, a scheduled trigger executes a function that
+ runs an aggregation pipeline, summarizing the information in the
+ 'customerTripDaily' collection on a monthly basis and placing it
+ in a materialized view collection named 'customerTripMonthly'.
+
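As a sketch of the daily roll-up, the aggregation might group raw trips by customer and day, then `$merge` the result into the materialized view. The field names (`customerId`, `tripStart`, `tripDistance`) are assumptions; the repo's Hackloade data model defines the real schema. The plain reducer mirrors the `$group` stage so the logic can be checked without a live cluster:

```javascript
// Hypothetical daily roll-up pipeline; field names are assumptions.
const dailyPipeline = [
  {
    $group: {
      _id: {
        customerId: "$customerId",
        day: { $dateTrunc: { date: "$tripStart", unit: "day" } },
      },
      totalDistance: { $sum: "$tripDistance" },
      tripCount: { $sum: 1 },
    },
  },
  // $merge makes this an on-demand materialized view: matching daily
  // documents are replaced, new ones are inserted.
  { $merge: { into: "customerTripDaily", whenMatched: "replace" } },
];

// The same grouping expressed as a plain reducer, for testing the
// roll-up logic in memory.
function rollUpDaily(trips) {
  const byKey = new Map();
  for (const t of trips) {
    const day = t.tripStart.toISOString().slice(0, 10);
    const key = `${t.customerId}|${day}`;
    const acc = byKey.get(key) ??
      { customerId: t.customerId, day, totalDistance: 0, tripCount: 0 };
    acc.totalDistance += t.tripDistance;
    acc.tripCount += 1;
    byKey.set(key, acc);
  }
  return [...byKey.values()];
}
```

The monthly view would follow the same pattern, grouping `customerTripDaily` documents by month instead of raw trips by day.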
+ See the following GitHub repos to create the data processing pipeline:
+
+      - Step 1: `Load the sample data `__
+      - Step 2: `Set up a daily cron job `__
+      - Step 3: `Set up a monthly cron job `__
+
+ .. figure:: /includes/images/industry-solutions/Writing Fig3.svg
+ :figwidth: 1200px
+ :alt: An illustration shows how to create a data processing pipeline
+
+ Figure 3: Creating a data processing pipeline
+
+ .. step:: Automating insurance premium calculations with a machine learning model
+
+ The decision-processing component of this example consists of a
+ scheduled trigger that collects the necessary data and posts the
+      payload to a Databricks MLflow API endpoint. (The model was
+ previously trained using the MongoDB Spark Connector on
+ Databricks.) It waits for the model to respond with a calculated
+ premium based on the miles driven by a given customer in a month.
+ Then the scheduled trigger updates the 'customerPolicy' collection
+ to append a new monthly premium calculation as a new subdocument
+ within the 'monthlyPremium' array.
+
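The decision step above can be sketched as two pure helpers plus the trigger wiring (shown as comments). The endpoint URL, payload shape, and field names are assumptions for illustration; MLflow model serving commonly accepts a `dataframe_records` payload:

```javascript
// Hypothetical helpers; field and collection names are assumptions,
// not the repo's exact code.
function buildScoringPayload(monthlyDoc, baselinePremium) {
  // MLflow-style scoring request body.
  return {
    dataframe_records: [
      { distanceDriven: monthlyDoc.totalDistance, baselinePremium },
    ],
  };
}

function buildPremiumUpdate(prediction, month) {
  // Appends one calculated premium per month to the policy document's
  // 'monthlyPremium' array.
  return {
    $push: {
      monthlyPremium: { month, premium: prediction, calculatedAt: new Date() },
    },
  };
}

// Inside the Atlas scheduled trigger, the wiring might look like:
// const resp = await context.http.post({
//   url: "https://<workspace>.databricks.com/model/premium/invocations",
//   body: buildScoringPayload(doc, 500),
//   encodeBodyAsJSON: true,
// });
// const prediction = JSON.parse(resp.body.text()).predictions[0];
// await policies.updateOne({ customerId: doc.customerId },
//   buildPremiumUpdate(prediction, doc.month));
```

Keeping the payload and update builders separate from the HTTP and database calls makes the premium logic easy to unit test.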
+      See the following GitHub repos to automate the premium calculations:
+
+      - Step 4: `Set up a calculate premium trigger `__
+      - Step 5: `Set up the Databricks connection `__
+      - Step 6: `Write the machine learning model prediction to MongoDB `__
+
+ .. figure:: /includes/images/industry-solutions/Writing Fig4.svg
+ :figwidth: 1200px
+ :alt: Automating Calculations with Machine Learning Model
+
+ Figure 4: Automating Calculations with Machine Learning Model
+
+   .. step:: Near-real-time insights into insurance premium changes over time
+
+ Once the monthly premium calculations have been appended, it’s
+ easy to set up Atlas Charts to visualize your newly calculated
+ usage-based premiums. Configure different charts to see how
+ premiums have changed over time to discover patterns.
+
+Technologies and Products Used
+------------------------------
+
+MongoDB's modern, multi-cloud database platform:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- `Atlas Database `__
+- `Aggregation Pipelines `__
+- `Materialized Views `__
+- `Time Series `__
+- `MongoDB Spark Connector `__
+- `Atlas Charts `__
+- `Atlas App Services `__
+
+  - `Triggers `__
+  - `Functions `__
+
+Partner technologies:
+~~~~~~~~~~~~~~~~~~~~~
+
+- `Databricks `__
+
+Key Considerations
+------------------
+
+- Building materialized views on time series data: refer to steps 1-3 in the GitHub repo.
+- Running aggregation pipelines from scheduled (CRON) triggers: refer to step 2 or 3 in the GitHub repo.
+- Serving machine learning models with MongoDB Atlas data: refer to step 4 in the GitHub repo.
+- Writing a machine learning model prediction to an Atlas database: refer to step 6 in the GitHub repo.
+- Visualizing near-real-time insights of continuously changing model results: refer to the Bonus step in the GitHub repo.
+
+Authors
+-------
+
+- Jeff Needham, MongoDB
+- Ainhoa Múgica, MongoDB
+- Luca Napoli, MongoDB
+- Karolina Ruiz Rogelj, MongoDB
diff --git a/source/solutions-library/insurance-app-driven-analytics.txt b/source/solutions-library/insurance-app-driven-analytics.txt
index 1b45a9a7..1d61331a 100644
--- a/source/solutions-library/insurance-app-driven-analytics.txt
+++ b/source/solutions-library/insurance-app-driven-analytics.txt
@@ -1,2 +1,4 @@
.. toctree::
- :titlesonly:
\ No newline at end of file
+ :titlesonly:
+
+ Digital Underwriting
\ No newline at end of file
diff --git a/source/solutions-library/insurance.txt b/source/solutions-library/insurance.txt
index 2ed39718..91ae8b77 100644
--- a/source/solutions-library/insurance.txt
+++ b/source/solutions-library/insurance.txt
@@ -1,6 +1,6 @@
.. toctree::
:titlesonly:
+ App-Driven Analytics
Gen AI
-.. App-Driven Analytics
\ No newline at end of file
diff --git a/source/solutions-library/retail.txt b/source/solutions-library/retail.txt
index f8aa90dd..291f2751 100644
--- a/source/solutions-library/retail.txt
+++ b/source/solutions-library/retail.txt
@@ -2,6 +2,5 @@
:titlesonly:
Catalog
- Gen AI
-
-.. Personalization
\ No newline at end of file
+ Personalization
+.. Gen AI
\ No newline at end of file