Changes from all commits (25 commits)
3644b3a
Add blog functionality and multiple posts on generative AI
ahzan-dev Aug 26, 2025
6be5a31
Add blog hero section with customizable background image and styling
ahzan-dev Aug 26, 2025
9092864
Merge pull request #2666 from jaseci-labs/main
ahzan-dev Aug 29, 2025
ec46e94
Merge branch 'main' of https://github.com/jaseci-labs/jaseci into blo…
RanuriG Sep 1, 2025
c8f9d8c
Merge branch 'main' into blog-feature
RanuriG Sep 9, 2025
a84a52e
yml changed
RanuriG Sep 9, 2025
577acaa
linting fixed
RanuriG Sep 9, 2025
2623309
linting fixed
RanuriG Sep 9, 2025
3ea34b7
yml issue fixed
RanuriG Sep 9, 2025
c6691d8
removed emoji
RanuriG Sep 9, 2025
cb115e8
hooks issue
RanuriG Sep 9, 2025
2d30ed6
lint
RanuriG Sep 9, 2025
ff60784
to the main issue
RanuriG Sep 9, 2025
ae79f63
Remove readtime field from blog post metadata
ahzan-dev Sep 9, 2025
a2645eb
Refactor MkDocs hooks and remove unused social media script
ahzan-dev Sep 9, 2025
d767ea3
Merge pull request #2766 from jaseci-labs/main
ahzan-dev Sep 10, 2025
fecdce7
blogs added
RanuriG Sep 10, 2025
888fda7
blogs changed
RanuriG Sep 10, 2025
fa4aa80
refactor: restructure roadmap documentation and add community hub con…
ahzan-dev Sep 10, 2025
38a3f97
linting issues fixed
RanuriG Sep 10, 2025
581ba01
Merge branch 'blog-feature' of https://github.com/jaseci-labs/jaseci …
RanuriG Sep 10, 2025
7649ccb
undo last commit
ahzan-dev Sep 10, 2025
25499a4
refactor: clean up HTML structure in post template
ahzan-dev Sep 10, 2025
ab490a0
feat: implement subscription form for community hub
ahzan-dev Sep 12, 2025
6d9b29f
Revert "feat: implement subscription form for community hub"
ahzan-dev Sep 12, 2025
17 changes: 17 additions & 0 deletions docs/docs/blog/.authors.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,17 @@
authors:
  jaseci:
    name: Jaseci
    description: Creator
    avatar: https://www.jaseci.org/wp-content/uploads/2024/05/Jaseci-logo.png
  forbes:
    name: Forbes
    description: Writer
    avatar: https://www.jaseci.org/wp-content/uploads/2022/07/forbes-logo.png
  nvidia:
    name: Nvidia
    description: Writer
    avatar: https://www.jaseci.org/wp-content/uploads/2022/06/nvidia-logo.png
  v75:
    name: V75 Incorporated
    description: Writer
    avatar: https://www.jaseci.org/wp-content/uploads/2022/07/v75.jpg
Binary file added docs/docs/blog/images/post-1.png
Binary file added docs/docs/blog/images/post-10-1.jpeg
Binary file added docs/docs/blog/images/post-10-2.png
Binary file added docs/docs/blog/images/post-11-1.jpg
Binary file added docs/docs/blog/images/post-11-2.png
Binary file added docs/docs/blog/images/post-11-3.png
Binary file added docs/docs/blog/images/post-12.png
Binary file added docs/docs/blog/images/post-2.1.png
Binary file added docs/docs/blog/images/post-2.2.png
Binary file added docs/docs/blog/images/post-2.jpg
Binary file added docs/docs/blog/images/post-3.jpg
Binary file added docs/docs/blog/images/post-4.jpg
Binary file added docs/docs/blog/images/post-5.png
Binary file added docs/docs/blog/images/post-6.png
Binary file added docs/docs/blog/images/post-7.png
Binary file added docs/docs/blog/images/post-8.png
Binary file added docs/docs/blog/images/post-9-1.jpeg
Binary file added docs/docs/blog/images/post-9-2.png
1 change: 1 addition & 0 deletions docs/docs/blog/index.md
@@ -0,0 +1 @@
# Blog
52 changes: 52 additions & 0 deletions docs/docs/blog/posts/post1.md
@@ -0,0 +1,52 @@
---
date:
  created: 2023-12-31
  updated: 2024-01-02
categories:
  - AI + Business
tags:
  - AI + Business
authors:
  - jaseci
cover_image: images/post-1.png
title: "Harnessing the Power of Generative AI: Insights and Use Cases from Jason Mars"
---

# Harnessing the Power of Generative AI: Insights and Use Cases from Jason Mars

<!-- more -->

![Jason Mars talks](../images/post-1.png){ width="80%" }

_This article is based on a talk by Jason Mars at the Michigan Technology Leaders Summit, presented by SIM Detroit, on the topic of Artificial Intelligence: Actual Use Cases._

## What criteria do you use to prioritize AI projects in your portfolio and how often do you ensure alignment with broader business objectives?

In the last year, we’ve partnered with companies to bring two generative AI solutions to production, observing key drivers across multiple sectors. These drivers can be categorized into two main themes.

On one hand, companies see AI as an opportunity to scale productivity, market output, and business efficiency. On the other, they perceive AI as a risk: navigating a new competitive landscape where survival depends on capitalizing on AI opportunities. This is particularly evident in the financial sector, where companies recognize the need to compete in a more efficient market.

The launch of ChatGPT marked a significant shift in the market, with VC investment in AI skyrocketing from $2 billion to $14 billion within six months. Companies realize that to survive, they need to compete in a higher efficiency landscape. This realization has led to a surge in market and internal capability analyses to identify the safest starting points for AI implementation.

## Financial & Business Use Cases of Generative AI

A notable trend is the democratization of AI, which has become a mandate rather than an option. Smaller, specialized models have emerged, offering cost-effective alternatives to large foundation models like GPT-4.

For example, we’ve worked with PocketNest, a Michigan-based company, to build a conversational AI focused on financial advice. By using smaller open-source models, we’ve significantly reduced costs while maintaining competitive quality.

Another innovative use case is TOBU, a product that allows users to attach memories to pictures through a conversational AI. This AI interacts with users about their experiences and the context of their photos, creating a personalized memory assistant.

## What are some of the most significant challenges you’ve encountered in implementing these projects, and what have you learned from them?

Cost remains a significant challenge when scaling AI solutions. While development costs might be manageable during the initial phase, launching a product to thousands of users can become prohibitively expensive. This has driven a deeper investigation into using challenger models like Mistral and Llama to balance cost and performance effectively.

Expertise within partner companies also plays a crucial role in the successful deployment of AI. Our consulting approach involves delivering IP and production-ready AI engines while navigating challenges such as bounding AI use cases to prevent liability and ensuring that AI solutions remain within desired parameters.

## How do you assess state and federal regulations, security, and privacy?

As we consume more information through screens, the realm of AI expands. It’s essential to control this growth thoughtfully, ensuring AI remains a tool for good. The current phase of AI development involves creating tools to harness the full potential of these models, akin to refining raw ore.

Tools like LangChain, funded by Sequoia, exemplify the innovation in this space. These tools help solve bigger problems by enabling seamless interaction with large language models. Personalization remains a key focus, making AI more accessible and tailored to individual needs.

In conclusion, the landscape of AI is evolving rapidly, with companies seeking to balance opportunities and risks. Through careful analysis, cost-effective solutions, and innovative use cases, businesses can harness the power of generative AI to drive efficiency and personalization in unprecedented ways.
29 changes: 29 additions & 0 deletions docs/docs/blog/posts/post10.md
@@ -0,0 +1,29 @@
---
date:
  created: 2023-12-31
  updated: 2024-01-02
categories:
  - Developers
tags:
  - Developers
authors:
  - v75
cover_image: images/post-10-1.jpeg
title: "Tech firm that delivered AI for US Bank and Barclays organizes a Jaseci Hackathon"
---

# Tech firm that delivered AI for US Bank and Barclays organizes a Jaseci Hackathon

<!-- more -->

![Hackathon](../images/post-10-1.jpeg){ width="50%" }

One of Guyana’s prominent technology firms specializing in conversational AI and enterprise systems development, V75 Inc, organized a two-day boot camp to get its technical team ramped up on a relatively new but powerful technology stack called Jaseci (find out more [here](https://jaseci.org)). Jaseci is an open-source AI ecosystem bringing with it an open computational model, technology stack, and methodology designed to enable developers to rapidly build robust products with sophisticated AI capabilities, at scale. V75 Inc has been in the pro-serve technology business since 2014 through its predecessor-in-interest, Version75 Solutions, which was later incorporated as V75 Inc in 2019. In 2018 the company partnered with conversational AI firm Clinc Inc, based in Ann Arbor, MI, and entered the conversational AI engineering space; at its peak, twenty-five certified conversational AI engineers from V75 helped build over 90% of Clinc’s deliveries to clients such as OCBC, US Bank, Barclays, and others.

![Launch](../images/post-10-2.png){ width="50%" }


As a progressive technology start-up in a developing country, V75 Inc particularly appreciates the importance of wisely investing its relatively limited human and financial resources, especially when operating in a post-pandemic environment. V75’s leadership immediately recognized the immense value that the Jaseci open-source ecosystem could bring, not just for their planned pro-serve deliveries but for their aspirations to enter the product space. The design of Jaseci’s technology stack provides enough abstraction and developer ease of use to wield AI engines and handle complex infrastructure and deployment challenges without requiring deep domain knowledge.

From April 26-27, V75’s leadership organized a two-day Jaseci Hackathon to serve as a boot camp to ramp up select members of V75’s technical staff on the new technology. A total of eighteen members joined remotely and in person for the event, which was led by V75’s founder, Eldon Marks, who underwent a personal ramp-up on the technology stack before organizing the knowledge-sharing exercise in the form of the hackathon. During the proceedings, Jaseci Labs co-founder and Jaseci creator Prof. Jason Mars joined to introduce Jaseci to the team and answer questions about the stack and its capabilities. The two-day hackathon took the team through the setup of Jaseci, the basics of its glue language, JAC, and the development paradigm of the stack. By the second day, the team was building out their planned conversational flows and leveraging the Universal Sentence Encoder to create a pre-trained chatbot that was surprisingly capable at handling question-and-answer exchanges. The team also observed that Jaseci provides a very robust microservices-based infrastructure upon which APIs can be built, with or without AI capabilities.
60 changes: 60 additions & 0 deletions docs/docs/blog/posts/post11.md
@@ -0,0 +1,60 @@
---
date:
  created: 2023-12-31
  updated: 2024-01-02
categories:
  - Developers
tags:
  - Developers
authors:
  - jaseci
cover_image: images/post-11-1.jpg
title: "Setting Up Jaseci On Apple M1 Macs (ARM Processors)"
---

# Setting Up Jaseci On Apple M1 Macs (ARM Processors)

<!-- more -->

![Processor](../images/post-11-1.jpg){ width="60%" }

Core Jaseci and its built-in libraries run great on an M1 Mac with Rosetta enabled. However, using packages such as `use_qa` can result in errors like `Illegal instruction: 4`, followed by Python crashing and a VERY LONG list of errors, most of which you cannot make sense of…

![Code 1](../images/post-11-2.png)

## The Short (and Sweet) Way… Use Remote Actions

This is also my favorite method to use…

To load `use_qa` and other Jaseci modules, you can use the remote modules set up by our Sifus. To do this, simply replace `actions load module jaseci_kit.use_qa` with `actions load remote https://use-qa.jaseci.org`.
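Concretely, inside a `jsctl` session the swap looks like this (a sketch; the prompts and comments are illustrative):

```text
$ jsctl
jsctl> # instead of loading the module locally...
jsctl> # actions load module jaseci_kit.use_qa
jsctl> # ...point at the hosted action server:
jsctl> actions load remote https://use-qa.jaseci.org
```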


![Code 2](../images/post-11-3.png)

And, that’s all folks…

Or is it? We are hardcore programmers and don’t feel satisfied using the “short & sweet way” of doing things, do we? So let’s look at…

## The Other Way

This is the fun way that’ll require reading a host of documentation… which we’ll try to avoid… but then end up reading thoroughly after hours of avoidance…

So here’s that perfect setup we need to get this all up and running.

### Update Jaseci & Packages

Ensure you’re running the correct version of Python needed for the version of Jaseci you’re running, then update Jaseci & Jaseci Kit by running:

- `pip3 install jaseci --upgrade`
- `pip3 install jaseci-kit --upgrade`
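Before upgrading, it’s worth confirming which interpreter `pip3` is tied to; a Python/Jaseci version mismatch is a common cause of the install failures described below:

```shell
# Print the active Python version; compare it against the requirements
# of the Jaseci release you intend to install
python3 --version
```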

### Tensorflow

It turns out that TensorFlow does not play too nicely with the new ARM processors Apple is using, and it requires special builds to run on the M1… So you’d need to read this article from Apple: [https://developer.apple.com/metal/tensorflow-plugin/](https://developer.apple.com/metal/tensorflow-plugin/)

*Quick Note*: Ensure the TensorFlow version you’re installing matches the requirements of the Jaseci version you’re currently running, else it won’t work.

### Test Tensorflow

Test TensorFlow using the Jupyter example in the video above (start at the 5-minute mark if you installed TensorFlow from the article) to confirm that TensorFlow is running smoothly.
24 changes: 24 additions & 0 deletions docs/docs/blog/posts/post12.md
@@ -0,0 +1,24 @@
---
date:
  created: 2023-12-31
  updated: 2024-01-02
categories:
  - Developers
tags:
  - Developers
authors:
  - jaseci
cover_image: images/post-12.png
title: "Inter-American Development Bank funded Jaseci AI apprenticeship program – July 1, 2022"
---

# Inter-American Development Bank funded Jaseci AI apprenticeship program – July 1, 2022

<!-- more -->

![development](../images/post-12.png){ width="60%" }

Select participants from the Jaseci AI / Spark program, which saw nearly two hundred Guyanese youth upskilled in leadership and AI tracks, gained the opportunity to further their foundational AI training through the Jaseci AI apprenticeship program. This program was funded by the Inter-American Development Bank (IDB) Lab under project GY-T1162 – Developing Guyana’s ICT Sector. The IDB is a development-focused bank that provides multilateral financing and expertise for sustainable economic and institutional development in Latin America and the Caribbean. The project’s executing agency, Nexus Hub Inc., in collaboration with Jaseci Labs LLC, worked with local technology firm V75 Inc. to facilitate the industrial attachment of ten (10) graduates from the Jaseci AI / Spark program, comprising individuals from high school, the University of Guyana’s Computer Science Department, and local industry.

The ten chosen apprentices will undergo a focused, three-month industrial attachment with V75 Inc. as they work on building various AI products to deepen their understanding of the Jaseci open-source ecosystem as well as the field of AI. The program will be executed remotely, allowing the participants to work on their own time but according to scheduled milestone deliveries. Each participant will receive a monthly stipend as they progress through the program, as well as the opportunity to be drafted into one of the AI-focused teams within Jaseci Labs or V75 Inc. The program also presents the apprentices with a tech-entrepreneurship pathway: the opportunity to continue developing their AI products toward commercialization under the guidance of Jaseci Labs and the support of V75 Inc.
79 changes: 79 additions & 0 deletions docs/docs/blog/posts/post2.md
@@ -0,0 +1,79 @@
---
date:
  created: 2023-12-31
  updated: 2024-01-02
categories:
  - Developers
tags:
  - Developers
authors:
  - forbes
cover_image: images/post-2.jpg
title: "A Whale of a Tale: The Size-Matters Misconception For Generative AI"
---

# A Whale of a Tale: The Size-Matters Misconception For Generative AI

<!-- more -->

![Jason Mars talks](../images/post-2.jpg)

In this new age of generative AI, everyone has made a major assumption for which a pressing question has emerged. You can see this manifesting in certain nerdy corners of social media.

This screenshot of Reddit and Twitter posts shows the rising curiosity about the effectiveness of small, open-source models versus their large, proprietary counterparts. Indeed, in this noisy market, there is widespread confusion and curiosity surrounding this topic. The narrative that “bigger is inherently better” is about to be challenged.

![_Is bigger necessarily better when it comes to AI models? Maybe not.](../images/post-2.1.png){ width="60%" }

_Is bigger necessarily better when it comes to AI models? Maybe not._

Well, along with my amazing colleagues at the University of Michigan and Jaseci Labs, we’ve delved into this very question in a rigorous and scholarly way. We’ve produced the first academic paper that addresses the debate head-on, to be presented in the prestigious ISPASS 2024 proceedings.

Our findings are not just surprising; they are a call to rethink what we know about the size of AI models we should be relying on in production and commercial use cases, and the efficiency we can achieve.

## Two Major AI Contenders

So, let’s talk about the two contenders: the Large Language Models (GPT-4 and friends) vs the Small Language Models. OpenAI has published GPT-4 models that are at least 540 gigabytes, while the small, open-source models we study in the paper are around three gigabytes. That’s around 200 times smaller.
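For the record, the ratio follows directly from the sizes quoted above (no special tooling required):

```shell
# Size ratio from the article's figures: at least 540 GB vs ~3 GB
echo $((540 / 3))   # prints 180, i.e. roughly 200x smaller
```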

To illustrate the comparison, imagine GPT-4 as the blue whale, the largest animal on the planet, weighing up to about 200 tons. Now, contrast this with a housefly, a creature so small it’s easy to overlook, weighing in at a mere 12 milligrams.

These houseflies would be our models like LLaMA-7b quantized, Mistral-7b quantized, and Starling-LM-7b quantized — smaller, open-source alternatives poised to challenge the notion that bigger always means better. This comparison represents the difference in scale between the models we study in the paper.

The core discovery in the paper is simple: the belief that one must wield a GPT-4-sized model to achieve significant results is a myth.

## Our Approach

Our research was conducted with open, quantized models and with GPT-4 itself. Our investigation centered around a case study with the commercial Myca.ai product, a productivity tool enhanced by AI to deliver personalized pep talks based on your productivity. The results, as detailed in our paper, are nothing short of shocking, even to us.

We asked three simple questions. Can end users tell a quality loss in response when using the housefly models? How much faster are the AI responses with the smaller open models? And how much cheaper is it?

## On Quality

![Response quality of GPT-4 and SLMs as rated by human reviewers.](../images/post-2.2.png){ width="40%" }

_Response quality of GPT-4 and SLMs as rated by human reviewers._

When participants were subjected to a blind test comparing the output of large proprietary models against that of smaller, open-source models, the results were revelatory. Like the famed Pepsi/Coke taste tests, users were hard-pressed to discern which model produced the output. Indeed, much of the time, OpenAI’s GPT-4 was not selected or scored very poorly. GPT-4 was selected as the better output only around half as often as an SLM. For many (perhaps most) practical product use cases, SLMs perform not only as well as, but sometimes even better than, generalized proprietary LLMs. This result underscores the competency of smaller models in delivering quality content indistinguishable from their larger counterparts.

## On Speed

Further analysis revealed that these smaller models are up to 10 times faster than GPT-4 on our own machines in an AWS cluster and offer greater reliability. The latency of response was consistent all day long.

And Myca.ai didn’t suffer from the outages that OpenAI has become known for. Given that our housefly models are not tethered to the operational integrity of any single provider, they remain unaffected by these outages that can impact any of the larger, proprietary models.

## On Cost

Perhaps most compelling is the cost advantage. Our research indicates that deploying small, open models can be anywhere from five to 23 times cheaper than relying on a model like GPT-4. This range represents a worst-case to best-case scenario, highlighting the substantial financial benefits that come with adopting smaller models.

## The Groundbreaking Insight

When you opt for smaller, more accessible models, you not only gain control but also empower yourself with the ability to tailor the technology to your needs. Businesses, for example, can take these open-source models and adapt them, even going as far as training them in-house, without the prohibitive costs associated with larger models.

Our findings invite a paradigm shift in how we approach the development and deployment of AI models. The evidence is clear: smaller, open-source models not only stand toe-to-toe with their gargantuan counterparts in terms of intelligence and capability but also offer critical advantages.

Indeed, Jaseci Labs is now helping businesses tailor their own small models for game-changing product use cases, leading to what may be a major disruption of the OpenAIs and Anthropics of the world.

We encourage you to delve into the peer-reviewed analysis presented in our paper. Let the truth behind this rigorous analysis guide your decisions as you navigate the future of AI, and consider how embracing smaller models could not only enhance your technological endeavors but also democratize access to this groundbreaking field.

This article was originally posted on Forbes.com; click the link below to read the complete article.

[Read the full article on Forbes](https://www.forbes.com/sites/forbesbooksauthors/2024/03/21/a-whale-of-a-tale-the-size-matters-misconception-for-generative-ai/?sh=40121c8c581a)