Gemini AI

Docs | GitHub | FAQ

Features

Why Gemini AI

Why should I use this, instead of Google's own API?

It's all about simplicity. Gemini AI lets you make requests to Gemini with about a quarter of the code that Google's own API requires.

Don't believe me? Take a look.

Google's own API (CommonJS):

const { GoogleGenerativeAI } = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI(API_KEY);

async function run() {
	const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro-latest" });

	const prompt = "Hi!";

	const result = await model.generateContent(prompt);
	const response = await result.response;
	const text = response.text();
	console.log(text);
}

run();

Gemini AI (ES6 Modules):

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);
console.log(await gemini.ask("Hi!"));

That's nearly a quarter of the code!

And there are no sacrifices, either. Gemini AI uses Google's REST API under the hood, so you get simplicity without compromise.

And there's more!


Getting an API Key

  1. Go to Google AI Studio's API keys tab
  2. Follow the steps to get an API key
  3. Copy this key, and use it below when API_KEY is mentioned.

Warning

Do not share this key with other people! It is recommended to store it in a .env file.
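As a sketch, assuming the key lives in a .env file and has been loaded into the environment (for example with the dotenv package, or with Node 20's `node --env-file=.env` flag):

```javascript
// Hypothetical .env file contents:
//
//   GEMINI_API_KEY=your-key-here
//
// Read the key from the environment instead of hard-coding it in source:
const API_KEY = process.env.GEMINI_API_KEY ?? "";
console.log(API_KEY.length > 0 ? "API key loaded" : "GEMINI_API_KEY is not set");
```

Then pass API_KEY to new Gemini(API_KEY) as usual.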

Quickstart

Make a text request:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.ask("Hi!"));

Make a streaming text request:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

gemini.ask("Hi!", {
	stream: console.log,
});

Chat with Gemini:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);
const chat = gemini.createChat();

console.log(await chat.ask("Hi!"));
console.log(await chat.ask("What's the last thing I said?"));

Other useful features

Make a text request with images:

import fs from "fs";
import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(
	await gemini.ask(["What do you see?", fs.readFileSync("./cat.png")])
);

Make a text request with custom parameters:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(
	await gemini.ask("Hello!", {
		temperature: 0.5,
		topP: 1,
	})
);

Embed text:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.embed("Hi!"));

Special Features

Streaming

Here's a quick demo:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

gemini.ask("Write an essay", {
	stream: (x) => process.stdout.write(x),
});

Let's walk through what this code is doing. Like always, we first initialize Gemini. Then, we call the ask function, and provide a stream config. This callback will be invoked whenever there is new content coming in from Gemini!

Note that this automatically switches to the streamGenerateContent command under the hood... you don't have to worry about that!

Tip

Note that you don't need to await ask() if you're handling the stream yourself. If you also want the final answer, it is still returned by the method, and you can await it as normal.
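The callback pattern above can be sketched with a stub in place of ask(), so the idea runs without an API key (fakeAsk is a hypothetical stand-in, not part of the library):

```javascript
// fakeAsk is a hypothetical stand-in for gemini.ask(): it invokes the stream
// callback once per chunk, then resolves with the full answer, mirroring the
// behavior described above.
async function fakeAsk(prompt, { stream }) {
	const chunks = ["Hello", ", ", "world!"];
	for (const chunk of chunks) stream(chunk); // called with each new chunk
	return chunks.join(""); // the complete answer is still returned
}

let full = "";
fakeAsk("Hi!", { stream: (chunk) => (full += chunk) }).then((answer) => {
	console.log(full === answer); // true: both hold the complete response
});
```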

Types

Gemini AI v2 is completely written in TypeScript, which means that all parameters, and more importantly configuration, have type hints.

Furthermore, return types are also conditional based on the format you place in the configuration, to guarantee great DX.

Optimized File Uploads

Google requires large files to be sent through their dedicated File API, instead of being included directly in the POST request.

With Gemini AI v2, large files like videos and audio will automatically be detected and sent through the File API, while smaller images are still included inline—without you having to worry about any of that going on.

This gives you the fastest possible upload experience, while making sure all of your files are safely included.

Gemini AI also automatically detects the MIME type of your file to pass to the server, so you don't need to worry about it.
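For a sense of how MIME detection generally works (this is an illustrative sketch, not the library's actual implementation), file types are typically recognized from the file's leading "magic bytes":

```javascript
// Illustrative only: detect a MIME type from a buffer's magic bytes.
function sniffMime(buffer) {
	const png = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
	if (buffer.subarray(0, 8).equals(png)) return "image/png";
	if (buffer[0] === 0xff && buffer[1] === 0xd8) return "image/jpeg";
	return "application/octet-stream"; // unknown: fall back to a generic type
}

console.log(sniffMime(Buffer.from([0xff, 0xd8, 0xff, 0xe0]))); // "image/jpeg"
```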

Proxy Support

Use a proxy when fetching from Gemini. To keep package size down and adhere to the SRP, the actual proxy handling is delegated to the undici library.

Here's how to add a proxy:

Install undici:

npm i undici

Initialize it with Gemini AI:

import { ProxyAgent } from "undici";
import Gemini from "gemini-ai";

let gemini = new Gemini(API_KEY, {
	dispatcher: new ProxyAgent(PROXY_URL),
});

And use as normal!

Documentation

Initialization

To start any project, include the following lines:

Note

Under the hood, we are just running the Gemini REST API, so there's no fancy authentication going on! Just pure, simple web requests.

// Import Gemini AI
import Gemini from "gemini-ai";

// Initialize your key
const gemini = new Gemini(API_KEY);

Learn how to add a fetch polyfill for the browser here.

Method Patterns

All model-calling methods take a main parameter first (typically the text input) and a config object second. A detailed list of the available config options is given with each method. An example call may look like this:

await gemini.ask("Hi!", {
	// Config
	temperature: 0.5,
	topP: 1,
});

Tip

All methods (except Gemini.createChat()) are async! This means you should call them something like this: await gemini.ask(...)

JSON Output

You have the option to set format to Gemini.JSON:

await gemini.ask("Hi!", {
	format: Gemini.JSON,
});

This gives you the full response from Gemini's REST API.

Note that the output for Gemini.JSON varies depending on the model and command, and is not documented in detail here because it is unnecessary in most scenarios. You can find more information about the REST API's raw output here.

If you are using TypeScript, you get type annotations for all the responses, so autocomplete away.
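For reference, here's a sketch of pulling plain text out of a raw response object. The shape shown follows Google's generateContent REST output (candidates → content → parts), but treat the exact field layout as an assumption; consult the REST docs for the full schema:

```javascript
// A sample raw response, shaped like Google's generateContent REST output
// (illustrative; the real response carries additional fields):
const response = {
	candidates: [
		{ content: { role: "model", parts: [{ text: "Hello there!" }] } },
	],
};

// The plain text Gemini.TEXT would give you lives in the first candidate's parts:
const text = response.candidates[0].content.parts.map((p) => p.text).join("");
console.log(text); // "Hello there!"
```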

Gemini.ask()

This method uses the generateContent command to get Gemini's response to your input.

Uploading Media

The first parameter of the ask() method can take three different forms:

String Form:

This is simply a text query to Gemini.

Example:

await gemini.ask("Hi!");

Array Form:

In this array, which represents ordered "parts," you can put strings or Buffers (these are what you get directly from fs.readFileSync()!). These will be fed, in order, to Gemini.

Gemini accepts most major file formats, so you shouldn't have to worry about what format you give it. However, check out a comprehensive list here.

There's a whole ton of optimizations under the hood for file uploads too, but you don't have to worry about them! Learn more here...

Example:

import fs from "fs";

await gemini.ask([
	"Between these two cookies, which one appears to be home-made, and which one looks store-bought? Cookie 1:",
	fs.readFileSync("./cookie1.png"),
	"Cookie 2",
	fs.readFileSync("./cookie2.png"),
]);

Note

You can also place buffers in the data field of the config (this is the v1 method, but it still works). These buffers will be placed, in order, directly after the content in the main message.

Message Form:

This is the raw message format. It is not meant to be used directly, but it can be useful when you need raw control over file uploads; it is also used internally by the Chat class.

Please check src/types.ts for more information about what is accepted in the Message field.

Config Available:

Note

These are Google REST API defaults.

| Field Name | Description | Default Value |
| --- | --- | --- |
| format | Whether to return the detailed, raw JSON output. Typically not recommended, unless you are an expert. Can either be Gemini.JSON or Gemini.TEXT | Gemini.TEXT |
| topP | See Google's parameter explanations | 0.94 |
| topK | See Google's parameter explanations. Note that this field is not available on v1.5 models. | 32 |
| temperature | See Google's parameter explanations | 1 |
| model | Which model to use. Can be any model Google has available | gemini-1.5-flash-latest |
| maxOutputTokens | Max tokens to output | 2048 |
| messages | Array of [userInput, modelOutput] pairs to show how the bot is supposed to behave | [] |
| data | An array of Buffers to input to the model. It is recommended that you directly pass data through the message in v2. | [] |
| stream | A function that is called with every new chunk of JSON or text (depending on the format) that the model receives. Learn more | undefined |
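To make the messages option concrete, here's a sketch of how [userInput, modelOutput] pairs might map onto the role-tagged contents array Google's REST API expects. The helper pairsToContents is hypothetical, for illustration only; the library does this conversion for you:

```javascript
// Hypothetical helper: expand [userInput, modelOutput] pairs into role-tagged
// turns, following the REST API's contents shape (illustrative mapping).
function pairsToContents(pairs) {
	return pairs.flatMap(([user, model]) => [
		{ role: "user", parts: [{ text: user }] },
		{ role: "model", parts: [{ text: model }] },
	]);
}

const contents = pairsToContents([["Hi!", "Hello! How can I help?"]]);
console.log(contents.length); // 2 (one user turn and one model turn)
```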

Example Usage:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(
	await gemini.ask("Hello!", {
		temperature: 0.5,
		topP: 1,
	})
);

Gemini.count()

This method uses the countTokens command to figure out the number of tokens in your input.

Config available:

| Field Name | Description | Default Value |
| --- | --- | --- |
| model | Which model to use. Can be any model Google has available, but reasonably must be gemini-pro | Automatic based on context |

Example Usage:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.count("Hello!"));

Gemini.embed()

This method uses the embedContent command (currently only on embedding-001) to generate an embedding matrix for your input.

Config available:

| Field Name | Description | Default Value |
| --- | --- | --- |
| model | Which model to use. Can be any model Google has available, but reasonably must be embedding-001 | embedding-001 |

Example Usage:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(await gemini.embed("Hello!"));

Gemini.createChat()

Gemini.createChat() is a unique method: it isn't called asynchronously, and it returns a brand-new Chat object. The Chat object has only one method, Chat.ask(), which has the exact same syntax as the Gemini.ask() method documented above. The one small difference is that most parameters are passed into the Chat through createChat() and cannot be overridden by the ask() method. The only parameters that can be overridden are format, stream, and data.

All important data in the Chat object is stored in the Chat.messages variable, which can be used to create a new Chat that "continues" the conversation, as demonstrated in the example usage section.

Config available for createChat:

| Field Name | Description | Default Value |
| --- | --- | --- |
| topP | See Google's parameter explanations | 0.94 |
| topK | See Google's parameter explanations. Note that this field is not available on v1.5 models. | 10 |
| temperature | See Google's parameter explanations | 1 |
| model | Which model to use. Can be any model Google has available | gemini-1.5-flash-latest |
| maxOutputTokens | Max tokens to output | 2048 |
| messages | Array of [userInput, modelOutput] pairs to show how the bot is supposed to behave (or to continue a conversation) | [] |

Example Usage:

// Simple example:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

const chat = gemini.createChat();

console.log(await chat.ask("Hi!"));

// Now, you can start a conversation
console.log(await chat.ask("What's the last thing I said?"));

// "Continuing" a conversation:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

const chat = gemini.createChat();

console.log(await chat.ask("Hi!"));

// Creating a new chat, with existing messages

const newChat = gemini.createChat({
	messages: chat.messages,
});

console.log(await newChat.ask("What's the last thing I said?"));

FAQ

What's the difference between data and directly passing buffers in the message?

data was the old way to pass media data. It is no longer recommended, but it is kept for backwards compatibility. The new method is to simply pass an array of strings/buffers as the first parameter of ask(). The major benefit is that you can now include strings between buffers, which you couldn't do before. Here's a quick demo of how to migrate:

With data:

import fs from "fs";

await gemini.ask(
	"Between these two cookies, which one appears to be home-made, and which one looks store-bought?",
	{
		data: [fs.readFileSync("./cookie1.png"), fs.readFileSync("./cookie2.png")],
	}
);

New Version:

import fs from "fs";

await gemini.ask([
	"Between these two cookies, which one appears to be home-made, and which one looks store-bought?",
	fs.readFileSync("./cookie1.png"),
	fs.readFileSync("./cookie2.png"),
]);

Learn more in the dedicated section.

What do I need to do for v2?

Does everything still work?

Yes! Gemini AI v2 should be completely backward-compatible. Most changes are under the hood, so your DX should be much smoother, especially for TS developers!

The only thing you might consider changing is using the new array message format instead of the old buffer format. See the dedicated question to learn more.

What is the default model?

And, by extension, why is it the default model?

By default, Gemini AI uses gemini-1.5-flash-latest, Google's leading efficiency-based model. It is the default for two main reasons, both regarding DX:

  1. 📈 Higher Rate Limits: Gemini 1.5 Pro is limited to 2 requests per minute, versus 15 for Flash, so we default to the model with the higher rate limit, which is especially useful during development.
  2. ⚡ Faster Response Time: Gemini 1.5 Pro is significantly slower, so we use the faster model by default.

But, of course, should you need to change the model, it's as easy as passing it into the configuration of your request. For example:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY);

console.log(
	await gemini.ask("Hello!", {
		model: "gemini-1.5-pro-latest",
	})
);

Changing the API Version

What if I want to use a deprecated command?

When initializing Gemini, you can pass in an API version. This feature mainly exists to futureproof, as the current recommended API version (and the one used) is v1beta. Note that some modern models (including the default Gemini 1.5 Flash) may not work on other API versions.

Here's how you can change it to, say, v1:

import Gemini from "gemini-ai";

const gemini = new Gemini(API_KEY, {
	apiVersion: "v1",
});

How to Polyfill Fetch

I'm in a browser environment! What do I do?

Everything is optimized so it works for both browsers and Node.js: files are passed as Buffers, so you decide how to get them, and adding a fetch polyfill is as easy as:

import Gemini from "gemini-ai";
import fetch from "node-fetch";

const gemini = new Gemini(API_KEY, {
	fetch: fetch,
});

Contributors

A special shoutout to the developers of, and contributors to, the bard-ai and palm-api libraries. Gemini AI's interface is heavily based on what we developed in those two projects.