An AI-powered tool that generates, refines, and optimizes LinkedIn posts using Google Gemini and LangGraph.
## Features

- AI-Powered Generation: Generate 3 diverse LinkedIn post options from a single topic
- Iterative Refinement: Refine posts based on your feedback until perfect
- Session Management: Maintain state across multiple refinement iterations
- RESTful API: Clean, well-documented API endpoints
- Type-Safe: Full type annotations with Pydantic models
- Production-Ready: Comprehensive error handling, structured logging, and tests
## Architecture

```
┌─────────────────┐
│   FastAPI App   │
│   (REST API)    │
└────────┬────────┘
         │
┌────────▼────────┐
│ Session Service │
│   (In-Memory)   │
└────────┬────────┘
         │
┌────────▼────────┐
│    LangGraph    │
│  State Machine  │
└────────┬────────┘
         │
┌────────▼────────┐
│   Gemini LLM    │
│    Provider     │
└─────────────────┘
```
## Tech Stack

- Framework: FastAPI
- LLM Orchestration: LangGraph
- LLM Provider: Google Gemini 2.5 Pro
- Validation: Pydantic
- Testing: Pytest
- Linting: Ruff
- Package Manager: uv
## Prerequisites

- Python 3.12+
- uv package manager
- Google Gemini API key
## Installation

- Clone the repository:

```bash
git clone https://github.com/yourusername/linkedin-agent.git
cd linkedin-agent
```

- Set up the backend:

```bash
cd backend
uv sync --dev
```

- Create a `.env` file in the backend directory:

```env
GOOGLE_API_KEY=your_gemini_api_key_here
LOG_LEVEL=INFO
```

## Running

### Development

```bash
cd backend
uv run uvicorn src.main:app --reload
```

The API will be available at http://localhost:8000.
### Docker

```bash
docker-compose up
```

- Backend API: http://localhost:8088
- Frontend: http://localhost:8090
## API Documentation

Once running, visit:

- Swagger UI: http://localhost:8000/docs (development) or http://localhost:8088/docs (Docker)
- ReDoc: http://localhost:8000/redoc (development) or http://localhost:8088/redoc (Docker)
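The endpoints can also be exercised programmatically. A minimal standard-library sketch for building the create-session request (assumes the development server from above is running; `create_session_request` is a hypothetical helper, not part of the project):

```python
import json
import urllib.request

def create_session_request(
    topic: str, base_url: str = "http://localhost:8000"
) -> urllib.request.Request:
    """Build the POST /sessions request; send it with urllib.request.urlopen."""
    body = json.dumps({"topic": topic}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/sessions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_session_request("The importance of clean code")
# urllib.request.urlopen(req) would return the session JSON once the API is up.
```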
## Usage

### Create a session

```bash
curl -X POST "http://localhost:8000/sessions" \
  -H "Content-Type: application/json" \
  -d '{"topic": "The importance of clean code in software development"}'
```

Response:

```json
{
  "id": "uuid-here",
  "topic": "The importance of clean code in software development",
  "options": [
    {
      "id": "opt-1",
      "content": "Clean code is not just about making code work..."
    },
    {
      "id": "opt-2",
      "content": "Writing maintainable code is an investment..."
    },
    {
      "id": "opt-3",
      "content": "Code quality matters because..."
    }
  ],
  "selected_option_id": null,
  "final_post": null
}
```

### Get a session

```bash
curl "http://localhost:8000/sessions/{session_id}"
```

### Refine an option

```bash
curl -X POST "http://localhost:8000/sessions/{session_id}/refine" \
  -H "Content-Type: application/json" \
  -d '{
    "option_id": "opt-1",
    "refinement_prompt": "Make it shorter and add an emoji"
  }'
```

### Finalize a post

```bash
curl -X POST "http://localhost:8000/sessions/{session_id}/finalize" \
  -H "Content-Type: application/json" \
  -d '{"option_id": "opt-2"}'
```

### Delete a session

```bash
curl -X DELETE "http://localhost:8000/sessions/{session_id}"
```

## Project Structure

```
backend/
├── src/
│   ├── api/
│   │   ├── routes/
│   │   │   ├── health.py          # Health check endpoint
│   │   │   └── generation.py      # Session management endpoints
│   │   └── error_handlers.py      # Global error handlers
│   ├── core/
│   │   ├── llm/
│   │   │   ├── base.py            # LLM provider interface
│   │   │   └── gemini.py          # Gemini implementation
│   │   ├── graph/
│   │   │   ├── state.py           # LangGraph state definition
│   │   │   ├── nodes.py           # Graph nodes (generate, refine, finalize)
│   │   │   └── builder.py         # Graph construction
│   │   ├── prompts/
│   │   │   └── templates.py       # Prompt templates
│   │   ├── exceptions.py          # Custom exceptions
│   │   └── logging.py             # Logging configuration
│   ├── models/
│   │   └── schemas.py             # Pydantic models
│   ├── services/
│   │   └── session.py             # Session storage service
│   ├── config.py                  # App configuration
│   └── main.py                    # FastAPI app
├── tests/
│   ├── test_session_service.py
│   ├── test_exceptions.py
│   └── test_api.py
├── pyproject.toml
├── pytest.ini
└── Dockerfile
```
## Testing

```bash
cd backend
uv run pytest
```

With coverage report:

```bash
uv run pytest --cov=src --cov-report=html
```

## Linting

```bash
# Lint with Ruff
uv run ruff check src/

# Format with Ruff
uv run ruff format src/
```

## Error Handling

The API uses custom exceptions with proper HTTP status codes:
- 400 Bad Request: Invalid input (empty topic, invalid refinement)
- 404 Not Found: Session or option not found
- 503 Service Unavailable: LLM provider errors
- 500 Internal Server Error: Unexpected errors
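One way to wire this up is a small exception hierarchy where each exception carries its status code, which the global handler translates into a response. This is a hedged sketch — the actual class names in `exceptions.py` may differ:

```python
# Sketch of custom exceptions carrying HTTP status codes, as the global
# error handlers might consume them. Class names are illustrative.

class AppError(Exception):
    status_code = 500  # Unexpected errors fall through to 500

class InvalidInputError(AppError):      # empty topic, invalid refinement
    status_code = 400

class SessionNotFoundError(AppError):   # unknown session or option
    status_code = 404

class LLMProviderError(AppError):       # Gemini unavailable or failing
    status_code = 503

def to_error_response(exc: AppError, path: str) -> tuple[int, dict]:
    """Translate an exception into the JSON error format shown below."""
    return exc.status_code, {
        "error": type(exc).__name__,
        "message": str(exc),
        "path": path,
    }

status, body = to_error_response(
    SessionNotFoundError("Session not found: abc"), "/sessions/abc"
)
```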
All errors return JSON in this format:
```json
{
  "error": "ErrorClassName",
  "message": "Human-readable error message",
  "path": "/api/endpoint"
}
```

## Logging

Structured logging with key-value pairs:

```
timestamp=2025-12-27T10:30:45 level=INFO logger=src.api.routes.generation message=Creating session session_id=abc-123 topic=Clean code
timestamp=2025-12-27T10:30:47 level=INFO logger=src.api.routes.generation message=Session created successfully session_id=abc-123 duration_ms=2341 options_count=3
```
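Lines like these can be produced with a stdlib `logging.Formatter` that emits key-value pairs; a minimal sketch (the project's actual configuration in `logging.py` may differ):

```python
import io
import logging

# Sketch of a key-value formatter producing lines like the samples above.
class KeyValueFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        pairs = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Context passed via `extra=` lands as attributes on the record.
        for key in ("session_id", "duration_ms", "options_count", "topic"):
            if hasattr(record, key):
                pairs[key] = getattr(record, key)
        return " ".join(f"{k}={v}" for k, v in pairs.items())

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(KeyValueFormatter())
logger = logging.getLogger("src.api.routes.generation")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Session created", extra={"session_id": "abc-123", "duration_ms": 2341})
line = stream.getvalue().strip()
```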
Configure log level via environment variable:
```env
LOG_LEVEL=DEBUG  # DEBUG, INFO, WARNING, ERROR
```

## Roadmap

- Frontend (React/Next.js)
- Persistent storage (PostgreSQL/Redis)
- User authentication
- Rate limiting
- Multiple LLM provider support
- Post scheduling integration
- Analytics and metrics
## Contributing

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License

MIT
## Acknowledgments

- Built with FastAPI
- Powered by Google Gemini
- Orchestrated with LangGraph