93 changes: 93 additions & 0 deletions libs/partners/ollama/test_output_parser_issue.py
"""
Python reproduction of OutputParserException issue with Ollama
Original issue from JavaScript: OutputParserException with empty text parsing
"""

from langchain_ollama import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field
from typing import Literal

# Initialize Ollama (equivalent to JavaScript version)
llm = ChatOllama(
model="llama3.2:3b",
temperature=0,
base_url="http://localhost:11434"
)

# Define the classification schema (equivalent to Zod schema)
class ClassificationSchema(BaseModel):
"""Extract sentiment, aggressiveness, and language from text"""

sentiment: Literal["happy", "neutral", "sad"] = Field(
description="The sentiment of the text"
)
aggressiveness: int = Field(
description="Describes how aggressive the statement is on a scale from 1 to 5. The higher the number the more aggressive"
)
language: Literal["spanish", "english", "french", "german", "italian"] = Field(
description="The language the text is written in"
)


# Create the tagging prompt
tagging_prompt = ChatPromptTemplate.from_template(
"""Extract the desired information from the following passage.

Passage:
{input}
"""
)

# Create LLM with structured output
llm_with_structured_output = llm.with_structured_output(
ClassificationSchema,
method="function_calling" # or try "json_schema"
)

# Test input (Spanish text)
test_input = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"

print("Testing Ollama structured output with Spanish text...")
print(f"Input: {test_input}\n")

try:
# Format the prompt
prompt = tagging_prompt.invoke({"input": test_input})

# Get structured output
result = llm_with_structured_output.invoke(prompt)

print("✓ SUCCESS!")
print(f"Result: {result}")
print(f"\nParsed values:")
print(f" - Sentiment: {result.sentiment}")
print(f" - Aggressiveness: {result.aggressiveness}")
print(f" - Language: {result.language}")

except Exception as e:
print(f"✗ FAILED: {type(e).__name__}")
print(f"Error: {e}")
print("\nThis is the OutputParserException the fix addresses!")


# Additional test to show the fix working
print("\n" + "="*60)
print("Testing with different methods:")
print("="*60)

methods = ["function_calling", "json_schema"]

for method in methods:
print(f"\nMethod: {method}")
try:
llm_test = llm.with_structured_output(
ClassificationSchema,
method=method
)
result = llm_test.invoke(
tagging_prompt.invoke({"input": test_input})
)
print(f" ✓ Success: {result}")
except Exception as e:
print(f" ✗ Failed: {type(e).__name__}: {str(e)[:100]}")
126 changes: 126 additions & 0 deletions libs/partners/sarvam/.gitignore
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
Pipfile.lock

# PEP 582
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Poetry
poetry.lock
9 changes: 9 additions & 0 deletions libs/partners/sarvam/.mypy.ini
[mypy]
disallow_untyped_defs = True
ignore_missing_imports = True
exclude = tests/
explicit_package_bases = True
namespace_packages = True

[mypy-tests.*]
disallow_untyped_defs = False
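With `disallow_untyped_defs = True`, every function outside `tests/` must carry full annotations; an unannotated `def` is a mypy error. A small illustration (the `normalize` helper is hypothetical, not part of the package):

```python
def normalize(text: str) -> str:
    """Fully annotated, so mypy accepts it under disallow_untyped_defs."""
    return text.strip().lower()


# def normalize(text):   # error: Function is missing a type annotation
#     return text.strip().lower()

print(normalize("  Hello "))  # hello
```

The `[mypy-tests.*]` override relaxes this for test code, matching the `exclude = tests/` line above.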
19 changes: 19 additions & 0 deletions libs/partners/sarvam/.ruff.toml
[lint]
select = [
"E", # pycodestyle errors
"F", # pyflakes
"I", # isort
"T201", # print statements
"UP", # pyupgrade
]

ignore = [
"E501", # line too long (handled by formatter)
]

[lint.per-file-ignores]
"tests/*" = ["T201"] # Allow print in tests
"scripts/*" = ["T201"] # Allow print in scripts

[lint.isort]
known-first-party = ["langchain_sarvam"]
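Since T201 bans `print` everywhere except `tests/` and `scripts/`, library code in the package would report through `logging` instead. A sketch (the function name and logger configuration are illustrative assumptions):

```python
import logging

logger = logging.getLogger("langchain_sarvam")


def report_progress(step: str) -> None:
    """Log rather than print, so ruff's T201 rule does not fire."""
    logger.info("completed step: %s", step)


report_progress("download")
```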
142 changes: 142 additions & 0 deletions libs/partners/sarvam/CONTRIBUTING.md
# Contributing to langchain-sarvam

Thank you for your interest in contributing to langchain-sarvam! This document provides guidelines for contributing to this package.

## Development Setup

1. Clone the repository and navigate to the package directory:
```bash
cd libs/partners/sarvam
```

2. Install dependencies using Poetry:
```bash
poetry install --with test,lint,typing,dev
```

3. Set up your Sarvam API key for testing:
```bash
export SARVAM_API_KEY="your-api-key"
```

## Running Tests

### Unit Tests
```bash
make test
# or
poetry run pytest tests/unit_tests
```

### Integration Tests
Integration tests require a valid Sarvam API key:
```bash
make integration_tests
# or
poetry run pytest tests/integration_tests
```

### All Tests
```bash
poetry run pytest tests/
```

## Code Quality

### Linting
```bash
make lint
```

This will run:
- `ruff` for code style checking
- `mypy` for type checking

### Formatting
```bash
make format
```

This will automatically format your code using `ruff`.

### Spell Checking
```bash
make spell_check
# To automatically fix spelling issues:
make spell_fix
```

## Before Submitting a PR

1. **Run all tests**: Ensure all unit and integration tests pass
```bash
poetry run pytest tests/
```

2. **Run linting**: Fix any linting errors
```bash
make lint
make format
```

3. **Check imports**: Verify imports are correct
```bash
make check_imports
```

4. **Update documentation**: If you've added new features, update:
- README.md
- Docstrings in the code
- Example notebooks if applicable

## Code Style Guidelines

- Follow PEP 8 style guidelines
- Use type hints for all function parameters and return values
- Write descriptive docstrings for all public methods and classes
- Keep functions focused and single-purpose
- Add comments for complex logic

## Testing Guidelines

- Write unit tests for all new functionality
- Ensure tests are isolated and don't depend on external services (use mocking)
- Integration tests should be marked with `@pytest.mark.scheduled`
- Test edge cases and error conditions
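As a concrete illustration of these guidelines — with `ChatSarvam`, its `invoke` method, and the model name taken as assumptions, since the package's actual API is not shown in this PR — a mocked unit test and a `@pytest.mark.scheduled` integration test might look like:

```python
from unittest.mock import MagicMock

import pytest


def test_invoke_returns_content() -> None:
    # Unit test: mock the model so no network call is made.
    llm = MagicMock()
    llm.invoke.return_value = MagicMock(content="hello")
    response = llm.invoke("Say hello")
    assert response.content == "hello"
    llm.invoke.assert_called_once_with("Say hello")


@pytest.mark.scheduled
def test_invoke_live() -> None:
    # Integration test: requires SARVAM_API_KEY and hits the real API.
    pytest.importorskip("langchain_sarvam")
    from langchain_sarvam import ChatSarvam  # assumed class name

    llm = ChatSarvam(model="sarvam-m")  # assumed model name
    response = llm.invoke("Say hello")
    assert isinstance(response.content, str)
```

The `scheduled` marker keeps the live test out of the ordinary unit-test run, so `make test` stays fast and network-free.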

## Pull Request Process

1. Fork the repository and create a new branch for your feature
2. Make your changes following the guidelines above
3. Ensure all tests pass and code is properly formatted
4. Update documentation as needed
5. Submit a pull request with a clear description of changes

## Common Issues

### CI/CD Failures

If you encounter CI failures:

1. **Linting errors** (`lint` job failing):
- Run `make lint` locally
- Fix any `ruff` or `mypy` errors
- Run `make format` to auto-format code

2. **Test failures**:
- Run tests locally: `poetry run pytest`
- Check if API key is properly set for integration tests
- Review test output for specific failures

3. **Import errors**:
- Verify all imports are from allowed packages
- Run `make check_imports`

## Questions or Problems?

If you have questions or run into issues:
- Check existing GitHub issues
- Create a new issue with a clear description
- Join the LangChain Discord community

Thank you for contributing!