# Contributing to LLM-as-a-Judge Skills
Thank you for your interest in contributing! This project is part of the Agent Skills for Context Engineering ecosystem.
## How to Contribute
### Reporting Issues
- Check existing issues first
- Provide clear reproduction steps
- Include test output if applicable
### Adding New Tools
1. Create the implementation in `src/tools/<category>/<tool-name>.ts`
   - Define input/output Zod schemas
   - Implement the execute function with error handling
   - Include proper TypeScript types
2. Export from the index in `src/tools/<category>/index.ts`
3. Add documentation in `tools/<category>/<tool-name>.md`
   - Purpose and when to use
   - Input/output specifications
   - Example usage
4. Write tests in `tests/`
   - Unit tests for schema validation
   - Integration tests with real API calls
## Code Style
- Run `npm run lint` before committing
- Run `npm run format` for consistent formatting
- Use TypeScript strict mode
- Add JSDoc comments for public APIs
## Pull Request Process
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/my-feature`
3. Make your changes
4. Run tests: `npm test`
5. Commit: `git commit -m 'Add my feature'`
6. Push: `git push origin feature/my-feature`
7. Open a Pull Request
## Testing Guidelines
- Tests run against the real OpenAI API (requires an API key)
- Use a `60000ms` timeout for single API calls
- Use a `120000ms` timeout for multiple API calls
- Tests should be deterministic despite LLM variance
## Development Setup
```bash
# Clone
git clone https://github.com/muratcankoylan/llm-as-judge-skills.git
cd llm-as-judge-skills

# Install
npm install

# Configure
cp env.example .env
# Add your OPENAI_API_KEY to .env

# Build
npm run build

# Test
npm test
```

## Questions?
Open an issue or reach out via the main repository.