Testing Overview

The testing directory contains scripts and prompts for testing the core modules, spoutlets, and prompts. Feel free to add your own tests to the project, using the provided examples as a guide.

Testing Directory Structure

testing/
├── core_testing/      # Core module validation
│   ├── test_core.bat  # Single model testing
│   ├── test_core.sh
│   ├── test_gamut.bat # Multi-model testing
│   └── test_gamut.sh
├── generate_samples/  # Sample generation scripts
│   ├── generate_core_samples.bat  # Windows sample generator
│   └── generate_core_samples.sh   # Unix sample generator
├── prompt_runners/    # Prompt testing utilities
│   ├── prompt_runner.bat  # Single model testing
│   ├── prompt_runner.sh
│   ├── prompt_gamut.bat   # Multi-model testing
│   └── prompt_gamut.sh
├── prompts/           # Test prompt collections
│   ├── basic.txt      # Simple validation prompts
│   ├── hard.txt       # Highly challenging prompts
│   ├── medium.txt     # Medium complexity prompts
│   └── AGI/           # Text files containing prompts for testing AGI
└── tests/             # Test output directory

Test Types

Core Testing

Validates SPOUT core modules and their spoutlets:

  • Test all core modules with a single model
  • Test all core modules with multiple models (as selected in models.ini)
  • Use the .bat scripts on Windows or the .sh scripts on Unix
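The multi-model "gamut" approach above can be sketched as a simple loop over the models enabled in models.ini. This is an illustrative sketch only; the model names are placeholders, not values from a real models.ini, and the echo stands in for the actual core test invocation.

```shell
# Sketch of the multi-model "gamut" pattern: run the same check once per
# enabled model. MODELS stands in for values parsed from models.ini.
MODELS="gpt-4 llama-3"
for model in $MODELS; do
    # A real gamut script would invoke the core tests here.
    echo "testing core modules with model: $model"
done
```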

Generate Core Samples

Generates test data and example outputs for use in chaining and scripts:

  • Use the .bat scripts on Windows or the .sh scripts on Unix

Prompt Testing

Tests model responses across different scenarios. Use these runners to evaluate and compare new LLM models as they are released, and to experiment with new prompt engineering techniques.
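As an illustration, a prompts file for these runners could contain one prompt per line, for example (the prompts below are invented for this sketch, not taken from basic.txt, medium.txt, or hard.txt):

```text
Summarize the following paragraph in one sentence.
Translate "good morning" into French.
List three prime numbers greater than 10.
```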

Adding Custom Tests

You can extend SPOUT's testing capabilities by adding your own test directories:

  1. Create a new directory under testing/:
testing/
└── my_custom_tests/     # Your test directory
    ├── runners/         # Test execution scripts
    ├── prompts/         # Test prompt files
    └── README.md        # Test documentation
  2. Add prompt files following these conventions:
  • Use .txt format for prompts
  • Group related prompts in subdirectories
  • Include descriptive names for the files
  • Text files can contain one prompt or multiple prompts separated by newlines
  3. Create test runners:
  • Follow existing runner patterns
  • Support both Windows (.bat) and Unix (.sh)
  • Include proper error handling
  • Generate formatted output

Test results are automatically saved in the tests/ directory with timestamps for tracking and comparison.
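A custom runner following the conventions above might look like the sketch below. Everything here is an assumption for illustration: the file names, the demo prompts, and the output layout are not part of SPOUT, and the printf stands in for a real model call.

```shell
#!/bin/sh
# Minimal custom-runner sketch: error handling, one prompt per line,
# timestamped output under tests/. File names here are illustrative.
set -e                                    # stop on the first error

PROMPT_FILE="${1:-demo_prompts.txt}"      # default to a demo file for this sketch
[ -f "$PROMPT_FILE" ] || printf 'First test prompt\nSecond test prompt\n' > "$PROMPT_FILE"

STAMP=$(date +%Y%m%d_%H%M%S)              # timestamp for tracking and comparison
mkdir -p tests
OUT="tests/custom_${STAMP}.txt"

# Read one prompt per line, skipping blanks.
while IFS= read -r prompt; do
    [ -n "$prompt" ] || continue
    # Replace this printf with a real model call for your project.
    printf 'PROMPT: %s\n' "$prompt" >> "$OUT"
done < "$PROMPT_FILE"

echo "results written to $OUT"
```

Run it as `sh my_runner.sh prompts/my_prompts.txt` (or with no argument to use the built-in demo file); a matching .bat version would follow the same steps for Windows.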