SPOUT implements comprehensive API call logging to track interactions with language models, measure performance, and optimize resource usage. The logging system captures detailed metrics for each API call while maintaining data privacy through hashing.

Core Logging Features

Metrics Tracked

Each API call logs the following metrics:

  • Timestamp - When the call was initiated
  • Duration - Time taken to complete the call (in seconds)
  • Model ID - The language model used
  • Skill Name - Module and Spoutlet identifier
  • Token Usage - Input and output token counts
  • Input Hash - Truncated hash of the input parameters plus template
  • Output Hash - Truncated hash of the output string

Log Storage

Logs are stored in CSV format for easy analysis:

Start Time,Duration(s),Model Id,Skill Name,Input Tokens,Output Tokens,Input Hash,Output Hash
2024-03-21 14:30:22,1.234,gpt-3.5-turbo,generate:story^,150,320,a1b2c3d4,e5f6a7b8

Skill names are formatted as module:spoutlet, with a trailing symbol indicating the source (see the parsing sketch after the list):
  • ^ = Pro spoutlet
  • * = Local spoutlet
  • No symbol = Default spoutlet from the module_plugins folder
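
As an illustration, a small helper like the one below could split a logged skill name into its module, spoutlet, and source type. The helper name and return format are hypothetical; only the module:spoutlet layout and the ^/* suffixes come from the list above.

def parse_skill_name(skill_name: str):
    # Hypothetical helper: split "generate:story^" into its parts.
    if skill_name.endswith("^"):
        source = "pro"
        skill_name = skill_name[:-1]
    elif skill_name.endswith("*"):
        source = "local"
        skill_name = skill_name[:-1]
    else:
        source = "default"  # default spoutlet from the module_plugins folder
    module, _, spoutlet = skill_name.partition(":")
    return module, spoutlet, source

parse_skill_name("generate:story^")  # -> ("generate", "story", "pro")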

Configuration

Log File Location

Logs are stored in the SPOUT configuration directory:

spout/
└── config/
    └── api_metrics.csv

Metrics Collection

The logging system captures three groups of metrics (the sketch after this list shows how they map onto a log row):

  1. Timing Metrics
    • Start time (YYYY-MM-DD HH:MM:SS format)
    • Duration in seconds (to 3 decimal places)
  2. Usage Metrics
    • Input token count
    • Output token count
    • Model identifier
  3. Content Security
    • Input content hash (MD5, 8 characters)
    • Output content hash (MD5, 8 characters)
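
To make the schema concrete, the sketch below shows how these three groups could be assembled into one row of api_metrics.csv. The function and variable names are illustrative; only the column order, the timestamp format, the 3-decimal duration, and the 8-character MD5 hashes come from the description above.

import csv
import hashlib
from datetime import datetime

def build_row(start, duration_s, model_id, skill_name,
              input_tokens, output_tokens, input_text, output_text):
    # Illustrative only: one row matching the CSV header shown under Log Storage.
    short = lambda text: hashlib.md5(text.encode("utf-8")).hexdigest()[:8]
    return [
        start.strftime("%Y-%m-%d %H:%M:%S"),  # Start Time
        f"{duration_s:.3f}",                  # Duration(s), 3 decimal places
        model_id,                             # Model Id
        skill_name,                           # Skill Name, e.g. "generate:story^"
        input_tokens,                         # Input Tokens
        output_tokens,                        # Output Tokens
        short(input_text),                    # Input Hash (8-char MD5)
        short(output_text),                   # Output Hash (8-char MD5)
    ]

with open("api_metrics.csv", "a", newline="") as f:
    csv.writer(f).writerow(build_row(datetime.now(), 1.234, "gpt-3.5-turbo",
                                     "generate:story^", 150, 320, "prompt...", "story..."))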

Security

Content Hashing

  • Input and output strings are hashed with MD5 and truncated to the first 8 hex characters (reproduced in the sketch below)
  • Original content is never written to the logs, only the truncated hashes, though the format could be extended to store it if needed
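
For reference, the truncation is simple to reproduce; the helper name below is hypothetical.

import hashlib

def content_hash(text: str) -> str:
    # First 8 hex characters of the MD5 digest; the text itself never reaches the log.
    return hashlib.md5(text.encode("utf-8")).hexdigest()[:8]

content_hash("Once upon a time...")  # an 8-character hex string, identical for identical inputs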

Integration with Modules

The logging system automatically wraps API calls through a decorator:

logged_invoke = self.api_metrics_logger.log_kernel_invoke(kernel.invoke)
result = await logged_invoke(plugin[requested_spoutlet], **kwargs)
output = str(result)
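
The exact wrapper lives inside SPOUT, but the pattern is roughly: time the awaited call, collect whatever metadata is available, and append a CSV row. Everything below (the names, the placeholder metadata, the log path) is an illustrative sketch rather than SPOUT's actual implementation.

import csv
import time
from datetime import datetime
from functools import wraps

def log_invoke_sketch(invoke, log_path="api_metrics.csv"):
    # Illustrative decorator-style wrapper around an async invoke callable.
    @wraps(invoke)
    async def wrapper(*args, **kwargs):
        start = datetime.now()
        t0 = time.perf_counter()
        result = await invoke(*args, **kwargs)
        duration = time.perf_counter() - t0
        # In SPOUT the model id, skill name, token counts and content hashes
        # are detected from the call and its result; placeholders are used here.
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([
                start.strftime("%Y-%m-%d %H:%M:%S"),
                f"{duration:.3f}",
                "model-id", "module:spoutlet", 0, 0, "", "",
            ])
        return result
    return wrapper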

Automatic Detection

The logger automatically detects:

  • Model information
  • Token counts
  • Skill/module names
  • Source type (pro/local/default)

Analysis & Monitoring

Log Analysis

The CSV format enables easy analysis using standard tools:

  • Spreadsheet applications
  • Data analysis libraries such as pandas (used in the sketch after this list)
  • Log analysis tools
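
For instance, assuming pandas is installed and the log sits at the default location shown earlier, the file loads directly:

import pandas as pd

df = pd.read_csv("spout/config/api_metrics.csv", parse_dates=["Start Time"])
print(df.tail())                     # most recent calls
print(df["Duration(s)"].describe())  # overall latency distribution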

Key Metrics

Monitor important aspects such as the following (a per-model aggregation sketch follows the list):

  • Average response times
  • Token usage patterns
  • Model performance
  • Module utilization
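
Building on the same DataFrame, a per-model summary covers most of these in one pass; the column names follow the header shown under Log Storage.

import pandas as pd

df = pd.read_csv("spout/config/api_metrics.csv")
summary = df.groupby("Model Id").agg(
    calls=("Model Id", "size"),
    avg_duration_s=("Duration(s)", "mean"),
    input_tokens=("Input Tokens", "sum"),
    output_tokens=("Output Tokens", "sum"),
)
print(summary.sort_values("avg_duration_s"))

# Module utilization: how often each skill has been invoked.
print(df["Skill Name"].value_counts())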

Best Practices

Regular Monitoring

  • Review logs periodically
  • Track usage patterns
  • Analyze performance trends
  • Identify duplicate inputs/outputs via the content hashes (see the sketch below)
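
Repeated inputs are easy to surface from the hash columns, for example:

import pandas as pd

df = pd.read_csv("spout/config/api_metrics.csv")
repeats = df["Input Hash"].value_counts()
print(repeats[repeats > 1])  # inputs sent more than once (candidates for caching)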

Performance Optimization

  • Use the logs to identify the fastest models for a given task
  • Balance model quality against latency and token cost when selecting models
  • Monitor token efficiency

Conclusion

SPOUT's logging system provides comprehensive insights while maintaining privacy. Use these logs to optimize your workflow, monitor performance, and make informed decisions about resource usage.

Regular log analysis can help identify opportunities for optimization and improve overall system performance.