Testing & Debugging
Ensuring code quality and diagnosing issues are crucial parts of development with JuliaOS. This comprehensive guide covers testing strategies, debugging techniques, and troubleshooting common issues across all components of the system.
Testing Strategies
Test Types
Unit Tests: Test individual functions and components in isolation
Integration Tests: Test interactions between components
End-to-End Tests: Test complete workflows from user input to final output
Performance Tests: Measure and validate system performance
Property-Based Tests: Generate random inputs to test properties of functions
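To illustrate the last of these, a property-based test generates random inputs and checks invariants rather than specific outputs. A minimal Julia sketch, where normalize_weights is a hypothetical function and not part of the JuliaOS API:
using Test
using Random

# Hypothetical function under test: rescales a weight vector to sum to 1.
normalize_weights(w) = w ./ sum(w)

@testset "normalize_weights properties" begin
    rng = MersenneTwister(42)  # fixed seed keeps the test reproducible
    for _ in 1:100
        w = rand(rng, 1:100, rand(rng, 1:10))  # random positive weights, random length
        result = normalize_weights(w)
        @test length(result) == length(w)  # length is preserved
        @test isapprox(sum(result), 1.0)   # weights always sum to 1
        @test all(result .>= 0)            # no negative weights
    end
end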
Test Coverage
Aim for high test coverage across all components:
Core functionality should have near 100% coverage
Edge cases should be thoroughly tested
Error handling should be verified
Performance-critical paths should have benchmarks
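For benchmarks on performance-critical paths, BenchmarkTools.jl is the usual choice in the Julia ecosystem. A minimal sketch, where encode_message is a hypothetical stand-in for a hot path:
using BenchmarkTools

# Hypothetical hot path: serialize a payload dictionary to a query string.
encode_message(payload) = join(["$k=$v" for (k, v) in payload], "&")

payload = Dict("agent" => "TestAgent", "op" => "create")

# @btime runs the expression many times and reports the minimum time;
# interpolate inputs with $ so setup cost is not measured.
@btime encode_message($payload)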
Running Tests
Julia Tests
Unit and integration tests for the Julia backend are located in /julia/test.
# Navigate to Julia directory
cd julia
# Run all tests
julia --project=. test/runtests.jl
# Run tests for a specific module
julia --project=. -e 'using Pkg; Pkg.test(test_args=["AgentSystem"])'
# Run tests with increased verbosity
julia --project=. -e 'using Pkg; Pkg.test(; test_args=["--verbose"])'
# Run tests with code coverage
julia --project=. -e 'using Pkg; Pkg.test(; coverage=true)'
Julia Test Organization
Tests in the Julia backend are organized by module:
/julia/test/test_agent_system.jl: Tests for the agent system
/julia/test/test_swarm_manager.jl: Tests for swarm functionality
/julia/test/test_bridge.jl: Tests for bridge functionality
/julia/test/test_storage.jl: Tests for storage functionality
/julia/test/test_wallet.jl: Tests for wallet functionality
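A typical runtests.jl entry point simply includes these files inside a top-level test set. The sketch below assumes the file names listed above and is not the verbatim JuliaOS runner:
using Test

@testset "JuliaOS" begin
    include("test_agent_system.jl")
    include("test_swarm_manager.jl")
    include("test_bridge.jl")
    include("test_storage.jl")
    include("test_wallet.jl")
end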
Node.js/TypeScript Tests
Tests for Node.js packages (CLI, framework, wallets, bridges) are organized by package and run using Jest.
# Run tests for all packages from the root directory
npm test
# Run tests for a specific package
cd packages/framework
npm test
# Run tests with coverage
npm test -- --coverage
# Run a specific test file
npm test -- src/agents.test.ts
# Run tests in watch mode (for development)
npm test -- --watch
TypeScript Test Organization
Tests in TypeScript packages follow this structure:
/packages/framework/tests/: Tests for the framework
agents.test.ts: Tests for agent functionality
swarms.test.ts: Tests for swarm functionality
bridge.test.ts: Tests for bridge functionality
wallet.test.ts: Tests for wallet functionality
Python Tests
Tests for the Python wrapper are located in /packages/python-wrapper/tests/.
cd packages/python-wrapper
# Install test dependencies
pip install -e ".[test]"
# Run all tests
pytest
# Run tests with coverage
pytest --cov=juliaos
# Run a specific test file
pytest tests/test_agents.py
# Run tests matching a pattern
pytest -k "agent or swarm"
# Run tests with increased verbosity
pytest -v
Python Test Organization
Python tests are organized by module:
/packages/python-wrapper/tests/test_agents.py: Tests for agent functionality
/packages/python-wrapper/tests/test_swarms.py: Tests for swarm functionality
/packages/python-wrapper/tests/test_bridge.py: Tests for bridge functionality
/packages/python-wrapper/tests/test_wallet.py: Tests for wallet functionality
/packages/python-wrapper/tests/test_langchain.py: Tests for LangChain integration
Writing Effective Tests
Julia Test Best Practices
# Example of a well-structured Julia test
using Test
using JuliaOS.AgentSystem
@testset "Agent Creation" begin
    # Setup
    config = Dict("name" => "TestAgent", "type" => "trading")

    # Test function
    agent = create_agent(config)

    # Assertions
    @test agent !== nothing
    @test agent["name"] == "TestAgent"
    @test agent["type"] == "trading"
    @test haskey(agent, "id")

    # Test error cases
    @test_throws ArgumentError create_agent(Dict())

    # Cleanup if needed
    delete_agent(agent["id"])
end
TypeScript Test Best Practices
// Example of a well-structured TypeScript test
import { Agents } from '../src/agents';
import { JuliaBridge } from '../src/bridge';
describe('Agents', () => {
  let bridge: JuliaBridge;
  let agents: Agents;

  // Setup before all tests
  beforeAll(async () => {
    bridge = new JuliaBridge({ host: 'localhost', port: 8052 });
    await bridge.initialize();
    agents = new Agents(bridge);
  });

  // Cleanup after all tests
  afterAll(async () => {
    await bridge.disconnect();
  });

  // Individual test
  it('should create an agent', async () => {
    const agent = await agents.createAgent({
      name: 'TestAgent',
      type: 'trading',
      config: { risk_level: 'medium' }
    });

    expect(agent).toBeDefined();
    expect(agent.name).toBe('TestAgent');
    expect(agent.type).toBe('trading');
    expect(agent.id).toBeDefined();

    // Cleanup
    await agents.deleteAgent(agent.id);
  });

  // Test error cases
  it('should throw an error for invalid agent type', async () => {
    await expect(agents.createAgent({
      name: 'TestAgent',
      type: 'invalid_type',
      config: {}
    })).rejects.toThrow();
  });
});
Python Test Best Practices
# Example of a well-structured Python test
import pytest
from juliaos import JuliaOS
@pytest.fixture
async def client():
    """Create a JuliaOS client for testing."""
    client = JuliaOS(host="localhost", port=8052)
    await client.connect()
    yield client
    await client.disconnect()

@pytest.mark.asyncio
async def test_create_agent(client):
    """Test creating an agent."""
    # Create an agent
    agent = await client.agents.create_agent(
        name="TestAgent",
        agent_type="trading",
        config={"risk_level": "medium"}
    )

    # Assertions
    assert agent is not None
    assert agent.name == "TestAgent"
    assert agent.type == "trading"
    assert agent.id is not None

    # Cleanup
    await client.agents.delete_agent(agent.id)

@pytest.mark.asyncio
async def test_invalid_agent_type(client):
    """Test error handling for invalid agent type."""
    with pytest.raises(Exception):
        await client.agents.create_agent(
            name="TestAgent",
            agent_type="invalid_type",
            config={}
        )
Debugging
Julia Backend
Logging
JuliaOS uses the Julia Logging module for structured logging. Logs are written to:
/julia/logs/server.log: Main server log
/julia/logs/error.log: Error log
/julia/logs/debug.log: Debug log (when debug logging is enabled)
To configure logging levels, edit /julia/config/config.toml:
[logging]
level = "info" # Options: "debug", "info", "warn", "error"
file = true # Write logs to file
console = true # Write logs to console
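Within Julia code, log messages are emitted with the standard logging macros. A minimal sketch using only the stdlib Logging module:
using Logging

# Set a global minimum level (mirrors the "level" key in config.toml)
global_logger(ConsoleLogger(stderr, Logging.Info))

@debug "Agent state dump"          # suppressed at Info level
@info "Agent created" agent_id=42  # key=value pairs become structured context
@warn "Retrying bridge connection"
@error "Storage write failed"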
Using the Julia Debugger
JuliaOS supports debugging with Debugger.jl:
# Start the Julia REPL
julia --project=.
# Load the debugger
using Debugger
# Load the module you want to debug
using JuliaOS.AgentSystem
# Set a breakpoint and run a function
@enter create_agent(Dict("name" => "TestAgent", "type" => "trading"))
# Debugger commands:
# n - step to next line
# s - step into function call
# finish - run until current function returns
# c - continue execution
# q - quit debugging
# bt - show backtrace
Remote Debugging
For debugging the running server:
Start the server with debugging enabled:
julia --project=. -e 'ENV["JULIA_DEBUG"] = "all"; include("julia_server.jl")'
Connect to the running process using VS Code Julia extension or another IDE with Julia debugging support.
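If you prefer a launch configuration over attaching manually, a sketch analogous to the Node and Python configurations shown later might look like the following (field names assume the VS Code Julia extension; the program path is an assumption about the server entry point):
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "julia",
      "request": "launch",
      "name": "Debug Julia Server",
      // Assumed entry point; adjust to the actual server script location
      "program": "${workspaceFolder}/julia/julia_server.jl",
      "stopOnEntry": false,
      "cwd": "${workspaceFolder}/julia"
    }
  ]
}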
Node.js/TypeScript Packages
Console Logging
Use structured logging with different levels:
// Basic logging
console.log('Info message');
console.warn('Warning message');
console.error('Error message');
// Structured logging with context
console.log('Processing agent:', { agentId, agentType, operation });
Using Node Inspector
Debug Node.js applications using the built-in inspector:
# Start the CLI with inspector
node --inspect packages/cli/interactive.cjs
# Start with inspector and break on start
node --inspect-brk packages/cli/interactive.cjs
Then connect using:
Chrome DevTools: Navigate to chrome://inspect and click "Open dedicated DevTools for Node"
VS Code: Use the JavaScript Debug Terminal or create a launch configuration
VS Code Launch Configuration
Create a .vscode/launch.json file:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug CLI",
      "program": "${workspaceFolder}/packages/cli/interactive.cjs",
      "skipFiles": ["<node_internals>/**"],
      "outFiles": ["${workspaceFolder}/packages/cli/dist/**/*.js"],
      "console": "integratedTerminal"
    }
  ]
}
Python Wrapper
Python Logging
Use Python's built-in logging module:
import logging
# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger('juliaos')
# Log at different levels
logger.debug('Debug message')
logger.info('Info message')
logger.warning('Warning message')
logger.error('Error message')
Using Python Debugger
Use the built-in pdb module:
import pdb
# Set a breakpoint
pdb.set_trace()
# Or use the breakpoint() function in Python 3.7+
breakpoint()
VS Code Python Debugging
Create a .vscode/launch.json file:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Current File",
      "type": "python",
      "request": "launch",
      "program": "${file}",
      "console": "integratedTerminal",
      "justMyCode": false
    }
  ]
}
Troubleshooting Common Issues
Julia Backend Issues
Package Dependency Problems
Symptoms: UndefVarError, missing modules, version conflicts
Solutions:
# Start Julia with the project environment activated
julia --project=.
# Then, in the Julia REPL:
# Update the registry
using Pkg
Pkg.update()
# Resolve dependencies
Pkg.resolve()
# Instantiate the environment
Pkg.instantiate()
# If problems persist, try rebuilding
Pkg.build()
Server Connection Issues
Symptoms: Cannot connect to Julia server, connection refused
Solutions:
Verify the server is running:
ps aux | grep julia_server
Check the server port configuration in .env and config.toml
Ensure no firewall is blocking the connection
Check server logs for errors:
tail -f julia/logs/server.log
tail -f julia/logs/error.log
Memory or Performance Issues
Symptoms: Slow response times, high memory usage, crashes
Solutions:
Check resource usage:
top -p $(pgrep -f julia_server)
Enable garbage collection logging (in the Julia REPL, Julia 1.8+):
GC.enable_logging(true)
Use the Profile module to identify bottlenecks:
using Profile
@profile run_function()
Profile.print()
Node.js/TypeScript Issues
Bridge Connection Problems
Symptoms: Cannot connect to Julia backend, timeout errors
Solutions:
Verify the Julia server is running
Check connection settings in the bridge configuration:
const bridge = new JuliaBridge({
  host: 'localhost', // Ensure this is correct
  port: 8052,        // Ensure this matches the server port
  timeout: 30000     // Increase timeout if needed
});
Check for network issues between client and server
TypeScript Compilation Errors
Symptoms: Build errors, type errors
Solutions:
Check for type errors:
npm run typecheck
Clean and rebuild:
npm run clean
npm run build
Update dependencies if needed:
npm update
Python Wrapper Issues
Installation Problems
Symptoms: Import errors, missing dependencies
Solutions:
Reinstall the package in development mode:
pip uninstall juliaos
pip install -e ./packages/python-wrapper
Check for Python version compatibility (requires Python 3.8+)
Install with all optional dependencies:
pip install -e "./packages/python-wrapper[all]"
Async Runtime Errors
Symptoms: Event loop errors, coroutine issues
Solutions:
Ensure proper async/await usage:
# Correct usage
async def main():
    client = JuliaOS()
    await client.connect()
    # ...
    await client.disconnect()

# Run the async function
import asyncio
asyncio.run(main())
Check for mixing async and sync code incorrectly
Performance Profiling
Julia Profiling
# Load the Profile module
using Profile
# Clear any existing profile data
Profile.clear()
# Profile a function
@profile run_function()
# Print the profile data
Profile.print()
# For more detailed analysis, use ProfileView
using ProfileView
pdata = Profile.fetch()
ProfileView.view(pdata)
Node.js Profiling
# CPU profiling
node --prof packages/cli/interactive.cjs
# Generate a readable report
node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > profile.txt
# Memory profiling with Chrome DevTools
node --inspect packages/cli/interactive.cjs
# Then connect with Chrome DevTools and use the Memory tab
Python Profiling
# Using cProfile
python -m cProfile -o profile.prof your_script.py
# Analyze with snakeviz
pip install snakeviz
snakeviz profile.prof
# Memory profiling with memory_profiler
pip install memory_profiler
python -m memory_profiler your_script.py
Continuous Integration
JuliaOS uses GitHub Actions for continuous integration. The workflow includes:
Linting: Check code style and formatting
Type Checking: Verify TypeScript types
Unit Tests: Run tests for all components
Integration Tests: Test component interactions
Build Verification: Ensure all packages build correctly
To run CI checks locally before committing:
# Run all checks
npm run ci
# Run individual checks
npm run lint
npm run typecheck
npm run test
npm run build