config.yaml Reference

config.yaml is the main configuration file for the perpetual futures agent. Copy from the example:

cp config.yaml.example config.yaml

LLM Configuration

llm:
  client_type: langchain_nvidia   # openai | cloudflare | google | litellm | langchain_nvidia
  model: deepseek-ai/deepseek-v3.2
  temperature: 0.2                # low temperature recommended for trading decisions

Supported client_type Values

| Value | Provider | Notes |
|---|---|---|
| `openai` | OpenAI | Requires OPENAI_API_KEY |
| `langchain_nvidia` | NVIDIA NIM | Requires NVIDIA_API_KEY |
| `google` | Google Gemini | Requires GOOGLE_API_KEY |
| `cloudflare` | Cloudflare Workers AI | Requires CLOUDFLARE_* variables |
| `litellm` | LiteLLM proxy | Flexible multi-provider routing |
Temperature

Keep temperature in the 0.1–0.3 range for trading decisions. Higher values produce more creative but less reliable JSON output.

Trading Parameters

trading:
  symbols: [BTC, ETH]          # use simple symbols, NOT pair format (not BTC/USDT)
  max_trade_amount: 100        # maximum USD per single trade
  max_leverage: 10             # maximum leverage multiplier
  limit_order_enabled: false   # use limit orders for entry (vs. market orders)
Symbol Format

Use BTC and ETH, not BTC/USDT or BTC-PERP. Hyperliquid uses simple asset symbols.
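The rule above can be sketched as a small normalizer. This is a hypothetical helper, not part of the agent's codebase: it strips pair and perp suffixes down to the plain asset symbol.

```python
def normalize_symbol(sym: str) -> str:
    """Reduce a pair/perp-style symbol to the plain asset symbol
    that Hyperliquid expects, e.g. BTC/USDT -> BTC, ETH-PERP -> ETH.
    (Illustrative helper, not from the agent's codebase.)"""
    return sym.split("/")[0].split("-")[0].upper()

print(normalize_symbol("btc/usdt"))  # BTC
print(normalize_symbol("ETH-PERP"))  # ETH
```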

Leverage Notes

  • Different assets have different maximum leverage on Hyperliquid
  • If max_leverage exceeds the asset's limit, orders will be rejected
  • Start conservative: max_leverage: 3 or 5 until you're confident in the strategy
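One way to avoid rejected orders is to clamp the configured leverage to the asset's own cap before submitting. The per-asset caps below are illustrative placeholders only; check Hyperliquid's actual limits per asset.

```python
# Hypothetical per-asset leverage caps -- illustrative values only,
# NOT Hyperliquid's real limits. Look up the actual cap per asset.
ASSET_MAX_LEVERAGE = {"BTC": 40, "ETH": 25, "DOGE": 10}

def effective_leverage(symbol: str, configured_max: int) -> int:
    """Clamp config max_leverage to the asset's cap so the order
    is not rejected for exceeding the exchange limit."""
    asset_cap = ASSET_MAX_LEVERAGE.get(symbol)
    if asset_cap is None:
        # Cap unknown here; the exchange may still reject the order.
        return configured_max
    return min(configured_max, asset_cap)

print(effective_leverage("ETH", 50))  # clamped to ETH's assumed cap of 25
```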

Scheduler

scheduler:
  interval_minutes: 3   # fallback polling interval between decisions

The scheduler is the fallback decision loop. When Market Monitor is enabled, volatility spikes trigger decisions immediately without waiting for the interval.

Prompt Strategy

prompt:
  set: nof1-improved   # see table below

| Value | Description |
|---|---|
| `default` | Standard FinCoT reasoning chain |
| `conservative` | Holds on any trend divergence; requires R:R ≥ 2.0 |
| `aggressive` | Enters on 3 conditions; accepts R:R ≥ 1.2 |
| `nof1` | Enhanced strategy with full FinCoT integration |
| `nof1-improved` | Recommended: complete FinCoT + enhanced data integration |
| `realtime` | Prioritizes price action over lagging indicators |
| `realtime-eng` | English version of realtime |

Enhanced Analysis

enhanced_analysis:
  enabled: true   # enables CEX funding rate, on-chain data collection

This is the base switch for CEX Signals and Regime Adaptive. Regime Adaptive depends on it, while the Debate Agent is toggled independently:

enhanced_analysis.enabled: true
├── debate.enabled: true # independent switch
└── regime_adaptive.enabled: true # depends on enhanced_analysis

Debate Agent

debate:
  enabled: false   # adds 2 extra LLM calls per decision cycle

See Bull/Bear Debate for details.

Regime Adaptive

regime_adaptive:
  enabled: false
  # Optional parameter overrides:
  # trending:
  #   signal_threshold: 0.5
  #   min_confidence: 0.35
  #   max_leverage: 10
  # ranging:
  #   signal_threshold: 0.75
  #   max_leverage: 5
  # volatile:
  #   signal_threshold: 0.85
  #   max_leverage: 3

See Market Regime Adaptive for details.

Account Protection

account_protection:
  enabled: true
  max_drawdown_pct: 0.10    # pause trading if account drops 10% from peak
  max_daily_loss_pct: 0.05  # pause trading if daily loss exceeds 5%
  max_position_hours: 48    # force-close positions held longer than 48 hours
Always Keep This Enabled

account_protection is a critical safety mechanism. Only disable it if you fully understand the risks.
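A worked example of how these two thresholds interact, using made-up equity figures (the pause logic sketched here is an illustration of the configured percentages, not the agent's actual implementation):

```python
# Thresholds from the config above.
max_drawdown_pct = 0.10
max_daily_loss_pct = 0.05

peak_equity = 1000.0       # highest account value seen (example figure)
day_start_equity = 980.0   # equity at the start of the day (example figure)

pause_below_drawdown = peak_equity * (1 - max_drawdown_pct)      # 900
pause_below_daily = day_start_equity * (1 - max_daily_loss_pct)  # ~931

def trading_paused(equity: float) -> bool:
    # Pause when either protection trips.
    return equity < pause_below_drawdown or equity < pause_below_daily

print(trading_paused(940.0))  # False: above both thresholds
print(trading_paused(925.0))  # True: daily loss already exceeds 5%
```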

Market Monitor

market_monitor:
  enabled: false
  check_interval_seconds: 30     # price check frequency
  alert_threshold_pct: 3.0       # HIGH alert threshold
  elevated_threshold_pct: 1.5    # ELEVATED threshold (logged, no trigger)
  extreme_threshold_pct: 5.0     # EXTREME threshold
  cooldown_minutes: 5            # cooldown after a trigger
  reference_window_minutes: 10   # price baseline window

See Market Active Monitoring for alert levels and behavior.
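A minimal sketch of how the three thresholds above could classify a price move against the reference-window baseline (illustrative only; the actual alert logic lives in Market Active Monitoring):

```python
def classify_move(baseline_price: float, current_price: float,
                  elevated: float = 1.5, high: float = 3.0,
                  extreme: float = 5.0) -> str:
    """Classify the absolute % move from the reference-window baseline
    using the market_monitor threshold percentages."""
    move_pct = abs(current_price - baseline_price) / baseline_price * 100
    if move_pct >= extreme:
        return "EXTREME"
    if move_pct >= high:
        return "HIGH"       # triggers an immediate decision
    if move_pct >= elevated:
        return "ELEVATED"   # logged only, no trigger
    return "NORMAL"

print(classify_move(100.0, 103.5))  # HIGH (3.5% move)
```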

Review Agent

review_agent:
  # 6a: Dual-granularity reflection
  instant_reflection_enabled: false   # after every closed position
  weekly_reflection_enabled: false    # weekly LLM-based strategy review
  weekly_reflection_day: 0            # 0 = Monday
  weekly_reflection_hour: 8

  # 6b: Regime-aware memory
  regime_aware_enabled: false
  regime_mismatch_factor: 0.4         # weight penalty for regime mismatch

  # 6c: Confirmation bias protection
  bias_protection_enabled: false
  max_positive_ratio: 0.7             # max ratio of positive lessons in memory
  negative_confidence_boost: 1.15     # boost factor for negative lessons

  # 6d: Fact-subjective split
  fact_subjective_split_enabled: false
  trending_subjective_boost: 1.3
  ranging_factual_boost: 1.3

  # 6e: Prompt meta-reflection
  prompt_meta_reflection_enabled: false
  prompt_optimization_dir: "logs/prompt_optimization"

See Review & Reflection System for all 6 enhancements.
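To make the 6c parameters concrete, here is a sketch of how `max_positive_ratio` and `negative_confidence_boost` could apply to a lesson memory. The lesson structure is hypothetical; only the two parameter values come from the config above.

```python
# Parameters from the 6c config above.
max_positive_ratio = 0.7
negative_confidence_boost = 1.15

# Hypothetical lesson-memory shape (not the agent's actual data model).
lessons = [
    {"sentiment": "positive", "confidence": 0.6},
    {"sentiment": "positive", "confidence": 0.5},
    {"sentiment": "negative", "confidence": 0.6},
]

# Cap check: positive lessons may fill at most 70% of memory.
positive_ratio = sum(l["sentiment"] == "positive" for l in lessons) / len(lessons)
over_cap = positive_ratio > max_positive_ratio  # 2/3 is under the 0.7 cap

def retrieval_weight(lesson: dict) -> float:
    """Boost negative lessons to counter confirmation bias."""
    boost = negative_confidence_boost if lesson["sentiment"] == "negative" else 1.0
    return lesson["confidence"] * boost

print(retrieval_weight(lessons[2]))  # 0.6 boosted by 1.15
```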

Data Configuration

data:
  timeframe: 1h   # OHLCV candle timeframe for main decisions

Complete Example

llm:
  client_type: langchain_nvidia
  model: deepseek-ai/deepseek-v3.2
  temperature: 0.2

trading:
  symbols: [BTC, ETH]
  max_trade_amount: 100
  max_leverage: 10
  limit_order_enabled: false

scheduler:
  interval_minutes: 3

data:
  timeframe: 1h

prompt:
  set: nof1-improved

enhanced_analysis:
  enabled: true

debate:
  enabled: true

regime_adaptive:
  enabled: true

account_protection:
  enabled: true
  max_drawdown_pct: 0.10
  max_daily_loss_pct: 0.05
  max_position_hours: 48

market_monitor:
  enabled: true
  check_interval_seconds: 30
  alert_threshold_pct: 3.0
  cooldown_minutes: 5

review_agent:
  instant_reflection_enabled: true
  weekly_reflection_enabled: true
  regime_aware_enabled: true
  bias_protection_enabled: true
  fact_subjective_split_enabled: true
  prompt_meta_reflection_enabled: false