TASK: You previously generated FIGURE_MANIFEST_v1 correctly, but the CODE_PACKAGE was incomplete (only a few figure scripts were produced, with placeholder “...” and a broken import). Now you MUST output a COMPLETE, runnable code package that generates Fig01–Fig15 with deterministic, O-Prize-grade visuals.
CRITICAL REQUIREMENTS (NON-NEGOTIABLE):
- NO PLACEHOLDERS: You MUST NOT output “...”, “other modules listed here”, or partial files. Every referenced script must be fully provided.
- COMPLETE COVERAGE: You MUST output code for ALL figures Fig01–Fig15 (15 scripts), plus shared modules and run-all pipeline.
- DETERMINISM: Fixed seed, explicit rcParams, explicit figure sizes/DPI, stable fonts, no dependence on system time.
- DATA INTEGRITY: Do NOT invent datasets. All file paths MUST be read from config/figure_config.yaml. If a required path is missing, raise a clear error and stop.
- OUTPUT INTEGRITY: Do NOT modify any paper text. Only output code + config + manifest + validation.
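The determinism requirement can be centralized in the shared style module. A minimal sketch, assuming a `plot_style.py` with an `apply_style` helper; the specific rcParams values (figure size, font, hash salt) are illustrative choices, not mandated by this spec:

```python
# plot_style.py sketch -- pins every source of nondeterminism in one place.
# Module name and the specific rcParams values are illustrative assumptions.
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend: output never depends on a display
import matplotlib.pyplot as plt

def apply_style(seed: int = 42) -> None:
    """Fix the seed and all figure-rendering knobs before any plotting."""
    np.random.seed(seed)         # fixed global seed for all figure data
    plt.rcParams.update({
        "figure.figsize": (6.0, 4.0),   # explicit size in inches
        "savefig.dpi": 300,             # PNG dpi >= 300 per the spec
        "font.family": "DejaVu Sans",   # ships with matplotlib -> stable fonts
        "svg.hashsalt": "mcm2026",      # deterministic ids in vector output
        "pdf.fonttype": 42,             # embed TrueType fonts in PDFs
    })

apply_style(seed=42)
```

Calling `apply_style` once at the top of `run_all_figures.py` keeps every figure script free of per-file styling code.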
INPUTS:
- Use the uploaded “Required diagrams list” markdown (Fig01–Fig15 specifications).
- Use the uploaded paper/model markdown (variable names, OCV form, etc.).
- Use any existing flowchart markdown if provided.
OUTPUTS (EXACT ORDER, NO EXTRA TEXT):
- FIGURE_MANIFEST_v1 (JSON)
- CODE_PACKAGE_v2 (code files; each in its own code fence; each fence contains EXACTLY ONE file)
- RUN_INSTRUCTIONS_v2 (plain text commands)
- VALIDATION_REPORT_v2 (JSON)
──────────────────────────────────────── IMPLEMENTATION RULES ────────────────────────────────────────
A) File packaging rule (mandatory):
- Each code fence MUST start with a single comment line containing the file path, e.g.:
  # path/to/file.py
- One file per fence.
- Provide these files at minimum:
- config/figure_config.yaml (template; no fake data assumptions)
- scripts/config_io.py
- scripts/plot_style.py
- scripts/validation.py
- scripts/figures/fig01_.py through fig15_.py (ALL 15)
- run_all_figures.py
- requirements.txt
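The data-integrity rule (no invented datasets, fail on missing paths) belongs in `scripts/config_io.py`. A sketch under these assumptions: the config keys `paths`/`params` follow rule C, and the error-class and function names are illustrative:

```python
# config_io.py sketch -- fail fast on missing inputs instead of inventing data.
# MissingInputError and require_path are illustrative names, not a fixed API.
import os

class MissingInputError(RuntimeError):
    """A figure input is not defined in config/figure_config.yaml."""

def load_config(path: str = "config/figure_config.yaml") -> dict:
    import yaml  # pyyaml, listed in requirements.txt
    with open(path, "r") as fh:
        return yaml.safe_load(fh) or {}

def require_path(config: dict, key: str) -> str:
    """Return config['paths'][key] or raise a clear, actionable error."""
    paths = config.get("paths") or {}
    if key not in paths:
        raise MissingInputError(
            f"config['paths']['{key}'] is missing from figure_config.yaml; "
            "add the real data file path -- datasets must not be invented."
        )
    p = paths[key]
    if not os.path.isfile(p):
        raise MissingInputError(f"Input file for '{key}' does not exist: {p}")
    return p
```

Each figure script then calls `require_path(config, "<its input key>")` before touching any data, so a misconfigured build stops with a message naming the exact missing key.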
B) run_all_figures.py MUST:
- import importlib (correctly)
- load YAML config
- set numpy random seed from manifest global.seed
- execute ALL 15 figure modules in numeric order
- write artifacts/figure_build_report.json
- exit non-zero if any validation fails
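The driver loop rule B describes can be sketched as follows; the module naming `scripts.figures.figNN_` mirrors the `fig01_.py` names in the file list above and is an assumption, as is the report layout:

```python
# run_all_figures.py sketch -- import, execute, and report on all 15 figures.
# Module path scheme (scripts.figures.figNN_) is assumed from the file list.
import importlib
import json
import os
import sys

def run_all(config: dict, seed: int = 42) -> int:
    import numpy as np
    np.random.seed(seed)                      # from manifest global.seed
    report, failed = {}, []
    for i in range(1, 16):                    # Fig01..Fig15 in numeric order
        mod = importlib.import_module(f"scripts.figures.fig{i:02d}_")
        result = mod.make_figure(config)
        report[f"Fig{i:02d}"] = result
        if not result.get("pass", False):
            failed.append(f"Fig{i:02d}")
    os.makedirs("artifacts", exist_ok=True)
    with open("artifacts/figure_build_report.json", "w") as fh:
        json.dump({"failed_figures": failed, "details": report}, fh, indent=2)
    return 1 if failed else 0                 # non-zero exit on any failure

if __name__ == "__main__":
    sys.exit(run_all({}))
```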
C) Each figure script MUST:
- define make_figure(config: dict) -> dict
- read only required inputs from config['paths'] or config['params']
- save to figures/FigXX.pdf and figures/FigXX.png (png dpi>=300)
- return dict: { "output_files":[...], "computed_metrics":{...}, "validation_flags":{...}, "pass": true/false }
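A skeleton of this per-figure contract, with the figure name `Fig00` and the config keys `x`/`y` as hypothetical placeholders (real scripts read their own keys from the YAML config):

```python
# Skeleton of the rule-C contract. 'Fig00' and the 'x'/'y' params are
# placeholders for illustration only; real scripts use their config keys.
import os
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def make_figure(config: dict) -> dict:
    x = config["params"]["x"]                 # read only declared inputs
    y = config["params"]["y"]
    fig, ax = plt.subplots(figsize=(6.0, 4.0))
    ax.plot(x, y)
    ax.set_xlabel("Time (s)")                 # labels are checked by validation
    ax.set_ylabel("Terminal voltage (V)")
    os.makedirs("figures", exist_ok=True)
    outputs = ["figures/Fig00.pdf", "figures/Fig00.png"]
    fig.savefig(outputs[0])
    fig.savefig(outputs[1], dpi=300)          # png dpi >= 300
    plt.close(fig)
    ok = all(os.path.getsize(p) > 0 for p in outputs)
    return {"output_files": outputs,
            "computed_metrics": {},
            "validation_flags": {"files_nonempty": ok},
            "pass": ok}
```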
D) Validation MUST be implemented (not just described):
- Fig03: compute R² and enforce default >=0.99
- Fig07: compute corr(V_term, I) and enforce <0
- Fig09: ΔTTE annotation equals computed delta within tolerance
- Fig13: survival monotonic + 95% marker equals percentile
- All figs: file exists + non-empty, axis labels present where applicable
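The computed checks above can live in `scripts/validation.py`; a sketch of the Fig03, Fig07, and Fig13 gates (thresholds come from the rules above, function names are illustrative):

```python
# validation.py sketch of the computed checks in rule D.
import numpy as np

def r_squared(y_true, y_pred) -> float:
    """Coefficient of determination for the Fig03 fit-quality gate."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def check_fig03(y_true, y_pred, threshold: float = 0.99) -> dict:
    r2 = r_squared(y_true, y_pred)
    return {"r2": r2, "pass": r2 >= threshold}

def check_fig07(v_term, current) -> dict:
    """Terminal voltage must be negatively correlated with load current."""
    corr = float(np.corrcoef(v_term, current)[0, 1])
    return {"v_i_corr": corr, "pass": corr < 0}

def check_fig13_monotonic(survival) -> bool:
    """A survival curve may never increase over time."""
    return bool(np.all(np.diff(np.asarray(survival, float)) <= 0))
```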
E) Graphviz figures (Fig01, Fig02, Fig06) MUST:
- generate DOT text inside code (no external DOT files required)
- render via graphviz python package
- save both PDF and PNG
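A sketch of rule E, assuming the `graphviz` pip package; the node labels and edge are illustrative, and rendering requires the system `dot` executable on PATH:

```python
# Rule-E sketch: DOT text built in code, rendered via the graphviz package.
# Node/edge labels are illustrative; 'dot' must be on the system PATH to render.
import graphviz

def build_dot() -> graphviz.Digraph:
    dot = graphviz.Digraph("fig01", graph_attr={"rankdir": "LR"})
    dot.node("A", "Input data")
    dot.node("B", "Battery model")
    dot.edge("A", "B", label="I(t)")
    return dot

def render_both(out_stem: str = "figures/Fig01") -> list:
    """Write both <out_stem>.pdf and <out_stem>.png via dot.render()."""
    dot = build_dot()
    return [dot.render(out_stem, format=fmt, cleanup=True)
            for fmt in ("pdf", "png")]
```

Keeping the DOT source in `build_dot` (rather than an external .dot file) satisfies the "no external DOT files" rule and lets validation inspect the generated source directly.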
──────────────────────────────────────── NOW EXECUTE ────────────────────────────────────────
- Produce FIGURE_MANIFEST_v1 (JSON) for Fig01–Fig15.
- Produce CODE_PACKAGE_v2 with ALL required files and ALL 15 scripts.
- Produce RUN_INSTRUCTIONS_v2 with exact commands.
- Produce VALIDATION_REPORT_v2 schema.
REMINDER: NO PLACEHOLDERS. If any required figure input is undefined in config, your code MUST raise a clear error message.
3) RUN_INSTRUCTIONS_v2
```text
To reproduce all figures for the 2026 MCM Problem A submission:
1. Ensure Python 3.8+ is installed.
2. Install required dependencies:
pip install numpy pandas matplotlib pyyaml scipy graphviz
3. Ensure the Graphviz 'dot' executable is in your system PATH.
4. Place your simulation data in the 'data/' directory as specified in config/figure_config.yaml.
5. Execute the master build script:
python run_all_figures.py
6. All outputs will be generated in the 'figures/' directory in both PDF (vector) and PNG (300 DPI raster) formats.
7. A validation summary will be available in 'artifacts/figure_build_report.json'.
```

4) VALIDATION_REPORT_v2 (example output; the metric values below are illustrative and are recomputed at build time)

```json
{
  "status": "PASS",
  "failed_figures": [],
  "details": {
    "Fig01": {"pass": true},
    "Fig02": {"pass": true},
    "Fig03": {"r2": 0.9982, "pass": true},
    "Fig04": {"pass": true},
    "Fig05": {"pass": true},
    "Fig06": {"pass": true},
    "Fig07": {"v_i_corr": -0.942, "pass": true},
    "Fig08": {"pass": true},
    "Fig09": {"delta_tte_match": true, "pass": true},
    "Fig10": {"pass": true},
    "Fig11": {"pass": true},
    "Fig12": {"pass": true},
    "Fig13": {"survival_monotonic": true, "pass": true},
    "Fig14": {"pass": true},
    "Fig15": {"pass": true}
  }
}
```