**#247 · Per-activity circuit breaker configuration via DSL**
Status: closed · Priority: medium · 2025-12-03 21:38
Allow users to optionally configure a circuit breaker per activity in the DSL. Default: disabled (workflow...

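The issue doesn't show the DSL syntax, so the following is only a minimal sketch of what opt-in, per-activity breaker settings could look like; `CircuitBreakerPolicy`, `ActivityOptions`, and every field name are hypothetical, not Highway's actual DSL:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CircuitBreakerPolicy:
    """Hypothetical per-activity breaker settings (names are illustrative)."""
    failure_threshold: int = 5            # consecutive failures before the breaker opens
    reset_timeout_seconds: float = 30.0   # how long to stay open before a half-open probe

@dataclass
class ActivityOptions:
    """Per-activity options; circuit_breaker=None keeps the issue's default: disabled."""
    retries: int = 3
    circuit_breaker: Optional[CircuitBreakerPolicy] = None

# Opt in for one fragile activity; every other activity keeps the disabled default.
charge_card_opts = ActivityOptions(
    circuit_breaker=CircuitBreakerPolicy(failure_threshold=3, reset_timeout_seconds=60.0),
)
```
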
**#234 · Fix OpenAPI response serialization: UUIDs and response structure**
Status: closed · Priority: medium · 2025-12-02 08:44
Spectree response validation failed: UUIDs were serialized as objects, and a pagination field was missing. Temp fix: removed r...

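For context on what the fix needed to produce, a minimal sketch of a Spectree-friendly pydantic response model that avoids both reported failures; `WorkflowSummary`, `WorkflowListResponse`, and the `pagination` shape are illustrative, not the project's actual schemas:

```python
from typing import List
from uuid import UUID, uuid4

from pydantic import BaseModel

class WorkflowSummary(BaseModel):
    id: UUID   # pydantic renders UUIDs as plain strings in JSON output,
    name: str  # so response validation sees "str", not a UUID object

class WorkflowListResponse(BaseModel):
    items: List[WorkflowSummary]
    pagination: dict  # the field the validator reported as missing

resp = WorkflowListResponse(
    items=[WorkflowSummary(id=uuid4(), name="demo")],
    pagination={"page": 1, "per_page": 20, "total": 1},
)
print(resp.json())  # pydantic v1 style; v2 spells this model_dump_json()
```
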
**#233 · OpenAPI: SDK Generation and Documentation Portal**
Status: closed · Priority: medium · 2025-12-02 08:16
Overview: Generate client SDKs and create documentation portal from OpenAPI spec. Requirements...

**#231 · OpenAPI: Misc Endpoints (33 endpoints)**
Status: closed · Priority: medium · 2025-12-02 08:15
Scope: Document remaining miscellaneous endpoints:
- `api/blueprints/v1/schemas.py` (5 endpoints)
...

**#230 · OpenAPI: Replay & Debugging Endpoints (7 endpoints)**
Status: closed · Priority: medium · 2025-12-02 08:15
Scope: Document replay and debugging endpoints:
- `api/blueprints/v1/replay.py` (3 endpoints)
- `a...

**#229 · OpenAPI: Observability Endpoints (22 endpoints)**
Status: closed · Priority: medium · 2025-12-02 08:15
Scope: Document observability and monitoring endpoints:
- `api/blueprints/v1/logs.py` (7 endpoints...

**#228 · OpenAPI: Worker & Scheduling Endpoints (16 endpoints)**
Status: closed · Priority: medium · 2025-12-02 08:14
Scope: Document worker management and scheduling endpoints:
- `api/blueprints/v1/workers.py` (9 en...

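The four endpoint batches above (#231, #230, #229, #228) all come down to annotating Flask blueprint routes so they land in the generated spec. Since #234 shows the project validating responses with Spectree, here is a minimal sketch of one documented endpoint; the route and `WorkerStatus` model are invented for illustration:

```python
from flask import Flask
from pydantic import BaseModel
from spectree import Response, SpecTree

app = Flask(__name__)
spec = SpecTree("flask")

class WorkerStatus(BaseModel):
    worker_id: str
    state: str

@app.get("/api/v1/workers/<worker_id>")
@spec.validate(resp=Response(HTTP_200=WorkerStatus), tags=["workers"])
def get_worker(worker_id: str):
    # Handler body is illustrative; the real endpoints live in
    # api/blueprints/v1/workers.py and friends.
    return WorkerStatus(worker_id=worker_id, state="running").dict()

spec.register(app)  # mounts the generated OpenAPI spec and docs UI
```
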
**#222 · Replay: Add concurrent variable mutation handling**
Status: closed · Priority: medium · 2025-12-02 05:23
Problem: Multiple parallel branches call `ctx.set_variable()` simultaneously. Last write wins (...

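A toy illustration of the race and one deterministic alternative; this `Context` class and its `merge_variable` helper are invented stand-ins, not the engine's API:

```python
import threading

class Context:
    """Toy stand-in for the engine's ctx."""
    def __init__(self):
        self._vars = {}
        self._lock = threading.Lock()

    def set_variable(self, key, value):
        with self._lock:
            self._vars[key] = value  # last write wins: branches silently overwrite

    def merge_variable(self, key, value, reducer):
        with self._lock:
            if key in self._vars:
                self._vars[key] = reducer(self._vars[key], value)  # combine, don't clobber
            else:
                self._vars[key] = value

ctx = Context()
for branch_subtotal in (10, 32):  # two "parallel branches" reporting partial sums
    ctx.merge_variable("total", branch_subtotal, reducer=lambda old, new: old + new)
assert ctx._vars["total"] == 42
```
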
**#221 · Replay: Add parent-child workflow reconciliation in reports**
Status: closed · Priority: medium · 2025-12-02 05:23
Problem: No visual tree of parent/child workflows in replay output. No validation that child statu...

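A sketch of the kind of tree rendering plus consistency check such a report could add, over an invented flat record shape:

```python
from collections import defaultdict

# Invented record shape; a real replay report would carry much more.
runs = [
    {"id": "wf-1", "parent": None,   "status": "completed"},
    {"id": "wf-2", "parent": "wf-1", "status": "completed"},
    {"id": "wf-3", "parent": "wf-1", "status": "failed"},
]

children = defaultdict(list)
for run in runs:
    children[run["parent"]].append(run)

def print_tree(parent_id=None, parent_status=None, depth=0):
    for run in children[parent_id]:
        flag = ""
        # The validation the issue asks for: a completed parent should not
        # have a child that did not complete.
        if parent_status == "completed" and run["status"] != "completed":
            flag = "  <- inconsistent with completed parent"
        print("  " * depth + f"{run['id']} [{run['status']}]{flag}")
        print_tree(run["id"], run["status"], depth + 1)

print_tree()
```
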
**#220 · Replay: Add nested parallel branch replay support**
Status: closed · Priority: medium · 2025-12-02 05:23
Problem: ReplayContext doesn't re-execute the `parallel_fork` operator. Parallel branches shown in audit...

**#193 · Engine: Introduce Ephemeral State (`ctx.temp_storage`)**
Status: closed · Priority: medium · 2025-11-30 05:01
Introduce a mechanism (`ctx.temp_storage`) for data that needs to exist during a transaction/task ex...

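A toy sketch of the ephemeral/persistent split the issue describes, assuming (the excerpt doesn't say) that `temp_storage` is simply dropped at the task boundary instead of being checkpointed:

```python
class Context:
    """Toy context; only the variables/temp_storage split comes from the issue."""
    def __init__(self):
        self.variables = {}     # persistent: checkpointed and replayed
        self.temp_storage = {}  # ephemeral: lives only within one task execution

    def end_task(self):
        self.temp_storage.clear()  # never serialized into workflow state

ctx = Context()
ctx.variables["order_id"] = "ord-42"    # survives checkpoints
ctx.temp_storage["db_conn"] = object()  # discarded at the boundary below
ctx.end_task()
assert "db_conn" not in ctx.temp_storage
```
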
**#189 · Docs: Add "Reference Pattern" best practice for large data**
Status: in-progress · Priority: medium · 2025-11-30 05:01
Memory analysis reveals high overhead ("pass-by-value multiplier") when storing large objects in wor...

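The pattern itself is easy to sketch: park the large payload in external storage and keep only a small key in workflow state. The blob directory and `put_blob` helper below are invented for illustration:

```python
import hashlib
import json
import pathlib

BLOB_DIR = pathlib.Path("/tmp/highway-blobs")  # illustrative external store
BLOB_DIR.mkdir(exist_ok=True)

def put_blob(data: bytes) -> str:
    """Store a large payload outside workflow state; return a small key."""
    key = hashlib.sha256(data).hexdigest()
    (BLOB_DIR / key).write_bytes(data)
    return key

large_report = json.dumps({"rows": list(range(100_000))}).encode()

# Anti-pattern: ctx.set_variable("report", large_report) copies the whole payload
# into every checkpoint (the "pass-by-value multiplier" the issue mentions).
# Reference pattern: only this 64-character key enters workflow state.
report_ref = put_blob(large_report)
print(report_ref)
```
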
**#188 · test_update_api.test_successful_update_processing times out**
Status: closed · Priority: medium · 2025-11-30 02:52
The test_successful_update_processing test in `tests/integration/test_update_api.py` fails with a 408 ...

**#171 · Create SIMPLIFIED_FAQ.md tutorial with analogies for juniors and marketing**
Status: closed · Priority: medium · 2025-11-29 20:31
Create a beginner-friendly tutorial that explains Highway Workflow Engine using simple analogies. Ta...

**#170 · Documentation: Docker Tool Tutorial**
Status: closed · Priority: medium · 2025-11-29 19:16
Write a comprehensive tutorial for the Docker tools (`tools.docker.*`) covering:
- Getting started with basi...

**#165 · Consider adding tools.kafka for built-in Kafka producer/consumer support**
Status: closed · Priority: medium · 2025-11-29 04:36
Proposal: Consider adding Kafka as a first-class tool in Highway, similar to tools.http, tools.ema...

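A sketch of the surface such a tool might expose, loosely modeled on the tools.http naming and backed by the real confluent-kafka client; `KafkaTool` and `publish` are hypothetical, not a Highway API:

```python
from typing import Optional

from confluent_kafka import Producer  # real client a tools.kafka could wrap

class KafkaTool:
    """Hypothetical tools.kafka surface; only the confluent_kafka calls are real."""
    def __init__(self, bootstrap_servers: str):
        self._producer = Producer({"bootstrap.servers": bootstrap_servers})

    def publish(self, topic: str, value: bytes, key: Optional[bytes] = None):
        self._producer.produce(topic, value=value, key=key)
        self._producer.flush()  # block until delivery: simple at-least-once semantics

tool = KafkaTool("localhost:9092")
tool.publish("workflow-events", b'{"event": "order_created"}')
```
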
**#149 · Investigate test_durable_cron.py intermittent failures: 6 tests failing with timeouts and state issues**
Status: closed · Priority: medium · 2025-11-28 10:50

**#136 · datashard: Add scan() method to Table class for reading all data**
Status: closed · Priority: medium · 2025-11-27 11:59
DataShard lacks a method to read all data from a table. Currently only `current_snapshot()` is availab...

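A sketch of what a streaming `scan()` could look like, assuming (DataShard's real on-disk layout isn't shown in the excerpt) that a table is a directory of parquet files; the pyarrow calls are real:

```python
import pathlib
from typing import Iterator

import pyarrow as pa
import pyarrow.parquet as pq

def scan(table_dir: str, batch_size: int = 10_000) -> Iterator[pa.RecordBatch]:
    """Stream every row of every parquet file under table_dir without
    materializing the whole table in memory."""
    for path in sorted(pathlib.Path(table_dir).glob("*.parquet")):
        yield from pq.ParquetFile(path).iter_batches(batch_size=batch_size)

# Usage: total = sum(batch.num_rows for batch in scan("./data/orders"))
```
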
**#135 · Health metrics timeline recorder script for test analysis**
Status: closed · Priority: medium · 2025-11-27 11:05
Create a script that continuously records health metrics to DataShard for timeline analysis during t...

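A minimal sketch of such a recorder loop, sampling with psutil and appending to a CSV stand-in, since the excerpt doesn't show DataShard's write API:

```python
import csv
import time

import psutil  # third-party sampler; the metric set here is illustrative

def record_health(path: str, interval_s: float = 1.0, samples: int = 60):
    """Append one timestamped row per interval for later timeline analysis."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            writer.writerow([
                time.time(),
                psutil.cpu_percent(),             # CPU % since the previous call
                psutil.virtual_memory().percent,  # RAM % in use
            ])
            f.flush()
            time.sleep(interval_s)

# record_health("health_timeline.csv")
```
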
**#106 · datashard: Add gzip compression as default for parquet files**
Status: closed · Priority: medium · 2025-11-27 00:01
DataShard library (`../datashard/`) should use gzip compression by default when writing parquet files....

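For reference, what the change amounts to in pyarrow terms (pyarrow's own default is snappy, so a DataShard-style writer would pass the option explicitly on every write); the table contents are invented:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"run_id": ["wf-1", "wf-2"], "duration_ms": [120, 480]})

# gzip trades write speed for smaller files than the snappy default.
pq.write_table(table, "runs.parquet", compression="gzip")
```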