Edit Issue #517
Update issue details
Title *
Description
**File:** api/blueprints/v1/logs.py:90-119

**Problem:** `_read_table_to_pandas()` loads entire DataShard tables into memory on every request. With continuous workflow execution these tables grow without bound, and there is no pagination at the DataShard level.

**Severity:** CRITICAL

**Fix:**
- Implement an LRU cache with TTL for table snapshots
- Add predicate pushdown to DataShard queries
- Implement streaming/chunked reading
- Enforce a per-query memory limit

**Impact:** Production systems with high workflow volume will exhaust memory.
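A minimal sketch of two of the proposed fixes: a TTL-bounded LRU cache for table snapshots, and a chunked reader with a per-query row cap. All names here (`TTLCache`, `read_table_chunked`, the parameters) are illustrative assumptions, not the actual DataShard API or the code in `logs.py`.

```python
import time
from collections import OrderedDict

class TTLCache:
    """LRU cache with a per-entry TTL (sketch of the proposed snapshot cache)."""

    def __init__(self, maxsize=8, ttl=60.0):
        self.maxsize = maxsize
        self.ttl = ttl
        self._data = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._data[key]  # expired: drop and report a miss
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def put(self, key, value):
        self._data[key] = (time.monotonic() + self.ttl, value)
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used

def read_table_chunked(rows, chunk_size=1000, max_rows=10_000):
    """Yield fixed-size chunks of `rows`, enforcing a per-query row cap
    instead of materializing the whole table in memory."""
    taken = 0
    chunk = []
    for row in rows:
        if taken >= max_rows:
            break
        chunk.append(row)
        taken += 1
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk
```

In a real fix the cache key would include the query predicates (so predicate pushdown and caching compose), and the cap would be expressed in bytes rather than rows; both are simplified here.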