Life Dashboard: From 500 Errors to Shortcuts API
The Starting Point
life-dashboard had a working audiomark pipeline and Plex integration from the previous session, but the rest of the app was held together with baling wire. The /log page 500’d. The /api/today/html endpoint 500’d. Six SQL queries were referenced in route handlers but never defined in queries.py. Two route modules existed but weren’t registered in main.py. The audiomark card renderer showed the wrong fields.
The goal was simple: make everything work.
The Audit
A systematic pass through every route module revealed the damage. The log.py and similar.py routers were imported nowhere. The queries they depended on — LOG_ENTRIES, LOG_COUNT, LOG_ENTRY_DETAIL, TODAY_SUMMARY, SEARCH_ENTRIES, CONNECTIONS — didn’t exist. The card renderers in html_fragments.py had field mismatches: audiomark cards tried to read highlights instead of transcript, journals showed nothing because they looked for text instead of entry_raw.
Six queries written. Two routers registered. Three card renderers rewritten. All 500 errors resolved.
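For a sense of what "six queries written" looks like, here is a minimal sketch of two of the missing queries.py constants and a helper that pages through them. The table and column names are assumptions — the real schema isn't shown in the post — and the demo runs against an in-memory SQLite database.

```python
import sqlite3

# Hypothetical shape of two of the missing queries.py constants.
# Table/column names are assumed; the real schema isn't shown.
LOG_ENTRIES = """
SELECT id, event_type, payload, created_at
FROM events
ORDER BY created_at DESC
LIMIT :limit OFFSET :offset
"""

LOG_COUNT = "SELECT COUNT(*) FROM events"


def fetch_log_page(conn, page=0, page_size=50):
    """Return one page of log entries plus the total count."""
    rows = conn.execute(
        LOG_ENTRIES, {"limit": page_size, "offset": page * page_size}
    ).fetchall()
    (total,) = conn.execute(LOG_COUNT).fetchone()
    return rows, total


# Quick demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY,"
    " event_type TEXT, payload TEXT, created_at TEXT)"
)
conn.execute(
    "INSERT INTO events (event_type, payload, created_at)"
    " VALUES ('journal', '{}', '2024-01-01')"
)
rows, total = fetch_log_page(conn)
```

Defining the SQL as module-level constants keeps the route handlers free of query strings, which is presumably why the handlers referenced names like LOG_ENTRIES in the first place.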
Audiomark Gets Smarter
The previous session’s audiomark endpoint required the iOS Shortcut to send a JSON body. That meant the Shortcut had to know about Plex metadata, position, duration — information the server already had. Flipped it: bare POST /api/ingest/audiomark with no body. The server queries Plex for now-playing, extracts the clip, transcribes, and fills in everything.
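The server-side flow can be sketched roughly as below. PlexClient, its fields, and the transcribe hook are all stand-ins — the actual Plex integration and whisper call aren't shown in the post — but the shape is the point: the request body is empty, and every field comes from the server's own lookups.

```python
from dataclasses import dataclass


@dataclass
class NowPlaying:
    title: str
    author: str
    position_ms: int
    duration_ms: int


class PlexClient:  # hypothetical stand-in for the real Plex integration
    def now_playing(self):
        return NowPlaying("Example Book", "A. Author", 3_725_000, 36_000_000)


def ingest_audiomark(plex, transcribe=lambda clip: "..."):
    """Handle a bare POST: no body, everything filled in server-side."""
    np = plex.now_playing()
    if np is None:
        return None  # nothing playing, nothing to mark
    # placeholder for the real ffmpeg clip extraction + whisper pass
    transcript = transcribe((np.position_ms, np.duration_ms))
    return {
        "book": np.title,
        "author": np.author,
        "position_ms": np.position_ms,
        "progress": round(np.position_ms / np.duration_ms, 4),
        "transcript": transcript,
    }


mark = ingest_audiomark(PlexClient())
```

The Shortcut shrinks to a single URL with no payload assembly, which is exactly the property that makes it reliable to trigger from Siri.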
Backfilled 37 existing audiomarks with location data (nearest prior event with coordinates) and 19 with real progress/duration from Plex history. The audiomark card now shows book title, author, position as HH:MM:SS, progress percentage, location, and the full transcript.
Card Renderers
The entry log needed to display 23 event types. Eight got dedicated card renderers:
- Audiomark: transcript text, book metadata, position/progress bar, location
- Journal: expandable raw entry text, mood/energy from health fields
- Highlight: quoted passage with source attribution
- Clipboard: content with schema version detection (v1 nested health, v2 flat)
- Lifetracker: key-value pairs from the payload
- Bookmark: URL with title, expandable description
- Location: coordinates with reverse-geocoded place name
- Default: generic payload dump for everything else
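A dispatch like the list above is usually just a dict of renderers with a default fallback. This is a sketch, not the actual html_fragments.py code — renderer bodies and field names here are illustrative, though transcript and entry_raw match the fields the post says the cards were fixed to read.

```python
# Illustrative renderers; the real ones emit fuller cards.
def render_audiomark(e):
    return f"<div class='card audiomark'>{e['transcript']}</div>"

def render_journal(e):
    return f"<div class='card journal'>{e['entry_raw']}</div>"

def render_default(e):
    # generic payload dump for the remaining event types
    return f"<div class='card'>{e['payload']}</div>"

RENDERERS = {
    "audiomark": render_audiomark,
    "journal": render_journal,
    # ...highlight, clipboard, lifetracker, bookmark, location
}

def render_card(event):
    """Pick a dedicated renderer by event type, falling back to the default."""
    return RENDERERS.get(event["type"], render_default)(event)
```

With 23 event types and 8 renderers, the default branch is doing real work, not just guarding an edge case.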
Shortcuts API
The real payoff. Five endpoints designed for iOS Shortcuts piping into ChatGPT:
- GET /api/shortcuts/recent — JSON, last 50 events
- GET /api/shortcuts/reading — JSON, current reading state
- GET /api/shortcuts/journal — JSON, recent journal entries
- GET /api/shortcuts/today — plain text summary for LLM context
- GET /api/shortcuts/last24h — plain text 24-hour digest
The plain text endpoints return pre-formatted summaries that Shortcuts can pass directly to a ChatGPT action. No JSON parsing in Shortcuts (which is painful), no prompt engineering on the phone. The server does the formatting.
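A plain-text formatter for one of these endpoints might look like the sketch below. The event fields and wording are assumptions; the real /api/shortcuts/today output isn't quoted in the post.

```python
def format_today(events):
    """Collapse today's events into a plain-text digest a Shortcut can
    hand straight to a ChatGPT action, with no client-side parsing."""
    lines = [f"Today: {len(events)} events"]
    for e in events:
        lines.append(f"- {e['time']} {e['type']}: {e['summary']}")
    return "\n".join(lines)


digest = format_today([
    {"time": "09:10", "type": "journal", "summary": "morning entry"},
    {"time": "13:45", "type": "audiomark", "summary": "Example Book, 62%"},
])
```

Returning text/plain also sidesteps Shortcuts' clunky dictionary actions entirely: the Shortcut is just "Get Contents of URL" followed by "Ask ChatGPT".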
Safety Mechanisms
While fixing ingest, hardened the pipeline:
- Rate limiting per endpoint type (audiomark 15s, highlight 5s, lifetracker 3s)
- Async lock preventing concurrent ffmpeg+whisper runs
- Dedup: same book within 30s, same text within 5min
- Idempotency keys via X-Idempotency-Key header with 5min TTL
- Input sanitization stripping control characters
- All safety responses return HTTP 200 for Siri compatibility (Siri treats non-200 as failures and retries)
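The rate-limit and idempotency checks above can be sketched as a single gate. This is a minimal in-memory version under stated assumptions (function names and state layout are mine; the per-endpoint windows and 5-minute TTL come from the post). The boolean result would map to an HTTP 200 either way, per the Siri constraint.

```python
import time

RATE_LIMITS = {"audiomark": 15, "highlight": 5, "lifetracker": 3}  # seconds
IDEMPOTENCY_TTL = 300  # 5 minutes

_last_seen = {}     # endpoint kind -> last accepted timestamp
_idempotency = {}   # idempotency key -> first-seen timestamp


def check_ingest(kind, idem_key=None, now=None):
    """Return (accepted, reason). Rejections still become HTTP 200
    upstream, since Siri treats non-200 as a failure and retries."""
    now = time.monotonic() if now is None else now
    if idem_key is not None:
        seen = _idempotency.get(idem_key)
        if seen is not None and now - seen < IDEMPOTENCY_TTL:
            return False, "duplicate"
        _idempotency[idem_key] = now
    last = _last_seen.get(kind)
    if last is not None and now - last < RATE_LIMITS.get(kind, 0):
        return False, "rate_limited"
    _last_seen[kind] = now
    return True, "ok"
```

In the real app this state would live behind the same async lock as the ffmpeg+whisper runs; a plain dict is fine for a single-process server but not across workers.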
What’s Next
The dashboard is now fully functional. Every endpoint works, the log page renders all event types, and iOS Shortcuts can pull data for LLM summarization. Next priorities: the globe visualization needs the same audit treatment, and the embedding background worker needs monitoring (it silently fails if Ollama is down).