Abstrabit Technologies
Case Study · MCP Integration
Freight & Logistics

3 Siloed Systems.
One Conversation.

A 120-person Dallas freight brokerage replaced 40–60 daily context switches with a single AI interface: three custom MCP servers, delivered in 6 weeks.

24×
Faster shipment queries
6 hrs
Recovered daily
< 2 min
Route change propagation
< 5 min
Customer response time
Industry
Freight Brokerage
Location
Dallas, Texas
Team Size
120+ employees
Engagement
6 weeks
The Problem

Every Answer Required
3 Systems, Manually.

Custom TMS
.NET + SQL Server
127 tables · No API · No docs
Shipment Tracking DB
PostgreSQL + REST
Live GPS & ETAs · Tribal knowledge only
3 Google Sheets
Rates · SLAs · Pricing
Hidden dependencies · "Source of truth" by default
▼ manual tab switch ▼ copy + paste ▼ cross-reference
40–60 CONTEXT SWITCHES / SHIFT
Dispatcher
Manually assembling every answer
6 hrs
Lost daily to zero-value data shuffling
15–20 min
Per route change across 3 systems
0 APIs
No vendor integrations. Everything built from scratch.
Solution Architecture

Four Layers.
Built from Scratch.

User Interface
Dispatcher Chat Interface
Natural language · No system knowledge required
Claude · Anthropic API
↓ orchestrates composite queries across all three systems
Orchestration Layer
Composite Tool Orchestration
Tool sequencing · Partial-failure handling · Structured JSON
get_shipment_full_status · TMS + tracking + SLA
update_route_and_notify · route change + email
daily_ops_briefing · all loads + risk flags
↓ each composite tool sequences multiple MCP server calls
MCP Servers — 3 Custom-Built (Python MCP SDK · stdio)
TMS MCP Server
12 read tools · pyodbc · SQL parameterization
Python MCP SDK
Shipment DB Server
GPS · ETAs · Exceptions · Redis cache 30s TTL
asyncpg + Redis
Sheets MCP Server
Carrier rates · Lane pricing · SLA thresholds
Sheets API v4
↓ read/write via native protocols
Data Layer
Custom TMS
.NET / SQL Server · 127 tables · Schema reverse-engineered
No API · No docs
Shipment Tracking DB
PostgreSQL · Carrier REST · Live GPS
Tribal knowledge documented
Google Sheets (×3)
Rates · Pricing · SLAs
Service account auth
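The composite-tool pattern in the orchestration layer can be sketched in plain Python. Everything below is illustrative: the async stubs stand in for calls to the three MCP servers, the returned values are invented, and only the tool name get_shipment_full_status comes from the case study. The key ideas are fanning out one question to all three systems and tolerating a partial failure instead of crashing.

```python
import asyncio
import json

# Stub clients standing in for the three MCP servers (illustrative values only).
async def tms_lookup(order_id: str) -> dict:
    return {"order": order_id, "lane": "Chicago -> Dallas", "status": "In Transit"}

async def tracking_lookup(order_id: str) -> dict:
    return {"gps": "Joplin, MO", "eta": "4:42 PM CST"}

async def sheets_sla(order_id: str) -> dict:
    return {"sla_close": "5:00 PM CST"}

async def get_shipment_full_status(order_id: str) -> str:
    """Composite tool: query all three systems, degrade gracefully on failure."""
    results = await asyncio.gather(
        tms_lookup(order_id),
        tracking_lookup(order_id),
        sheets_sla(order_id),
        return_exceptions=True,  # a failed server yields an exception, not a crash
    )
    payload, warnings = {}, []
    for name, result in zip(("tms", "tracking", "sla"), results):
        if isinstance(result, Exception):
            warnings.append(f"{name} unavailable: {result}")  # partial data + warning
        else:
            payload[name] = result
    return json.dumps({"data": payload, "warnings": warnings})

print(asyncio.run(get_shipment_full_status("4829")))
```

With `return_exceptions=True`, one unreachable backend shrinks the answer and adds a warning rather than failing the whole query, which is the "graceful degradation" behavior listed under the production signals.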
Measured Outcomes

Before vs After.
Every Number Verified.

Metric
Before
After
Shipment status query
4–6 minutes
3 tabs, manual cross-reference
< 15 seconds
24× faster
Daily ops overhead
~6 hours / day
Team-wide data shuffling
Near zero
6 hrs recovered
Route change propagation
15–20 minutes
3 manual system updates
< 2 minutes
10× faster
SLA breach detection
Reactive
Discovered after the breach
Proactive
Daily briefing
Customer response time
30+ minutes
Manual lookup + email
< 5 minutes
6× faster
30+
Hours recovered per week across the ops team
6 wk
From kick-off to full production
0
Vendor APIs existed. Entirely custom build.
Technical Proof

Production-Grade
in Every Detail.

dispatcher-ops · freight-ai
What's the status on order #4829 and are we going to hit the SLA?
▸ get_shipment_full_status · 3 tools · 11s
Order: #4829 — Chicago → Dallas
Carrier: Werner Enterprises
Status: In Transit · On Route ✓
GPS: Joplin, MO · 4 min ago
ETA: Today 4:42 PM CST
SLA: ⚠ Closes 5:00 PM — 18 min buffer
Note: Recommend proactive customer update
Draft the customer update email.
▸ update_route_and_notify · awaiting confirmation
Draft ready. Confirm to send →
Subject: Shipment #4829 — ETA Update
Your shipment is on schedule for 4:42 PM today...
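The 18-minute buffer shown in the transcript is simple clock arithmetic between the predicted ETA and the SLA cutoff. A minimal sketch, where the function name and the 30-minute warning threshold are assumptions for illustration, not details from the case study:

```python
from datetime import datetime

def sla_buffer_minutes(eta: str, sla_close: str, fmt: str = "%I:%M %p") -> int:
    """Minutes between predicted arrival and SLA cutoff (negative = breach)."""
    delta = datetime.strptime(sla_close, fmt) - datetime.strptime(eta, fmt)
    return int(delta.total_seconds() // 60)

buffer = sla_buffer_minutes("4:42 PM", "5:00 PM")
print(buffer)       # 18, matching the transcript
print(buffer < 30)  # True — flag as tight under an assumed 30-minute threshold
```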
Production-Quality Signals
Per-user rate limits via API key on every server
Role-scoped: dispatchers read-only · managers modify
All writes require explicit confirmation before execution
Full audit log to AWS CloudWatch
Redis cache (30s TTL) prevents rate overload
SQL parameterization — injection prevention
Graceful degradation — partial data with warnings
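The SQL-parameterization signal above is the standard placeholder pattern. The production servers used pyodbc against SQL Server, which takes the same `?` placeholders; the sketch below uses stdlib sqlite3 so it runs anywhere, and the table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (order_id TEXT, status TEXT)")
conn.execute("INSERT INTO shipments VALUES ('4829', 'In Transit')")

def shipment_status(order_id: str):
    # User input is bound as a parameter, never interpolated into the SQL text,
    # so a payload like "4829' OR '1'='1" is treated as a literal value.
    row = conn.execute(
        "SELECT status FROM shipments WHERE order_id = ?", (order_id,)
    ).fetchone()
    return row[0] if row else None

print(shipment_status("4829"))             # In Transit
print(shipment_status("4829' OR '1'='1"))  # None — injection attempt matches nothing
```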
Full Stack
Python 3.11 · MCP SDK · pyodbc · asyncpg · Redis · Sheets API v4 · AWS EC2 · CloudWatch · Docker · Anthropic API · SQL Server · PostgreSQL
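The 30-second cache TTL called out in the production signals follows a standard read-through pattern: serve a fresh entry, otherwise refetch from the carrier API. Production used Redis; this self-contained sketch substitutes an in-process dict with an injectable clock, and every name in it is illustrative.

```python
import time

class TTLCache:
    """Read-through cache: serve a fresh entry, else refetch (Redis stand-in)."""
    def __init__(self, ttl_seconds: float = 30.0, clock=time.monotonic):
        self.ttl, self.clock, self.store = ttl_seconds, clock, {}

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        if entry and self.clock() - entry[0] < self.ttl:
            return entry[1]                   # cache hit: no upstream call
        value = fetch()                       # miss or stale: hit the upstream API
        self.store[key] = (self.clock(), value)
        return value

calls = []
def fetch_gps():
    calls.append(1)                           # count upstream requests
    return "Joplin, MO"

now = [0.0]
cache = TTLCache(ttl_seconds=30.0, clock=lambda: now[0])
cache.get_or_fetch("4829", fetch_gps)         # miss -> 1 upstream call
cache.get_or_fetch("4829", fetch_gps)         # hit within TTL -> still 1 call
now[0] = 31.0
cache.get_or_fetch("4829", fetch_gps)         # stale -> refetch, now 2 calls
print(len(calls))                             # 2
```

Bursts of dispatcher questions inside the TTL window collapse into one upstream request, which is how the cache "prevents rate overload" against carrier APIs.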
The Insight That Drove Results
Dispatchers don't think in "TMS data" vs "tracking data." They think in shipments. MCP let us match the AI to how dispatchers actually work — one question, one complete answer.