Building a Stock Market Analysis Agent with MCP + LangGraph
Lessons Learned: A Beginner-Friendly Deep Dive into Building an MCP + LangGraph Stock Analysis System
When people first start building AI applications, they often imagine the hardest part will be “getting the model to think correctly.”
In practice, that is usually not the hardest part.
The hardest part is everything around the model:
- wiring tools correctly
- choosing the right transport
- handling async execution
- understanding orchestration frameworks
- debugging infrastructure errors
This article walks through real-world lessons learned while building a stock analysis system using MCP, LangGraph, and Yahoo Finance.
Lesson 1: Tool Names Are Exact Contracts
Tool names are not friendly labels. They are strict identifiers.
@mcp.tool("get_stock_data")
def get_stock_data(...):
    ...
The exact name must be used:
await session.call_tool("get_stock_data", args)
Even a small mismatch, such as different casing or a stray underscore, will make the call fail.
Key takeaway:
Tool names are exact contracts between components.
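The contract can be illustrated with a plain-Python sketch. This is not the MCP SDK's internals, just a name-keyed registry that behaves the same way: a tool registered under one string is reachable only under exactly that string.

```python
# Illustrative sketch of why tool names are exact contracts.
# NOT the MCP SDK; it only mimics a name-keyed tool registry.

tools = {}

def register_tool(name):
    """Register a function under an exact string name."""
    def decorator(fn):
        tools[name] = fn
        return fn
    return decorator

@register_tool("get_stock_data")
def get_stock_data(symbol):
    return {"symbol": symbol, "price": 123.45}  # stub data

def call_tool(name, *args):
    if name not in tools:
        raise KeyError(f"Unknown tool: {name!r}")
    return tools[name](*args)

print(call_tool("get_stock_data", "AAPL"))  # works: exact match
# call_tool("get_stockdata", "AAPL")        # KeyError: one character off
```

The real client/server pair fails the same way: the string the client sends must match the string the server registered, character for character.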
Lesson 2: Transport Must Match Server Mode
There are two common transports:
- STDIO
- HTTP
If your server runs like this:
mcp.run()
It speaks STDIO (the default), not HTTP.
Key takeaway:
Always match client transport with server transport.
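One way to internalize the rule is as a compatibility check. The helper below is purely illustrative (the MCP SDK exposes no such function), and the `transport=` strings in the comments are the ones the Python SDK documents for FastMCP.

```python
# Illustrative only: the MCP SDK has no such helper. The point is that the
# client's transport must equal the one the server was started with, e.g.:
#   mcp.run()                              -> STDIO (the default)
#   mcp.run(transport="streamable-http")   -> HTTP

def transports_compatible(server_transport: str, client_transport: str) -> bool:
    """A server started with plain mcp.run() speaks STDIO;
    an HTTP client cannot talk to it."""
    return server_transport == client_transport

assert transports_compatible("stdio", "stdio")     # mcp.run() + stdio client
assert not transports_compatible("stdio", "http")  # the classic mismatch
```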
Lesson 3: Debug Tools Are Not Your Server
Just because something opens in the browser doesn’t mean it’s your actual API.
You might be hitting:
- Inspector UI
- Proxy
- Debug dashboard
Key takeaway:
Separate tool server, debug tools, and client clearly.
Lesson 4: ToolNode ≠ Workflow Node
ToolNode is for LLM-triggered tool calls—not arbitrary steps.
If you chain it incorrectly, you’ll see errors like:
messages with role 'tool' must be a response to a preceding message with 'tool_calls'
Key takeaway:
Use ToolNode for agent behavior. Use custom nodes for workflows.
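A LangGraph custom node is just a function that takes the state and returns a state update; no `tool` role messages, no `tool_calls` protocol. A minimal sketch (the state keys here are invented for illustration, not from the original system):

```python
# A custom workflow node: state in, partial state update out.
# No tool_calls protocol involved. State keys are invented for this sketch.

def fetch_data_node(state: dict) -> dict:
    """Deterministic workflow step: fetch prices for the requested symbol."""
    symbol = state["symbol"]
    prices = [101.0, 102.5, 101.8]  # stub; a real node would call the data tool
    return {"prices": prices}

# In a real graph: graph.add_node("fetch_data", fetch_data_node).
# Calling it directly shows the contract:
update = fetch_data_node({"symbol": "AAPL"})
print(update)  # {'prices': [101.0, 102.5, 101.8]}
```

ToolNode, by contrast, expects the previous message to contain `tool_calls` from an LLM; using it as a generic pipeline step is what produces the error above.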
Lesson 5: Async Must Be Awaited
This does NOT execute; it only creates a coroutine object:
result = graph.ainvoke({...})
This does:
result = await graph.ainvoke({...})
Key takeaway:
If it’s async, treat it differently from the moment you call it.
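A stand-in coroutine makes the difference concrete (`ainvoke` here is a stub, not the real LangGraph method, but any async callable behaves identically):

```python
import asyncio

async def ainvoke(payload):
    """Stand-in for graph.ainvoke: any coroutine behaves the same way."""
    await asyncio.sleep(0)  # simulate async work
    return {"result": payload}

async def main():
    pending = ainvoke({"q": "AAPL"})  # coroutine object: nothing has run yet
    result = await pending            # NOW it executes
    return result

print(asyncio.run(main()))  # {'result': {'q': 'AAPL'}}
```

At top level (a script, not an async function), `asyncio.run(...)` is what drives the event loop; inside an async function, `await` is mandatory.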
Lesson 6: Streaming Has Multiple Modes
Streaming isn’t just one thing:
- messages → user output
- updates → state changes
- events → internal execution
Key takeaway:
Choose the stream mode based on what you want to observe.
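The idea is easiest to see with a fake stream (this is a conceptual sketch, not LangGraph's actual streaming API): events of several kinds arrive interleaved, and the mode you choose decides which ones you consume.

```python
# Conceptual sketch, NOT LangGraph's API: a stream carries several kinds of
# events, and the chosen mode filters which ones you observe.

def fake_stream():
    yield ("updates", {"node": "fetch_data", "state": {"prices": 3}})
    yield ("messages", {"token": "AAPL looks "})
    yield ("messages", {"token": "stable."})
    yield ("events", {"event": "on_node_end", "node": "fetch_data"})

def collect(mode):
    return [payload for m, payload in fake_stream() if m == mode]

tokens = "".join(p["token"] for p in collect("messages"))
print(tokens)  # AAPL looks stable.
```

Streaming `messages` gives you user-visible tokens; `updates` gives you per-node state changes, which is far more useful when debugging the graph itself.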
Lesson 7: Connection Errors Are Infrastructure Problems
ConnectError: All connection attempts failed
This usually means:
- Server not running
- Wrong port
- Wrong protocol
Key takeaway:
Debug connectivity before debugging AI logic.
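A cheap reachability check rules out the most common causes before you touch the agent code (the helper below is a generic TCP probe, not part of MCP):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Cheap reachability check: can we open a TCP connection at all?
    Run this before blaming agent logic for a ConnectError."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Is anything listening where the client expects the server?
print(port_open("127.0.0.1", 8000))  # False if the server isn't running there
```

If this returns `False`, no amount of prompt or graph debugging will help: start the server, fix the port, or fix the protocol first.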
Lesson 8: Use Code for Math, LLM for Reasoning
Let code handle:
- RSI
- Moving averages
- Volatility
Let LLM handle:
- Interpretation
- Insights
Key takeaway:
Code computes. LLM explains.
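For instance, a moving average and a simple volatility measure are a few lines of deterministic Python; the LLM is then only asked to interpret the numbers. (These are the textbook definitions, not code from the original system.)

```python
def sma(prices, window):
    """Simple moving average: deterministic math belongs in code, not the LLM."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

def volatility(prices):
    """Population standard deviation of day-over-day returns."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return var ** 0.5

prices = [100.0, 102.0, 101.0, 105.0, 107.0]
print(sma(prices, 3))  # [101.0, 102.666..., 104.333...]
print(volatility(prices))
```

The point is reproducibility: the same prices always yield the same RSI or moving average, whereas an LLM doing arithmetic in-context does not come with that guarantee.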
Lesson 9: Workflows Beat Over-Agentic Design
Simple pipeline:
- Get data
- Compute indicators
- Analyze
- Respond
Key takeaway:
Use the simplest structure that solves the problem.
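The four steps above can be sketched as plain functions, with stubs standing in for the real data tool and the LLM call (everything concrete here, including the symbol and numbers, is illustrative):

```python
# The four-step pipeline as plain functions. Stubs stand in for the
# real Yahoo Finance tool and the LLM call.

def get_data(symbol):
    return [100.0, 102.0, 101.0, 105.0]  # stub for the data tool

def compute_indicators(prices):
    change = (prices[-1] - prices[0]) / prices[0]
    return {"change_pct": round(change * 100, 2)}

def analyze(indicators):
    # In the real system this is the LLM's job; here a rule stands in.
    return "uptrend" if indicators["change_pct"] > 0 else "downtrend"

def respond(symbol, indicators, verdict):
    return f"{symbol}: {indicators['change_pct']}% -> {verdict}"

# Get data -> compute indicators -> analyze -> respond
symbol = "AAPL"
ind = compute_indicators(get_data(symbol))
print(respond(symbol, ind, analyze(ind)))  # AAPL: 5.0% -> uptrend
```

No routing, no tool-calling loop, no agent deciding what to do next: for a fixed analysis task, a linear pipeline is easier to test and to debug.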
Lesson 10: Most AI Problems Are Systems Problems
Common real issues:
- Tool mismatch
- Transport errors
- Async bugs
- Endpoint confusion
Key takeaway:
The problem is often not the AI—it’s the system.
Final Thought
Building AI apps is about systems design, not just prompting.
Think in terms of:
- components
- contracts
- data flow
- reliability
Final takeaway:
Think like a systems builder, not just a prompt engineer.