How to use LangChain and LangGraph for Agentic AI
A step-by-step guide to building a context-aware agent that fetches real-time data and deploying it for real-world use cases.
Feb 24, 2025 • 6 Minute Read

Artificial Intelligence (AI) has advanced to a point where we can create systems that think and act autonomously. These systems, referred to as Agentic AI, are capable of reasoning, decision-making, and tool usage in real-world tasks. Imagine a system that not only provides weather updates but also remembers what you’ve previously asked and suggests related information—this is Agentic AI in action.
However, building such systems isn’t straightforward. Large Language Models (LLMs) like OpenAI’s GPT are powerful but limited. They excel at generating text but lack memory and cannot interact directly with external tools or systems. This is where frameworks like LangChain and LangGraph shine, enabling developers to add memory to AI models to retain context across interactions, integrate tools like APIs for real-time data retrieval, and create workflows for complex, multi-step tasks.
This article will guide you through:
- Understanding the limitations of LLMs and transformers.
- Exploring how LangChain and LangGraph solve these limitations.
- Building a weather chatbot agent capable of memory retention and API integration.
- Expanding the system for production use and comparing it to alternatives.
Foundational concepts: Large Language Models and Transformers
What are LLMs?
Large Language Models (LLMs), such as OpenAI’s GPT, are trained on vast datasets of human language. They are designed to predict the next word in a sequence, which enables them to generate coherent text, answer questions intelligently, and perform complex reasoning tasks, like writing code or summarizing documents.
For example, GPT-3 can generate an entire essay when prompted with a single sentence. However, while LLMs are intelligent, they operate in a stateless manner—processing each interaction as if it’s the first.
Transformers: The brain behind LLMs
Transformers are the neural network architecture powering LLMs. Introduced in the 2017 paper “Attention Is All You Need”, transformers use self-attention mechanisms to understand the relationships between words in a sequence. Imagine reading a novel. Instead of reading linearly, you flip back and forth between pages to understand the plot better. Transformers do the same, focusing on the most relevant parts of the input to generate meaningful outputs.
For a deeper dive into the inner workings of transformers, check out Kesha Williams' article, "What are transformers in Generative AI?"
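To make self-attention concrete, here is a toy NumPy sketch of the scaled dot-product attention formula, softmax(QK^T / sqrt(d)) V, that sits at the heart of a transformer. The dimensions and random weights are arbitrary stand-ins; real models learn these weights and run many attention heads in parallel.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project tokens into query, key, and value spaces
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Score every token against every other token, scaled by sqrt(d)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Each token's output is a weighted blend of all value vectors
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                               # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)                # (4, 8)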
Limitations of LLMs
Despite their strengths, LLMs have several shortcomings:
- No Memory: LLMs cannot recall past conversations or build upon earlier inputs.
- No Tool Integration: They cannot fetch real-time data or interact with APIs.
- Workflow Gaps: They lack the ability to perform multi-step reasoning or coordinate complex tasks.
These gaps are significant when building AI systems meant for dynamic, real-world applications. For instance, a chatbot without memory cannot provide context-aware responses, and an LLM without tool integration cannot retrieve live weather updates.
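You can see the missing memory for yourself. In this minimal sketch (using the same ChatOpenAI class the tutorial sets up later), each call is an independent request, so the second answer has no access to the first:

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", openai_api_key="YOUR_OPENAI_API_KEY")

print(llm.predict("Hi! My name is Dana."))
print(llm.predict("What is my name?"))  # the model can't say; each call starts from scratch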
Introducing LangChain and LangGraph
LangChain: Adding superpowers to LLMs
LangChain is a Python framework that extends the capabilities of LLMs. It enables developers to:
- Add memory: Retain conversation history and context over time.
- Integrate tools: Enable LLMs to interact with external APIs, databases, or functions.
- Create modular workflows: Build sequences of tasks, like asking a question, retrieving data, and generating a response.
LangChain can be thought of as the brain’s executive assistant: it keeps track of what has already been said or done, knows when to call external tools (e.g., fetching weather data), and manages tasks in an organized, modular way, ensuring that each step leads to the desired output.
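Here is the memory piece in isolation, a minimal sketch using LangChain's ConversationBufferMemory: each turn is saved as an input/output pair, and the accumulated history is replayed into the prompt on the next turn.

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Record two turns of conversation
memory.save_context({"input": "What's the weather in Paris?"},
                    {"output": "It's 18°C and sunny."})
memory.save_context({"input": "And tomorrow?"},
                    {"output": "Expect light rain."})

# Everything stored so far, ready to be injected into the next prompt
print(memory.load_memory_variables({})["chat_history"])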
For more information on LangChain fundamentals, check out Amber Israelsen's article: "Getting started with LangChain: How to run your first application."
LangGraph: Visualizing workflows
While LangChain executes workflows, LangGraph provides a way to visualize and debug them. It represents workflows as directed graphs, with nodes (representing tasks like fetching data or generating text) and edges (representing dependencies between tasks).
LangGraph acts as a blueprint architect:
- It visually maps out tasks and their relationships.
- It simplifies debugging by showing how data flows through the system.
- It ensures scalability by making workflows modular and adaptable.
For example, if LangChain organizes a workflow for fetching weather data, LangGraph shows the steps as a graph: user input → fetch weather → generate response.
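Here's a minimal sketch of that exact graph using the langgraph package. The weather node is stubbed out so the example stays self-contained; in the full agent it would call the API functions built in Step 2 below.

from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    city: str
    weather: str
    response: str

def fetch_weather(state: State) -> dict:
    # Stub node; a real implementation would call a weather API here
    return {"weather": f"22°C and clear in {state['city']}"}

def generate_response(state: State) -> dict:
    return {"response": f"Here's your update: {state['weather']}"}

graph = StateGraph(State)
graph.add_node("fetch_weather", fetch_weather)
graph.add_node("generate_response", generate_response)
graph.set_entry_point("fetch_weather")                 # user input → fetch weather
graph.add_edge("fetch_weather", "generate_response")   # fetch weather → generate response
graph.add_edge("generate_response", END)

app = graph.compile()
print(app.invoke({"city": "Berlin"})["response"])

Once compiled, the graph can also be rendered for inspection (for example, via app.get_graph().draw_mermaid()), which is where LangGraph's debugging value shows up.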
Building a weather chatbot agent
Now that you know what LangChain and LangGraph are, let's get into the actual hands-on learning! Following the steps below, we're going to build an agent that meets the following criteria:
- Retains memory to provide context-aware responses.
- Fetches real-time weather data using OpenWeatherMap’s API.
- Combines LangChain tools and workflows for seamless interaction.
Step 1: Setting up LangChain
We’ll initialize OpenAI’s GPT model and LangChain’s memory system.
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
# OpenAI API Key
openai_api_key = "YOUR_OPENAI_API_KEY"
# Initialize the chat model
llm = ChatOpenAI(model="gpt-3.5-turbo", openai_api_key=openai_api_key)
# Add memory to retain conversation context
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
So, what's going on here? Let me explain:
- ChatOpenAI: Connects to OpenAI’s GPT model for natural language generation.
- ConversationBufferMemory: Stores the user-agent conversation history to provide context across multiple turns.
Step 2: Fetch weather data
We’ll use OpenWeatherMap’s geocoding API to resolve a city name into coordinates, then its One Call API to retrieve the current conditions.
import requests
OPENWEATHER_API_KEY = "YOUR_OPENWEATHER_API_KEY"
def get_city_coordinates(city: str):
    # Resolve the city name to latitude/longitude via the geocoding API
    url = f"http://api.openweathermap.org/geo/1.0/direct?q={city}&appid={OPENWEATHER_API_KEY}"
    response = requests.get(url)
    data = response.json()
    if data:
        return data[0]["lat"], data[0]["lon"]
    raise ValueError("City not found.")

def get_weather(city: str):
    # Fetch current conditions for those coordinates via the One Call API
    lat, lon = get_city_coordinates(city)
    url = f"https://api.openweathermap.org/data/3.0/onecall?lat={lat}&lon={lon}&appid={OPENWEATHER_API_KEY}&units=metric"
    response = requests.get(url)
    data = response.json()
    temp = data["current"]["temp"]
    condition = data["current"]["weather"][0]["description"]
    return f"The weather in {city} is {temp}°C with {condition}."
Step 3: Wrap weather functions into a tool
LangChain’s Tool class integrates external functions for seamless interaction.
from langchain.tools import Tool
weather_tool = Tool(
    name="Weather Fetcher",
    func=get_weather,
    description="Fetches the current weather for a specified city."
)
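Tools can be exercised directly, which is a handy debugging step before any agent is involved:

# Equivalent to calling get_weather("Paris") directly
print(weather_tool.run("Paris"))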
Step 4: Initialize the LangChain Agent
We combine the tools, LLM, and memory into a cohesive agent.
from langchain.agents import initialize_agent, AgentType
tools = [weather_tool]
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True
)
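At this point you can already chat with the agent locally. In this illustrative exchange, the follow-up question only works because ConversationBufferMemory carries the first turn into the second prompt:

print(agent.run("What's the weather in Tokyo?"))
print(agent.run("Should I bring an umbrella there?"))  # "there" is resolved from the remembered context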
Step 5: Deploy the chatbot with FastAPI
Deploy the agent as an API service using FastAPI.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
app = FastAPI()
class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(request: ChatRequest):
    try:
        response = agent.run(request.message)
        return {"response": response}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
Now, run the server (this assumes the code above is saved as app.py):
uvicorn app:app --reload
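With the server running, a small client script can exercise the endpoint (curl or any other HTTP client works just as well):

import requests

resp = requests.post(
    "http://127.0.0.1:8000/chat",
    json={"message": "What's the weather in Berlin?"},
)
print(resp.json())  # {"response": "The weather in Berlin is ..."}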
How to expand the agent for real-world use
Now that we've built our demo, how would we make this something we could deploy in an actual use case? Here are some steps I would suggest:
Add more tools
- Stock API: Fetch real-time stock prices (a sketch follows this list).
- Task Scheduler: Automate reminders and notifications.
- Document Summarizer: Summarize lengthy articles or PDFs.
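As a sketch of the pattern, here's what a second tool might look like. The get_stock_price helper is hypothetical (substitute your preferred market-data API); the point is that each new capability is just another Tool object in the list handed to the agent:

def get_stock_price(ticker: str) -> str:
    # Hypothetical helper; replace the placeholder with a real market-data call
    price = 123.45
    return f"{ticker.upper()} is trading at ${price:.2f}."

stock_tool = Tool(
    name="Stock Price Fetcher",
    func=get_stock_price,
    description="Fetches the latest price for a stock ticker."
)

tools = [weather_tool, stock_tool]  # the agent now picks whichever tool fits the question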
Scale the Infrastructure
- Use Redis or PostgreSQL for memory persistence (see the sketch below).
- Deploy with Docker for portability and Kubernetes for scalability.
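For example, swapping the in-process buffer for a Redis-backed message history keeps context across server restarts and gives each user their own session. A minimal sketch (exact import paths vary across LangChain versions):

from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import RedisChatMessageHistory

# One persistent history per user/session, stored in Redis instead of process memory
history = RedisChatMessageHistory(session_id="user-42", url="redis://localhost:6379/0")
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    chat_memory=history,
)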
LangChain and LangGraph vs Haystack vs LlamaIndex
- LangChain and LangGraph are best for combining memory, tool integration, and modular workflows. LangGraph's visualization ensures scalability and debugging.
- Haystack is best for document retrieval and Q&A. It has limited support for tool integration.
- LlamaIndex is ideal for indexing large datasets, and is not optimized for agentic workflows.
Real-world applications
Ultimately, an AI system is only as good as the business problems it solves. Here are some use cases you might use this solution for:
- Customer Support: Context-aware bots for resolving queries.
- E-Commerce: Personalized recommendations and assistance.
- Research Agents: Automating data summarization and insight extraction.
Conclusion
LangChain and LangGraph empower developers to create intelligent, modular, and scalable AI systems. By integrating memory, tool usage, and workflow visualization, these frameworks unlock new possibilities for building dynamic, real-world applications. From weather chatbots to autonomous research assistants, LangChain and LangGraph provide the foundation to move beyond static AI and into the realm of adaptive, interactive systems.
This comprehensive guide showed how to create a fully functional weather chatbot agent that combines the strengths of OpenAI's GPT, LangChain, and FastAPI. By leveraging these tools, developers can expand their projects to include features like stock tracking, personalized recommendations, and task automation.
The future of AI lies in systems that not only think but act intelligently—and frameworks like LangChain and LangGraph are leading the way. Start building your own Agentic AI systems today and be part of the transformation!
Want to learn more about LLMs, how they work, and how to apply them in practical, real-world scenarios? Check out Pluralsight's learning path on LLMs, including Axel Sirota's course on NLP and transformer models. For more on agentic frameworks specifically, such as LangChain, CrewAI, and AutoGen, check out Brian Letort's course on Agentic Frameworks.
Further AI tutorials by this author
- Bringing AI on-prem: How to use local models in LangChain
- Vector Databases: Building a local LangChain store in Python
- What is RAG: Definition, use cases, and how to implement it
- LLMs: Transfer Learning with TensorFlow, Keras, Hugging Face
- Ethical AI: How to make an AI with ethical principles using RLHF
- How to deploy an LLM for production use cases
- How to create a GenAI powered real-time data processing solution
- Creating a large language model from scratch: A beginner's guide