
Building an Agent: The Art of Assembling the Right Building Blocks

Published:  at  10:00 AM

Simple or complex, adapted to your needs.

[Cover image: AlphaGo vs Lee Sedol, a story of human-AI interaction]

Introduction

Developing a high-performance AI agent is like building a house: you can’t just stack materials randomly. Each component has its role, strengths, and limitations. After more than 20 years of software development and extensive experimentation with AI, I’ve learned that a robust agent rests on six fundamental pillars: the languages and APIs that make it work, the orchestration that coordinates its actions, the models that give it intelligence, the telemetry that lets you understand its behavior, the storage that preserves its memory, and the runtime that determines where and how it executes.

This article guides you through these six domains and the tools available in each. No marketing, just hands-on feedback from real projects. Note: this is not an exhaustive list.

Quick Navigation

1. Languages & API
2. Orchestration
3. Models
4. Telemetry
5. Storage
6. Runtime & Hosting

1. Languages & API: The Foundations of Code

The choice of language and API frameworks determines development velocity, maintainability, and your agent’s performance. Here’s my toolkit.

Python {#python}

Description: Python is the reference language for AI, with the richest ecosystem of machine learning and data processing libraries.

www.python.org

Advantages:

When to use: For complex data pipelines, model training, analysis scripts, and when you need Python’s scientific ecosystem.

↑ Back to navigation

TypeScript {#typescript}

Description: TypeScript is my language of choice for production agents. JavaScript with static typing, offering robustness and excellent developer experience.

www.typescriptlang.org

Advantages:

↑ Back to navigation

FastAPI {#fastapi}

Description: FastAPI is a modern Python framework for creating ultra-fast RESTful APIs with automatic validation and interactive documentation.
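
As a rough sketch, here is what a minimal FastAPI endpoint wrapping an agent call could look like (the `/ask` route and `Query` model are purely illustrative):

```python
# Minimal FastAPI sketch: one endpoint that could front an agent call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

@app.post("/ask")
async def ask(query: Query):
    # Replace this stub with your agent / LLM call.
    return {"answer": f"You asked: {query.question}"}

# Run with: uvicorn main:app --reload
```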

fastapi.tiangolo.com

Advantages:

↑ Back to navigation

Fastify {#fastify}

Description: Fastify is an ultra-performant Node.js web framework, focused on speed and low resource costs.

www.fastify.io

Advantages:

↑ Back to navigation

Webhooks {#webhooks}

Description: Webhooks are an event-driven pattern for asynchronous, decoupled communication between systems: when something happens, the sender pushes an HTTP request to an endpoint you expose.
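
As an illustration, here is a hypothetical webhook receiver built with FastAPI; the header name, shared secret, and event shape are assumptions, the point is the signature check and the decoupled hand-off:

```python
# Hypothetical webhook receiver: header name, secret, and payload are illustrative.
import hashlib
import hmac

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
SECRET = b"shared-secret"  # agreed with the sender out of band

@app.post("/webhooks/agent-events")
async def receive_event(request: Request):
    body = await request.body()
    signature = request.headers.get("X-Signature", "")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise HTTPException(status_code=401, detail="Invalid signature")
    event = await request.json()
    # Hand the event off to your agent asynchronously (queue, background task, ...).
    return {"received": event.get("type")}
```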

en.wikipedia.org/wiki/Webhook

Advantages:

↑ Back to navigation

MCP (Model Context Protocol) / Tools {#mcp-model-context-protocol}

Description: MCP (Model Context Protocol) is an emerging protocol, developed by Anthropic, that standardizes how LLMs interact with external tools and data sources.
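
A minimal sketch of an MCP server exposing one tool, assuming the official Python SDK (`mcp` package); the protocol is young, so check the current SDK docs before relying on this:

```python
# Sketch of an MCP server with a single tool, following the Python SDK quickstart.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```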

modelcontextprotocol.io

Advantages:

↑ Back to navigation

2. Orchestration: The Conductor of Your Agents

Orchestration determines how your agents coordinate their actions, manage complex workflows, and maintain coherence. It’s the brain of your multi-agent system.

N8N {#n8n}

Description: N8N is a no-code/low-code automation platform for creating visual workflows connecting hundreds of services.

n8n.io

Advantages:

↑ Back to navigation

LangChain {#langchain}

Description: LangChain is a Python/TypeScript framework for developing LLM-driven applications with prompt chains and agents.
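
A minimal LCEL sketch, assuming the `langchain-core` and `langchain-openai` packages and an illustrative model name:

```python
# Compose prompt -> model -> output parser into a single runnable chain.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is illustrative
prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain lets you compose LLM calls as chains."}))
```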

www.langchain.com

Advantages:

↑ Back to navigation

LangGraph {#langgraph}

Description: LangGraph is a LangChain extension for creating agents with workflows as directed graphs, offering fine control over execution flows.
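
A small sketch of a two-node graph; the state fields and node logic are placeholders for real retrieval or tool calls:

```python
# Two-node LangGraph workflow: research -> review, with a typed shared state.
from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    question: str
    answer: str

def research(state: State) -> dict:
    # Call tools / retrieval here; stubbed for the sketch.
    return {"answer": f"Draft answer for: {state['question']}"}

def review(state: State) -> dict:
    return {"answer": state["answer"] + " (reviewed)"}

graph = StateGraph(State)
graph.add_node("research", research)
graph.add_node("review", review)
graph.set_entry_point("research")
graph.add_edge("research", "review")
graph.add_edge("review", END)

app = graph.compile()
print(app.invoke({"question": "What is LangGraph?", "answer": ""}))
```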

langchain-ai.github.io/langgraph

Advantages:

↑ Back to navigation

CrewAI {#crewai}

Description: CrewAI is a Python framework for creating collaborative AI agent teams with defined roles, objectives, and processes.
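
A minimal sketch of a two-agent crew, assuming the default sequential process and an LLM configured through environment variables; the roles and tasks are illustrative:

```python
# Two agents, two tasks, run sequentially by a Crew.
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Collect key facts on a topic",
    backstory="A methodical analyst who cites sources.",
)
writer = Agent(
    role="Writer",
    goal="Turn the research into a short summary",
    backstory="A concise technical writer.",
)

research_task = Task(
    description="Research the pros and cons of vector databases.",
    expected_output="A bullet list of facts",
    agent=researcher,
)
writing_task = Task(
    description="Write a three-sentence summary from the research.",
    expected_output="A short paragraph",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
print(crew.kickoff())
```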

www.crewai.com

Advantages:

↑ Back to navigation

Pydantic AI {#pydantic-ai}

Description: Pydantic AI is a lightweight framework that uses Pydantic for data validation and structuring LLM outputs.
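
A rough sketch of a typed agent; note that the structured-output parameter and the result attribute have been renamed across Pydantic AI releases, so treat the names below as assumptions and check the docs for your version:

```python
# Agent whose output is validated against a Pydantic model.
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str

# Recent versions use output_type / result.output; older ones used
# result_type / result.data. Adjust to the release you install.
agent = Agent("openai:gpt-4o-mini", output_type=CityInfo)

result = agent.run_sync("Which city hosted the 2024 Olympics?")
print(result.output)  # -> CityInfo(city='Paris', country='France')
```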

ai.pydantic.dev

Advantages:

↑ Back to navigation

Custom Orchestration {#sur-mesure-custom-orchestration}

Description: Building your own orchestration layer, tailored exactly to your specific needs. This is the approach behind my Poulpikan framework.
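
To make the idea concrete, here is a deliberately bare-bones orchestration loop; `call_llm()` and the `TOOLS` registry are hypothetical stand-ins, the point is that you own the control flow:

```python
# A minimal custom agent loop: the LLM decides, tools observe, you stay in control.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(stub) search results for: {query}",
}

def call_llm(context: str) -> dict:
    # Replace with a real LLM call returning {"action": ..., "input": ...}.
    return {"action": "final_answer", "input": f"Answer based on: {context}"}

def run_agent(question: str, max_steps: int = 5) -> str:
    context = question
    for _ in range(max_steps):
        decision = call_llm(context)
        if decision["action"] == "final_answer":
            return decision["input"]
        tool = TOOLS.get(decision["action"])
        observation = tool(decision["input"]) if tool else "unknown tool"
        context += f"\nObservation: {observation}"
    return "Step budget exhausted."

print(run_agent("What changed in the latest pgvector release?"))
```

In production you would add retries, telemetry, and guardrails, but the skeleton stays this small.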

Advantages:

↑ Back to navigation

3. Models: The Intelligence of Your Agents

The model you choose determines your agent’s cognitive capabilities. Each model has its strengths: some excel at reasoning, others at speed or visual understanding.

LLM (Large Language Models) {#llm-large-language-models}

Description: Generative language models capable of understanding and generating human text. The foundation of any conversational agent.
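
In practice, most providers expose an OpenAI-compatible chat API, so a minimal call looks like this (the model name is illustrative and the API key is read from the environment):

```python
# Generic chat-completion call; works with any OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what an AI agent is in two sentences."},
    ],
)
print(response.choices[0].message.content)
```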

Advantages:

↑ Back to navigation

Vision {#vision}

Description: Models capable of analyzing and understanding images, describing them, or extracting structured information from them.
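
A sketch of sending an image to a vision-capable model through the same OpenAI-style chat API; the model name and image URL are placeholders:

```python
# Multimodal message: text instruction plus an image URL.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image and list any visible text."},
            {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```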

Advantages:

↑ Back to navigation

Embedding {#embedding}

Description: Models that transform text into numerical vectors capturing semantic meaning, essential for search and similarity comparison.
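
A small sketch: embed two texts and compare them with cosine similarity (the embedding model name is illustrative):

```python
# Embed two texts and measure how semantically close they are.
import math

from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(embed("reset my password"), embed("I forgot my login credentials")))
```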

Advantages:

↑ Back to navigation

Image Generation {#image-generation}

Description: Generative models capable of creating images from textual descriptions (text-to-image).

Advantages:

↑ Back to navigation

Audio {#audio}

Description: Models for transcription (speech-to-text), voice synthesis (text-to-speech), and audio analysis.

Advantages:

↑ Back to navigation

Reranker {#reranker}

Description: Specialized models for reordering search results according to their actual relevance to a query, drastically improving RAG system quality.
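
A sketch using a local cross-encoder from sentence-transformers; hosted rerankers follow the same query-plus-candidates pattern, and the checkpoint name below is just a common public model:

```python
# Rerank retrieved chunks by scoring each (query, document) pair.
from sentence_transformers import CrossEncoder

query = "How do I rotate an API key?"
candidates = [
    "Keys can be rotated from the dashboard settings page.",
    "The office is closed on public holidays.",
    "API keys expire after 90 days unless renewed.",
]

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc) for doc in candidates])
ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.3f}  {doc}")
```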

Advantages:

↑ Back to navigation

4. Telemetry: Observing to Understand and Improve

Without telemetry, your agent is a black box. Measuring, tracing, and analyzing are essential for debugging, optimizing, and monitoring in production.

LangSmith {#langsmith}

Description: LangSmith is a development and monitoring platform for LLM applications, by the creators of LangChain.

www.langchain.com/langsmith

Advantages:

↑ Back to navigation

Langfuse {#langfuse}

Description: Langfuse is an open-source alternative to LangSmith, offering observability and analytics for LLM applications, with a self-hosting option.

langfuse.com

Advantages:

↑ Back to navigation

Phoenix {#phoenix}

Description: Phoenix is an open-source ML observability platform specialized in tracing and evaluating LLM and embedding systems.

arize.com/phoenix

Advantages:

↑ Back to navigation

5. Storage: The Memory of Your Agents

An agent without memory cannot learn or maintain context between sessions. Storage choice impacts performance, scalability, and your agent’s capabilities.

PostgreSQL {#postgresql}

Description: PostgreSQL is a robust, proven open-source relational database, with the pgvector extension for vector storage.

www.postgresql.org

Advantages:

↑ Back to navigation

pgvector {#pgvector}

Description: pgvector is a PostgreSQL extension allowing efficient storage and search of embedding vectors directly in Postgres.
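
A minimal sketch with psycopg; the connection string is a placeholder, and vectors are written as string literals to keep the example dependency-free (in real code you would use parameters and the pgvector Python adapter):

```python
# Create a vector column, insert an embedding, and run a nearest-neighbour query.
import psycopg

conn = psycopg.connect("dbname=agent user=agent")  # placeholder connection details
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
conn.execute(
    "CREATE TABLE IF NOT EXISTS memories ("
    "id bigserial PRIMARY KEY, content text, embedding vector(3))"
)
conn.execute(
    "INSERT INTO memories (content, embedding) "
    "VALUES ('user prefers short answers', '[0.12, 0.98, 0.34]')"
)
# <-> is the L2 distance operator provided by pgvector.
rows = conn.execute(
    "SELECT content FROM memories ORDER BY embedding <-> '[0.10, 0.95, 0.30]' LIMIT 3"
).fetchall()
print(rows)
conn.commit()
```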

github.com/pgvector/pgvector

Advantages:

↑ Back to navigation

ChromaDB {#chromadb}

Description: ChromaDB is a lightweight open-source vector database, specifically designed for embeddings and similarity search.
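
A quick sketch with the embedded in-memory client and Chroma's default embedding function:

```python
# In-memory ChromaDB collection: add documents, then query by semantic similarity.
import chromadb

client = chromadb.Client()  # in-memory; use PersistentClient(path=...) to persist
collection = client.get_or_create_collection("docs")

collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "pgvector stores embeddings inside PostgreSQL.",
        "Fastify is a fast Node.js web framework.",
    ],
)
results = collection.query(query_texts=["vector search in Postgres"], n_results=1)
print(results["documents"])
```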

www.trychroma.com

Advantages:

↑ Back to navigation

Pinecone {#pinecone}

Description: Pinecone is a managed vector database, optimized for similarity search at very large scale.

www.pinecone.io

Advantages:

↑ Back to navigation

Supabase {#supabase}

Description: Supabase is an open-source alternative to Firebase, based on PostgreSQL, offering database, auth, storage and auto-generated APIs.

supabase.com

Advantages:

↑ Back to navigation

SQLite {#sqlite}

Description: SQLite is an embedded, lightweight, serverless SQL database, stored in a simple file.
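
A sketch of a tiny conversation memory using only the Python standard library:

```python
# Persist and replay a session's messages with nothing but sqlite3.
import sqlite3

conn = sqlite3.connect("agent_memory.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, session TEXT, role TEXT, content TEXT)"
)
conn.execute(
    "INSERT INTO messages (session, role, content) VALUES (?, ?, ?)",
    ("session-42", "user", "Remind me what we decided about pricing."),
)
conn.commit()

history = conn.execute(
    "SELECT role, content FROM messages WHERE session = ? ORDER BY id",
    ("session-42",),
).fetchall()
print(history)
```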

www.sqlite.org

Advantages:

↑ Back to navigation

6. Runtime & Hosting: Where the Agent’s Heart Beats

The choice of inference infrastructure is the ultimate trade-off between latency, cost, and sovereignty. It is often overlooked at design time, yet it determines the final user experience and your monthly bill.

OpenRouter {#openrouter}


Description: OpenRouter is a unified API giving access to almost all models on the market (OpenAI, Anthropic, but also Groq, Together, and dozens of others).
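
Because the API is OpenAI-compatible, switching to OpenRouter is mostly a matter of changing the base URL and API key; the model slug below is illustrative:

```python
# Call any OpenRouter-hosted model through the standard OpenAI client.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # illustrative slug; swap freely
    messages=[{"role": "user", "content": "One-line summary of what OpenRouter does."}],
)
print(response.choices[0].message.content)
```

The same pattern applies to most providers in this section: only the base URL, key, and model slug change.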

openrouter.ai

Advantages:

↑ Back to navigation

Local + Open Source Hosting {#local-open-source-hosting}

For those who want to keep total control of their infrastructure and data, these solutions allow you to host your own open-source models.

Together AI {#together-ai}

Description: Together AI is a cloud inference platform specialized in open-source models, with an excellent performance/price ratio.

www.together.ai

Advantages:

↑ Back to navigation

Fireworks AI {#fireworks-ai}

Description: Fireworks AI is a fast and economical inference platform for open-source models, with a focus on speed and fine-tuning.

fireworks.ai

Advantages:

↑ Back to navigation

DeepInfra {#deepinfra}

Description: DeepInfra is an economical inference service for open-source models, optimized for large volumes.

deepinfra.com

Advantages:

↑ Back to navigation

Groq {#groq}


Description: Groq is an ultra-fast inference infrastructure based on LPUs (Language Processing Units), optimized specifically for transformers.

groq.com

Advantages:

↑ Back to navigation

French Providers {#french-providers}

For projects requiring GDPR compliance, data sovereignty, and French support, these French players offer robust solutions.

Scaleway (Generative APIs) {#scaleway}

Description: Scaleway Generative APIs is a robust French offering for deploying open-source models on sovereign infrastructure with transparent pricing.

www.scaleway.com/en/ai

Advantages:

↑ Back to navigation

OVHcloud (AI Endpoints) {#ovhcloud}

Description: OVHcloud AI Endpoints is an inference service from the European cloud leader, ideal for infrastructures already on OVH.

www.ovhcloud.com/en/public-cloud/ai-machine-learning

Advantages:

↑ Back to navigation

Mistral AI (The Platform) {#mistral-ai}

Description: Mistral AI offers direct access to the models of the French AI champion, with European hosting options.

mistral.ai

Advantages:

↑ Back to navigation

Big Hyperscalers {#big-hyperscalers}

The “enterprise” choice: security, deep integration, and contractual SLA guarantees for large organizations.

AWS Bedrock {#aws-bedrock}

Description: AWS Bedrock is Amazon’s managed inference service, with a focus on security and data isolation.

aws.amazon.com/bedrock

Advantages:

↑ Back to navigation

Google Vertex AI {#vertex-ai}

Description: Google Vertex AI is a complete ML/AI platform from Google, giving access to the Gemini family and advanced ML tools.

cloud.google.com/vertex-ai

Advantages:

↑ Back to navigation

Azure OpenAI {#azure-openai}

Description: Azure OpenAI is an OpenAI service hosted on Azure, with Microsoft enterprise guarantees.

azure.microsoft.com/en-us/products/ai-services/openai-service

Advantages:

↑ Back to navigation

Cloudflare Workers AI {#cloudflare-workers-ai}

Description: Cloudflare Workers AI is an inference service on Cloudflare’s edge network, running models as close to users as possible.

developers.cloudflare.com/workers-ai

Advantages:

↑ Back to navigation

Conclusion: Choose, Assemble, Iterate

Building an agent isn’t about choosing the most hyped technologies, it’s about assembling the right building blocks to solve your specific problem. Each project has its constraints: budget, expected scalability, team skills, data criticality, required development speed.

My advice after hundreds of hours developing agents: start simple. TypeScript + Claude Sonnet 4.5 + PostgreSQL + Langfuse + Groq (or OpenRouter for flexibility) covers 80% of use cases. Then, add complexity only when the need is proven.

Runtime choice is often underestimated: Groq will transform your voice agent’s user experience, Mistral AI + Scaleway will secure your European compliance, AWS Bedrock will reassure your CISO. It’s not a technical detail, it’s a product decision.

And above all, don’t be afraid to write custom code. Frameworks are tools, not dogmas. Sometimes, 200 well-thought-out lines of TypeScript beat 2,000 lines of poorly understood abstractions.

At HeyIntent, this philosophy guides my projects: understand the business need, choose the right building blocks, build robust, measure, iterate. No magic, just engineering.


Need a custom agent for your project? I’m available for technical collaborations. Contact me to discuss.


