Data and AI projects are not meant to stay on slides.

From strategy to production, I design and deploy robust AI systems integrated into operations.

Data · AI · Product Architecture

Trust signals

Method, governance and delivery discipline; no marketing claims.

Delivery discipline

Every engagement starts with structured framing: objectives, constraints, risks and acceptance criteria. Architecture is documented (ADRs, threat model, standards). Operations include observability, incident management and systematic post-mortems.

Security & governance

Least-privilege access (RBAC), audit trails and data classification at every layer. Integrated LLM safety: prompt evaluation, lightweight red teaming. GDPR compliance by design where applicable.

Artifacts (on request)

Statement of Work and delivery plan; architecture diagram and runbook; risk register and DPIA checklist; FinOps cost model and capacity assumptions.


What I do

A value-focused approach from strategy to production delivery.

Data Engineering

Reliable pipelines, data quality, controlled costs. From source to value, cloud or on-prem.

Build robust, observable pipelines

Model data for business analysis and decision-making

Improve data quality and flow governance

Outcomes

Production pipelines · Data models · Quality monitoring

See details

Why

Poorly structured data upstream costs 10x more to fix downstream. The data foundation determines everything that follows.

What it involves

Audit of the existing setup, pipeline redesign, quality and observability implementation. Cloud or on-premise depending on constraints.

What you get

Reliable pipelines in production, controlled costs, teams autonomous on the data stack.

Back

AI Engineering

From POC to AI product: LLMs, RAG, agents, evaluation, MLOps. Focus on robustness and security.

Integrate ML/LLM models into operational workflows

Set up core MLOps practices for reliability

Design AI use cases with clear user impact

Outcomes

Validated use cases · Operational AI pipeline · Deployment framework

See details

Why

A POC that works in a demo and an AI system in production are two different things. The distance between the two is engineering.

What it involves

Full system design: model, orchestration, evaluation, MLOps, security. From prototype to deployment on your infrastructure.

What you get

An operational AI system, observable, secure and maintainable by your teams.

Back

Product & Architecture

Target architecture, tech choices, scalable patterns. Align tech with business needs and constraints (privacy, security).

Shape architecture around real product and business needs

Drive technical choices with maintainability in mind

Structure product and data building blocks coherently

Outcomes

Architecture blueprint · Technical decisions · Design standards

See details

Why

Most AI failures don't come from the model. They come from an architecture that doesn't hold under load, or a product with no real user grounding.

What it involves

Target architecture definition, technical trade-offs, business alignment and regulatory constraints (GDPR, security).

What you get

An architecture that holds in production, documented decisions, a product that matches a real need.

Back

Strategy & Delivery

Framing, roadmap, MVP, iterative delivery. Pragmatic leadership to production and beyond.

Prioritize data and AI initiatives around business constraints

Build a roadmap focused on measurable impact

Track decisions and execution with clear ownership

Outcomes

Prioritized roadmap · Delivery plan · Steering KPIs

See details

Why

An AI roadmap without measurable success criteria is not a roadmap. It's budget spent with no return.

What it involves

Priority framing, KPI definition, iterative steering to production. No slide without a concrete deliverable attached.

What you get

Decisions made, delivery that moves forward, measurable results at each step.

Back

Projects & Prototypes

Deliveries and architectures in progress.

Protected

Enterprise RAG System

Production-ready RAG system to centralize and query internal documentation, with role-based access control, observability and GDPR compliance.

Python · LangChain · AWS · Chroma · OpenAI · Prometheus

Detailed case study available upon direct contact.

Protected project. Unlock on request

Protected

Data Platform Lakehouse

Migration from a legacy warehouse to a modern Lakehouse architecture, cutting infrastructure costs by 40% and refreshing analytical data three times faster.

Snowflake · dbt · Airflow · Python · Grafana · Great Expectations

Detailed case study available upon direct contact.

Protected project. Unlock on request

Protected

SkillOS

Next-gen competency-driven LMS: skills graph, continuous mastery scoring, adaptive remediation, and institution dashboards for quality and employability.

Next.js · PostgreSQL · Python · LangChain

Click to see details ↓

Why

Classic LMSs measure attendance, not mastery. Institutions fly blind: no reliable signal on what students can actually do.

How

Competency graph + adaptive assessment engine. Interactive sessions, continuous scoring, targeted remediation. Open API to integrate with existing infrastructure.
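The scoring logic described above can be sketched minimally: a skill graph with prerequisites, a mastery score updated continuously from assessment outcomes, and remediation targeting weak prerequisites. This is an illustrative sketch under stated assumptions; all names (`Skill`, `update_mastery`, the EMA weight, the 0.6 threshold) are hypothetical, not the product's actual API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a competency graph with continuous mastery
# scoring. All names and constants are hypothetical placeholders.

@dataclass
class Skill:
    name: str
    prerequisites: list = field(default_factory=list)
    mastery: float = 0.0  # in [0, 1], updated continuously

def update_mastery(skill: Skill, correct: bool, alpha: float = 0.3) -> float:
    """Exponential moving average over assessment outcomes."""
    skill.mastery = (1 - alpha) * skill.mastery + alpha * (1.0 if correct else 0.0)
    return skill.mastery

def remediation_targets(skill: Skill, threshold: float = 0.6) -> list:
    """Prerequisites below the mastery threshold: remediate these first."""
    return [p.name for p in skill.prerequisites if p.mastery < threshold]

# Usage: 'algebra' is a weak prerequisite of 'calculus'
algebra = Skill("algebra", mastery=0.4)
calculus = Skill("calculus", prerequisites=[algebra])
update_mastery(calculus, correct=True)
print(remediation_targets(calculus))  # ['algebra']
```

The EMA keeps the score responsive to recent performance while damping noise from individual questions; a production system would also weight item difficulty.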

Target outcome

Visible skill-by-skill progress. Success rate up 20-35%. Native monetization via validated prerequisites and certified pathways.

← Back

Protected

Crowd Flow Optimizer

Real-time prediction and orchestration engine to optimize crowd flows and reduce congestion points during mega-events.

Kafka · Flink · Python · Kubernetes

Click to see details ↓

Why

Crowd accidents are predictable. Current tools react after the fact: cameras and human agents don't scale to 100,000 people.

How

Kafka + Flink streaming pipeline ingesting IoT sensors, cameras and ticketing data. Predictive density and hotspot model. Real-time operator dashboard + automated alerts.
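The hotspot-detection step above can be sketched in isolation: bucket position readings into grid cells and flag any cell whose density exceeds a threshold. The cell size, threshold and data shape here are illustrative assumptions, not the production model, which runs as a streaming job rather than a batch function.

```python
from collections import Counter

# Minimal sketch of density hotspot detection: bucket (x, y) readings
# into grid cells, flag cells above a density threshold. Cell size and
# threshold are illustrative assumptions.

def hotspots(positions, cell_size=10.0, max_per_cell=4):
    """positions: (x, y) readings in metres; returns overcrowded cells."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in positions
    )
    return {cell: n for cell, n in counts.items() if n > max_per_cell}

readings = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 9), (25, 25)]
print(hotspots(readings))  # {(0, 0): 5}
```

In the streaming version the same aggregation runs over sliding time windows, so a cell is flagged on sustained density rather than a single snapshot.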

Target outcome

40% reduction in congestion incidents. Operator response time cut threefold. Strengthened regulatory compliance for events.

← Back

Protected

M&A Due Diligence Agent

Full AI agent for M&A due diligence: document analysis, compliance scoring, entity extraction and sector synthesis in reduced time.

Python · spaCy · Neo4j · LangChain

Click to see details ↓

Why

Traditional due diligence takes 6-12 weeks, mobilizes 10+ lawyers and analysts, and still misses critical signals buried in 50,000 pages of documents.

How

OCR pipeline + entity extraction (spaCy/NER) + Knowledge Graph (Neo4j) to map entity relationships. LLM agent for synthesis, risk scoring and anomaly detection. Output: structured report with source citations.
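The entity-relationship mapping step above can be illustrated with a toy, self-contained sketch: here a regex stands in for the spaCy NER model and a dict-of-sets for the Neo4j graph, linking entities that co-occur in the same document. The pattern and co-mention relation are illustrative assumptions, not the actual pipeline.

```python
import re
from collections import defaultdict

# Toy sketch of entity-relationship mapping. A regex stands in for the
# NER model and a dict-of-sets for the graph database; the pattern and
# the co-mention relation are illustrative assumptions.

ENTITY = re.compile(r"\b[A-Z][a-zA-Z]+ (?:Ltd|Inc|GmbH|SA)\b")

def co_mention_graph(documents):
    """Link entities that appear in the same document."""
    graph = defaultdict(set)
    for doc in documents:
        entities = set(ENTITY.findall(doc))
        for entity in entities:
            graph[entity] |= entities - {entity}
    return dict(graph)

docs = ["Acme Ltd acquired Beta GmbH in 2021.",
        "Beta GmbH supplies Gamma Inc."]
print(co_mention_graph(docs))
```

The real pipeline replaces both stand-ins: a trained NER model extracts typed entities from OCR output, and typed relationships (ownership, supply, litigation) are written to the graph so the LLM agent can traverse them with full source traceability.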

Target outcome

Due diligence reduced from 6 weeks to 5 days. 100% document coverage vs 60% manually. Critical risks identified with full traceability.

← Back

Protected

Smart City Command Center

Real-time urban supervision platform centralizing IoT, mobility, energy and security to operate a smart city from a unified dashboard.

Kafka · Flink · Python · Grafana

Click to see details ↓

Why

Cities accumulate sensors without ever connecting them. The result: data silos, reactive decisions, and incidents detected too late.

How

Multi-source real-time pipeline (IoT, traffic, energy, video) on Kafka + Flink. Unified semantic layer. Operator dashboard with predictive alerts and scenario simulation.

Target outcome

Unified real-time city view. 35% reduction in incident response time. Measurable energy optimization from month one.

← Back

A project. A constraint. An outcome to define together.

I take engagements where impact is measurable and decision-makers are aligned. If that is your situation, let's talk.