The Case for Sovereign and Frugal AI
This article was born from a philosophical conversation about the deep intentions guiding mAIstrow. Rather than a technical article, it is a reflection on the "why": why design a sovereign, frugal, and resilient AI, and how this approach constitutes a response to today's excesses.
A response to the excessive centralization of AI
At the heart of the mAIstrow philosophy lies a deep distrust of AI centralization, as embodied by the dominant approaches of Big Tech.
I worry about seeing large models concentrate power in the hands of a few players, just as search engines did before them. This centralization raises several problems:
Intentional bias and opacity. Current models, often trained to "correct" biases or align their answers with a certain morality, introduce intentional biases that can drift toward forms of censorship or manipulation. We already know that Google intends to introduce advertising into its responses. Today, I understand these corrections. But nothing guarantees they will not spiral out of control tomorrow.
Economic and geopolitical dependence. In an anxiety-inducing global context of trade wars, tariff hikes, and service disruptions, I want my clients not to be at the mercy of a sudden cost increase, customs restrictions, or a service shutdown decided thousands of kilometers away. mAIstrow is designed as a sovereign alternative.
My answer to these challenges: a local, transparent, and independent AI that gives control back to users.
Frugality and energy efficiency
I criticize the race toward LLM gigantism, which consumes enormous amounts of energy for often trivial tasks.
I like to say: who cares whether a model knows what the temperature was in Nicaragua in 1942? You just need to look it up on the internet. The "encyclopedic knowledge" of a model is an ecological absurdity. Why use an energy-hungry LLM for mathematical operations when a calculator solves them several orders of magnitude more efficiently?
On the other hand, a model that can formulate an idea, plan a project, and use tools to compensate for its limitations -- that is relevant.
Small Language Models (SLMs) are the future. They represent an approach that is:
- Ecological, by reducing energy consumption.
- Economical, by enabling SMEs and mid-sized companies to deploy AI solutions without prohibitive costs.
- Relevant, by focusing on specific tasks rather than trying to internalize everything.
Autonomy and transparency: giving power back to the user
My philosophy is deeply driven by a desire for autonomy. I want users to have full control over their AI, their data, and their rules.
Transparency. Users must understand how results are produced and be able to adjust behaviors without retraining an entire model. Ethical rules or anti-bias filters must be explicit and modifiable, unlike current approaches where biases are corrected opaquely.
Autonomy. Users must be able to define their own priorities, choose which SLMs to use, and adjust rules through simple configurations. In every case, the idea is to hand control back to the user. That is part of my philosophy and of my drive to make users completely independent.
I do not want a tool that traps its users. I want a tool that liberates them.
The SLM council: a modular and collaborative AI
One of the most original concepts in this philosophy is the "SLM council" -- a council of the wise. Several specialized SLMs work together:
- One SLM to check spelling.
- Another to filter biases or validate ethical rules.
- Yet another to plan or generate ideas.
This council is coordinated by an orchestrator that dynamically decides which SLMs to mobilize depending on the request. The system can even be recursive: a sub-council of SLMs handles the planning itself.
The advantage of this approach, compared with how today's models work, is that it is transparent: you can adjust it without retraining a model, and every step is explicit and configurable. It is almost a democratic AI architecture, where decisions are made collectively and remain adjustable.
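To make this more tangible, here is a minimal sketch of what such an orchestrator could look like. It is purely illustrative: the class names, roles, routing table, and placeholder members below are invented for the example, not the mAIstrow implementation; they only show the shape of the idea.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class CouncilMember:
    """One member of the council: a small model dedicated to a single role."""
    name: str                   # e.g. "spell-checker"
    role: str                   # e.g. "spelling", "ethics", "planning"
    run: Callable[[str], str]   # wraps a call to a local SLM


class Orchestrator:
    """Decides which council members to mobilize for a given request."""

    def __init__(self, members: List[CouncilMember], routing: Dict[str, List[str]]):
        self.members = {m.role: m for m in members}
        # The routing table is explicit and editable:
        # request type -> ordered list of roles to consult.
        self.routing = routing

    def handle(self, request_type: str, text: str) -> str:
        result = text
        for role in self.routing.get(request_type, []):
            member = self.members[role]
            # Each step is visible: it can be logged, audited, or disabled.
            result = member.run(result)
        return result


# Hypothetical usage with three placeholder members (identity functions
# standing in for real local SLM calls).
council = [
    CouncilMember("spell-checker", "spelling", lambda t: t),
    CouncilMember("ethics-filter", "ethics", lambda t: t),
    CouncilMember("planner", "planning", lambda t: t),
]
routing = {
    "draft_email": ["spelling", "ethics"],
    "project_plan": ["planning", "ethics", "spelling"],
}

orchestrator = Orchestrator(council, routing)
print(orchestrator.handle("draft_email", "Plese send the reprot tomorow."))
```

What matters here is that the routing table, not a retrained model, decides who speaks and in what order: changing the council's behavior means editing a few explicit lines rather than fine-tuning weights.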
Resilience and pragmatism
I want a system capable of surviving disruptions -- disconnections, failures, load spikes -- while remaining simple to deploy. But I am also pragmatic: you have to put food on the table. We start with what is quickly achievable: YAML or JSON for configuration, then we build the interface we need step by step.
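As an illustration of what "quickly achievable" can mean in practice, here is a sketch of such a configuration, loaded with PyYAML. The keys, model names, and rules are invented for the example; only the principle matters: a plain, human-editable file drives the council.

```python
import yaml  # PyYAML, assumed to be installed

# Hypothetical configuration: every routing choice and rule lives in a
# plain, human-editable file that can be read, diffed, and versioned.
CONFIG = """
council:
  spelling:
    model: local-slm-spelling
    enabled: true
  ethics:
    model: local-slm-ethics
    enabled: true
    rules:
      - no_personal_data_leaves_the_machine
      - flag_unsupported_claims
  planning:
    model: local-slm-planning
    enabled: true
routing:
  draft_email: [spelling, ethics]
  project_plan: [planning, ethics, spelling]
"""

config = yaml.safe_load(CONFIG)
print(config["routing"]["draft_email"])  # ['spelling', 'ethics']
```

Whether the final format is YAML, JSON, or something else, the point stays the same: behavior changes through configuration the user can open and edit, not through retraining.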
The user does not need all of this immediately. What I want is for them to have it eventually. But to get there, the user must buy into the concept and purchase the product. The idea is to be pragmatic, plain and simple.
A vision for an uncertain world
mAIstrow exists in a broader context: a world marked by economic, geopolitical, and environmental uncertainties. The project embodies a vision where AI:
- Protects data sovereignty, by staying local and independent from centralized clouds.
- Reduces the ecological footprint, through SLMs and frugal orchestration.
- Gives power back to users, by offering transparency and control.
I am not trying to compete with Big Tech on their turf. I am proposing a viable alternative, adapted to the realities of SMEs and mid-sized companies. An AI that respects the user, the environment, and economic constraints, while remaining performant and resilient.
This is responsible innovation. And this is what I want to build.