Generative AI and Software Liability
In 2026, the debate about generative AI is no longer technical. It is legal, organizational and strategic.
Automating code production does not automate responsibility. On the contrary: responsibility becomes more visible, more traceable, and more demanding.
What is really changing
Three European legal texts are shaping this shift:
- AI Act: governance, documentation, risk management
- Cyber Resilience Act: security throughout the entire software lifecycle
- New Product Liability Directive: software is explicitly a product
The message is clear: the focus will no longer be only on what broke, but on how it was designed.
What a judge looks at
A judge does not read your code. They read your professional behavior. They ask simple questions:
- Had you identified the risks?
- Did you have written rules?
- Were you able to explain your choices?
- Did you verify what the tool produced?
- Can you prove the timeline?
"Blind trust" in an automated system has historically been judged harshly. AI does not change this principle. It reinforces it.
Governance, not magic
This is not about giving up AI. It is about structuring its use:
- Written specifications
- Explicit governance rules
- Clear traceability
- Effective verifications
The value is no longer in the code. It is in mastering the process.
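To make "clear traceability" concrete, here is a minimal sketch of what an append-only provenance log for AI-assisted changes could look like. It is illustrative only: the field names, the log path, and the example values are assumptions, not a standard, a regulatory requirement, or any specific tool's format.

```python
# Minimal sketch of a traceability record for AI-assisted code changes.
# All field names, the log path, and the sample values are illustrative
# assumptions, not a standard or a specific tool's format.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIAssistanceRecord:
    """One entry in an append-only log: what was generated, with which tool,
    under which written rules, and how a human verified it."""
    change_id: str                # e.g. commit hash or PR reference
    tool: str                     # generative tool and version used
    prompt_summary: str           # what was asked, in plain language
    applicable_rules: list[str]   # written governance rules that applied
    reviewed_by: str              # human accountable for the change
    verification: str             # tests, review, or analysis performed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_record(record: AIAssistanceRecord,
                  path: str = "ai_provenance.jsonl") -> None:
    """Append the record as one JSON line, keeping a dated, ordered trail."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    # Hypothetical example entry.
    append_record(AIAssistanceRecord(
        change_id="commit:3f2a9c1",
        tool="example-codegen v2.1",
        prompt_summary="Generate input validation for the billing form",
        applicable_rules=["RULE-07: no generated code merged without tests"],
        reviewed_by="j.doe",
        verification="unit tests added and passed; manual review of edge cases",
    ))
```

A trail like this addresses the questions above directly: it shows the rules that applied, the human who verified the output, and a dated, ordered record of how the tool was used.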
The real question
The question is not "are we using AI?" Everyone already is.
The real question is: are we able to demonstrate how we use it?
AI does not remove responsibility. It makes it visible.
Full document
The full version of this document is available in French.