LydianAI Open-source tooling

Open-source tools built from real problems, shared for the same reason.

Both LydianAI projects started the same way — looking for a tool that fit the actual situation, not finding it, and building it. The work is open-source because the people who most need it are the same people who can make it better.

For safety-critical tooling, the ability to read, audit, and change what the tool does is not optional. A closed compliance system that produces artifacts you can't fully inspect is itself a problem. Open source means you can see exactly how the evidence is produced — and change it if your process requires something different.

No one person has every GPU variant on their desk. The federated ML infrastructure gets better as people with different hardware setups find it, run it, and contribute back the edge cases they hit. Pascal-era GPU quirks, unusual CUDA versions, flaky networking — these surface through use, not through anticipation.

Automotive software teams working on compliance face the same structural problem everywhere: tracking evidence manually does not scale. Putting the assurance-as-code tool in the open means teams can adapt it to their own process and requirements, rather than conforming to a vendor's assumptions.

Two projects. Two problems. One approach.

Federated distributed training

Train across machines you already own — modern GPUs, old GPUs, even CPU-only. Each machine trains on its own data and sends back an update. No shared GPU pool required.
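The round-trip described above can be sketched in a few lines. This is an illustrative FedAvg-style aggregation, not LydianAI's actual API: the model (a one-weight linear fit), the function names, and the learning rate are all assumptions for the sake of the example.

```python
# Sketch of one federated round: each "machine" trains on its own
# data and sends back a weight plus its sample count; the server
# averages the weights, weighted by sample count (FedAvg-style).
# All names here are illustrative, not the project's real interface.

def local_update(weight, data, lr=0.1):
    """One local gradient step on a 1-D linear model y = w * x."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad, len(data)

def federated_round(weight, client_datasets):
    """Collect local updates, then average them by sample count."""
    updates = [local_update(weight, data) for data in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Three machines, each holding private data drawn from y = 2 * x.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 2.0
```

No raw data leaves a machine; only the updated weights travel, which is what lets old GPUs and CPU-only boxes participate alongside modern hardware.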

Federated ML →

Assurance-as-Code

Manage the compliance evidence for an automotive software system — requirements, test cases, hazard analyses, and release records — so it stays consistent as the software changes. Built for UNECE R155, R156, and ISO 29119.
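The core idea — evidence as data you can check, not documents you maintain by hand — can be sketched as a consistency pass over a traceability model. The data model and field names below are assumptions for illustration, not the tool's actual schema.

```python
# Illustrative consistency check: every requirement must be covered
# by at least one passing test case, or it gets flagged. A sketch of
# the assurance-as-code idea, not the tool's real schema.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    covered_by: list  # IDs of test cases that provide evidence

@dataclass
class TestResult:
    test_id: str
    passed: bool

def check_coverage(requirements, results):
    """Return requirement IDs whose evidence is missing or failing."""
    passing = {r.test_id for r in results if r.passed}
    return [
        req.req_id
        for req in requirements
        if not any(t in passing for t in req.covered_by)
    ]

reqs = [
    Requirement("REQ-001", ["TC-01"]),
    Requirement("REQ-002", ["TC-02", "TC-03"]),
]
results = [TestResult("TC-01", True), TestResult("TC-02", False)]
print(check_coverage(reqs, results))  # ['REQ-002']
```

Because the check is code, it runs on every change — the trace between requirements, tests, and releases is verified the same way the software itself is.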

Assurance-as-Code →

Both are self-hosted, open-source, and designed to run on real hardware with real constraints.