Announcing a $2.4M pre-seed round to enable safe and controlled deployment of generative AI

Broad adoption of generative AI (GenAI) is underway, with spending forecasted to hit $143 billion by 2027. Still, rapid adoption comes with significant challenges, chief among them managing the control and safety of GenAI applications to prevent inappropriate or brand-damaging content. At Alinia AI, we are on a mission to enable safe and controlled deployments of generative AI for enterprises, guided by their policies and business preferences.

 

Problem space

As part of their strategic roadmap, many companies prioritize deploying GenAI applications to innovate and boost productivity. However, moving these systems into production presents serious challenges even when internal experimentation yields promising results. Large language models (LLMs) function as complex, hard-to-control black boxes and introduce considerable unpredictability into business applications and processes. This requires additional work and tooling to ensure users’ safety, protect the company’s trust and reputation, and keep critical business operations running reliably.

Alinia AI’s founders, Ari and Carlos, have experienced first-hand the challenges posed by high-scale deployment and massive adoption of AI. Ari led the ML Ethics, Transparency, and Accountability (META) team and Twitter’s response to a highly visible instance of unintended algorithmic bias (image cropping). Carlos led governance processes during the development of Large Language Models at Hugging Face (BigScience and BigCode) and was actively involved in the drafting discussions of the EU AI Act. Alinia AI is the result of their commitment to fostering responsible development and use of generative AI.

 

Funding 


We are launching out of stealth today with a $2.4M pre-seed round led by Speedinvest and Precursor Ventures, with participation from KFund and some of the most renowned figures in the space, including Oriol Vinyals (Google DeepMind VP of Research), Clem Delangue & Thom Wolf (Hugging Face Co-Founders), Xavier Amatriain (Google Core AI Product VP), Tom Preston-Werner (GitHub Co-Founder), and Miguel Martinez (Signal AI Co-Founder).

This group of investors brings deep expertise in enterprise SaaS, AI products, and cutting-edge R&D in generative AI. We are incredibly grateful for their support, expertise, and passion for building safer generative AI deployments.

 

Team

Alinia’s founding team comes from top AI companies and research institutions. Besides the founders’ prior roles at IBM, Twitter, and Hugging Face, the team brings expertise from popular user-centric B2B SaaS companies like Typeform and research institutions like the University of Cambridge, Carnegie Mellon University, MIT, and Max Planck Institute.

The team’s unique value lies at the intersection of enterprise product design and development, MLOps, AI research, and governance. We are advised by Xavier Amatriain, VP of Product Core ML/AI at Google; Orestis Papakyriakopoulos, Professor of Societal Computing at TUM, former researcher at MIT and Research Scientist at Sony AI; and Andrew Harrison, former EVP and Managing Director at DataRobot.

 

Introducing the Alinia AI Alignment Platform

Alinia AI enables companies to evaluate the performance of their GenAI applications and obtain evidence of how aligned these are with their values, policies, and business preferences. Following this assessment, Alinia AI recommends the next best actions to further guide and control LLM output and improve alignment.

With the Alinia AI Alignment Platform, companies can:

 

  • Ground GenAI applications. We help companies define the criteria against which their GenAI use cases should be aligned and assessed, allowing them to determine adherence to expected business KPIs, policies, and regulatory requirements.

  • Accurately measure the quality and performance of each use case against those criteria and metrics, including adversarial tests that assess whether appropriate safety measures are in place.

  • Guide and optimize LLM-powered applications using the recommendations and automated tooling our platform provides based on the evaluation results. This ensures the necessary quality pre-deployment and minimizes risk once the GenAI system is live.

  • Provide evidence of AI alignment performance via intelligible reports tailored to different audiences (internal quality assurance reports, external AI compliance reports, etc.).
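Conceptually, grounding and measurement come down to checking model outputs against explicit, business-defined criteria. As an illustration only, a minimal version of that idea might look like the sketch below; every name here (`Criterion`, `evaluate`, and the sample policies) is hypothetical and is not Alinia AI's actual API.

```python
# Illustrative sketch only: criterion-based evaluation of GenAI outputs.
# All names and policies below are hypothetical, not Alinia AI's API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Criterion:
    """A named compliance check applied to each model output."""
    name: str
    check: Callable[[str], bool]  # returns True when the output complies


def evaluate(outputs: List[str], criteria: List[Criterion]) -> dict:
    """Score each criterion as the fraction of outputs that comply."""
    return {
        c.name: sum(c.check(o) for o in outputs) / len(outputs)
        for c in criteria
    }


# Example: a customer-support use case with two toy policies.
criteria = [
    Criterion("no_competitor_mentions", lambda o: "AcmeCorp" not in o),
    Criterion("stays_on_topic", lambda o: "refund" in o.lower()),
]
outputs = [
    "Your refund has been processed.",
    "Have you considered AcmeCorp instead?",
]
report = evaluate(outputs, criteria)  # per-criterion compliance rates
```

In practice, the checks would be far richer than keyword matching (classifiers, adversarial probes, human review), but the report structure, one compliance score per business criterion, is what feeds both the optimization recommendations and the audience-specific reports described above.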

     

The first step towards enterprise alignment is giving customers clear and accurate evidence of how their LLM-powered applications behave across different enterprise scenarios: whether they perform specific tasks as intended and abide by specific rules.

Alinia AI’s primary goal is to build an Alignment Platform that delivers an end-to-end alignment process focused on safety and business requirements, enabling safe and inclusive use of generative AI across LLM modalities and languages. Our initial focus is on enabling subject-matter experts to guide and validate the performance and safety of their generative AI applications.

While alignment is a complex, long-term problem and an active area of research, our combined research and industry experience puts us in a unique position to help companies navigate this space more effectively and safely.

 

Start deploying GenAI safely now.

Through our commitment to innovation and safety, we aim to lead the way in responsible AI development and deployment. We enable companies to innovate in the GenAI space, minimize their business risks, and maximize the benefits to both their business and society as a whole.

This is just the beginning. We are excited to continue partnering with leading enterprises and scaling our world-class team of AI researchers, ML engineers, and product designers.

If you find this exciting, let’s talk!
