While tech companies play with OpenAI’s API, this startup believes smaller, in-house AI models will win


ZenML wants to be the glue that sticks all open-source AI tools together. This open-source framework lets you create pipelines that will be used by data scientists, machine-learning engineers, and platform engineers to collaborate and build new AI models.

What makes ZenML interesting is that it empowers companies to build their own private models. Of course, companies probably won’t build a GPT-4 competitor. But they can build smaller models that work particularly well for their needs, which reduces their dependence on API providers like OpenAI and Anthropic.

“The idea is that, once the first wave of hype around everyone using OpenAI or closed-source APIs dies down, [ZenML] will enable people to build their own stack,” Louis Coppey, partner at VC firm Point Nine, told me.

Earlier this year, ZenML raised an extension of its seed round led by Point Nine, with participation from existing investor Crane. In total, the Munich, Germany-based startup has raised $6.4 million since its inception.

ZenML founders Adam Probst and Hamza Tahir previously worked together on a company that was building ML pipelines for other companies in a specific industry. “Day by day, we need to build machine learning models and bring machine learning into production,” Adam Probst, CEO of ZenML, told me.

From this work, the duo began designing a modular system that would adapt to different situations, environments, and customers so that they would not have to repeat the same work over and over again – this led to the birth of ZenML.

Engineers who are just getting started with machine learning can also get an edge from this modular system. The ZenML team calls this space MLOps: it’s somewhat like DevOps, but applied specifically to ML.

“We are combining open-source tools that are focused on specific stages of the value chain to build machine learning pipelines – everything behind the hyperscalers, so everything behind AWS and Google – and also on-premises solutions,” Probst said.

The core concept of ZenML is the pipeline. When you write a pipeline, you can run it locally or deploy it with open-source tools like Airflow or Kubeflow. You can also take advantage of managed cloud services such as EC2, Vertex Pipelines, and SageMaker. ZenML also integrates with open-source ML tools like Hugging Face, MLflow, TensorFlow, and PyTorch.
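To make the idea concrete, here is a minimal, hypothetical sketch of what a "pipeline of swappable steps" looks like in code. This is not ZenML’s actual API, and the `Pipeline` class, step names, and data below are invented for illustration; the point is only that step logic stays independent of where the pipeline eventually runs.

```python
from typing import Any, Callable


class Pipeline:
    """Chains named steps; each step receives the previous step's output."""

    def __init__(self, name: str):
        self.name = name
        self.steps: list[tuple[str, Callable[[Any], Any]]] = []

    def step(self, fn: Callable[[Any], Any]) -> Callable[[Any], Any]:
        # Register the function as the next step and return it unchanged,
        # so it can still be called and tested on its own.
        self.steps.append((fn.__name__, fn))
        return fn

    def run(self, value: Any = None) -> Any:
        # A simple local runner. A framework could instead hand this same
        # ordered step list to an orchestrator such as Airflow or Kubeflow,
        # without changing any of the step functions themselves.
        for _name, fn in self.steps:
            value = fn(value)
        return value


train = Pipeline("train")


@train.step
def load_data(_):
    # Stand-in for a real data source.
    return [1.0, 2.0, 3.0, 4.0]


@train.step
def train_model(data):
    # Stand-in for model fitting: just compute the mean.
    return sum(data) / len(data)


print(train.run())  # 2.5
```

Because each step is an ordinary function registered on the pipeline, a team can swap the local runner for a remote orchestrator, or swap one step's implementation for another, without touching the rest of the chain.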

“ZenML is something that brings everything together into one unified experience – it’s multi-vendor, multi-cloud,” said Hamza Tahir, ZenML’s CTO. It brings connectors, observability, and auditability to ML workflows.

The company first released its framework on GitHub as an open-source tool. The team has collected over 3,000 stars on the coding platform. ZenML has also recently started offering a cloud version with managed servers – triggers for continuous integration and deployment (CI/CD) are coming soon.

Some companies are using ZenML for industrial use cases, e-commerce recommendation systems, image recognition in medical environments, and more. Customers include Rivian, Playtika, and Leroy Merlin.

Private, industry-specific models

The success of ZenML will depend on how the AI ecosystem evolves. Right now, many companies are adding AI features here and there by querying OpenAI’s API: in one product, a new magic button that summarizes large chunks of text; in another, pre-written responses for customer support interactions.


But these APIs come with problems: they are too sophisticated and too expensive. “OpenAI, or these large language models built behind closed doors, are built for general use cases – not specific use cases. So currently it’s too overtrained and too expensive for specific use cases,” Probst said.

“OpenAI will have a future, but we think most of the market should have their own solution. And that’s why open source is so attractive to them,” he said.

Sam Altman, CEO of OpenAI, also believes that AI models will not be a one-size-fits-all situation. “I think both have an important role. We’re interested in both and there will be a mix of both in the future,” Altman said during a question-and-answer session at Station F earlier this year, responding to a question about smaller, specialized models versus broader models.

The use of AI also has ethical and legal implications. AI regulation is still very much evolving, but European legislation in particular may encourage companies to use AI models trained on very specific data sets and in very specific ways.

“Gartner says 75% of enterprises are shifting [proofs of concept] to production in 2024. So the next year or two will probably be some of the most important moments in the history of AI, where we’re finally getting into production using a mix of better open-source foundation models on proprietary data,” Tahir told me.

“The importance of MLOps is that we believe 99% of AI use cases will be driven by more specialized, cheaper, smaller models that will be trained in-house,” he said later in the conversation.

Image Credit: ZenML
