Appen Launches Solution for Enterprises to Customize Large Language Models

KIRKLAND, Wash., March 26, 2024 (GLOBE NEWSWIRE) — Appen Limited (ASX: APX), a leading provider of high-quality data for the AI lifecycle, announced the launch of new platform capabilities that will support enterprises customizing large language models (LLMs).

The solution supports internal teams attempting to leverage generative AI within the enterprise. Through a common, consistent process now available in Appen’s AI Data Platform, users can take their LLMs from use case to production. The steps include:

  • Model selection: Appen’s platform connects directly to any model, enabling you to evaluate existing models, test new models, and conduct comprehensive benchmarking.
  • Data preparation: High-quality data is critical to accurate and trustworthy AI. Appen’s annotation platform enables the preparation of datasets for vectorization and Retrieval-Augmented Generation (RAG).
  • Prompt creation: To effectively validate model performance, a set of custom prompts is required for each use case. Appen’s platform enables you to connect with your internal experts or our global crowd to create custom prompts for model evaluation.
  • Model optimization: Appen’s platform streamlines the process of capturing human feedback for model evaluation. Our platform includes templates for human evaluation, A/B testing, model benchmarking and other custom workflows to inspect performance throughout your RAG process.
  • Safety assurance: Appen’s platform and Quality Raters help ensure that your models are safe to deploy. We have detailed workflows and teams to support red teaming that identifies toxicity, brand-safety risks, and harm.

Appen’s new capabilities offer enterprises a way to incorporate proprietary data and collaborate with internal subject matter experts to refine LLM performance for enterprise-specific use cases—all within a single platform. Companies can deploy solutions on-premises, in the cloud, or in hybrid environments, and balance LLM accuracy, complexity, and cost-effectiveness.

“Generative AI has created significant opportunities for enterprise innovation,” said Appen CEO Ryan Kolln. “However, the challenge that enterprises are facing is how to ensure that their LLM-enabled applications are accurate and trustworthy. Appen has been at the forefront of human-AI collaboration for over 25 years, and I’m super excited that we can now bring our products and expertise to enterprises looking to build accurate and trustworthy LLM-enabled applications.”

For almost three decades, Appen has excelled in the collection and preparation of high-quality data at large volumes with global reach: exactly the data required to train large language models and obtain accurate, consistent outputs. Appen’s new capabilities give enterprises the flexibility to leverage Appen’s crowd-curated data while tapping into their own proprietary data and human expertise for optimal LLM output.

If you’re interested in learning more about Appen’s new capabilities, please visit our website or contact an AI Specialist.

About Appen
Appen (ASX:APX) is the global leader in data for the AI Lifecycle with more than 25 years’ experience in data sourcing, annotation and model evaluation. Through our expertise, platform and global crowd, we enable organizations to launch the world’s most innovative artificial intelligence products with speed and at scale. Appen maintains the industry’s most advanced AI-assisted data annotation platform and boasts a global crowd of more than 1 million contributors worldwide, speaking more than 235 languages. Our products and services make Appen a trusted partner to leaders in technology, automotive, finance, retail, healthcare and government. Appen has customers and offices globally.

Appen AI Inc
