Falcon 40B, the UAE’s first large-scale open-source, 40-billion-parameter AI model, launched by Abu Dhabi’s Technology Innovation Institute (TII) last week, has soared to the top spot on Hugging Face’s latest Open Large Language Model (LLM) Leaderboard. Hugging Face, an American company seeking to democratize artificial intelligence through open-source and open science, is considered the world’s definitive independent verifier of AI models.
Falcon 40B outperformed established models such as Meta’s LLaMA (including its 65B variant), Stability AI’s StableLM, and Together’s RedPajama to achieve the coveted ranking. The leaderboard uses four key benchmarks from the Eleuther AI Language Model Evaluation Harness, a unified framework for evaluating generative language models: the AI2 Reasoning Challenge (25-shot), a set of grade-school science questions; HellaSwag (10-shot), a test of common-sense inference that is easy for humans but challenging for state-of-the-art models; MMLU (5-shot), which measures a text model’s multitask accuracy; and TruthfulQA (0-shot), which measures whether a language model generates truthful answers to questions.
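The "shot" counts above refer to how many solved examples are placed in the prompt before the question the model must answer (zero for TruthfulQA, five for MMLU, and so on). As a rough illustration of the idea, not the harness's actual code, a k-shot prompt can be assembled like this; the example questions and answers below are invented for demonstration:

```python
# Illustrative sketch of k-shot prompt construction, as used conceptually
# by few-shot benchmarks. Not the official Eleuther AI harness implementation.

def build_k_shot_prompt(examples, query, k):
    """Concatenate k solved Q/A examples, then the unanswered query."""
    shots = [f"Q: {q}\nA: {a}" for q, a in examples[:k]]
    # The prompt ends with an open "A:" that the model must complete.
    shots.append(f"Q: {query}\nA:")
    return "\n\n".join(shots)

# Made-up demonstration examples (not from any real benchmark).
examples = [
    ("What is 2 + 2?", "4"),
    ("What color is the sky on a clear day?", "Blue"),
]
prompt = build_k_shot_prompt(examples, "How many legs does a spider have?", k=2)
print(prompt)
```

With k=0 the prompt contains only the unanswered question, which is the TruthfulQA setting; larger k gives the model more in-context demonstrations before it answers.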
Hugging Face’s Open LLM Leaderboard is an objective evaluation tool open to the AI community that tracks, ranks, and evaluates LLMs and chatbots as they are launched.
Trained on one trillion tokens, Falcon 40B marks a significant turning point in the UAE’s journey towards AI leadership: the model’s weights are openly available for both research and commercial use. The new ranking reinforces TII’s goal of making AI more transparent, inclusive, and accessible for the greater good of humanity.
With this latest development, TII has secured the UAE a seat at the table in generative AI, joining an exclusive group of countries driving AI innovation and collaboration.