Software:Llama.cpp

llama.cpp
Original author(s): Georgi Gerganov
Developer(s): Georgi Gerganov and community
Initial release: Alpha (b1083) / August 26, 2023
Written in: C++
License: MIT License
Website: github.com/ggerganov/llama.cpp

Llama.cpp is an open source software library that performs inference on various large language models, such as LLaMA.[1] It is written in C++ and is generally smaller and simpler than general-purpose inference frameworks such as TensorFlow. As of mid-2024, the repository has over 55,000 stars on GitHub.[2]

History

Llama.cpp began as a project by Georgi Gerganov to implement LLaMA inference in pure C++ with no dependencies. The advantage of this approach was that it could run on more hardware than inference libraries that depend on hardware-specific, closed-source components such as CUDA. Before llama.cpp, Gerganov had worked on a similar library called whisper.cpp,[3] which implemented OpenAI's Whisper speech-to-text model. Llama.cpp gained traction among users who lacked specialized hardware, as it could run on a CPU alone, including on Android devices.[4] In March 2023 Gerganov started a company around llama.cpp called ggml.ai.[5]

Architecture

Llama.cpp initially ran only on CPUs but can now also run on GPUs through multiple back-ends, including Vulkan and SYCL. These back-ends make up the GGML tensor library, which is used by the front-end, model-specific llama.cpp code and also by other projects such as whisper.cpp.[6] Llama.cpp has its own model format called GGUF (previously known as the GGML format).[7] Models in other formats must be converted to GGUF before they can be used, and GGML does not always support every tensor operation a given model requires. Llama.cpp generally follows the KISS principle in order to remain as small and easy-to-use a dependency as possible.
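The library exposes a C API through its llama.h header, which both the bundled tools and third-party bindings build on. The sketch below shows the typical load, inference, and teardown sequence for a model already converted to GGUF; the file name model.gguf is a placeholder, and the function signatures reflect the API roughly as of 2024, which has changed between releases:

#include "llama.h"
#include <cstdio>

int main() {
    // Placeholder path to a model already converted to GGUF.
    const char * model_path = "model.gguf";

    // Initialize the GGML back-ends (CPU, or GPU back-ends if compiled in).
    llama_backend_init();

    // Load the GGUF model with default parameters.
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file(model_path, mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load %s\n", model_path);
        return 1;
    }

    // Create an inference context over the loaded model.
    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // ... tokenize a prompt, evaluate it with llama_decode(), and
    //     sample output tokens in a loop (omitted here) ...

    // Tear down in reverse order of creation.
    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}

Because the tokenization and sampling parts of the API have been reworked several times, the omitted generation loop is best adapted from the examples distributed with the library itself.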

References