Contributors
97
Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc.
Created
August 29, 2016
Updated
April 23, 2024
License
Apache-2.0
GitHub repo
Primary language (based on GitHub data)
Python
Issues
1034