edgemodelr
Local Large Language Model Inference Engine
Enables R users to run large language models locally using 'GGUF' model files and the 'llama.cpp' inference engine. Provides a complete R interface for loading models, generating text completions, and streaming responses in real time. Supports local inference without requiring cloud APIs or internet connectivity, ensuring complete data privacy and control. Based on the 'llama.cpp' project by Georgi Gerganov (2023) <https://github.com/ggml-org/llama.cpp>.
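For illustration, a minimal sketch of that workflow in R is shown below: load a local GGUF model, generate a completion, stream tokens as they arrive, then free the model. The function names (edge_load_model(), edge_completion(), edge_stream_completion(), edge_free_model()), their arguments, and the streaming-callback contract are assumptions inferred from the interface described above rather than a verbatim copy of the package reference, and the model path is a placeholder for any local GGUF file.

```r
## Minimal sketch of the described workflow; names, signatures, and the
## callback contract are assumptions -- consult the package reference.
library(edgemodelr)  # install.packages("edgemodelr")

# Load a GGUF model from disk; everything runs locally, no cloud API.
ctx <- edge_load_model("models/tinyllama-1.1b-chat.Q4_K_M.gguf", n_ctx = 2048)

# One-shot text completion.
txt <- edge_completion(ctx, "Explain GGUF in one sentence.", n_predict = 64)
cat(txt)

# Streaming: the callback is assumed to receive a list carrying the newly
# generated token and to return TRUE to continue generation.
edge_stream_completion(ctx, "Name three benefits of local inference.",
                       callback = function(data) {
                         cat(data$token)
                         TRUE
                       })

# Release the model's memory when finished.
edge_free_model(ctx)
```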
- Version: 0.1.5
- Requires: R ≥ 4.0
- License: MIT + file LICENSE
- Needs compilation: yes
Team
- Pawan Rama Mali: Maintainer
- Georgi Gerganov: Author, Copyright holder
- Bowen Peng: Contributor, Copyright holder
- pi6am: Contributor
- Ivan Yurchenko: Contributor
- The ggml authors: Copyright holder
- Dirk Eddelbuettel
- Jeffrey Quesnelle: Contributor, Copyright holder
Insights
[Line graphs: downloads per day over the last 30 days and the last 365 days. Data provided by CRAN.]