tokenizers
Fast, Consistent Tokenization of Natural Language Text
Convert natural language text into tokens. Includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, Penn Treebank, regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers have a consistent interface, and the package is built on the 'stringi' and 'Rcpp' packages for fast yet correct tokenization in 'UTF-8'.
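The snippet below illustrates that consistent interface. It is a minimal sketch: the function names match the package's documented exports, but the sample text and parameter values are invented for illustration.

```r
library(tokenizers)

text <- "A man, a plan, a canal: Panama. Extra! Extra! Read all about it."

# Each tokenizer takes a character vector of documents and returns a
# list with one character vector of tokens per input document.
tokenize_words(text)
tokenize_ngrams(text, n = 3)              # shingled n-grams
tokenize_skip_ngrams(text, n = 3, k = 1)  # skip n-grams
tokenize_sentences(text)
tokenize_characters(text)
tokenize_ptb(text)                        # Penn Treebank tokenizer

# The counting helpers are vectorized over documents the same way.
count_words(text)
count_sentences(text)
count_characters(text)

# Split a longer text into chunks of roughly equal word counts
# (chunk_size is in words; 5 is an artificially small value).
chunk_text(text, chunk_size = 5)
```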
- Version: 0.3.0
- R version: ≥ 3.1.3
- License: MIT + file LICENSE
- Needs compilation? Yes
- Citation: tokenizers citation info
- Last release: 12/22/2022
Documentation
Team
Lincoln Mullen
Os Keyes
Dmitriy Selivanov
Jeffrey Arnold
Kenneth Benoit
Insights
Downloads per day for the last 30 days and the last 365 days (line graph; data provided by CRAN).
Binaries
Dependencies
- Depends1 package
- Imports3 packages
- Suggests5 packages
- Linking To1 package
- Reverse Imports13 packages
- Reverse Suggests2 packages
- Reverse Enhances1 package