CRAN/E | oolong

oolong

Create Validation Tests for Automated Content Analysis

Installation

About

Intended to create standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. This package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion and topic intrusion tests (Chang et al. 2009) and the word set intrusion test (Ying et al. 2021, doi:10.1017/pan.2021.33). This package also provides functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020, doi:10.1080/10584609.2020.1723752).
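The prepare–administer–evaluate workflow can be sketched in R. This is a minimal sketch, not a complete recipe: `model` stands for a hypothetical fitted topic model (e.g. from stm, topicmodels, seededlda, or keyATM), and the per-coder cloning step (`clone_oolong()`) is omitted for brevity; the calls follow the function names in the package documentation (`create_oolong()`, `summarize_oolong()`).

```r
library(oolong)

# Prepare: create a word intrusion test from a fitted topic model.
# `model` is a hypothetical fitted model object supported by oolong.
oolong_test <- create_oolong(model)

# Administer: launches a Shiny app in which a human coder picks the
# "intruder" word from each displayed word set.
oolong_test$do_word_intrusion_test()

# Lock the test once coding is finished, so it can be evaluated.
oolong_test$lock()

# Evaluate: reports model precision; with tests from two or more
# coders, summarize_oolong() also reports inter-coder agreement.
summarize_oolong(oolong_test)
```

With multiple coders, each would code their own cloned copy of the test object, and all locked copies would be passed to `summarize_oolong()` together.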

gesistsa.github.io/oolong/
github.com/gesistsa/oolong

Key Metrics

Version 0.6.1
R ≥ 3.5.0
Published 2024-04-15
Needs compilation? no
License LGPL-2.1 | LGPL-3
CRAN checks oolong results

Downloads

Yesterday 5 0%
Last 7 days 68 -21%
Last 30 days 293 -16%
Last 90 days 903 -13%
Last 365 days 3,984 +29%

Maintainer

Chung-hong Chan

Authors

Chung-hong Chan

aut / cre

Marius Sältzer

aut

Material

NEWS
Reference manual
Package source

Vignettes

BTM
Deploy
Overview

macOS

r-prerel arm64
r-release arm64
r-oldrel arm64
r-prerel x86_64
r-release x86_64

Windows

r-prerel x86_64
r-release x86_64
r-oldrel x86_64

Old Sources

oolong archive

Depends

R ≥ 3.5.0

Imports

seededlda
purrr
tibble
shiny
digest
R6
quanteda ≥ 3.0.0
irr
ggplot2
cowplot
cli
stats
utils

Suggests

keyATM ≥ 0.2.2
testthat ≥ 3.0.2
text2vec ≥ 0.6
BTM
dplyr
topicmodels
stm
covr
stringr
knitr
rmarkdown
fs
quanteda.textmodels
shinytest2