Testing Computational Assessment of Idea Novelty in Crowdsourcing

Kai Wang, Boxiang Dong, Junjie Ma

Research output: Contribution to journal › Article › peer-review

Abstract

On crowdsourcing ideation websites, companies can easily collect large numbers of ideas. Screening such a volume of ideas is costly and challenging, which makes automatic approaches necessary. Automatically evaluating idea novelty would be particularly useful, since companies commonly seek novel ideas. Four computational approaches were tested, based on Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), term frequency–inverse document frequency (TF-IDF), and Global Vectors for Word Representation (GloVe), respectively. These approaches were applied to three sets of ideas, and the computed idea novelty scores, along with crowd evaluations, were compared against human expert evaluations. The computational methods do not differ significantly in their correlation coefficients with expert ratings, even though the TF-IDF-based measure achieved a correlation above 0.40 in two of the three tasks. Crowd evaluation outperforms all of the computational methods. Overall, our results show that the tested computational approaches do not match human judgment well enough to replace it.
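For concreteness, below is a minimal sketch of how a TF-IDF-based novelty measure of this general kind can be computed, assuming novelty is operationalized as one minus an idea's maximum cosine similarity to any other idea in the same task; the paper's exact formulation may differ, and all names and the sample ideas here are illustrative.

```python
# Hypothetical sketch: TF-IDF novelty as distance from the nearest other idea.
# This is an assumption about the measure, not the paper's exact method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_novelty_scores(ideas):
    """Score each idea's novelty as 1 minus its maximum cosine
    similarity to any other idea in the same idea set."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(ideas)
    sims = cosine_similarity(tfidf)
    np.fill_diagonal(sims, -np.inf)  # exclude self-similarity
    return 1.0 - sims.max(axis=1)

ideas = [
    "a phone case that charges from ambient light",
    "a solar-powered phone case",
    "an umbrella that texts you when rain is forecast",
]
print(tfidf_novelty_scores(ideas))  # higher score = more novel
```

Under this operationalization, the third idea scores highest because it shares little vocabulary with the other two; scores like these could then be correlated with expert ratings, as the study does.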

Original language: English
Journal: Creativity Research Journal
DOIs
State: Accepted/In press - 2023
