Team:Kent/ModelResults

Results


Our topic model was trained on text that had not been cleaned, so stop words such as “is”, “and” and “a” remained in the data. Because these words occur very frequently, the model treated them as belonging to topics even though they carry no meaning, so we removed them manually from the final results. As our aim was to use the topic model as a quantitative tool, the maps created to visualise the topic distribution of each subject are presented in the Human Practice section here.
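To illustrate this post-processing step, the sketch below shows one possible way of stripping stop words from the topic-word lists produced by a model. It is a minimal sketch: the stop-word list, the example topics and the function name are illustrative assumptions, not the exact code we used.

```python
# Illustrative sketch: removing common English stop words from topic-word lists
# produced by a topic model. The stop-word set and example topics below are
# assumptions for demonstration, not the actual model output.

STOP_WORDS = {"is", "and", "a", "the", "of", "to", "in", "for", "on", "with"}

def clean_topic(words):
    """Return the topic's word list with stop words removed."""
    return [w for w in words if w.lower() not in STOP_WORDS]

# Hypothetical raw topics (top words per topic) as returned by a topic model.
raw_topics = [
    ["the", "synthetic", "biology", "and", "dna", "a"],
    ["is", "protein", "expression", "of", "plasmid", "in"],
]

cleaned_topics = [clean_topic(topic) for topic in raw_topics]
print(cleaned_topics)
# [['synthetic', 'biology', 'dna'], ['protein', 'expression', 'plasmid']]
```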






