I am using the VADER sentiment lexicon in Python's nltk library to analyze text sentiment. This lexicon does not suit my domain well, and so I wanted to add my own sentiment scores to various words. So, I got my hands on the lexicon text file (vader_lexicon.txt) to do just that. However, I do not understand the architecture of this file well. For example, a word like obliterate will have the following data in the text file:
obliterate -2.9 0.83066 [-3, -4, -3, -3, -3, -3, -2, -1, -4, -3]
Clearly the -2.9 is the average of sentiment scores in the list. But what does the 0.83066 represent?
Thanks!
According to the VADER source code, only the first number on each line is used; the rest of the line is ignored:
for line in self.lexicon_full_filepath.split('\n'):
    (word, measure) = line.strip().split('\t')[0:2]  # Here! only the token and its mean score are read
    lex_dict[word] = float(measure)
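So, if you only need to tweak a handful of domain words, you do not even have to edit vader_lexicon.txt: you can update the analyzer's in-memory lexicon, which nltk builds as a plain word-to-score dict. A rough sketch (the words and scores below are made up for illustration; assumes the vader_lexicon resource has been downloaded):

from nltk.sentiment.vader import SentimentIntensityAnalyzer
# import nltk; nltk.download('vader_lexicon')  # run once if the resource is missing

sia = SentimentIntensityAnalyzer()

# The analyzer's lexicon is the {token: mean score} dict built by make_lex_dict(),
# so overriding or adding domain-specific scores is just a dict update (hypothetical values):
sia.lexicon.update({
    'obliterate': 1.5,   # e.g. positive in a gaming domain
    'laggy': -2.0,       # domain word missing from the stock lexicon
})

print(sia.polarity_scores('The new patch lets you obliterate bosses without it getting laggy'))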
Column 1: the token
Column 2: the mean of the human sentiment ratings
Column 3: the standard deviation of those ratings, assuming they follow a normal distribution
Column 4: the list of 10 individual human ratings collected during the experiments
The actual sentiment calculation does not use the 3rd and 4th columns, so if you want to update the lexicon for your own requirements you can leave the last two columns blank or fill them with placeholder values (see the sketch below).
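You can sanity-check this from the example line in the question: the mean and population standard deviation of the 10 ratings reproduce the second and third columns. A small sketch (the custom entry at the end is a made-up illustration):

import statistics

# Column 4 for "obliterate": the 10 individual human ratings
ratings = [-3, -4, -3, -3, -3, -3, -2, -1, -4, -3]
print(statistics.mean(ratings))               # -2.9    (column 2)
print(round(statistics.pstdev(ratings), 5))   # 0.83066 (column 3)

# Since only the first two tab-separated columns are read, a custom
# lexicon entry can be just token + mean score (hypothetical word/score):
custom_line = 'laggy\t-2.0'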