Added two more experiments with emotion lexicon features

parent eb0c57c1
emoji emotion
😒 anger
😠 anger
😡 anger
😣 anger
😤 anger
😾 anger
👿 anger
👊 anger
🤮 disgust
😨 fear
😰 fear
😱 fear
😧 fear
🙀 fear
🆘 fear
😁 joy
😂 joy
😃 joy
😄 joy
😅 joy
😆 joy
😉 joy
😊 joy
😋 joy
😌 joy
😍 joy
😏 joy
😘 joy
😚 joy
😬 joy
😜 joy
😝 joy
🙃 joy
😸 joy
😹 joy
😺 joy
😻 joy
😽 joy
😎 joy
🙆 joy
🙋 joy
🙌 joy
❤ joy
🧡 joy
💛 joy
💚 joy
💙 joy
💜 joy
❣ joy
💕 joy
💑 joy
💏 joy
💋 joy
♥ joy
🙎 sad
😓 sad
😔 sad
😖 sad
😞 sad
😢 sad
😥 sad
😩 sad
😪 sad
😫 sad
😭 sad
😿 sad
🙍 sad
🤒 sad
💔 sad
😲 surprise
😳 surprise
😵 surprise
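
The table above maps each emoji to one of six basic emotions (anger, disgust, fear, joy, sad, surprise). A minimal sketch of how such a lexicon could be loaded and turned into per-emotion count features, assuming it is stored as a tab-separated file; the file name emoji_lexicon.txt and both helper functions are hypothetical:

from collections import Counter

def load_emoji_lexicon(path='emoji_lexicon.txt'):
    # Assumes one 'emoji<TAB>emotion' pair per line, as in the table above.
    lexicon = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip('\n').split('\t')
            if len(parts) == 2:
                emoji, emotion = parts
                lexicon[emoji] = emotion
    return lexicon

def emotion_counts(tweet, lexicon, emotions=('anger', 'disgust', 'fear', 'joy', 'sad', 'surprise')):
    # Count how many characters of the tweet map to each emotion.
    hits = Counter(lexicon[ch] for ch in tweet if ch in lexicon)
    return [hits.get(e, 0) for e in emotions]

For example, emotion_counts('great day 😂😂💔', lexicon) would return [0, 0, 0, 2, 1, 0].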
This diff could not be displayed because it is too large.
@@ -53,3 +53,33 @@
Macro-Precision: 0.6040684610078678
Macro-Recall: 0.6040370418815006
Macro-F1: 0.6024329096077434
Accuracy: 0.604003753518924
Experiment 3:
vocab_embeddings = 500000
max_length_tweet = 30
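These two settings would correspond to capping the vocabulary at 500000 words and padding every tweet to 30 tokens. A minimal preprocessing sketch, assuming the Keras tokenizer is used; train_tweets is a hypothetical placeholder for the training texts:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

vocab_embeddings = 500000
max_length_tweet = 30
train_tweets = ['so happy today 😂', 'this is awful 😡']  # placeholder data

tokenizer = Tokenizer(num_words=vocab_embeddings)
tokenizer.fit_on_texts(train_tweets)
sequences = tokenizer.texts_to_sequences(train_tweets)
X_train = pad_sequences(sequences, maxlen=max_length_tweet)  # shape: (n_tweets, 30)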
Layers:
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout, Flatten
from keras.initializers import glorot_uniform
from keras import regularizers

model = Sequential()
# Frozen embedding layer initialized from the pretrained embedding matrix
e = Embedding(feature_size, EMBEDDING_DIM, input_length=max_len_input, weights=[embedding_matrix], trainable=False)
model.add(e)
model.add(LSTM(128, return_sequences=True))
model.add(Dense(64, activation='relu', kernel_initializer=glorot_uniform(seed=RANDOM_SEED), activity_regularizer=regularizers.l2(0.0001)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(32, activation='relu', kernel_initializer=glorot_uniform(seed=RANDOM_SEED), activity_regularizer=regularizers.l2(0.0001)))
model.add(Dropout(0.5))
model.add(Dense(len(CLASSES), activation='softmax'))
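The frozen Embedding layer above expects a precomputed embedding_matrix. A sketch of one common way to build it from pretrained vectors; pretrained is a hypothetical placeholder for a word-to-vector map, and word_index comes from the tokenizer sketch above:

import numpy as np

EMBEDDING_DIM = 300  # dimensionality of the pretrained vectors (assumed)
pretrained = {'happy': np.random.rand(EMBEDDING_DIM)}  # placeholder for real pretrained vectors

word_index = tokenizer.word_index
feature_size = min(len(word_index) + 1, vocab_embeddings)
embedding_matrix = np.zeros((feature_size, EMBEDDING_DIM))
for word, idx in word_index.items():
    if idx < feature_size and word in pretrained:
        embedding_matrix[idx] = pretrained[word]  # rows for out-of-vocabulary words stay zero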
Training accuracy: 84.375061
Evaluation results:
*** Results RNN_LSTM ***
Macro-Precision: 0.552099234158447
Macro-Recall: 0.5516921635572564
Macro-F1: 0.550307950647363
Accuracy: 0.5516630174121573
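
For reference, the macro-averaged metrics above weight every class equally regardless of its frequency. One way to reproduce such numbers, assuming scikit-learn and hypothetical y_true/y_pred label arrays for the evaluation set:

from sklearn.metrics import precision_recall_fscore_support, accuracy_score

# y_true / y_pred: integer class labels for the evaluation set (assumed)
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='macro')
acc = accuracy_score(y_true, y_pred)
print('Macro-Precision:', p)
print('Macro-Recall:', r)
print('Macro-F1:', f1)
print('Accuracy:', acc)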