Decoding Word Sense with Graphical Embeddings

Word Sense Disambiguation

  • This project was supported by a Discover Grant (Summer 2019) at the University of Rochester.

  • I was supervised by Prof. Aaron Steven White of the Formal And CompuTational Semantics lab (FACTS.lab).

  • Implemented three deep graph encoders over WordNet to produce word sense embeddings constrained by hypernym-hyponym and meronym-holonym relations: a child-sum graph bi-LSTM, a matrix-decomposition method with graph kernels, and a graph convolutional network (GCN).
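The core idea shared by these encoders is propagating sense representations along WordNet's relational edges. As a rough illustration (not the project's actual implementation), here is one mean-aggregation GCN propagation step over a toy hypernym graph; the synset names and feature vectors are invented for the example:

```python
def gcn_step(graph, feats):
    """One simplified GCN step: each node's new embedding is the mean of
    its own embedding and its neighbors' (identity weights, no nonlinearity)."""
    new = {}
    for node, nbrs in graph.items():
        group = [feats[node]] + [feats[n] for n in nbrs]
        dim = len(feats[node])
        new[node] = [sum(v[i] for v in group) / len(group) for i in range(dim)]
    return new

# Toy hypernym edges: each synset points to its hypernym. A full setup
# would also include hyponym, meronym, and holonym edges as described above.
graph = {
    "dog.n.01": ["canine.n.02"],
    "canine.n.02": ["carnivore.n.01"],
    "carnivore.n.01": [],
}
feats = {
    "dog.n.01": [1.0, 0.0],
    "canine.n.02": [0.0, 1.0],
    "carnivore.n.01": [1.0, 1.0],
}
updated = gcn_step(graph, feats)
# "dog.n.01" now mixes in its hypernym's features: [0.5, 0.5]
```

Stacking such steps to depth 3 lets information flow three hops through the hierarchy, matching the depth-3 setting shown in the visualization below.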

  • Trained a sequence-to-sequence model with a von Mises-Fisher loss for word sense disambiguation (WSD); extended the model to handle unseen words by incorporating supersense relations from WordNet. The best model achieved 75% accuracy on the Universal Decompositional Semantics Word Sense dataset.
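The von Mises-Fisher loss trains the decoder to emit a direction on the unit sphere close to the target sense embedding. A common simplification (a sketch, not this project's exact loss) drops the concentration-dependent normalizing constant, leaving essentially one minus the cosine similarity between the predicted and target vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def vmf_style_loss(pred, target):
    """Simplified vMF-style loss: 1 - cos(pred, target).
    The full vMF negative log-likelihood also carries a term in the
    concentration parameter kappa, omitted here for clarity."""
    return 1.0 - cosine(pred, target)

# Loss is 0 when the prediction points at the target embedding,
# and grows as the angle between them widens.
```

Minimizing this pushes predicted embeddings toward the correct sense vector regardless of magnitude, which suits embeddings compared by direction.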

  • Result Visualization

    • Graph bi-LSTM over WordNet (depth = 3)




The code can be found here.