From: Sentiment analysis from textual data using multiple channels deep learning models
Ref | Model | Benefits | Issues | Datasets |
---|---|---|---|---|
[10] | CNN + attention | Integrates the related impact between sentences; the counterpart of each sentence is also considered | Limited dataset size | WikiQA; SICK |
[11] | Tree-LSTM | Superior representation of sentence meaning; works well for shorter sentences | Difficulty representing partial sentence structures | SemEval 2014; Stanford Sentiment Treebank |
[12] | Tree CNN–LSTM | Text is divided into several regions and affective facts are extracted; task-specific clauses and phrases are used to construct structured information | Longer training time due to the attention module | Stanford Sentiment Treebank; EmoBank |
[14] | C-LSTM | Combines the respective strengths of CNN and LSTM in a single model design | Weak at extracting contextual information | Stanford Sentiment Treebank; TREC |
[26] | LSTM | Efficient data preprocessing and partitioning before classification; LSTM is used for feature extraction | Lower accuracy | IMDB |
[35] | B-MLCNN | Treats a complete textual review as a single document and categorizes it into the offered sentiment classes; BERT represents the feature vectors and captures global features; MLCNN performs feature extraction | Struggles to determine the contextual sense of sentences; the large parameter set makes it hard to find the optimal combination | IMDB; Amazon Review |
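To make the hybrid designs in the table concrete, the C-LSTM idea of [14] can be illustrated with a minimal NumPy forward-pass sketch: a 1-D convolution extracts n-gram features from word embeddings, an LSTM consumes the resulting feature sequence, and the final hidden state is mapped to a sentiment score. All dimensions, weights, and names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen for illustration only.
seq_len, emb_dim = 12, 8      # sentence length, embedding size
n_filters, width = 6, 3       # convolution filters and n-gram window
hidden = 5                    # LSTM hidden size

x = rng.standard_normal((seq_len, emb_dim))          # toy word embeddings

# 1) 1-D convolution over word windows (valid padding, ReLU).
W_c = rng.standard_normal((n_filters, width * emb_dim)) * 0.1
b_c = np.zeros(n_filters)
windows = np.stack([x[t:t + width].ravel() for t in range(seq_len - width + 1)])
feats = np.maximum(windows @ W_c.T + b_c, 0.0)       # (seq_len - width + 1, n_filters)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2) LSTM over the convolutional feature sequence.
W = rng.standard_normal((4 * hidden, n_filters + hidden)) * 0.1
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for f in feats:
    z = W @ np.concatenate([f, h]) + b
    i, fgt, o, g = np.split(z, 4)
    c = sigmoid(fgt) * c + sigmoid(i) * np.tanh(g)   # cell state update
    h = sigmoid(o) * np.tanh(c)                      # hidden state

# 3) Final hidden state -> binary sentiment probability.
w_out = rng.standard_normal(hidden) * 0.1
score = sigmoid(w_out @ h)
print(0.0 < float(score) < 1.0)
```

The sketch also shows why the table lists "weak at extracting contextual information" as an issue: the LSTM only sees local convolutional features, so long-range context outside the n-gram windows is attenuated.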