
Table 2 Input/output dimensions of each layer of the proposed model

From: Sentiment analysis from textual data using multiple channels deep learning models

| Layers | Input shape | Output shape |
| --- | --- | --- |
| Input layer (sequence length = 300) | (None, 300) | (None, 300) |
| Embedding | (None, 300) | (None, 300, 300) |
| Convolutional layer (one per each of the 5 channels) | (None, 300, 300) | (None, 300, 128) |
| BiLSTM layer I | (None, 300, 128) | (None, 300, 64) |
| BiLSTM layer II | (None, 300, 64) | (None, 300, 64) |
| Batch normalization | (None, 300, 64) | (None, 300, 64) |
| Attention layer | (None, 300, 64) | (None, 64) |
| Dense layer | (None, 64) | (None, 128) |
| Concatenation of the dense layers from the five channels | (None, 128) × 5 | (None, 640) |
| Dense layer | (None, 640) | (None, 16) |
| Dropout layer | (None, 16) | (None, 16) |
| Output layer | (None, 16) | (None, 1) |
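
To make the dimensions above concrete, the following is a minimal Keras sketch of one possible five-channel architecture that reproduces each input/output shape listed in Table 2. The vocabulary size, per-channel kernel sizes, dropout rate, activation functions, and the additive-attention formulation are assumptions for illustration; they are not specified in the table.

```python
# Minimal sketch reproducing the shapes in Table 2.
# Assumed (not given in the table): VOCAB_SIZE, KERNEL_SIZES, dropout rate,
# activations, and the attention mechanism used to collapse the time axis.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000               # assumed vocabulary size
SEQ_LEN = 300                    # sequence length from the table
EMB_DIM = 300                    # embedding dimension from the table
KERNEL_SIZES = [2, 3, 4, 5, 6]   # assumed: one kernel size per channel


def build_channel(x, kernel_size):
    """One of the five parallel channels: Conv1D -> 2x BiLSTM -> BN -> attention -> Dense."""
    x = layers.Conv1D(128, kernel_size, padding="same", activation="relu")(x)  # (None, 300, 128)
    x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)        # (None, 300, 64)
    x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)        # (None, 300, 64)
    x = layers.BatchNormalization()(x)                                         # (None, 300, 64)

    # Simple additive attention (assumed formulation): score each time step,
    # softmax over time, then take the weighted sum of the BiLSTM features.
    scores = layers.Dense(1)(x)                  # (None, 300, 1)
    weights = layers.Softmax(axis=1)(scores)     # (None, 300, 1)
    context = layers.Dot(axes=1)([weights, x])   # (None, 1, 64)
    context = layers.Flatten()(context)          # (None, 64)

    return layers.Dense(128, activation="relu")(context)  # (None, 128)


inputs = layers.Input(shape=(SEQ_LEN,))                    # (None, 300)
embedded = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)   # (None, 300, 300)

channels = [build_channel(embedded, k) for k in KERNEL_SIZES]  # 5 x (None, 128)
merged = layers.Concatenate()(channels)                        # (None, 640)

x = layers.Dense(16, activation="relu")(merged)      # (None, 16)
x = layers.Dropout(0.5)(x)                           # (None, 16), rate assumed
outputs = layers.Dense(1, activation="sigmoid")(x)   # (None, 1)

model = Model(inputs, outputs)
model.summary()
```

Calling `model.summary()` prints the same shapes as Table 2; the five channels share the embedding output and differ only in their (assumed) convolution kernel sizes before their (None, 128) outputs are concatenated into the (None, 640) vector.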