Efficiently Learning an Encoder That Classifies Token Replacements and Masked Permuted Network-Based BiGRU Attention Classifier for Enhancing Sentiment Classification of Scientific Text

The exponential growth of scientific literature in digital repositories makes it increasingly difficult to interpret the complex attitudes expressed in academic texts. Traditional sentiment analysis methods often struggle with nuanced word meanings that shift with context. To address this, we propose an ELECTRA-MPNet-based BiGRU attention classifier that extracts high-level semantic features from citation sentences using the combined strengths of ELECTRA and MPNet encoder layers. These features are then fused and passed through a stacked BiGRU layer to capture long-range dependencies, as sketched below.
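To make the pipeline concrete, here is a minimal PyTorch sketch of the dual-encoder feature extractor feeding a stacked BiGRU. The checkpoint names (`google/electra-base-discriminator`, `microsoft/mpnet-base`), the GRU hidden size, and the two-layer depth are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DualEncoderBiGRU(nn.Module):
    """Illustrative sketch: ELECTRA + MPNet token features fused and fed
    to a stacked bidirectional GRU. Sizes are assumptions, not the
    paper's published configuration."""

    def __init__(self, gru_hidden=128, gru_layers=2):
        super().__init__()
        # Pretrained encoders; both base models emit 768-dim token embeddings.
        self.electra = AutoModel.from_pretrained("google/electra-base-discriminator")
        self.mpnet = AutoModel.from_pretrained("microsoft/mpnet-base")
        fused_dim = self.electra.config.hidden_size + self.mpnet.config.hidden_size
        # Stacked BiGRU over the fused token-level features.
        self.bigru = nn.GRU(
            input_size=fused_dim,
            hidden_size=gru_hidden,
            num_layers=gru_layers,
            batch_first=True,
            bidirectional=True,
        )

    def forward(self, electra_ids, electra_mask, mpnet_ids, mpnet_mask):
        # Each encoder uses its own tokenizer; we assume both inputs are
        # padded to the same sequence length so features can be concatenated.
        h_e = self.electra(input_ids=electra_ids, attention_mask=electra_mask).last_hidden_state
        h_m = self.mpnet(input_ids=mpnet_ids, attention_mask=mpnet_mask).last_hidden_state
        fused = torch.cat([h_e, h_m], dim=-1)   # (batch, seq, 1536)
        outputs, _ = self.bigru(fused)          # (batch, seq, 2 * gru_hidden)
        return outputs
```

Concatenating along the feature dimension keeps per-token alignment, so the BiGRU can model long-range dependencies over the combined representation of each citation sentence.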

A linear attention mechanism then estimates attention weights and a context vector, enabling the model to focus selectively on the most relevant information. The proposed model overcomes the inherent constraints of its individual components and handles contextual information adeptly, improving both its understanding of sequential data and its predictive accuracy. We evaluate the model on a dataset of 8,736 citation sentences extracted from scientific articles spanning multiple domains. It outperforms state-of-the-art models such as LSTM-GRU, BERT-BiLSTM, and BERT-LSTM-CNN, as well as MPNet, ELECTRA, BiGRU, and classical machine learning baselines in terms of accuracy, precision, recall, F1, and the kappa measure, further confirming its strength on scientific-text sentiment analysis tasks.
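The attention step can be sketched as a single scoring layer followed by a softmax-weighted sum, which is one common reading of "linear attention" over BiGRU outputs. The scalar-scoring layer, the three-class output, and the feature dimension (matching twice the GRU hidden size in the sketch above) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LinearAttentionClassifier(nn.Module):
    """Illustrative sketch of linear attention pooling plus a
    classification head; layer shapes are assumptions."""

    def __init__(self, feature_dim=256, num_classes=3):
        super().__init__()
        self.score = nn.Linear(feature_dim, 1)          # one scalar score per time step
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, gru_outputs, attention_mask):
        # gru_outputs: (batch, seq, feature_dim) from the stacked BiGRU.
        scores = self.score(gru_outputs).squeeze(-1)            # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, -1e9)  # ignore padded positions
        weights = torch.softmax(scores, dim=-1)                 # attention weights
        # Context vector: attention-weighted sum of time-step features.
        context = torch.einsum("bs,bsd->bd", weights, gru_outputs)
        return self.classifier(context)                         # class logits
```

Because the attention weights are normalized over the sequence, the context vector summarizes the whole sentence while emphasizing the tokens most indicative of sentiment, which is what lets the classifier focus on relevant information.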
