To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models
Despite achieving state-of-the-art results in nearly all Natural Language
Processing applications, fine-tuning Transformer-based language models still
requires a significant amount of labeled data to work. A well-known
technique for reducing the human effort of acquiring a labeled dataset is
\textit{Active Learning} (AL): an iterative process in which only a minimal
number of samples is labeled. AL strategies require access to a quantified
confidence measure of the model predictions. A common choice is the softmax
activation function for the final layer. As the softmax function provides
misleading probabilities, this paper compares eight alternatives on seven
datasets. Our almost paradoxical finding is that most of the methods are too
good at identifying the truly most uncertain samples (outliers), and that
therefore labeling exclusively outliers results in worse performance. As a
heuristic, we propose to systematically ignore a portion of the most
uncertain samples, which yields improvements for various methods compared to
the softmax function.
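
To make the role of the softmax confidence concrete, the following is a
minimal, illustrative sketch of a single query step of uncertainty-based
Active Learning using the common least-confidence strategy. It is not the
paper's implementation: the mocked logits, the 4-class setup, and the
function names (e.g. least_confidence_query) are assumptions made only for
this example.

\begin{verbatim}
# Illustrative sketch (not the authors' method): one query round of
# uncertainty-based Active Learning with least-confidence sampling on
# softmax outputs. Logits are mocked with random numbers; in practice they
# would come from the fine-tuned Transformer model.
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # Row-wise softmax with the usual max-shift for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def least_confidence_query(logits, batch_size):
    # Select the samples whose top softmax probability is lowest, i.e. the
    # ones the model is least confident about.
    confidence = softmax(logits).max(axis=1)
    return np.argsort(confidence)[:batch_size]

# Hypothetical unlabeled pool: 1000 samples, 4 classes.
pool_logits = rng.normal(size=(1000, 4))
to_label = least_confidence_query(pool_logits, batch_size=25)
print("Indices selected for annotation:", to_label)
\end{verbatim}

In this sketch, swapping the softmax-based confidence for one of the
alternative confidence measures studied in the paper would only change the
scoring step inside the query function; the surrounding labeling loop stays
the same.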