Exploring the Promises of Transformer-Based LMs for the Representation of Normative Claims in the Legal Domain
Type
journal article
Date Issued
2021-08-26
Author(s)
Abstract (En)
In this article, we explore the potential of transformer-based language models (LMs) to correctly represent normative statements in the legal domain, taking tax law as our use case. In our experiment, we use a variety of LMs as bases for both word- and sentence-based clusterers that are then evaluated on a small, expert-compiled test set consisting of real-world samples from tax law research literature that can be clearly assigned to one of four normative theories. The results of the experiment show that clusterers based on sentence-BERT embeddings deliver the most promising results. Based on this main experiment, we make first attempts at using the best-performing models in a bootstrapping loop to build classifiers that map normative claims onto one of these four normative theories.
Language
English
HSG Classification
contribution to scientific community
HSG Profile Area
LS - Business Enterprise - Law, Innovation and Risk
Refereed
No
Publisher
arXiv
Publisher place
https://arxiv.org/abs/2108.11215
Number
arXiv:2108.11215
Subject(s)
Eprints ID
264176
File(s)