Search Results

Author: Iyengar, Rithika

Working Paper
LLM on a Budget: Active Knowledge Distillation for Efficient Classification of Large Text Corpora

Large Language Models (LLMs) are highly accurate in classification tasks; however, substantial computational and financial costs hinder their large-scale deployment in dynamic environments. Knowledge Distillation (KD), in which an LLM "teacher" trains a smaller, more efficient "student" model, offers a promising solution to this problem. However, the distillation process itself often remains costly for large datasets, since it requires the teacher to label a vast number of samples while incurring significant token consumption. To alleviate this challenge, in this work we explore the ...
Finance and Economics Discussion Series, Paper 2025-108
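
The abstract describes distillation in which an expensive LLM "teacher" labels samples to train a cheap "student" classifier, with the teacher queried selectively to limit token costs. Below is a minimal sketch of that general idea only, not the paper's method: the teacher_label() stub, the entropy-based query rule, and the TF-IDF/logistic-regression student are illustrative assumptions.

```python
# Minimal, illustrative sketch of active knowledge distillation for text
# classification. Assumptions (not from the paper): an uncertainty-based
# query strategy, a TF-IDF + logistic-regression student, and a
# teacher_label() stub standing in for a real LLM API call.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def teacher_label(texts):
    # Hypothetical "teacher": a real system would send these texts to an LLM
    # and parse its class predictions, paying per token.
    return [0 if "refund" in t.lower() else 1 for t in texts]

# Unlabeled corpus the student must eventually classify (toy examples).
pool = [
    "I would like a refund for my order",
    "The delivery arrived on time, thanks",
    "Refund still not processed after two weeks",
    "Great product, works as described",
    "How do I track my shipment?",
    "Requesting a refund due to damaged item",
]

vec = TfidfVectorizer()
X = vec.fit_transform(pool)

# Seed round: have the teacher label a small starting set (one per class
# here so the student can be fit at all).
labels = {i: y for i, y in zip([0, 1], teacher_label([pool[0], pool[1]]))}

student = LogisticRegression(max_iter=1000)

# Active rounds: train the student, then send only its most uncertain
# example to the teacher, keeping token consumption low.
for _ in range(2):
    idx = list(labels)
    student.fit(X[idx], [labels[i] for i in idx])
    probs = student.predict_proba(X)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    entropy[idx] = -np.inf          # never re-query labeled samples
    query = int(np.argmax(entropy))
    labels[query] = teacher_label([pool[query]])[0]

# Final fit: the cheap student now classifies the whole corpus on its own.
idx = list(labels)
student.fit(X[idx], [labels[i] for i in idx])
print(student.predict(X))
```

The point of the sketch is the budget trade-off: the teacher is called only on the seed set plus the handful of samples the student is least sure about, while the student handles bulk inference over the corpus.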
