ENS-PSL
45 rue d'Ulm
75005 Paris
France
Language models (such as BERT, GPT-3, ChatGPT... and soon GPT-4) have deeply changed the landscape of research in natural language processing in recent years. These models have enabled previously unattained results on many tasks and in many languages. At the same time, their internal mechanisms remain rather opaque and are the subject of intense research (Bertology). This situation raises many questions.
Can we say that these models 'understand' language? And if so, in what way? To what extent? What are their uses and benefits for research outside NLP? For creative work? On the practical side, how can we work with them and/or integrate them into our research, given the computing power required to train them? Have we become dependent on the major (often private) players in the field? What are the limits of these models, and what are their potential dangers?
We will probably not have all the answers to these questions on January 11, but this workshop will at least be an opportunity to reflect on these models together with various actors in the field, both private and public.
(Presentations will be in English.)
Registration: free entry, but please indicate your name here
(Remote attendance should be possible; the link will be sent to participants who have indicated their email address in the file above.)