BEGIN:VCALENDAR
VERSION:2.0
PRODID:-// - ECPv6.0.12//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://zurich-nlp.ch
X-WR-CALDESC:Events for 
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20240227T180000
DTEND;TZID=Europe/Paris:20240227T200000
DTSTAMP:20260429T225216Z
CREATED:20240130T163904Z
LAST-MODIFIED:20240205T191826Z
UID:1447-1709056800-1709064000@zurich-nlp.ch
SUMMARY:ZurichNLP Meetup #8
DESCRIPTION:We are continuing strong in 2024 with Meetup #8 on February 27th! We’re happy to announce the following speakers: \n\nBrian DuSell\, Postdoc @ ETH Zurich\nLeandro von Werra\, Chief Loss Officer @ Hugging Face\n\nRSVP soon\, as spots are limited. The last few times we maxed out capacity. Talk details: \nSpeaker #1: Brian DuSell\nTitle: Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns\nSummary: Language contains hierarchical syntactic patterns\, but transformers do not have a mechanism for dealing with hierarchies of arbitrary depth. In this talk I will present my work on Stack Attention\, a kind of self-attention with a latent model of syntax that addresses this limitation and allows transformers to model any context-free language.\nSpeaker #2: Leandro von Werra\nTitle: BigCode: Building LLMs for Code in an Open and Responsible Way\nHope to see you there!
URL:https://zurich-nlp.ch/event/zurichnlp-meetup-8/
LOCATION:OAT ETH Zürich (14th Floor)\, Andreasstrasse 5\, Zurich\, 8050
END:VEVENT
END:VCALENDAR