
Large Language Models: Text Classification for NLP using BERT / with Jonathan Fernandes




Author: Fernandes, Jonathan
Title: Large Language Models: Text Classification for NLP using BERT / with Jonathan Fernandes
Publication: Carpinteria, CA : linkedin.com, 2022
Physical description: 1 online resource
Genre/form: Instructional films.
Educational films.
Summary/abstract: Learn about transformers, the go-to architecture for NLP and computer vision tasks.
Transformers are taking the natural language processing (NLP) world by storm. In this course, instructor Jonathan Fernandes teaches you all about this go-to architecture for NLP and computer vision tasks, a must-have skill in your artificial intelligence toolkit. Jonathan uses a hands-on approach to show you the basics of working with transformers in NLP and production. He goes over BERT model sizes, bias in BERT, and how BERT was trained. Jonathan explores transfer learning, shows you how to use the BERT model and tokenization, and covers text classification. After thoroughly explaining the transformer model architecture, he finishes up with some additional training runs.
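
The topics named in the summary (loading BERT, tokenization, text classification) can be illustrated with a minimal sketch using the Hugging Face Transformers library. This example is not taken from the course; the checkpoint name "bert-base-uncased" and the two-label classification head are assumptions made purely for illustration.

    # Minimal sketch, assuming the Hugging Face Transformers library:
    # load a BERT checkpoint, tokenize a sentence, and score it with a
    # (randomly initialized) sequence-classification head.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "bert-base-uncased"  # assumed checkpoint; the course may use another
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # num_labels=2 attaches an untrained binary classification head on top of BERT
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Tokenize the input text into input IDs and an attention mask
    inputs = tokenizer("Transformers are taking the NLP world by storm.",
                       return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # raw scores, one per label

    predicted_class = logits.argmax(dim=-1).item()
    print(f"Predicted class id: {predicted_class}")

In practice the classification head would be fine-tuned on a labeled dataset before its predictions are meaningful; the sketch only shows the tokenization and forward-pass workflow.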
Other variant titles: Large Language Models
Transformers
Authorized title: Large Language Models: Text Classification for NLP using BERT
Format: Video recordings
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910867306703321
Find it here: Univ. Federico II
OPAC: Check availability here