NLP
Tags
seed
ml
published
07 Feb 2024
modified
19 Dec 2024
duration
1 min read
source
llms.txt
See also:
LLMs
CoT prompting
arxiv:
2201.11903
Backlinks
AGI
The proposal is that such an AGI would be able to understand or learn any intellectual task that a human being can. It would also be able to learn and improve itself, and possibly be able to do things that humans cannot do.
LLMs
Large language models, often implemented as autoregressive transformer models (GPTs and friends). Most LLM variants are decoder-only (Radford et al., 2019) and have "capabilities" to understand natural language.
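The autoregressive, decoder-only setup mentioned above can be sketched minimally: at each step, the model predicts the next token from the tokens generated so far. The bigram table below is a toy stand-in for a real transformer, purely for illustration.

```python
# Toy "model": a bigram table mapping a token to a next-token distribution.
# In a real decoder-only LLM, this would be a transformer's output logits.
BIGRAM = {
    "the": {"cat": 0.7, "dog": 0.3},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"<eos>": 1.0},
}

def generate(prompt, max_new_tokens=10):
    """Greedy autoregressive decoding: condition on all tokens so far."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        dist = BIGRAM.get(tokens[-1], {"<eos>": 1.0})
        next_tok = max(dist, key=dist.get)  # greedy: pick most likely token
        if next_tok == "<eos>":
            break
        tokens.append(next_tok)
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat"
```

Real models condition on the full context (not just the last token) and usually sample rather than decode greedily, but the loop structure is the same.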
Machine learning
Detects patterns within data and uses them to make useful predictions. Generally DL \subset ML \subset AI. Some main explorations: Transformers, large language models, NLP, CNNs, logistic regression; optimization (gradient descent, hyperparameter tuning); ensemble learning; recommender systems; reinforcement learning (Q-learning, policy gradient, Monte-Carlo tree search); generative models (GAN, VAE, autoencoder, sparse autoencoder, sparse crosscoders); supervised learning; Q-learning; low-rank adapters. Fields: mechanistic interpretability. Related: linear algebra.
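Of the optimization topics listed, gradient descent is the simplest to show concretely: repeatedly step against the gradient to minimize a function. A minimal sketch, here minimizing f(x) = (x - 3)^2:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move downhill by lr * slope
    return x

# f(x) = (x - 3)^2 has gradient 2 * (x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))  # -> 3.0
```

The same update rule, applied to millions of parameters with gradients from backpropagation, is what trains the neural models above.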
Prompt engineering
A constructive way to form communication with LLMs. As we improve the quality of prompts, we can expect better results from the models. As in linguistics, a good prompt is a good act of communication with the system.