"Artificial general intelligence" is roughly the notion of a machine that can interpret arbitrary input and respond "correctly", that is, it can do something useful with the information (or recognize it to be erroneous), as a human would.
Artificial intelligence can be understood as an umbrella term for computers behaving in (adaptive) ways they were not explicitly programmed to behave. Machine learning is one particular subfield of (and strategy for) artificial intelligence.
"Artificial superintelligence" refines the idea of "artificial general intelligence" by imagining a machine that can consistently outperform any human at any cognitive task.
"Discriminative artificial intelligence" is a retronym referring to machine learning techniques that predate their generative counterparts. These include handwriting and facial recognition, license plate readers, anti-fraud systems, and so on.
The concept of a foundation model represents a paradigm of machine learning development in which application-specific models are derived from generic ones. The implication is that the generic models are much too large and expensive for most entities to create themselves, and thus should be licensed from those who can.
A frontier model is a foundation model that is "at the frontier" of technique and capability. Frontier models are so monumentally big and expensive that almost nobody can afford to make one. While this term, like "foundation model", was coined by academics, it has the whiff of another marketing term.
"Generative artificial intelligence", as implemented at this juncture, is essentially two discriminatives in a trenchcoat. Input (typically text, images, audio, or video) is translated into an internal representation by one side, and back out into new output by the other.
In machine learning, a hyperparameter is a value that determines the quantity, size (including the size in bytes of the individual elements), and shape of the model's constituent matrices, as well as various tuning functions. The essence of a hyperparameter is that there are far fewer of them than first-order parameters: they fix the structure and coarse-grained behaviour of the model, and they drive the cost of both training and execution. Winnowing down configurations of hyperparameters, albeit heavily moderated by the cost of training and subsequent testing, is one way AI can be used to improve itself.
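The relationship between the few hyperparameters and the many first-order parameters can be sketched in a few lines. This is a minimal illustration, assuming NumPy; the names (`n_layers`, `d_model`) are illustrative, not drawn from any particular framework.

```python
import numpy as np

# Hyperparameters: a handful of values chosen before training,
# by hand or by automated search.
n_layers = 4          # how many weight matrices the model has
d_model = 256         # the width (shape) of each matrix
learning_rate = 1e-3  # a tuning knob; also a hyperparameter, though it
                      # shapes training rather than the matrices

# First-order parameters: the matrix elements themselves, which would
# be learned during training.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((d_model, d_model)) for _ in range(n_layers)]

n_params = sum(w.size for w in weights)
print(n_params)  # 4 * 256 * 256 = 262144 parameters from two shape hyperparameters
```

Note the leverage: nudging `d_model` from 256 to 512 would quadruple the parameter count, which is why hyperparameter choices dominate the cost of training and execution.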
Machine learning refers to the use of statistical methods to program computers. It is typically what people mean by the term "artificial intelligence".
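"Statistical methods to program computers" can be made concrete with the smallest possible example: rather than writing the rule `y = 2x + 1` by hand, we recover it from data. A minimal sketch, assuming NumPy and ordinary least squares (the toy data and the rule are, of course, invented for illustration):

```python
import numpy as np

# Training data that embodies the rule y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Least squares "learns" the slope and intercept from the examples;
# nobody wrote the rule into the program.
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(slope, intercept)  # recovers 2.0 and 1.0
```

Everything under the "machine learning" umbrella, however large, is a scaled-up version of this move: parameters inferred from data rather than rules written by a programmer.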
In machine learning, a model refers primarily to the wad of state data generated through training (usually a set of very large matrices), and secondarily to the code that operates over it.
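The two halves of the definition, the wad of state data and the code that operates over it, can be shown side by side. A minimal sketch, assuming NumPy; the shapes and the two-layer structure are arbitrary choices for illustration:

```python
import numpy as np

# The "wad of state data": two matrices, as if loaded from disk after training.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 4))

# The code that operates over it: a forward pass mapping input to output.
def forward(x):
    h = np.maximum(x @ W1, 0.0)  # matrix multiply, then ReLU nonlinearity
    return h @ W2

y = forward(np.ones(8))
print(y.shape)  # (4,)
```

The asymmetry in the definition is real: the forward-pass code above is a dozen lines and generic, while in a production model the matrices run to gigabytes and are the thing actually being bought, sold, and leaked.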
In machine learning, a parameter generally refers to a single element of one of the matrices that make up the ensemble of the model. When vendors boast about the number of parameters, they are telling you how expensive their model was to train, and how expensive it is to run.
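The back-of-envelope arithmetic behind that boast is simple. A hypothetical 7-billion-parameter model (the figure is illustrative, not a claim about any particular product), stored at 16 bits per parameter:

```python
# Each parameter is one matrix element; at fp16/bf16 precision each
# occupies 2 bytes.
n_params = 7_000_000_000
bytes_per_param = 2

gib = n_params * bytes_per_param / 2**30
print(round(gib, 1))  # about 13.0 GiB just to hold the weights in memory
```

And that is merely the cost of sitting still: running the model touches every one of those bytes per token generated, and training it revises them all, repeatedly, which is what the parameter count is really advertising.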
When the AI uses AI to make better AI and then uses that omg oh no now i'm a paperclip lol