MEVZU N° TAG / VOL. 019
#architecture
Wiki
Mixture of Experts (MoE)
An architecture in which only a small subset of expert sub-networks activates for each token, pairing large total capacity with cheap per-token inference.
- EN
- Mixture of Experts (MoE)
- TR
- Uzmanlar Karışımı (MoE)
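A minimal sketch of the top-k routing the definition describes; the router weights and toy experts here are random, hypothetical stand-ins for learned parameters:

```python
import numpy as np

def moe_layer(x, experts, router_w, k=2):
    """Route a token vector x to its top-k experts and mix their outputs.
    `experts` is a list of callables (the expert FFNs) and `router_w`
    is the router's weight matrix -- both hypothetical stand-ins."""
    logits = router_w @ x                     # one score per expert
    top_k = np.argsort(logits)[-k:]           # indices of the k best experts
    gates = np.exp(logits[top_k])
    gates /= gates.sum()                      # softmax over the chosen experts
    # Only the selected experts run, which is what keeps inference cheap
    return sum(g * experts[i](x) for g, i in zip(gates, top_k))

# Toy usage: four tiny "experts", each a random linear map
rng = np.random.default_rng(0)
d = 8
experts = [lambda v, W=rng.normal(size=(d, d)): W @ v for _ in range(4)]
router_w = rng.normal(size=(4, d))
y = moe_layer(rng.normal(size=d), experts, router_w, k=2)
```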
Autoregressive Model
A model that generates output one token at a time, each step conditioned on the tokens that came before.
- EN
- Autoregressive Model
- TR
- Özbağlanımlı Model (Otoregresif)
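A minimal sketch of greedy autoregressive decoding; `model` is a hypothetical callable mapping a token-id sequence to next-token logits:

```python
import numpy as np

def generate(model, prompt_ids, max_new=20, eos_id=0):
    """Greedy autoregressive decoding: each new token is chosen
    conditioned on everything generated so far."""
    ids = list(prompt_ids)
    for _ in range(max_new):
        next_id = int(np.argmax(model(ids)))  # most likely next token
        ids.append(next_id)
        if next_id == eos_id:                 # stop at end-of-sequence
            break
    return ids

# Toy stand-in model: always prefers the token after the last one, mod 10
toy = lambda ids: np.eye(10)[(ids[-1] + 1) % 10]
print(generate(toy, [3], max_new=5))          # [3, 4, 5, 6, 7, 8]
```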
Cross-encoder
A transformer architecture that processes the query and a candidate document jointly to score relevance.
- EN
- Cross-encoder
- TR
- Cross-encoder
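A minimal sketch using the sentence-transformers library, assuming the package is installed and the named public MS MARCO checkpoint is available:

```python
from sentence_transformers import CrossEncoder

# Query and candidate are scored jointly in one forward pass,
# unlike a bi-encoder, which embeds them separately.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
query = "what is a mixture of experts?"
docs = ["MoE activates a few expert sub-networks per token.",
        "Transformers were introduced in 2017."]
scores = model.predict([(query, d) for d in docs])  # one relevance score per pair
best = docs[scores.argmax()]
```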
Orchestrator
The component that plans and coordinates the execution of multiple agents, models, or tools.
- EN
- Orchestrator
- TR
- Orkestratör
Subagent
A secondary agent invoked by a parent agent to handle a specific subtask with its own prompt and tools.
- EN
- Subagent
- TR
- Alt-Ajan
Multi-agent System
A system in which multiple AI agents collaborate, negotiate, or divide labor to accomplish a goal.
- EN
- Multi-agent System
- TR
- Çok-Ajanlı Sistem
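The three agent terms above fit one pattern: an orchestrator decomposes a goal and delegates to subagents, and together they form a multi-agent system. A minimal sketch, with `llm` as a purely hypothetical stand-in for a model call:

```python
def llm(prompt: str) -> str:
    """Stand-in for a real model call -- purely hypothetical."""
    return f"[model output for: {prompt[:48]}]"

def subagent(task: str) -> str:
    """A subagent: invoked by the parent with its own prompt (and, in a
    real system, its own tool set) to handle one scoped subtask."""
    return llm(f"You are a focused specialist. Solve: {task}")

def orchestrator(goal: str) -> str:
    """The orchestrator plans subtasks and delegates each to a subagent;
    orchestrator plus subagents form a small multi-agent system."""
    subtasks = llm(f"List short subtasks for: {goal}").splitlines()
    results = [subagent(t) for t in subtasks if t.strip()]
    return llm(f"Combine into one answer: {results}")

print(orchestrator("write a survey of attention mechanisms"))
```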
Encoder
The Transformer component that encodes the input sequence into contextualized internal representations.
- EN
- Encoder
- TR
- Kodlayıcı (Encoder)
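A minimal sketch of one encoder block, assuming hypothetical `attn` and `ffn` callables and omitting layer normalization for brevity:

```python
def encoder_block(x, attn, ffn):
    """One simplified encoder block: self-attention mixes information
    across positions, then a position-wise feed-forward network
    transforms each position; both are wrapped in residual connections."""
    x = x + attn(x)   # every token's representation now reflects context
    x = x + ffn(x)    # per-position nonlinear transform
    return x
```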
Self-Attention
A mechanism where each element in a sequence attends to every other element in the same sequence.
- EN
- Self-Attention
- TR
- Öz-Dikkat
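A minimal NumPy sketch of scaled dot-product self-attention over a single sequence; the weight matrices are random stand-ins for learned parameters:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Self-attention over one sequence X (shape: seq_len x d_model).
    Every position attends to every other position in the same sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                        # 5 tokens, d_model=16
Wq = Wk = Wv = rng.normal(size=(16, 16))
out = self_attention(X, Wq, Wk, Wv)                 # shape (5, 16)
```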
Attention
The mechanism that lets a model decide how much weight to give different parts of its input.
- EN
- Attention
- TR
- Dikkat (Attention)
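In the standard scaled dot-product formulation, where $Q$, $K$, $V$ are the query, key, and value matrices and $d_k$ is the key dimension:

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
$$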
Decoder
The Transformer component that generates the next token conditioned on what came before.
- EN
- Decoder
- TR
- Çözücü (Decoder)
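What makes the decoder's self-attention causal is a mask that blocks each position from attending to later ones; a minimal sketch (sequence length 5 is illustrative):

```python
import numpy as np

# Position i may only see positions <= i. The mask is added to the
# attention scores before the softmax, so future positions get weight 0.
seq_len = 5
mask = np.triu(np.full((seq_len, seq_len), -np.inf), k=1)
# scores = Q @ K.T / np.sqrt(d_k) + mask
```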
Cross-Attention
An attention mechanism where one sequence attends to a different sequence, typically connecting encoder and decoder.
- EN
- Cross-Attention
- TR
- Çapraz-Dikkat
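The same arithmetic as the self-attention sketch above, except the queries come from one sequence and the keys and values from another; shapes and weights are illustrative stand-ins:

```python
import numpy as np

def cross_attention(X_dec, X_enc, Wq, Wk, Wv):
    Q = X_dec @ Wq                  # queries from the decoder sequence
    K, V = X_enc @ Wk, X_enc @ Wv   # keys/values from the encoder sequence
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V                    # each decoder position mixes encoder states

rng = np.random.default_rng(0)
dec, enc = rng.normal(size=(3, 16)), rng.normal(size=(7, 16))
W = rng.normal(size=(16, 16))
out = cross_attention(dec, enc, W, W, W)  # shape (3, 16): one row per decoder token
```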
Transformer
The attention-based neural network architecture that underpins virtually every modern LLM.
- EN
- Transformer
- TR
- Transformer
Multi-Head Attention
A version of attention where multiple parallel 'heads' learn different relationships at the same time.
- EN
- Multi-Head Attention
- TR
- Çok-Başlı Dikkat
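A minimal NumPy sketch: the model dimension is split across heads, scaled dot-product attention runs in each head in parallel, and the heads' outputs are concatenated and projected. All weights are random stand-ins for learned parameters:

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads=4):
    """Split d_model into n_heads smaller heads, attend in each head
    in parallel, then concatenate the heads and project with Wo."""
    seq, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # reshape to (heads, seq, head_dim)
    def split(M): return M.reshape(seq, n_heads, dh).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dh)   # per-head affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # per-head softmax
    out = (w @ Vh).transpose(1, 0, 2).reshape(seq, d)   # concat the heads
    return out @ Wo

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
Wq = Wk = Wv = Wo = rng.normal(size=(16, 16))
y = multi_head_attention(X, Wq, Wk, Wv, Wo)             # shape (5, 16)
```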