MEVZU N°128 · ISTANBUL
MEVZU N° TAG / VOL. 019

#architecture

0 blog · 0 news · 13 wiki

§03 · Wiki (13)
§01 · Glossary

Mixture of Experts (MoE)

An architecture in which only a subset of expert sub-networks activates for each token, combining large parameter capacity with lower inference cost.

EN: Mixture of Experts (MoE)
TR: Uzmanlar Karışımı (MoE)
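
To make the routing concrete, here is a minimal top-k MoE sketch in Python. All names (`moe_layer`, `gate_w`, the toy experts) are illustrative, not taken from any particular framework:

```python
import numpy as np

def moe_layer(x, experts, gate_w, top_k=2):
    """Minimal Mixture-of-Experts sketch: route each token to its
    top-k experts and mix their outputs by gating weight."""
    logits = x @ gate_w                       # (tokens, n_experts) router scores
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(logits[t])[-top_k:]  # indices of the top-k experts
        w = np.exp(logits[t][top])
        w /= w.sum()                          # softmax over the chosen experts only
        for weight, e in zip(w, top):
            out[t] += weight * experts[e](x[t])
    return out

# Toy usage: four "experts" that are just fixed linear maps.
d, n_experts = 8, 4
rng = np.random.default_rng(0)
experts = [lambda v, W=rng.normal(size=(d, d)): v @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
tokens = rng.normal(size=(3, d))
print(moe_layer(tokens, experts, gate_w).shape)  # (3, 8)
```

Only `top_k` of the four experts run per token, which is where the inference savings come from.
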
§02 · Glossary

Autoregressive Model

A model that generates output one token at a time, each step conditioned on the tokens produced so far.

EN: Autoregressive Model
TR: Özyinelemeli Model
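
A greedy decoding loop makes the definition concrete. Here `next_token_logits` stands in for any model's forward pass, and the toy model is purely illustrative:

```python
import numpy as np

def generate(next_token_logits, prompt, max_new_tokens=10, eos=0):
    """Greedy autoregressive decoding sketch: each step conditions
    on everything generated so far."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)   # model scores for the next token
        nxt = int(np.argmax(logits))         # greedy pick
        tokens.append(nxt)
        if nxt == eos:
            break
    return tokens

# Toy "model": always prefers (last token + 1) mod 5.
toy = lambda toks: np.eye(5)[(toks[-1] + 1) % 5]
print(generate(toy, prompt=[1], max_new_tokens=6))  # [1, 2, 3, 4, 0]
```
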
§03 · Glossary

Cross-encoder

A transformer architecture that processes the query and a candidate document jointly to score relevance.

EN: Cross-encoder
TR: Cross-encoder
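
For contrast with bi-encoders (which embed query and document separately), a cross-encoder scores each pair jointly. One widely used implementation is the `CrossEncoder` class from the `sentence-transformers` library; the checkpoint name below is one of its published MS MARCO rerankers, and the example pairs are invented:

```python
from sentence_transformers import CrossEncoder

# A cross-encoder reads query and document together in one forward pass,
# so the score reflects full token-level interaction between the two.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

pairs = [
    ("what is attention?", "Attention weights parts of the input by relevance."),
    ("what is attention?", "Istanbul straddles Europe and Asia."),
]
scores = model.predict(pairs)   # one relevance score per (query, document) pair
print(scores)                   # the on-topic pair should score higher
```
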
§04 · Glossary

Orchestrator

The component that plans and coordinates the execution of multiple agents, models, or tools.

EN: Orchestrator
TR: Orkestratör
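
A minimal sketch of the pattern, with a hypothetical `plan` function and toy subagents standing in for real planners and agents:

```python
from typing import Callable

Subagent = Callable[[str], str]

def orchestrate(goal: str,
                plan: Callable[[str], list[tuple[str, str]]],
                subagents: dict[str, Subagent]) -> list[str]:
    """Orchestrator sketch: plan subtasks, dispatch each to the
    named subagent, collect the results in order."""
    results = []
    for agent_name, subtask in plan(goal):   # planner decides who does what
        results.append(subagents[agent_name](subtask))
    return results

# Toy agents and a fixed two-step plan.
subagents = {
    "researcher": lambda task: f"notes on: {task}",
    "writer":     lambda task: f"draft for: {task}",
}
plan = lambda goal: [("researcher", goal), ("writer", goal)]
print(orchestrate("explain MoE", plan, subagents))
```
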
§05 · Glossary

Subagent

A secondary agent invoked by a parent agent to handle a specific subtask with its own prompt and tools.

EN: Subagent
TR: Alt-Ajan
§06 · Glossary

Multi-agent System

A system in which multiple AI agents collaborate, negotiate, or divide labor to accomplish a goal.

EN: Multi-agent System
TR: Çok-Ajanlı Sistem
§07 · Glossary

Encoder

The Transformer component that maps an input sequence into contextualized internal representations.

EN: Encoder
TR: Kodlayıcı (Encoder)
§08 · Glossary

Self-Attention

A mechanism where each element in a sequence attends to every other element in the same sequence.

EN: Self-Attention
TR: Öz-Dikkat
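
A NumPy sketch of single-head, unmasked scaled dot-product self-attention; the weight shapes here are illustrative:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention sketch: every position
    attends to every position in the same sequence x."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq, seq) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted sum of values

rng = np.random.default_rng(0)
seq, d = 4, 8
x = rng.normal(size=(seq, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)  # (4, 8)
```
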
§09 · Glossary

Attention

The mechanism that lets a model decide how much weight to give different parts of its input.

EN: Attention
TR: Dikkat (Attention)
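
In the scaled dot-product form used by Transformers (and by the self-attention sketch above), those weights are computed as

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension.
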
§10 · Glossary

Decoder

The Transformer component that generates the next token conditioned on what came before.

EN: Decoder
TR: Çözücü (Decoder)
§11 · Glossary

Cross-Attention

An attention mechanism where one sequence attends to a different sequence, typically connecting encoder and decoder.

EN: Cross-Attention
TR: Çapraz-Dikkat
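
The only change from self-attention is where the queries versus the keys and values come from. A NumPy sketch (shapes illustrative):

```python
import numpy as np

def cross_attention(x_dec, x_enc, Wq, Wk, Wv):
    """Cross-attention sketch: queries come from the decoder sequence,
    keys and values from the encoder sequence."""
    Q = x_dec @ Wq
    K, V = x_enc @ Wk, x_enc @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # (dec_len, enc_len)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # row-wise softmax
    return w @ V                                   # (dec_len, d)

rng = np.random.default_rng(0)
d = 8
x_enc, x_dec = rng.normal(size=(5, d)), rng.normal(size=(3, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(cross_attention(x_dec, x_enc, Wq, Wk, Wv).shape)  # (3, 8)
```
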
§12 · Glossary

Transformer

The attention-based neural network architecture that underpins virtually every modern LLM.

EN: Transformer
TR: Transformer
§13 · Glossary

Multi-Head Attention

A version of attention where multiple parallel 'heads' learn different relationships at the same time.

EN: Multi-Head Attention
TR: Çok-Başlı Dikkat
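
A NumPy sketch that splits the projections into two heads, attends in each head in parallel, concatenates, and remixes with an output projection (all shapes illustrative):

```python
import numpy as np

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads=2):
    """Multi-head sketch: split Q, K, V into heads, attend per head,
    concatenate the head outputs, then apply an output projection."""
    seq, d = x.shape
    dh = d // n_heads
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(n_heads):
        q, k, v = (m[:, h*dh:(h+1)*dh] for m in (Q, K, V))
        s = q @ k.T / np.sqrt(dh)                  # per-head affinities
        w = np.exp(s - s.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)         # per-head softmax
        heads.append(w @ v)                        # each head: (seq, dh)
    return np.concatenate(heads, axis=-1) @ Wo     # back to (seq, d)

rng = np.random.default_rng(0)
seq, d = 4, 8
x = rng.normal(size=(seq, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))
print(multi_head_attention(x, Wq, Wk, Wv, Wo).shape)  # (4, 8)
```

Each head sees a different slice of the projected representation, which is what lets the heads specialize in different relationships.
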