MEVZU N°128 ISTANBUL

MEVZU N° TAG / VOL. 088

#llm

4 blog · 1 news · 50 wiki

§01

Blog

04
Full-Stack Mobile App Development with Google Stitch and Agentic Engineering
4 min read · ★ Featured

Push past the limits with Vibe Coding and Agentic Engineering. We walk through, step by step, how we built Wed Care, a veterinary mobile app backed by a Supabase database and an AI chatbot, from scratch using Google Stitch and Wordent Manager.

§02

News

01
§03

Wiki

50
§01Glossary

MLLM — Multimodal LLM

A large language model that also processes modalities such as images, audio, or video alongside text.

EN
MLLM (Multimodal LLM)
TR
MLLM — Çok-Modlu LLM
§02Glossary

Foundation Model

A large-scale AI model pretrained on broad data that can be adapted to many downstream tasks.

EN
Foundation Model
TR
Temel Model (Foundation Model)
§03Glossary

Frontier Model

The most capable AI models of their generation, often advancing the capability frontier while introducing novel risk profiles.

EN
Frontier Model
TR
Sınır Modeli
§04Glossary

Scaling Laws

Empirical relationships describing how model performance changes with parameters, data, and compute.

EN
Scaling Laws
TR
Ölçeklendirme Yasaları
§05Glossary

Open Weight

A release model where only the trained weights are published, not the training code or dataset.

EN
Open Weight
TR
Açık Ağırlıklı
§06Glossary

Open Source LLM

A large language model whose weights, code, or training material are publicly released.

EN
Open Source LLM
TR
Açık Kaynaklı LLM
§07Glossary

Emergent Abilities

Capabilities that appear in models only above a certain scale and are absent in smaller variants.

EN
Emergent Abilities
TR
Ortaya Çıkan Yetenekler
§08Glossary

Large Language Model (LLM)

A large neural network trained on massive text data to understand and generate language.

EN
Large Language Model (LLM)
TR
Büyük Dil Modeli (LLM)
§09Glossary

QLoRA

A LoRA variant combined with quantization that makes it possible to fine-tune a 65B-parameter model on a single 48 GB GPU.

EN
QLoRA
TR
QLoRA
§10Glossary

Mixture of Experts (MoE)

An architecture where only a subset of expert sub-networks activates per token, combining huge capacity with cheaper inference.

EN
Mixture of Experts (MoE)
TR
Uzmanlar Karışımı (MoE)
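The routing idea can be sketched in a few lines of NumPy. The gating matrix, the expert functions, and `top_k` below are illustrative stand-ins, not any specific model's implementation; real MoE layers route each token inside a Transformer block.

```python
import numpy as np

def moe_forward(x, gate_W, experts, top_k=2):
    """Sparse MoE sketch: route x to its top_k experts and mix their
    outputs by the renormalized gate scores."""
    scores = x @ gate_W                        # one routing logit per expert
    chosen = np.argsort(scores)[-top_k:]       # indices of the top_k experts
    g = np.exp(scores[chosen] - scores[chosen].max())
    g /= g.sum()                               # softmax over the chosen experts only
    return sum(w * experts[i](x) for i, w in zip(chosen, g))
```

Because only `top_k` of the experts run per input, compute per token stays roughly constant while total parameter count grows with the number of experts.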
§11Glossary

Sampling

The general term for how a model picks the next token from its probability distribution.

EN
Sampling
TR
Örnekleme
§12Glossary

Autoregressive Model

A model type that generates the next token step-by-step, conditioned on previous tokens.

EN
Autoregressive Model
TR
Otoregresif Model
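The generation loop itself is simple; in this toy Python sketch, `next_token_fn` stands in for a real model plus a sampling rule:

```python
def generate(next_token_fn, prompt, max_new_tokens):
    """Autoregressive loop: each new token is predicted from all previous ones."""
    seq = list(prompt)
    for _ in range(max_new_tokens):
        seq.append(next_token_fn(seq))   # condition on everything generated so far
    return seq

# Toy "model" that just counts upward:
print(generate(lambda seq: seq[-1] + 1, [0], 3))  # [0, 1, 2, 3]
```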
§13Glossary

RLHF — Reinforcement Learning from Human Feedback

An alignment technique that trains a reward model from human preferences and then optimises the LLM against it.

EN
RLHF (Reinforcement Learning from Human Feedback)
TR
RLHF — İnsan Geri Bildirimiyle Pekiştirmeli Öğrenme
§14Glossary

Tree-of-Thought (ToT)

A reasoning approach that explores multiple branches in parallel instead of a single chain, then picks the best path.

EN
Tree-of-Thought (ToT)
TR
Düşünce Ağacı
§15Glossary

LoRA (Low-Rank Adaptation)

A fine-tuning technique that trains only small low-rank matrices instead of every weight, dramatically cutting memory.

EN
LoRA (Low-Rank Adaptation)
TR
LoRA (Düşük-Mertebeli Adaptasyon)
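The core trick fits in a few lines of NumPy. Dimensions and names here are illustrative; real implementations typically apply the low-rank update to the attention projection matrices.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """y = x W + alpha * x (A B): W stays frozen, only A and B (rank r) train."""
    return x @ W + alpha * (x @ A) @ B

rng = np.random.default_rng(0)
d, r = 8, 2                              # r << d is what saves the memory
W = rng.normal(size=(d, d))              # frozen pretrained weight
A = rng.normal(size=(d, r)) * 0.01       # trainable down-projection
B = np.zeros((r, d))                     # trainable up-projection, zero-init
x = rng.normal(size=(1, d))              # with B = 0 the adapter starts as a no-op
```

With B zero-initialized, the adapted layer exactly matches the original at the start of fine-tuning, and A and B together hold 32 parameters versus 64 in W; the gap grows quadratically with d.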
§16Glossary

DPO — Direct Preference Optimization

An RLHF alternative that directly optimises a model on preference data, skipping the explicit RL loop.

EN
DPO (Direct Preference Optimization)
TR
DPO — Doğrudan Tercih Optimizasyonu
§17Glossary

Reasoning

A model's capacity to work through a problem in multiple steps — now a central axis of LLM competition.

EN
Reasoning
TR
Akıl Yürütme (Reasoning)
§18Glossary

Pre-training

The initial training phase where a model learns general language ability from trillions of tokens of generic data.

EN
Pre-training
TR
Ön Eğitim
§19Glossary

Inference

The process where a trained model takes input and produces output.

EN
Inference
TR
Çıkarım (Inference)
§20Glossary

Masked Language Modeling

A training objective where the model learns to predict tokens that have been masked out of a sentence.

EN
Masked Language Modeling
TR
Maskeli Dil Modelleme
§21Glossary

Speculative Decoding

An inference speedup where a small draft model proposes multiple tokens that the big model then verifies in parallel.

EN
Speculative Decoding
TR
Spekülatif Çözme
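A greedy, deliberately simplified Python sketch of the idea: both "models" are plain functions, and the target checks draft tokens one by one. Real speculative decoding verifies the whole draft in a single parallel forward pass and accepts tokens probabilistically so the target model's output distribution is exactly preserved.

```python
def speculative_decode(draft_next, target_next, prompt, n_draft, max_len):
    """Greedy variant: the draft proposes n_draft tokens; the target keeps
    the longest agreeing prefix, then emits its own token at the first
    disagreement."""
    seq = list(prompt)
    while len(seq) < max_len:
        proposal, ctx = [], list(seq)
        for _ in range(n_draft):          # cheap draft model runs ahead
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        for t in proposal:                # target verifies each draft token
            if len(seq) >= max_len:
                break
            correct = target_next(seq)
            if t == correct:
                seq.append(t)             # accept the draft token
            else:
                seq.append(correct)       # reject: fall back to the target's token
                break
    return seq
```

A useful property even of this greedy version: the output always equals pure greedy decoding with the target model, no matter how bad the draft is; a bad draft only costs speed.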
§22Glossary

Knowledge Distillation

Training a smaller 'student' model to mimic the behaviour of a larger 'teacher' model.

EN
Knowledge Distillation
TR
Bilgi Damıtma
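The classic soft-target loss in NumPy (function names are ours): the temperature T softens both distributions, and the T² factor follows Hinton et al.'s original formulation, compensating for gradients that shrink as 1/T².

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                       # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)     # the teacher's 'soft targets'
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T
```

In practice this term is mixed with the ordinary cross-entropy on the hard labels.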
§23Glossary

Quantization

Representing model weights with lower-precision numbers to save memory and gain speed.

EN
Quantization
TR
Niceleme
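A symmetric per-tensor int8 scheme in NumPy, as a minimal illustration; production schemes are usually per-channel or per-group and often go down to 4 bits.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: w is approximated as scale * q."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale
```

The int8 codes take a quarter of the memory of float32 weights, and the worst-case rounding error per weight is half of one scale step.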
§24Glossary

Beam Search

A decoding algorithm that keeps the K most-likely candidate sequences alive in parallel during generation.

EN
Beam Search
TR
Işın Araması (Beam Search)
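A toy Python version: `step_fn` stands in for a model's next-token distribution, and real decoders additionally handle end-of-sequence tokens and length normalization.

```python
import math

def beam_search(step_fn, vocab, beam_width, length):
    """Keep the beam_width highest log-prob partial sequences at each step.

    step_fn(seq) -> {token: prob} is a toy stand-in for a model's softmax."""
    beams = [([], 0.0)]                          # (tokens so far, total log-prob)
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            probs = step_fn(seq)
            for tok in vocab:
                p = probs.get(tok, 1e-12)        # tiny floor avoids log(0)
                candidates.append((seq + [tok], score + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]          # prune everything else
    return beams
```

With beam_width=1 this reduces to greedy decoding; wider beams can recover sequences whose first token looked locally worse.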
§25Glossary

RLAIF — RL from AI Feedback

An alignment approach that uses another LLM, instead of human labellers, as the source of preference signals.

EN
RLAIF (RL from AI Feedback)
TR
RLAIF — AI Geri Bildirimiyle Pekiştirmeli Öğrenme
§26Glossary

Post-training

The stage after pre-training that turns a raw model into a helpful, safe, instruction-following assistant.

EN
Post-training
TR
Eğitim Sonrası (Post-training)
§27Glossary

Training

The process by which a model's weights are updated to learn patterns from data.

EN
Training
TR
Eğitim (Training)
§28Glossary

Fine-tuning

Adapting a pre-trained model to a specific task using smaller, targeted data.

EN
Fine-tuning
TR
İnce Ayar (Fine-tuning)
§29Glossary

Chain-of-Thought (CoT)

Prompting the model to reason step-by-step before producing a final answer.

EN
Chain-of-Thought (CoT)
TR
Düşünce Zinciri
§30Glossary

Pruning

Removing weights with negligible impact to shrink a model and speed it up.

EN
Pruning
TR
Budama (Pruning)
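Global magnitude pruning, one of the simplest variants, in NumPy; structured pruning schemes instead remove whole neurons, heads, or layers.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    flat = np.abs(w).ravel()
    k = int(len(flat) * sparsity)      # number of weights to drop
    if k == 0:
        return w.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)
```

The zeroed weights only save compute if the hardware or kernel can exploit the resulting sparsity pattern.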
§31Glossary

Confabulation

A more clinically accurate term for LLM 'hallucination' — confidently filling gaps with plausible-sounding fiction.

EN
Confabulation
TR
Konfabülasyon
§32Glossary

Top-P (Nucleus) Sampling

A sampling method that draws from the smallest set of candidates whose cumulative probability exceeds P.

EN
Top-P (Nucleus) Sampling
TR
Top-P (Nucleus) Örnekleme
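A minimal NumPy sketch of nucleus sampling; the function name is ours, not from any particular library.

```python
import numpy as np

def top_p_sample(probs, p, rng=None):
    """Sample from the smallest set of tokens whose cumulative
    probability reaches p (the 'nucleus')."""
    rng = rng or np.random.default_rng(0)
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]             # most likely first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1   # smallest prefix covering p
    nucleus = order[:cutoff]
    q = probs[nucleus] / probs[nucleus].sum()   # renormalize inside the nucleus
    return int(rng.choice(nucleus, p=q))
```

Unlike Top-K, the candidate set adapts to the distribution: a confident model may sample from one or two tokens, an uncertain one from many.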
§33Glossary

Temperature

The sampling parameter that controls how 'creative' or 'deterministic' a model's output is.

EN
Temperature
TR
Sıcaklık (Temperature)
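Mechanically, temperature is just a divisor applied to the logits before the softmax, as this NumPy sketch shows:

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by T before the softmax: T < 1 sharpens the
    distribution, T > 1 flattens it toward uniform."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

As T approaches 0 this converges to greedy argmax decoding; at very high T every token becomes nearly equally likely.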
§34Glossary

Tokenization

The process of converting raw text into a sequence of model-readable tokens.

EN
Tokenization
TR
Tokenleştirme
§35Glossary

Transformer

The attention-based neural network architecture that underpins virtually every modern LLM.

EN
Transformer
TR
Transformer
§36Glossary

Encoder

The Transformer component that turns input into a meaningful internal representation.

EN
Encoder
TR
Kodlayıcı (Encoder)
§37Glossary

Hallucination

When an LLM produces fluent, confident-sounding output that simply isn't true.

EN
Hallucination
TR
Halüsinasyon
§38Glossary

Context Length

The total token count consumed in a single model call, used against the model's context-window limit.

EN
Context Length
TR
Bağlam Uzunluğu
§39Glossary

Cross-Attention

An attention mechanism where one sequence attends to a different sequence, typically connecting encoder and decoder.

EN
Cross-Attention
TR
Çapraz-Dikkat
§40Glossary

Decoder

The Transformer component that generates the next token conditioned on what came before.

EN
Decoder
TR
Çözücü (Decoder)
§41Glossary

Attention

The mechanism that lets a model decide how much weight to give different parts of its input.

EN
Attention
TR
Dikkat (Attention)
§42Glossary

Context Window

The maximum number of tokens a language model can process in a single forward pass.

EN
Context Window
TR
Bağlam Penceresi
§43Glossary

Self-Attention

A mechanism where each element in a sequence attends to every other element in the same sequence.

EN
Self-Attention
TR
Öz-Dikkat
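Single-head scaled dot-product self-attention in NumPy, with no masking or batching; real Transformer layers run many such heads in parallel.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Each row of X (one token) attends to every row of X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # token-to-token affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # each row: a distribution over tokens
    return w @ V                              # mix the value vectors accordingly
```

The division by the square root of the head dimension keeps the dot products from growing with dimensionality and saturating the softmax.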
§44Glossary

Long Context

The ability of recent LLMs to process hundreds of thousands, sometimes millions, of tokens in a single context.

EN
Long Context
TR
Uzun Bağlam
§45Glossary

Multi-Head Attention

A version of attention where multiple parallel 'heads' learn different relationships at the same time.

EN
Multi-head Attention
TR
Çok-Başlı Dikkat
§46Glossary

Embedding

A way to represent a token or piece of text as a dense numerical vector that encodes its meaning.

EN
Embedding
TR
Gömme (Embedding)
§47Glossary

Top-K Sampling

A sampling strategy that picks the next token from only the K most likely candidates.

EN
Top-K Sampling
TR
Top-K Örnekleme
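A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def top_k_sample(logits, k, rng=None):
    """Restrict sampling to the k highest-scoring tokens."""
    rng = rng or np.random.default_rng(0)
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]         # indices of the k best candidates
    p = np.exp(logits[top] - logits[top].max())
    p /= p.sum()                          # softmax over the survivors only
    return int(rng.choice(top, p=p))
```

The fixed cutoff k is Top-K's weakness relative to Top-P: it keeps k candidates even when the model is very confident or very uncertain.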
§48Glossary

Token

The smallest unit a language model processes — a word fragment, character, or symbol.

EN
Token
TR
Token
§49Models

Claude Sonnet 4.5

Anthropic's mid-tier model, released in late 2025 and widely regarded as one of the strongest models for coding and long-form analysis.

EN
Claude Sonnet 4.5
TR
Claude Sonnet 4.5
§50Models

Z.AI GLM 4.6

A large language model family from the China-based company Z.AI (Zhipu), also released with open weights.

EN
Z.AI GLM 4.6
TR
Z.AI GLM 4.6