Description:
- RAG pipelines leveraging LLMs
- Prompt engineering (system/tool prompts, function calling, versioning with evals)
- Evaluation & observability (ground-truth setup, confusion metrics, LLM-as-judge with human review, cost & latency monitoring)
- Retrieval strategies & prompt patterns (context management, hallucination mitigation)
- LangChain/LlamaIndex (or equivalent) proficiency
- Cloud LLM providers (Azure OpenAI, AWS Bedrock, Vertex AI)
- Workflow orchestration (Airflow, Dagster)
- Security & privacy (PII ha
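The core RAG loop named in the skills above (retrieve relevant context, then ground the prompt in it to mitigate hallucination) can be sketched minimally. This is an illustrative toy, not a production pipeline: the bag-of-words `embed` stands in for a real embedding model, and the corpus, function names, and prompt wording are all made up for the example.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding": token counts stand in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Ground the model in retrieved context and instruct it not to answer
    # beyond that context -- a basic hallucination-mitigation pattern.
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "Airflow schedules DAGs of tasks.",
    "Azure OpenAI hosts GPT models behind an API.",
    "LangChain provides chains and retrievers for LLM apps.",
]
query = "What does LangChain provide?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In a real pipeline the `prompt` would be sent to an LLM (e.g. via an Azure OpenAI or Bedrock client) and the retrieval step would use a vector store; frameworks such as LangChain or LlamaIndex package these pieces.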
Feb 25, 2026, from dice.com