Found 15241 skills
ScientiaCapital
Creates, cleans, and optimizes datasets for LLM fine-tuning, covering formats, synthetic data, and quality assessment.
SyntaxAsSpiral
Configures universal patterns for AI coding agents, enabling context assembly and steering across environments like Claude Code and Codex.
SyntaxAsSpiral
Optimizes AI agent context handling to overcome limitations, reduce cost and latency, and enable long-running operations in language model systems.
SyntaxAsSpiral
Provides patterns for orchestrating multiple AI agents to handle complex tasks exceeding single-agent capabilities or requiring specialized expertise.
SyntaxAsSpiral
Diagnoses and mitigates context degradation in agent systems to maintain performance during large context handling.
SyntaxAsSpiral
Applies covenant principles as design constraints for building AI agents, prompts, and multi-agent architectures.
SyntaxAsSpiral
Designs multi-agent architectures to solve complex tasks by decomposing into subtasks and specializing agents for improved quality and scalability.
SyntaxAsSpiral
Optimizes AI agent performance by compressing conversation context to reduce memory usage and token costs in long sessions.
pkarpovich
Builds multi-step LLM agents with nodes, edges, tools, and execution flows for the Continuum system, supporting parallel execution and conditional routing.
Krosebrook
Guides construction of type-safe AI agents and structured LLM applications using Pydantic, including multi-agent systems and orchestration workflows.
pkarpovich
Optimizes and designs effective prompts for Claude 4.x models, including system prompts, agent workflows, and instruction refinement.
pkarpovich
Analyzes Langfuse trace JSON files using jq for debugging AI agent behavior, token usage, and tool call investigation.
chkim-su
Provides best practice patterns for correctly configuring tools used by AI agents, ensuring seamless integration and functionality.
chkim-su
Guides implementation of LLM integrations using U-llm-sdk and claude-only-sdk patterns for AI-powered features and services.
chkim-su
Isolates context for AI query tools (LSP, search, database) via external CLI to avoid excessive token consumption in AI models.
Krosebrook
Enables perfect recall of all past conversations and projects, preventing repetition of mistakes and reinvention of solutions across sessions.
chkim-su
Compares single-skill and multi-skill subagent architectures for effective agent design in AI systems.
johnzfitch
Provides development support for the Burn framework, including tensor operations, autodiff, model configuration, and gradient management in Rust-based ML applications.
johnzfitch
Provides utilities for managing ML training workflows including training loops, optimizers, and checkpointing.
johnzfitch
Resolves ONNX model import failures including unsupported operators, opset version mismatches, and dynamic shape issues.
johnzfitch
Routes user queries related to the Burn framework (Rust-based deep learning) to appropriate domain skills, enforcing evidence-seeking behavior.
johnzfitch
Diagnoses and resolves common errors in the Burn deep learning framework, including tensor shape mismatches and training failures.
johnzfitch
Reviews Burn deep learning code for idiomatic patterns, performance, portability, and best practices. Provides refactoring suggestions and code quality insights.
johnzfitch
Assists with backend implementation, optimization, and extension for ML frameworks including Burn, Candle, and LibTorch, covering GPU acceleration and quantization.