pennylane
Hardware-agnostic quantum ML framework with automatic differentiation — use when training quantum circuits via gradients, building hybrid quantum-classical models, or needing device portability across IBM/Google/Rigetti/IonQ backends.
Train quantum-classical hybrid models with gradient descent
Trigger phrases
Phrases that activate this skill when typed to Claude Code:
“quantum machine learning”, “variational quantum circuit”, “PennyLane QNode”, “VQE with gradients”, “hybrid quantum model”
What it does
pennylane is a Claude Code skill from K-Dense AI’s scientific-agent-skills repo. It turns Claude into a PennyLane expert for quantum machine learning — covering QNode construction (the core quantum-classical interface), parameter-shift rule gradients, device-agnostic circuit execution (IBM, Google, Rigetti, IonQ, or simulator), variational quantum eigensolver (VQE), QAOA, quantum neural network layers (qml.qnn), and hybrid model integration with PyTorch and JAX.
A session produces PennyLane Python code: a QNode wrapping a parameterized quantum circuit, gradient computation via automatic differentiation, and a training loop that optimizes circuit parameters using standard gradient-based optimizers.
When to use it
Reach for it when:
- You need to train a parameterized quantum circuit using gradients and want hardware portability across vendors
- You’re building hybrid quantum-classical models where quantum layers integrate into a PyTorch or JAX neural network
- You want variational algorithms (VQE, QAOA) with gradient-based optimization rather than gradient-free methods
When not to reach for it:
- IBM-specific hardware features, runtime job management, or the Qiskit ecosystem — use qiskit
- Open quantum systems (density matrices, Lindblad evolution) — use qutip
Install
Copy the SKILL.md from K-Dense AI’s pennylane folder into .claude/skills/pennylane/ in your project. Install via pip install pennylane. Hardware backend plugins are installed separately (pennylane-qiskit, pennylane-cirq, etc.).
Trigger phrases: “quantum machine learning”, “variational quantum circuit”, “PennyLane QNode”, “VQE with gradients”.
What a session looks like
A typical session has three phases:
- Device and circuit design. Select the backend (default.qubit for simulation, a hardware plugin for real devices). Claude constructs the QNode — the decorated Python function that defines the quantum circuit — with parameterized rotation gates for the variational ansatz.
- Gradient-based optimization. Claude sets up the parameter-shift rule or adjoint differentiation for gradient computation and integrates with a PyTorch optimizer (Adam, SGD) or PennyLane’s built-in optimizers for the training loop.
- Execution and results. The circuit runs on the selected device, gradient descent optimizes the circuit parameters, and the energy landscape or classification accuracy is plotted over training steps.
Receipts
Where it works well:
- Quantum chemistry simulations with VQE — PennyLane’s automatic differentiation makes it straightforward to minimize the expectation value of a Hamiltonian with gradient descent rather than gradient-free optimization
- Quantum neural network layers embedded in PyTorch models — the qml.qnn.TorchLayer wrapper allows quantum layers to participate in standard PyTorch training loops without custom gradient code
Where it backfires:
- Training convergence is highly sensitive to the choice of ansatz and initialization — gradient landscapes for quantum circuits are often barren (exponentially flat) for random initializations
- Hardware execution is slow for long circuits with many parameters due to the number of required circuit evaluations for gradient computation via parameter-shift
Pattern that works: start with default.qubit simulation and verify the gradient computation produces non-zero values before moving to hardware — barren plateau detection on the simulator saves expensive hardware runs on circuits that won’t converge.
Source and attribution
Originally authored by K-Dense Inc. The canonical SKILL.md lives in the pennylane folder of their public scientific-agent-skills repository.
License: Apache-2.0. Install, adapt, and redistribute with attribution preserved.
This page documents the skill from a practitioner’s perspective. For the formal spec and any updates, defer to the source repo.