
Pipelines & Hooks

The core extensibility model in Phoenix


Pipeline Overview


Core Concept

Pipelines are the single most important architectural concept in Phoenix. They are the primary unit of work, the primary extension point, and the mechanism through which virtually all application logic flows. Understanding pipelines is a prerequisite to understanding everything else.

Pipelines provide a clean, composable model for extending and replacing behaviors without conflicts.

Every pipeline defines strict input and output types. A pipeline takes a well-defined input, processes it through a default implementation and any registered hooks, and produces a well-defined output. This type safety ensures that all participants in a pipeline chain are working with compatible data.

Almost all controllers and API endpoints in Phoenix delegate their work to a specific pipeline. When a request arrives, the controller extracts the relevant input, calls the appropriate pipeline, and returns the result. This means that to change the behavior of any endpoint, you simply hook into its pipeline rather than modifying controller code.

Pipelines can call other pipelines, and this pattern is actively encouraged. By composing pipelines, you build extensible chains of behavior where each step can be independently hooked, replaced, or augmented by any plugin in the system.

Pipeline Execution Flow

Input → Pre Hooks → Default Logic → Hook 1 → Hook 2 → … → Hook N → Output

Each hook receives the original input and the previous hook's result, producing a new result passed to the next hook.


Serial Pipelines

Serial pipelines are the default and most common pipeline type. In a serial pipeline, the default logic runs first, and then each registered hook runs one after another in sequence. Each hook receives the original input along with the result from the previous step, allowing it to transform, augment, or completely replace the result before passing it along.
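The chaining rule can be sketched independently of the Phoenix API. The following is an illustrative, self-contained model (all names here are made up for the sketch — the real framework resolves hooks via registration, not a plain list): the default logic runs first, then each hook folds over the previous result.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of serial-pipeline semantics (NOT the Phoenix API):
// the default logic runs first, then each hook receives the original input
// plus the previous step's result and returns the next result.
public static class SerialChainDemo
{
    public delegate string Hook(int input, string previousResult);

    public static string Execute(int input, Func<int, string> defaultLogic, List<Hook> hooks)
    {
        var result = defaultLogic(input);
        foreach (var hook in hooks)
            result = hook(input, result);   // each hook's output feeds the next hook
        return result;
    }

    public static void Main()
    {
        var hooks = new List<Hook>
        {
            (input, prev) => prev + " +A",  // Plugin A's hook runs first
            (input, prev) => prev + " +B",  // Plugin B receives A's result
        };
        Console.WriteLine(Execute(1, i => $"Default({i})", hooks));
        // → Default(1) +A +B
    }
}
```

The fold makes the key property visible: hook order matters, and any hook can replace the accumulated result wholesale by ignoring previousResult.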

To create a serial pipeline, derive your class from SerialPipeline<TSelf, TInput, TOutput>. The three type parameters are:

  • TSelf — The pipeline class itself (enables the static generic pattern)
  • TInput — The type of data the pipeline accepts as input
  • TOutput — The type of data the pipeline returns as its result

Creating a Serial Pipeline

Override ExecuteDefaultAsync to provide the pipeline's default behavior. This is the logic that runs when no default hook has replaced it.

C# — Defining a Serial Pipeline
public class MyCustomPipeline : SerialPipeline<MyCustomPipeline, int, SomeModel>
{
    public override ValueTask<SomeModel> ExecuteDefaultAsync(
        int input, IPipelineContext context, CancellationToken token = default)
    {
        // Your default logic goes here.
        // This runs first, before any hooks.
        var result = new SomeModel { Id = input, Name = "Default" };
        return new ValueTask<SomeModel>(result);
    }
}

Executing a Pipeline

Pipelines expose static methods for execution. You do not need to instantiate the pipeline class yourself — the framework handles resolution and hook ordering.

C# — Executing a Pipeline
// Standard execution — throws on failure
var result = await MyCustomPipeline.ExecuteAsync(1, context);

// Safe execution — returns success flag instead of throwing
var (IsSuccess, Result) = await MyCustomPipeline.ExecuteSafelyAsync(1, context);

When to use ExecuteSafelyAsync

Use ExecuteSafelyAsync when a pipeline failure is an expected scenario and you want to handle it gracefully without propagating an exception. The returned tuple gives you an IsSuccess boolean and the Result value, making conditional handling straightforward.
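The safe-execute pattern itself is easy to model outside the framework. Below is a hedged, self-contained sketch of the mechanic (SafeExecuteDemo and its helper are illustrative, not Phoenix types): wrap a throwing operation and surface an (IsSuccess, Result) tuple instead of an exception.

```csharp
using System;
using System.Threading.Tasks;

// Illustrative sketch of the ExecuteSafelyAsync mechanic (NOT the Phoenix API):
// catch failures and report them through the tuple instead of propagating.
public static class SafeExecuteDemo
{
    public static async Task<(bool IsSuccess, string Result)> ExecuteSafelyAsync(
        Func<Task<string>> operation)
    {
        try { return (true, await operation()); }
        catch (Exception) { return (false, null); }
    }

    public static async Task Main()
    {
        // Success path: tuple carries the result.
        var (ok, result) = await ExecuteSafelyAsync(() => Task.FromResult("loaded"));
        Console.WriteLine(ok ? result : "fallback");   // → loaded

        // Failure path: no exception escapes; handle it conditionally.
        (ok, result) = await ExecuteSafelyAsync(
            () => throw new InvalidOperationException("boom"));
        Console.WriteLine(ok ? result : "fallback");   // → fallback
    }
}
```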


Hooks (Declarative)

Hooks are the most common way to customize pipeline behavior. A hook is a class that runs after the pipeline's default logic (or after the previous hook in the chain). Each hook receives two key pieces of data:

  1. The original input that was passed to the pipeline.
  2. The previous result — either the output from the default logic or the output from the hook that ran before this one.

This design means hooks form a chain. Each hook can inspect the previous result, modify it, enrich it, or replace it entirely before passing it along. Multiple plugins can each register their own hook on the same pipeline, and they will all execute in order without conflicting.

C# — Declarative Hook
public class MyCustomHook : MyCustomPipeline.Hook
{
    public override ValueTask<SomeModel> ExecuteAsync(
        int input,
        SomeModel previousResult,
        IPipelineContext context,
        CancellationToken token = default)
    {
        // Modify or replace the previous result
        previousResult.Name = "Modified by MyCustomHook";
        return new ValueTask<SomeModel>(previousResult);
    }
}

The declarative approach (deriving from Pipeline.Hook) is the recommended pattern. Because hooks are standalone classes, they are easy to find by searching the codebase, easy to unit test in isolation, and easy to understand when reading the code.

Hook ordering follows plugin registration order. If Plugin A registers a hook before Plugin B, Plugin A's hook will run first and Plugin B's hook will receive Plugin A's result as its previousResult.

Hook Chaining

Default Logic → result → Plugin A Hook → result → Plugin B Hook → result → Final Output

Default Hooks

A Default Hook completely replaces the pipeline's default implementation. Instead of augmenting or modifying the result after the default logic runs, a Default Hook becomes the new default logic. The original ExecuteDefaultAsync is bypassed entirely.

This is a powerful mechanism for scenarios where the built-in behavior is not suitable and you need to provide an entirely different implementation. For example, if the core platform calculates tax using a simple flat-rate model, a tax plugin could register a Default Hook to replace that with integration to a third-party tax service.

C# — Default Hook
public class MyReplacementLogic : MyCustomPipeline.DefaultHook
{
    public override ValueTask<SomeModel> ExecuteAsync(
        int input,
        IPipelineContext context,
        CancellationToken token = default)
    {
        // This REPLACES the original ExecuteDefaultAsync entirely.
        // The pipeline's built-in logic will NOT run.
        var result = new SomeModel
        {
            Id = input,
            Name = "Completely replaced by plugin"
        };
        return new ValueTask<SomeModel>(result);
    }
}

When to use Default Hooks vs Regular Hooks

  • Regular Hook: You want to modify, enrich, or extend the result after the default logic runs. The default logic is still valuable.
  • Default Hook: You want to completely replace the default logic with your own implementation. The built-in behavior is not needed at all.

Note: Only one Default Hook can be active per pipeline. If multiple plugins register Default Hooks on the same pipeline, the last one registered wins.


Hooks (Imperative)

Instead of creating a standalone hook class, you can register hooks imperatively inside your plugin's OnStartup method. The framework provides two methods for this:

  • AppendHook() — Adds a hook to the end of the hook chain (runs after all other hooks)
  • ReplaceDefaultHook() — Replaces the default implementation (equivalent to a DefaultHook class)

C# — Imperative Hook Registration
public class MyPlugin : PhoenixPlugin
{
    public override void OnStartup(IPluginContext context)
    {
        // Append a hook using a lambda
        MyCustomPipeline.AppendHook(
            async (input, previousResult, ctx, token) =>
            {
                previousResult.Name += " (enhanced by plugin)";
                return previousResult;
            });

        // Replace the default implementation
        MyCustomPipeline.ReplaceDefaultHook(
            async (input, ctx, token) =>
            {
                return new SomeModel
                {
                    Id = input,
                    Name = "Fully replaced default"
                };
            });
    }
}

Class-Based Hooks Are Preferred

While imperative hooks work perfectly well, class-based (declarative) hooks are preferred in most scenarios. The reasons are practical:

  • Class-based hooks are easier to search for in the codebase (search for class name or "Pipeline.Hook")
  • They are easier to unit test because they have a well-defined class contract
  • They are self-documenting — the class name describes what the hook does

Use imperative hooks for quick prototyping or truly trivial one-liner modifications.


Pre Hooks

Pre Hooks run before the pipeline's default logic executes. They are a gate that can inspect the input and decide one of three things:

  1. Halt — Stop the pipeline entirely and return a custom result. The default logic and all post-hooks are skipped.
  2. Proceed — Allow the pipeline to continue as normal with the original, unchanged input.
  3. ProceedWithInput — Allow the pipeline to continue, but with a modified input value.

Pre Hooks are ideal for validation, authorization checks, input sanitization, or short-circuiting when a cached or pre-computed result is available.

C# — Pre Hook
public class ValidateInputPreHook : MyCustomPipeline.PreHook
{
    public override ValueTask<PreHookResult<int, SomeModel>> ExecuteAsync(
        int input,
        IPipelineContext context,
        CancellationToken token = default)
    {
        // Option 1: Halt — stop pipeline, return custom result
        if (input < 0)
        {
            return Halt(new SomeModel { Id = -1, Name = "Invalid input" });
        }

        // Option 2: ProceedWithInput — continue with modified input
        if (input == 0)
        {
            return ProceedWithInput(1); // default to 1 instead of 0
        }

        // Option 3: Proceed — continue as normal, no changes
        return Proceed();
    }
}

Pre-Hook Decision Tree

  • Halt — return a custom result; the pipeline is skipped entirely.
  • Proceed — execute the pipeline as normal; no changes to the input.
  • ProceedWithInput — execute the pipeline with the modified input.


Input Alterations

Input Alterations are a specialized type of Pre Hook that focus specifically on transforming the pipeline's input before it reaches the default logic. Unlike a full Pre Hook, an Input Alteration cannot halt the pipeline — it can only modify the input and let the pipeline continue.

This makes Input Alterations ideal for scenarios like pre-processing, data enrichment, or performing database lookups to supplement the input data before the main pipeline logic runs.

C# — Input Alteration
public class EnrichOrderInput : DoSomethingPipeline.InputAlteration
{
    public override ValueTask<OrderInput> ExecuteAsync(
        OrderInput input,
        IPipelineContext context,
        CancellationToken token = default)
    {
        // Perform a lookup or add supplemental data
        // before the pipeline's default logic runs
        input.TaxRate = GetCurrentTaxRate(input.Region);
        input.DiscountCode = NormalizeDiscountCode(input.DiscountCode);

        return new ValueTask<OrderInput>(input);
    }
}

Input Alteration vs Pre Hook

An Input Alteration is simpler than a Pre Hook because it can only modify the input. If you need the ability to halt the pipeline or return a custom result, use a Pre Hook instead. Input Alterations are best when you always want the pipeline to run — you just want to ensure the input is complete and correct first.


Parallel Pipelines

In a Parallel Pipeline, all hooks run concurrently rather than sequentially. Each hook operates independently, and the pipeline returns a collection of all results rather than a single chained result.

The classic use case is shipping rate calculation: when a customer views shipping options, you need to query UPS, FedEx, USPS, and possibly other carriers. There is no reason to wait for UPS to respond before asking FedEx — all queries can run in parallel, and the results are collected into a list of available shipping rates.

C# — Parallel Pipeline
public class GetShippingRatesPipeline
    : ParallelPipeline<GetShippingRatesPipeline, ShippingRequest, ShippingRate>
{
    public override ValueTask<ShippingRate> ExecuteDefaultAsync(
        ShippingRequest input, IPipelineContext context,
        CancellationToken token = default)
    {
        // Default/fallback rate (e.g., flat rate shipping)
        return new ValueTask<ShippingRate>(
            new ShippingRate { Carrier = "Standard", Cost = 9.99m });
    }
}

// Execution returns a collection of all results
IEnumerable<ShippingRate> rates = await
    GetShippingRatesPipeline.ExecuteAsync(request, context);
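The fan-out/gather mechanic behind parallel pipelines can be sketched on its own with Task.WhenAll. This is an illustrative model, not the Phoenix implementation — ParallelDemo and the carrier lambdas are invented for the sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Illustrative sketch of parallel-pipeline semantics (NOT the Phoenix API):
// every hook is started without waiting for the previous one, and the
// results are gathered into a single collection.
public static class ParallelDemo
{
    public static async Task<List<string>> ExecuteAsync(
        string request, IEnumerable<Func<string, Task<string>>> hooks)
    {
        // Start every hook concurrently...
        var tasks = hooks.Select(h => h(request)).ToArray();
        // ...then await them all at once instead of one by one.
        var results = await Task.WhenAll(tasks);
        return results.ToList();
    }

    public static async Task Main()
    {
        var carriers = new Func<string, Task<string>>[]
        {
            async r => { await Task.Delay(50); return "UPS: 12.50"; },
            async r => { await Task.Delay(50); return "FedEx: 11.75"; },
        };
        // Both simulated carrier queries run at the same time, so the total
        // wait is roughly one delay, not the sum of both.
        var rates = await ExecuteAsync("2kg parcel", carriers);
        Console.WriteLine(string.Join(", ", rates));
    }
}
```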

Serial vs Parallel Comparison

Serial Pipeline

  • Sequential execution — each hook runs after the previous one completes
  • Chained results — each hook receives the previous hook's output
  • Single output — returns one final transformed result
  • Default choice when hooks depend on each other's results

Parallel Pipeline

  • Simultaneous execution — all hooks run at the same time
  • Independent results — each hook produces its own output
  • Collection output — returns an IEnumerable of all results
  • Best choice when hooks are independent (e.g., multi-provider queries)

When in doubt, use Serial

If you are unsure whether to use a Serial or Parallel pipeline, default to Serial. Serial pipelines are the standard pattern and work correctly in the vast majority of cases. Only use Parallel pipelines when you have a clear use case where hooks are truly independent and would benefit from concurrent execution.


Pipeline Context

IPipelineContext is the gateway to all state and resources available during pipeline execution. Every pipeline method receives a context parameter, giving hooks and default logic access to authentication state, request information, database connections, caching infrastructure, and dependency injection services.

The context is designed to be a single, consistent entry point so that pipeline code never needs to reach outside the pipeline framework for common resources.

IPipelineContext at a glance:

  • Auth State — User ID, Customer ID, roles & permissions
  • Request State — headers, cookies, endpoint path
  • Database — EF Core DbContext for queries and writes
  • Cache — distributed cache and scope cache
  • Scheduler — background jobs and deferred work
  • DI Services — any registered service via DI
Accessing the Context

Inside a pipeline or hook, the context is always available as a method parameter. Outside of pipelines (for example, in a controller that needs to create a context to call a pipeline), you can obtain an IPipelineContext through dependency injection or the [FromServices] attribute.

C# — Getting Pipeline Context
// Option 1: Dependency Injection (constructor)
public class MyController : Controller
{
    private readonly IPipelineContext _context;

    public MyController(IPipelineContext context)
    {
        _context = context;
    }
}

// Option 2: [FromServices] attribute
public async Task<IActionResult> GetItem(
    int id,
    [FromServices] IPipelineContext context)
{
    var result = await GetItemPipeline.ExecuteAsync(id, context);
    return Ok(result);
}

// Inside a pipeline/hook, context is always a parameter
public override ValueTask<SomeModel> ExecuteDefaultAsync(
    int input, IPipelineContext context, CancellationToken token)
{
    // Access current user
    var userId = context.Auth.CurrentUserId;

    // Access request headers
    var authHeader = context.Request.Headers["Authorization"];

    // Access database
    var db = context.Database;

    // Access a DI service
    var myService = context.GetService<IMyService>();

    // ...
}

  • context.Auth — Current User ID, Customer ID, role and permission checks
  • context.Request — HTTP headers, cookies, endpoint path, query parameters
  • context.Database — Entity Framework Core DbContext for queries and writes
  • context.Cache — IDistributedCache for manual cache operations
  • context.Scheduler — Background job scheduler for deferred/recurring tasks
  • context.GetService<T>() — Any service registered in the DI container

Pipeline Caching

Phoenix provides multiple caching strategies for pipelines, ranging from simple attribute-based distributed caching to manual cache control and request-scoped memory caching.

Distributed Caching with [DistributedCached]

The simplest way to cache a pipeline's output is to apply the [DistributedCached] attribute to the pipeline class. By default, this caches the result for 10 minutes. The cache key is automatically derived from the pipeline type and input value.

C# — Distributed Caching
// Default: 10-minute cache
[DistributedCached]
public class GetProductPipeline
    : SerialPipeline<GetProductPipeline, int, Product>
{
    // ...
}

// Custom duration: 30-minute cache
[DistributedCached(CacheDuration = 30)]
public class GetCategoryTreePipeline
    : SerialPipeline<GetCategoryTreePipeline, string, CategoryTree>
{
    // ...
}

// Per-user cache: separate cache entry for each user
[DistributedCached(VaryByUser = true)]
public class GetUserDashboardPipeline
    : SerialPipeline<GetUserDashboardPipeline, int, Dashboard>
{
    // ...
}

  • CacheDuration — default: 10 minutes; how long the cached result remains valid
  • VaryByUser — default: false; if true, each user gets their own cache entry (keyed by User ID)

Manual Caching via IPipelineContext

For more control over cache behavior, you can access the IDistributedCache directly through the pipeline context. This is useful when you need to cache intermediate values, use custom keys, or implement conditional caching logic.

The context also provides a convenient Promiser-style method, ResolveAsync, which checks the cache for a given key and only executes the factory function if the key is not found.

C# — Manual Cache & Promiser-Style
// Direct IDistributedCache access
var cache = context.Cache;
var cachedValue = await cache.GetStringAsync("my-custom-key");

// Promiser-style: resolve from cache or compute
var product = await context.ResolveAsync(
    "Product_" + productId,
    async () =>
    {
        // This lambda only runs if the cache key is missing
        return await FetchProductFromDatabase(productId);
    });

Scope Caching with [ScopeCached]

[ScopeCached] is a memory-only cache that lives for the duration of the current request or task scope. Unlike distributed caching, the data is never serialized or sent to an external cache store — it exists purely in the application's memory and is discarded when the scope ends.

This is an advanced and relatively rare pattern, useful when a pipeline is called multiple times within a single request and you want to avoid redundant computation without the overhead of distributed cache serialization.
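The underlying mechanic is plain per-scope memoization. Here is a self-contained, illustrative sketch of that idea (ScopeCache is invented for the sketch — it is not how Phoenix implements [ScopeCached]): results live in an in-memory dictionary tied to one scope and vanish with it.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of scope-caching semantics (NOT the Phoenix implementation):
// one cache instance per request scope; nothing is serialized or shared.
public sealed class ScopeCache
{
    private readonly Dictionary<string, object> _entries = new Dictionary<string, object>();

    public T GetOrCompute<T>(string key, Func<T> factory)
    {
        if (_entries.TryGetValue(key, out var cached))
            return (T)cached;           // repeat call in the same scope: no recompute
        var value = factory();
        _entries[key] = value;
        return value;
    }
}

public static class ScopeCacheDemo
{
    public static void Main()
    {
        var scope = new ScopeCache();   // would be created per request
        int computations = 0;
        Func<int> factory = () => { computations++; return 42; };

        var first = scope.GetOrCompute("answer", factory);
        var second = scope.GetOrCompute("answer", factory); // served from memory

        Console.WriteLine($"{first} {second} computed {computations} time(s)");
        // → 42 42 computed 1 time(s)
    }
}
```

When the scope (request) ends, the dictionary is simply discarded — which is why scope caching has no serialization cost and no persistence.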


Choosing the Right Cache Strategy

  • [DistributedCached] — Best for data that is expensive to compute and shared across requests/users. Survives application restarts.
  • context.ResolveAsync — Best for manual control over cache keys, conditional logic, or caching intermediate pipeline values.
  • [ScopeCached] — Best for avoiding redundant computation within a single request. Does not persist beyond the request.

Finding Pipelines

When you need to find existing pipelines in the codebase, there are a few reliable strategies:

1. Search by Class Name

Search the codebase for classes or file names containing Pipeline. By convention, every pipeline class should include "Pipeline" in its name (e.g., GetProductPipeline, CalculateTaxPipeline).

2. Trace from the Endpoint

If you know which API endpoint or controller you want to customize, open its handler and look for the pipeline it calls. Controllers almost always delegate to a pipeline via PipelineName.ExecuteAsync(...).

3. Search for Base Classes

Search for : SerialPipeline or : ParallelPipeline to find all pipeline definitions in the solution. This is the most comprehensive approach.


Naming Convention

Every pipeline class in the Phoenix codebase should contain "Pipeline" in its name. This is a project-wide convention that ensures pipelines are always discoverable through simple text search. When creating new pipelines, always follow this convention.