Drupal’s default caching is robust, but if you’re building a site that needs to scale (think global e-commerce or high-traffic news), you’ll need to go beyond the basics. Imagine a global online retailer handling millions of visits per day. By layering caches, using targeted invalidation, and preventing cache stampedes, a site like that can cut server load by 85% and improve response times by 92%.
Ready to learn how? Let’s break down these advanced techniques and see how you can apply them to your own Drupal projects.
Chained Cache backends
Layered Caching for speed and resilience
Think of caching like a library: the most popular books are on the front shelf (memory), less popular ones are in the stacks (APCu), and the rarest are in the basement (database). Chained cache backends mimic this layout: data flows through multiple layers, so you get both speed and resilience.
Note: Drupal core ships a ChainedFastBackend that pairs one fast local backend (such as APCu) with a consistent backend, but it doesn’t support arbitrary multi-layer chains. The following is a conceptual example of how you could implement that broader pattern as a custom service. For more on cache backends, see the Cache Backend API docs.
use Drupal\Core\Cache\CacheBackendInterface;

class ChainedCacheBackend implements CacheBackendInterface {

  protected $backends = [];

  public function __construct(array $backends) {
    $this->backends = $backends;
  }

  public function get($cid, $allow_invalid = FALSE) {
    foreach ($this->backends as $i => $backend) {
      $cached = $backend->get($cid, $allow_invalid);
      if ($cached) {
        // Populate only the faster layers above this one.
        for ($j = 0; $j < $i; $j++) {
          $this->backends[$j]->set($cid, $cached->data, $cached->expire, $cached->tags ?? []);
        }
        return $cached;
      }
    }
    return FALSE;
  }

  // Implement the other required methods: set(), delete(), invalidate(), etc.

}
Real-world use case: E-commerce product catalogue
A store with 100,000 products can use this approach:
- Memory Cache: For top-selling products.
- APCu: For mid-tier products.
- Database: For rarely accessed items.
When a user requests a product:
- Check memory cache.
- If not found, check APCu.
- If still not found, check the database and populate the faster caches for next time.
This keeps frequent visitors happy with lightning-fast responses, while new queries are handled efficiently.
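As a minimal sketch, wiring the conceptual ChainedCacheBackend above to core’s cache backend factory services might look like this. It assumes the APCu extension is available; the product_catalogue bin name and product:1234 cache ID are illustrative.

// Fastest first: per-request memory, then APCu, then the database.
// 'product_catalogue' is an illustrative bin name.
$chain = new ChainedCacheBackend([
  \Drupal::service('cache.backend.memory')->get('product_catalogue'),
  \Drupal::service('cache.backend.apcu')->get('product_catalogue'),
  \Drupal::service('cache.backend.database')->get('product_catalogue'),
]);

// A miss in the memory and APCu layers falls through to the database,
// and get() backfills the faster layers for the next request.
$cached = $chain->get('product:1234');
$product = $cached ? $cached->data : NULL;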
Isolated Cache bins
Preventing Cache interference
Imagine two chefs sharing a pantry: one cooks Thai, the other Italian. If they don’t separate their ingredients, chaos ensues! The same goes for caches: unrelated data should live in separate bins to prevent accidental overwrites.
In Drupal, you usually isolate cache bins by configuring them in settings.php or via service overrides, not by prefixing cache IDs in code.
Configuration example in settings.php
// cache.backend.redis is provided by the contributed Redis module.
$settings['cache']['bins']['product_search'] = 'cache.backend.redis';
// cache.backend.memory is core's per-request memory backend.
$settings['cache']['bins']['user_profile'] = 'cache.backend.memory';
Why it matters
An online store might use the product_search bin for search results and the user_profile bin for user data. This way, clearing the search cache won’t accidentally wipe out cached user profiles.
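Once the bins exist (registering custom bins was covered earlier in this series), using them in code is straightforward. A minimal sketch, assuming product_search and user_profile bins are registered; the cache IDs and data are illustrative:

use Drupal\Core\Cache\CacheBackendInterface;

$search_cache = \Drupal::cache('product_search');
$profile_cache = \Drupal::cache('user_profile');

// Illustrative payloads and cache IDs.
$results = ['coat-1', 'coat-2'];
$profile_data = ['name' => 'Ada'];

$search_cache->set('results:winter-coats', $results, CacheBackendInterface::CACHE_PERMANENT, ['search_results']);
$profile_cache->set('profile:42', $profile_data, CacheBackendInterface::CACHE_PERMANENT, ['user:42']);

// Wiping the search bin leaves cached profiles untouched.
$search_cache->deleteAll();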
Intelligent Cache invalidation
Beyond Simple tag clearing
Invalidation isn’t just about clearing old data; it’s about knowing what to clear and when. For example, if a content editor updates a product, you want to invalidate not just that product’s cache, but also related categories and recommendations.
Learn more: Cache Tags
Example: smart invalidation with Cache tags
use Drupal\Core\Cache\Cache;

class SmartCacheInvalidator {

  public function invalidateRelatedContent($entity) {
    $tags = [
      'entity:' . $entity->getEntityTypeId(),
      'entity_bundle:' . $entity->bundle(),
      'entity_list',
    ];
    // Tag invalidation goes through the cache_tags.invalidator service
    // (wrapped by Cache::invalidateTags()), not an individual backend.
    Cache::invalidateTags($tags);
    $this->triggerCustomInvalidation($entity);
  }

  protected function triggerCustomInvalidation($entity) {
    if ($entity->hasField('related_content')) {
      $related_items = $entity->get('related_content')->referencedEntities();
      foreach ($related_items as $related_item) {
        // Entity cache tags follow the "{entity_type}:{id}" pattern, e.g. "node:456".
        Cache::invalidateTags([
          $related_item->getEntityTypeId() . ':' . $related_item->id(),
        ]);
      }
    }
  }

}
Real-world use case: news aggregation
When an article is updated:
- The article’s cache clears (node:123).
- Its category listings refresh (node_list).
- Related articles are invalidated (e.g. node:456).
This ensures updates propagate instantly, without over-clearing other content.
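To tie this into the editorial workflow, you might call the invalidator from an update hook. A sketch, assuming a hypothetical mymodule that ships the SmartCacheInvalidator class above and makes it autoloadable:

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_update() (mymodule is a hypothetical module).
 */
function mymodule_entity_update(EntityInterface $entity) {
  // Runs after an editor saves changes, so related caches clear immediately.
  (new SmartCacheInvalidator())->invalidateRelatedContent($entity);
}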
Cache stampede prevention
Avoiding the rush
A cache stampede is like hundreds of people rushing to rebuild a lost sandcastle at once: total chaos! In Drupal, the Lock API helps by letting only one request rebuild the cache, while others wait.
Example: Using the Lock API
use Drupal\Core\Cache\CacheBackendInterface;

class CacheStampedePreventor {

  public function getCachedData($cache_id, callable $callback, array $options = []) {
    $lock_id = "cache_rebuild:{$cache_id}";

    $cached = \Drupal::cache()->get($cache_id);
    if ($cached) {
      return $cached->data;
    }

    $lock = \Drupal::lock();
    if ($lock->acquire($lock_id)) {
      try {
        $data = $callback();
        \Drupal::cache()->set(
          $cache_id,
          $data,
          CacheBackendInterface::CACHE_PERMANENT,
          $options['tags'] ?? []
        );
        $lock->release($lock_id);
        return $data;
      }
      catch (\Exception $e) {
        $lock->release($lock_id);
        throw $e;
      }
    }
    else {
      // Wait for the other process to finish rebuilding.
      if (method_exists($lock, 'wait')) {
        $lock->wait($lock_id);
      }
      else {
        // Fallback: sleep briefly before retrying.
        sleep(1);
      }
      $cached = \Drupal::cache()->get($cache_id);
      return $cached ? $cached->data : NULL;
    }
  }

}
Note: wait() is defined on LockBackendInterface, so core lock backends implement it; the method_exists() check above is purely defensive.
Real-world use case: flash sale page
During a flash sale, thousands of shoppers hit the same product page. The first request rebuilds the cache; everyone else waits their turn. No server meltdown!
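Using the preventor above for that flash-sale page might look like the following sketch; the cache ID, tags, and loadProductData() helper are illustrative:

$preventor = new CacheStampedePreventor();

$product_data = $preventor->getCachedData(
  'flash_sale:product:1234',
  function () {
    // Expensive rebuild: only one request at a time runs this.
    // loadProductData() is an illustrative helper, not a Drupal API.
    return loadProductData(1234);
  },
  ['tags' => ['node:1234']]
);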
Dynamic Cache contexts
Adapting to complex scenarios
Cache contexts tell Drupal when to vary the cache (e.g., by user role or language). But what if you need to vary by device type, or a combination of factors? That’s where custom cache contexts shine.
Learn more: Cache Contexts
Example: custom Cache context
use Drupal\Core\Cache\CacheableMetadata;
use Drupal\Core\Cache\Context\CacheContextInterface;

class DynamicCacheContext implements CacheContextInterface {

  public static function getLabel() {
    return t('User roles and device type');
  }

  public function getContext() {
    $current_user = \Drupal::currentUser();
    return md5(serialize([
      'user_roles' => $current_user->getRoles(),
      'device_type' => $this->getDeviceType(),
    ]));
  }

  public function getCacheableMetadata() {
    // This context is cheap to compute and adds no extra cacheability metadata.
    return new CacheableMetadata();
  }

  protected function getDeviceType() {
    $user_agent = (string) \Drupal::request()->headers->get('User-Agent');
    return preg_match('/mobile/i', $user_agent) ? 'mobile' : 'desktop';
  }

}
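For Drupal to pick it up, the class must be registered as a service tagged with cache.context; render arrays then reference it by the part of the service name after "cache_context.". A sketch, assuming the service is named cache_context.dynamic_user_device (the name and the product_teaser theme hook are illustrative):

$build = [
  // 'product_teaser' is an illustrative theme hook.
  '#theme' => 'product_teaser',
  '#cache' => [
    // Varies the cached render by user roles and device type.
    'contexts' => ['dynamic_user_device'],
    'tags' => ['node:1234'],
  ],
];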
Real-world use case: responsive web app
A news app serves different layouts to mobile and desktop users. With a dynamic context:
- Mobile users get a lightweight version.
- Desktop users see a full-featured layout.
- Both caches stay independent and efficient.
Performance monitoring
Tracking what matters
Monitoring isn’t just about pretty graphs; it’s about actionable insights. You want to know:
- Which caches are hit most often?
- Which take the longest to regenerate?
- Where are the bottlenecks?
Learn more: Logger API
Example: Tracking Cache performance
class CachePerformanceTracker {

  public function trackCachePerformance(callable $callback) {
    $start_time = microtime(TRUE);
    $memory_before = memory_get_usage();

    $result = $callback();

    $execution_time = microtime(TRUE) - $start_time;
    \Drupal::logger('cache_performance')->info(
      'Execution Time: @time ms, Memory Used: @memory KB',
      [
        '@time' => round($execution_time * 1000, 2),
        '@memory' => round((memory_get_usage() - $memory_before) / 1024, 2),
      ]
    );

    return $result;
  }

}
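A quick sketch of wrapping an expensive rebuild with the tracker above (buildFrontPageBlocks() is an illustrative helper, not a Drupal API):

$tracker = new CachePerformanceTracker();

// Logs the execution time and memory cost of the rebuild to the
// cache_performance channel.
$blocks = $tracker->trackCachePerformance(function () {
  // buildFrontPageBlocks() stands in for your own expensive rebuild logic.
  return buildFrontPageBlocks();
});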
Real-world use case: analytics dashboard
A dashboard tracks:
- Cache hit rate by bin
- Average regeneration time
- Invalidations per hour
This helps engineers quickly spot and fix bottlenecks, like a slow block dragging down the homepage.
Implementation strategies
Putting it all together
1. Layered Caching architecture
Use chained backends for critical paths (e.g., product pages), and isolated bins for unrelated systems (e.g., user sessions vs. search results).
2. Contextual Caching
Be specific: avoid broad contexts like url; use url.path or custom contexts for business logic.
3. Granular Invalidation
Target specific entities (node:123) instead of broad tags (node_list).
4. Stampede Protection
Apply cache locks to high-traffic pages with frequent updates (e.g., stock tickers, live scores).
5. Continuous Monitoring
Log cache performance and set up alerts for anomalies (like a sudden drop in hit rate).
Series navigation
This article is part of our comprehensive 10-part series on Drupal caching:
- Introduction to Drupal Caching
- Understanding Drupal’s Cache API
- Choosing the Right Cache Backend for Your Drupal Site
- Mastering Page and Block Caching in Drupal
- Optimising Drupal Views and Forms with Caching
- Entity and Render Caching for Drupal Performance
- Building Custom Caching Services in Drupal
- Implementing Custom Cache Bins for Specialised Needs
- Advanced Drupal Cache Techniques (You are here)
- Drupal Caching Best Practices and Performance Monitoring (Coming soon)
What’s next?
Advanced caching techniques help Drupal sites stay responsive under pressure and efficient at scale. By layering cache backends, isolating bins, using precise invalidation, and adapting to context, you create a caching system that mirrors how real users interact with your site.
These methods work best when applied as part of a clear strategy. Each layer supports faster access. Each boundary between cache bins keeps systems focused. Context-aware caching and stampede prevention protect performance without adding complexity. And monitoring ensures every choice is informed by real data.
Together, they form a foundation for stability and long-term maintainability.
In our final blog, we’ll tie everything together with Drupal caching best practices. You’ll learn how to:
- Design caching strategies for long-term maintainability
- Monitor and optimise caching in production
- Troubleshoot common caching pitfalls
Stay tuned for the final chapter in our caching journey!