Case Study

HSE24: When Personalization Actually Works

How we turned Germany's largest home shopping network into a personalization case study—28% conversion improvement by understanding that the AI is the easy part.

Client
HSE24
Year
Service
E-Commerce & AI Development

HSE24 is Germany's largest home shopping network—the kind of company where a 1% improvement in conversion translates to millions in revenue. When they came to us, their digital platform was functional but generic. Every customer saw the same recommendations. Every email had the same products. The personalization gap was costing them.

The executive pitch was simple: Amazon-level personalization for their platform. The reality was more nuanced—and more interesting.

Home shopping networks face unique challenges in the digital transition. The TV broadcast drives massive, synchronized traffic spikes—hundreds of thousands of viewers seeing the same product at the same moment. The online platform needs to capture that interest, extend the relationship beyond the broadcast, and compete with pure e-commerce players who've never had to think about TV at all.

HSE24's customer base was sophisticated in some ways and less so in others. They knew what they wanted and were loyal to brands and hosts they trusted. But they expected the digital experience to match the personal touch of TV shopping—recommendations that felt considered, not algorithmic. Generic "customers also bought" suggestions felt cold compared to the host who knew their preferences.

The Problem Beneath the Problem

Most companies that want "AI personalization" have a data problem they don't know about. HSE24 was no exception.

Their customer data existed in silos. Purchase history lived in one system, browsing behavior in another, email engagement in a third. The data wasn't wrong, but it wasn't unified. Building recommendations on fragmented data produces fragmented results—the system might know what you bought but not what you browsed, leading to recommendations that feel tone-deaf.

The TV viewing data added another layer of complexity. When did customers tune in? Which hosts did they follow? Which product categories appeared during their viewing? This information existed but wasn't connected to digital behavior. A customer who watched every jewelry segment on TV but never clicked jewelry products online wasn't being recognized as a jewelry enthusiast—the systems didn't talk to each other.

The first month of the project looked nothing like AI development. It was data engineering: building pipelines to unify customer data into a coherent view. Creating event streams that captured behavior in real-time. Establishing the foundation that would make personalization actually work.

We built a customer data platform that unified signals from every touchpoint. Browsing behavior flowed in via JavaScript tracking. Purchase data came from the order management system. Email engagement synced from the marketing platform. TV viewing patterns integrated from broadcast analytics. Every customer had a unified profile that updated in real-time.

This isn't the exciting part of the story, but it's where most personalization projects fail. Teams rush to build models on data that isn't ready, then wonder why their recommendations feel random. We spent weeks on data quality, deduplication, identity resolution—the inglorious work that determines whether the exciting part works.
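The identity-resolution step can be illustrated with a minimal sketch. The event shape, identifier names, and `resolve_identity` helper here are hypothetical, not HSE24's actual schema; the point is that any known identifier on an incoming event should link it back to one canonical profile.

```python
from collections import defaultdict

def resolve_identity(event, id_map):
    """Map any identifier on an event (account ID, hashed email, device ID)
    to one canonical customer ID, linking newly seen identifiers to an
    existing profile when any of them is already known."""
    ids = [event[k] for k in ("account_id", "email_hash", "device_id") if event.get(k)]
    canonical = next((id_map[i] for i in ids if i in id_map), None)
    if canonical is None:
        canonical = f"cust-{len(set(id_map.values()))}"
    for i in ids:
        id_map[i] = canonical  # future events carrying this identifier resolve here
    return canonical

def unify(events):
    """Fold events from every source into one profile per customer."""
    id_map, profiles = {}, defaultdict(lambda: defaultdict(list))
    for ev in events:
        cid = resolve_identity(ev, id_map)
        profiles[cid][ev["source"]].append(ev["payload"])
    return dict(profiles)

# Web, order, and email events from the same person end up in one profile,
# even though no single event carries all three identifiers.
events = [
    {"source": "web", "device_id": "d1", "payload": "viewed:ring"},
    {"source": "orders", "account_id": "a9", "device_id": "d1", "payload": "bought:ring"},
    {"source": "email", "account_id": "a9", "payload": "clicked:necklace"},
]
profiles = unify(events)
```

Real identity resolution also has to handle conflicting links and merges, which is exactly the "inglorious work" the project spent weeks on.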

What Personalization Actually Means

"Personalization" is a vague term that means different things to different stakeholders. We had to get specific.

For HSE24, personalization meant three concrete things:

Recommendations that understand context. A customer browsing kitchen gadgets at 9pm on Tuesday has different intent than the same customer browsing Saturday morning. The evening browser is likely researching; the weekend browser might be ready to buy. Recommendations needed to reflect this—showing inspiring products in the evening, bestsellers with social proof on weekends.

The time-based personalization went deeper than just day of week. We learned that customers behaved differently during TV broadcasts versus non-broadcast hours. During shows, they wanted to see what was currently on air plus related products. After shows, they wanted to explore at their own pace. The recommendation engine adjusted its approach based on broadcast schedule.
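The broadcast-aware switching can be sketched as a small strategy selector. The schedule, strategy names, and time windows below are illustrative stand-ins, not HSE24's actual programming data:

```python
from datetime import datetime, time

# Hypothetical broadcast schedule: weekday -> [(start, end, category)].
SCHEDULE = {1: [(time(20, 0), time(22, 0), "jewelry")]}  # Tuesday evening show

def context_strategy(now: datetime):
    """Pick a recommendation strategy from temporal context: on-air
    products during a broadcast, bestsellers on weekend mornings,
    inspiration-style picks for evening researchers."""
    for start, end, category in SCHEDULE.get(now.weekday(), []):
        if start <= now.time() <= end:
            return ("on_air", category)
    if now.weekday() >= 5 and now.hour < 12:
        return ("bestsellers", None)
    if now.hour >= 18:
        return ("inspiration", None)
    return ("default", None)
```

In production this kind of rule layer typically sits in front of the learned models, shaping which candidate pool the ranking model draws from.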

Email that doesn't feel like spam. HSE24 sent millions of emails weekly. Open rates were fine; click-through rates were poor. The emails showed the same products to everyone, regardless of past behavior or stated preferences. We built a system that personalized not just which products to show, but when to send, how many items to include, and what style of messaging to use.

Email personalization extended beyond product selection. Some customers responded to urgency messaging ("only 3 left!"); others found it pushy. Some clicked on lifestyle imagery; others preferred product-focused layouts. The system learned preferences for each customer and adapted not just content but presentation. The same product appeared differently to different customers based on what resonated with their history.

Search that knows what you mean. Type "comfortable shoes" into most e-commerce search, and you get results that match the word "shoes" while ignoring "comfortable." We implemented semantic search that understood intent, plus personalized reranking that weighted results toward styles and brands each customer had shown preference for.

The search improvements went beyond basic semantics. We built synonym handling that understood German variations and dialectal differences. We implemented query understanding that could interpret "something for mom's birthday" and surface appropriate gift items. Typo tolerance handled the misspellings that are common on mobile keyboards. The search experience became genuinely helpful rather than frustrating.
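The personalized reranking idea reduces to blending semantic similarity with a per-customer affinity score. This sketch assumes precomputed query and product embeddings; the `alpha` weight, vectors, and affinity values are illustrative, not production numbers:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def search(query_vec, products, brand_affinity, alpha=0.3):
    """Rank by semantic similarity to the query, blended with the
    customer's learned brand affinity (0..1). alpha=0 is pure semantics."""
    scored = []
    for p in products:
        score = ((1 - alpha) * cosine(query_vec, p["vec"])
                 + alpha * brand_affinity.get(p["brand"], 0.0))
        scored.append((score, p["name"]))
    return [name for _, name in sorted(scored, reverse=True)]
```

With no affinity signal the ordering is purely semantic; a strong brand preference can promote a slightly less similar product, which is the intended behavior.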

The Technical Reality

The recommendation engine combined three approaches, each serving different purposes.

Collaborative filtering identified patterns across customers: people who bought X also bought Y. This works well for popular products with lots of purchase data but fails for new items or niche products with sparse data.

We implemented matrix factorization techniques that could handle HSE24's catalog size efficiently. The model learned latent factors for both customers and products, enabling predictions even for combinations never seen in training data. Regular retraining ensured the model stayed current as customer preferences and product catalog evolved.

Content-based filtering matched product attributes to customer preferences. If a customer bought several blue dresses, the system learned to surface blue clothing. This handles new products well but can create filter bubbles where customers only see variations of what they've already bought.

The content-based model used product embeddings derived from descriptions, images, and categorical attributes. Similar products clustered in embedding space, enabling recommendations for new items based on their similarity to products customers had engaged with. The visual features were particularly valuable—customers often responded to aesthetic similarities that weren't captured in text descriptions.

The hybrid approach combined both, with a diversity mechanism that intentionally introduced serendipity. The best recommendations aren't just accurate—they're interesting. A system that only shows what customers already like becomes boring; a system that surprises occasionally drives discovery and higher engagement.

The diversity mechanism was carefully tuned. Too little diversity and recommendations became stale—customers would see the same products repeatedly. Too much diversity and relevance suffered—customers would see random-seeming suggestions. We calibrated the balance based on customer engagement patterns, with some customers preferring more exploration and others preferring more exploitation of known preferences.

The machine learning pipeline ran on Python with TensorFlow for the deep learning components and scikit-learn for classical models. Models retrained nightly on fresh data, with A/B testing infrastructure that let us validate every change before full rollout.

The MLOps infrastructure was as important as the models themselves. Automated pipelines handled data ingestion, feature engineering, model training, and deployment. Monitoring detected drift in model performance. Rollback mechanisms ensured we could revert quickly if a new model underperformed. The system was designed for continuous improvement, not one-time deployment.

React, Node.js, Python, TensorFlow, Redis, PostgreSQL, Elasticsearch, AWS

Performance Was the Prerequisite

Before any personalization could matter, the site had to be fast. HSE24's existing platform took 4+ seconds to load on mobile. At that speed, personalization is irrelevant—customers leave before they see recommendations.

We cut page load time by 45%. Server-side rendering for instant initial paint. Aggressive image optimization with next-gen formats and responsive sizing. Redis caching for frequently accessed data. CDN deployment for static assets. Database query optimization that reduced some critical queries from seconds to milliseconds.

The image optimization was particularly impactful. E-commerce sites are image-heavy, and HSE24 had thousands of product images in multiple sizes. We implemented WebP format with JPEG fallbacks, responsive sizing that served appropriate resolutions for each device, and lazy loading that deferred below-the-fold images. Image-related data transfer dropped by over 60%.

Caching strategy required careful thought. Product data changes frequently—prices, inventory levels, promotional flags. We built tiered caching that balanced freshness with performance. Rarely-changing data like product descriptions cached for hours. Frequently-changing data like inventory levels cached briefly or fetched live. The cache invalidation was automated based on data source changes.
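The tiering logic can be sketched as a TTL-per-tier cache. The tier names and lifetimes below are illustrative; in production this sat in front of Redis with automated invalidation rather than pure TTL expiry:

```python
import time

# Hypothetical TTLs per data tier (seconds): descriptions rarely change,
# inventory must stay fresh.
TTLS = {"description": 3600, "price": 60, "inventory": 5}

class TieredCache:
    def __init__(self, clock=time.monotonic):
        self._store = {}   # (tier, key) -> (value, stored_at)
        self._clock = clock

    def get(self, tier, key, loader):
        """Return the cached value if it is within its tier's TTL,
        otherwise call loader() and cache the fresh result."""
        entry = self._store.get((tier, key))
        now = self._clock()
        if entry and now - entry[1] < TTLS[tier]:
            return entry[0]
        value = loader()
        self._store[(tier, key)] = (value, now)
        return value
```

The injectable clock makes the expiry behavior testable without sleeping, a pattern worth keeping in any cache layer.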

This wasn't glamorous work, but it was essential. Every second of load time costs conversion. The performance improvements alone delivered measurable revenue impact before any AI touched production. We could see in analytics: faster pages had higher conversion rates, and the effect was consistent across customer segments.

The Privacy Reality

European e-commerce means GDPR. Personalization means processing personal data. These constraints aren't obstacles—they're design requirements.

Every personalization feature was built with consent management integrated from the start. Customers could see what data was used for recommendations and adjust preferences. Data retention policies were automated, not manual. The right to be forgotten was a button click, not a support ticket.

The technical implementation required careful architecture. Customer data flowed through systems that respected consent boundaries. If a customer revoked consent for personalization, the system degraded gracefully to non-personalized experiences while maintaining core functionality. Data deletion propagated through all systems, including ML model training pipelines—we didn't want deleted customers influencing future recommendations.

Some companies treat privacy as a checkbox exercise. We treated it as product design. Customers who trust that their data is handled responsibly engage more deeply. Transparency about personalization—"We're showing you this because you liked similar products"—actually improves click-through rates.

We built a preference center where customers could control their experience. Not just opt-in/opt-out, but granular controls: use my purchase history but not my browsing behavior, personalize product recommendations but not email frequency. The customers who engaged with these controls became the most engaged overall—taking control over their experience made them feel respected, not surveilled.

The Results

+28% Conversion Rate
+15% Average Order Value
-45% Page Load Time
99.95% Platform Uptime

The 28% conversion improvement came from the combination of everything: faster site, better recommendations, personalized search, and targeted email. No single change delivered that lift—it was the compounding effect of improvements across the customer journey.

We tracked attribution carefully to understand what contributed what. Performance improvements accounted for roughly a third of the lift—the baseline that enabled everything else. Recommendation quality contributed another third. The remaining third came from email personalization, search improvements, and checkout optimization combined. The lesson: personalization isn't one thing, it's many things working together.

Average order value grew 15% because recommendations surfaced relevant products customers wouldn't have found otherwise. The cross-sell and upsell opportunities that had been missed were now captured.

The AOV improvement was particularly strong in certain categories. For fashion and accessories, where complementary items were natural, we saw above-average increases. For electronics, where customers typically knew exactly what they wanted, the effect was smaller. Understanding these patterns helped HSE24 focus merchandising efforts.

Email click-through rates improved 35%. Not because the emails were sent more often—they weren't—but because they contained products customers actually wanted to see.

The email improvements had a secondary benefit: unsubscribe rates dropped. When emails are relevant, people keep them coming. When emails are spam, people unsubscribe. Better personalization meant more engaged email audiences, which meant lower customer acquisition costs over time.

Cart abandonment dropped 22%. Faster checkout, personalized recovery emails, and an exit-intent system that surfaced relevant incentives at the right moment.

The cart recovery system learned which offers worked for which customers. Some responded to free shipping. Some responded to time-limited discounts. Some just needed a reminder without any offer at all. Personalizing the recovery approach improved recovery rates while reducing unnecessary discounting.

The platform now handles traffic spikes that would have crashed the previous system. Black Friday, TV show tie-ins, flash sales—all smooth. The infrastructure scaled automatically based on load, with capacity to spare even during peak events.

"Nordbeam's team brought deep expertise in both e-commerce development and AI. Their personalization engine has transformed how we connect with customers, driving significant improvements in engagement and sales."

Head of Digital
HSE24

Real-Time Personalization Infrastructure

Personalization at scale requires infrastructure that goes beyond model deployment. The recommendations need to compute in milliseconds, handle traffic spikes gracefully, and update continuously as customer behavior changes.

Request-Time Computation

Feature serving at speed. When a customer loads a page, the personalization system has milliseconds to gather features—recent behavior, purchase history, session context—and compute recommendations. We built a feature store that pre-computes and caches features, serving them in single-digit milliseconds.

Model inference optimization. The recommendation models themselves were optimized for latency. Techniques included model distillation (training smaller models to mimic larger ones), quantization (reducing numerical precision), and batching (grouping requests for efficient GPU utilization).

Graceful degradation. When the personalization system is slow or unavailable, the site still needs to work. We implemented fallback strategies: cached recommendations, popular products, category-based suggestions. The customer experience degrades gracefully rather than breaking entirely.
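The fallback ladder can be expressed as a simple priority chain; the strategy names below are illustrative, and a production version would also enforce per-strategy timeouts:

```python
def recommend_with_fallbacks(strategies, default):
    """Try each recommendation source in priority order; a failure or
    empty result falls through to the next, ending at a static default."""
    for strategy in strategies:
        try:
            result = strategy()
            if result:
                return result
        except Exception:
            continue  # e.g. timeout talking to the personalization service
    return default

def flaky_model():
    raise TimeoutError("personalization service too slow")

# Personalized model fails, cached recs are empty, popular products serve.
recs = recommend_with_fallbacks(
    [flaky_model, lambda: [], lambda: ["bestseller-1", "bestseller-2"]],
    default=["category-fallback"],
)
```

The customer never sees an error; they see slightly less personalized recommendations, which is the degradation mode you want.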

Real-Time Behavior Processing

Event streaming architecture. Every customer action—page view, product click, search query, add-to-cart—flows through a streaming pipeline. Kafka handles the event ingestion; stream processing updates customer features in real-time.

Session-aware recommendations. The system distinguishes between long-term preferences (derived from historical behavior) and session intent (derived from current browsing). A customer researching vacation gear gets vacation-related recommendations during that session, even if their historical behavior was primarily about kitchen products.

Immediate feedback loops. When a customer clicks a recommendation, that signal flows back within seconds, potentially adjusting subsequent recommendations in the same session. The system learns in real-time, not just in batch.

Handling Traffic Spikes

Auto-scaling infrastructure. HSE24's traffic patterns include massive spikes during TV broadcasts. The infrastructure scales automatically based on load, spinning up additional capacity before predicted spikes and scaling down during quiet periods.

Cache warming. Before high-traffic events, we pre-compute and cache recommendations for likely visitor segments. The spike of incoming traffic hits warm caches rather than cold computation.

Load shedding. Under extreme load, the system sheds lower-priority requests to protect core functionality. Non-personalized fallbacks serve some requests, preserving capacity for customers mid-purchase.

Mobile Optimization Deep Dive

Mobile represented over 60% of HSE24's traffic, yet the mobile experience had more friction than desktop. We addressed this systematically.

Mobile Performance

Network-aware delivery. The site detected connection quality and adjusted accordingly. On 3G connections, we deferred non-essential resources, compressed images more aggressively, and simplified interactive elements.

Prefetching strategies. Based on navigation patterns, the mobile app preloaded likely next pages during idle moments. When the customer actually navigated, the content was already available.

Offline capability. The mobile PWA cached essential content for offline access. Customers could browse products even with spotty connectivity, with actions syncing when connection resumed.

Mobile Checkout

Reduced form friction. Mobile checkout minimized typing: address autocomplete, card scanning, saved payment methods. Every field removed from the checkout flow improved conversion.

Progress preservation. Checkout progress saved continuously. A customer interrupted mid-checkout could resume exactly where they left off, even switching devices.

Payment flexibility. We integrated mobile-native payment methods: Apple Pay, Google Pay, Klarna. One-tap checkout eliminated friction for returning customers.

Analytics and Measurement Framework

Personalization success requires measuring the right things—and measuring them correctly.

Attribution Modeling

Recommendation attribution. When a customer purchases, which recommendation influenced the decision? Simple last-click attribution misses the journey. We implemented multi-touch attribution that credited recommendations throughout the path to purchase.

Incrementality testing. Did personalization cause the purchase, or would it have happened anyway? Holdout groups—customers receiving non-personalized experiences—established the true incremental lift. Without holdouts, you can't distinguish correlation from causation.
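Holdout assignment has to be deterministic and stable across sessions, which is typically done with salted hashing. A minimal sketch (the experiment name and percentage are placeholders):

```python
import hashlib

def assign_group(customer_id, experiment, holdout_pct=10):
    """Deterministic bucketing: the same customer always lands in the
    same group for a given experiment, and the experiment name salts
    the hash so assignments are independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    return "holdout" if int(digest, 16) % 100 < holdout_pct else "personalized"
```

Because assignment is a pure function of (customer, experiment), no assignment table is needed, and the holdout fraction comes out close to the target across any large population.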

Long-term value tracking. Some personalization improves immediate conversion but hurts long-term engagement—aggressive discounting, for instance. We tracked customer lifetime value impacts, not just immediate transactions.

A/B Testing Infrastructure

Traffic allocation. Reliable experiments require proper randomization. We built traffic allocation that ensured customers stayed in consistent test groups across sessions while maintaining statistical validity.

Guardrail metrics. Experiments optimize for primary metrics but monitor guardrails—things that shouldn't get worse even if the primary metric improves. Site performance, customer complaints, unsubscribe rates all served as guardrails.

Statistical rigor. We implemented proper statistical analysis: confidence intervals, power calculations, correction for multiple comparisons. Decisions were made on evidence, not hope.
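The significance check for a conversion experiment can be sketched as a confidence interval on the difference of two proportions (normal approximation; the counts below are made up for illustration):

```python
import math

def conversion_diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% confidence interval for the difference in conversion rate
    between variant B and control A (normal approximation)."""
    pa, pb = conv_a / n_a, conv_b / n_b
    se = math.sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    diff = pb - pa
    return diff - z * se, diff + z * se

# 5.0% control vs 6.4% variant, 10,000 sessions each:
lo, hi = conversion_diff_ci(500, 10_000, 640, 10_000)
significant = lo > 0  # interval excludes zero -> statistically significant lift
```

Running many experiments in parallel also requires correcting the `z` threshold for multiple comparisons, as noted above; the interval itself is only the starting point.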

Real-Time Dashboards

Business metrics visibility. Stakeholders could see key metrics in real-time: conversion rates, average order value, recommendation click-through rates. The dashboards updated continuously, enabling rapid response to issues.

Model performance monitoring. Beyond business metrics, we tracked model-level metrics: latency distributions, feature freshness, prediction confidence. When models degraded, we knew before business metrics reflected it.

Alerting and escalation. Automated alerts fired when metrics deviated from expected ranges. On-call engineers received notifications with context—what changed, when, and likely causes.

Multi-Channel Integration

HSE24's customers engaged across channels—TV, web, mobile app, email. Coherent personalization required integration across all touchpoints.

Unified Customer Identity

Cross-device recognition. The same customer on phone and desktop should see consistent personalization. We implemented identity resolution that connected devices to customers, maintaining continuity across sessions and platforms.

Anonymous to known transitions. Customers often browse anonymously before logging in. We preserved behavioral data across this transition, so recognized customers didn't lose their session context.

Household modeling. Some households share devices or accounts. The system learned to detect multiple user patterns within single identities, adjusting recommendations accordingly.

Channel-Specific Optimization

TV-to-digital journeys. When TV broadcasts drive traffic, the digital experience needs to connect. We surfaced currently-broadcasting products, related items, and show-specific content that maintained continuity from TV to web.

App vs. web differences. Mobile app users behaved differently from mobile web users—they were typically more engaged, returning customers. The personalization adjusted for channel-specific patterns.

Email as relationship builder. Email wasn't just for immediate conversion. Nurture sequences built relationships with customers who weren't ready to buy, maintaining engagement until purchase intent emerged.

What We Learned

This project reinforced lessons that apply to any e-commerce AI initiative.

Data quality determines model quality. We spent a third of the project on data infrastructure before training any models. Teams that skip this step build sophisticated algorithms on shaky foundations.

The temptation to jump straight to ML is strong—it's the exciting part. But garbage in, garbage out applies with full force. We've seen projects that implemented sophisticated neural networks on data that wasn't even properly joined, producing recommendations that seemed random because the underlying data was nonsensical. The boring data work prevents embarrassing results.

Performance is prerequisite. The best recommendations don't matter on a slow site. Fix performance first; it delivers value while you build the AI.

Performance work has another benefit: it builds credibility. Stakeholders can see the site getting faster immediately, which builds trust during the longer AI development cycle. When you eventually ship personalization, the organization has evidence you deliver results.

Test everything. We A/B tested every algorithm change, every UI variation, every email timing tweak. Intuition about what will work is often wrong. Data tells you what's actually happening.

The testing culture was hard to establish. There was pressure to "just ship it" rather than wait for test results. We pushed back repeatedly, and the results justified the patience. Several changes that seemed obviously beneficial turned out to hurt metrics. Several changes that seemed unlikely to help turned out to be winners. Without testing, we would have made wrong calls with confidence.

Personalization isn't just algorithms. The user experience of personalization—how it's presented, how transparent it is, how customers can control it—matters as much as the underlying models.

The same recommendation algorithm can feel helpful or creepy depending on presentation. "Similar products you might like" feels different from "Based on your purchase of X" even when showing identical products. We tested messaging extensively and found significant variations in engagement based on framing alone.

Models need maintenance. Customer behavior changes. Product catalogs change. Seasonal patterns shift. The recommendation system that worked in January needs adjustment by July. We built infrastructure for continuous model monitoring and retraining, not just initial deployment.

The maintenance infrastructure was as important as the initial models. Automated monitoring detected drift in model performance before it became visible in business metrics. Retraining pipelines incorporated fresh data continuously. Alert systems notified engineers when models degraded. The system got better over time instead of slowly rotting.

The HSE24 platform continues to evolve. The personalization engine handles millions of recommendations daily. The A/B testing infrastructure runs experiments continuously. The conversion improvements compound over time as models learn from more data and the team applies insights from experiments. The platform HSE24 runs today is substantially more sophisticated than what we initially built—because we built foundations for continuous improvement, not just a point-in-time solution.

Scaling Beyond Initial Launch

The platform we built was designed for growth, and growth came quickly.

Handling International Expansion

Multi-market personalization. As HSE24 expanded to additional European markets, the personalization engine needed to handle different languages, currencies, and product catalogs. The architecture we built supported this from the start—customer profiles were market-aware, and recommendation models could train on market-specific data or cross-market patterns depending on data availability.

Localized content delivery. Performance optimization needed to work across geographies. CDN deployment expanded to cover new markets. Edge caching strategies adapted to different traffic patterns in different countries. The infrastructure scaled horizontally without requiring architectural changes.

Regulatory compliance at scale. Each market brought additional compliance requirements. The privacy-by-design architecture we established proved its value—adapting to new regulations required configuration changes, not fundamental redesign.

Feature Expansion

Livestream integration. Beyond traditional TV broadcasts, HSE24 explored digital livestreaming. The personalization engine adapted to new engagement patterns—real-time reactions during streams, chat-based product discovery, instant purchase flows. The event-streaming architecture we built handled these new interaction patterns without modification.

Mobile app evolution. The PWA evolved with native app capabilities—push notifications driving personalized engagement, offline mode for catalog browsing, AR features for product visualization. Each addition built on the data infrastructure and personalization foundation we established.

B2B extensions. Corporate gifting and bulk purchase programs required different personalization logic—account-level rather than individual preferences, budget-based recommendations, approval workflows. The flexible architecture accommodated these B2B extensions within the existing platform.
