Enterprise Solutions

A powerful data solution for efficient enterprise resource planning

Making big data analytics work for more than 1 million users

Our client’s backstory

Our client is a market leader in sleep innovation and bed manufacturing, powering the sleeping experience with smart bed technology designed to enhance users' sleep and mental well-being.

In addition to serving over 1 million satisfied customers, the company employs thousands of people who track processes across the business, from sales to delivery. All operations run on several large enterprise systems that track inventory, transportation, consumer insights, and more. Each of these systems, however, solves a narrow domain of problems and functions as an isolated data silo.

The company needed a comprehensive solution and a scalable architecture to process all kinds of company data, with a focus on the billions of data points coming from users' IoT devices.

As Klika has been part of the client's core development team for years, we set up a team to define the best tools, technologies, and processes to ingest, store, and automatically process data from all the different sources.

The challenges

Dealing with large volumes of data daily required a straightforward process for separating relevant from irrelevant raw data, so that reliable data pipelines could deliver valuable insights.

The second challenge was automation: getting all the data to the right place at the right time. Assessing which data to move, where to move it, and how to store it was pivotal before creating automated data pipelines.

Big data setups require substantial resources and tooling, and companies are often limited in their ability to handle the distribution, speed, and variety of vast data sets. We therefore needed to work within the company's capacities while making sure our solutions provided accurate analytics and real-time insights at any given moment.

Klika solution

First, we decided on the roles and expertise needed to build a robust solution that would later serve as the foundation for an entirely new platform. Software developers, data engineers, and data scientists formed the core team, and we started with a big data architecture design that could support a complex, large-scale infrastructure.

To set up the proper foundation for any data extraction, we developed a well-designed, secure, and reliable ETL and ELT architecture to clean, filter, and structure data into a readable and insightful format. The data we process comes from IoT devices, mobile apps, and various company systems.

We used ETL for well-structured data and ELT for more complex data sets that require real-time processing.
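To illustrate the distinction, here is a minimal PySpark sketch of both patterns. This is hypothetical code, not the client's implementation; the paths and table names (for example, /landing/orders/ and raw.iot_events) are assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-vs-elt-sketch").getOrCreate()

    # ETL: transform well-structured source data in flight, then load
    # only the cleaned result into the curated zone of the warehouse.
    orders = spark.read.parquet("/landing/orders/")            # assumed path
    cleaned = (orders
               .dropDuplicates(["order_id"])
               .filter(F.col("status").isNotNull())
               .withColumn("order_date", F.to_date("created_at")))
    cleaned.write.mode("append").saveAsTable("curated.orders")

    # ELT: land complex, fast-arriving IoT payloads as-is first;
    # transformation happens later, inside the warehouse engine.
    raw_events = spark.read.json("/landing/iot_events/")       # assumed path
    raw_events.write.mode("append").saveAsTable("raw.iot_events")

The point of the split: structured order data is cheap to clean in flight, while high-volume IoT payloads are landed raw and transformed later, where they can be reprocessed on demand.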

After that, we automated data processing and set up pipelines to transform raw data into business intelligence. To make the experience exciting and accessible for non-tech-savvy users, we created clean dashboards and visualization models for reports, enabling deeper insights into user behavior and feature performance.
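As a sketch of what one such pipeline stage can look like, the following hypothetical PySpark job rolls raw device events into a daily summary table that a dashboarding tool such as Power BI can query directly; the table and column names are assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bi-aggregation-sketch").getOrCreate()

    # Roll raw device events up into a daily, per-feature summary
    # that dashboards can query without touching the raw data.
    events = spark.table("raw.iot_events")                     # assumed table
    daily_usage = (events
                   .withColumn("day", F.to_date("event_time"))
                   .groupBy("day", "feature_name")
                   .agg(F.countDistinct("user_id").alias("active_users"),
                        F.count("*").alias("event_count")))
    daily_usage.write.mode("overwrite").saveAsTable("bi.daily_feature_usage")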

To make the most of the big datasets, we employed Machine Learning, AI, and Deep Learning to train models on the data and recognize patterns, helping us build predictive models that reveal a lot about future user behavior.
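A minimal sketch of the idea, assuming a prepared per-user feature set; the feature names, label, and file path below are hypothetical, and the actual models may differ:

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical training set: one row per user, with aggregated usage
    # features and a label marking continued use of a smart-bed feature.
    df = pd.read_parquet("features/user_usage.parquet")        # assumed path
    X = df[["avg_session_minutes", "sessions_per_week", "days_since_signup"]]
    y = df["uses_feature_next_month"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))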

Since big data storage is always challenging, we developed a combined strategy for data storage and ingestion: an on-prem data warehouse plus cloud storage, gaining the most out of both. Efficient resource utilization and optimization were vital to keeping the company's software expenses at a reasonable level while still delivering a long-term solution that serves as a basis for informed decision-making and deep insight into customer behavior and business operations.
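A simplified sketch of how such hybrid routing might look in PySpark, assuming an ADLS container for raw telemetry and a SQL Server-based on-prem warehouse; all names, URLs, and credentials below are placeholders, not the client's setup:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("hybrid-storage-sketch").getOrCreate()
    events = spark.table("raw.iot_events")                     # assumed table

    # High-volume raw telemetry stays in low-cost cloud object storage (ADLS).
    (events.write.mode("append")
           .partitionBy("event_date")
           .parquet("abfss://telemetry@account.dfs.core.windows.net/events/"))

    # Compact daily aggregates go to the on-prem warehouse over JDBC,
    # keeping its storage footprint (and cost) small.
    daily = events.groupBy("event_date").agg(F.count("*").alias("events"))
    (daily.write.mode("append")
          .jdbc("jdbc:sqlserver://onprem-dwh:1433;databaseName=analytics",
                "dbo.daily_event_counts",
                properties={"user": "etl_user", "password": "***"}))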

Results

After everything was up and running, our custom solution began to play an increasingly significant role in the company, serving as the basis for developing other crucial applications and systems.

Technology stack

ADF, Azure Databricks, Azure Blob and Table storage, ADLS, Power BI, Snowflake, MS Reporting Services