Why we Love Data
Today, every company relies on a digital transformation strategy to compete and explore new avenues of growth, and a clear data strategy is vital to any such journey.
Most companies already have a wealth of data, but many lack the know-how to use it effectively.
So, to deliver quick wins, we at AFour Technologies assist companies on their digital transformation journey by providing cost-effective, scalable, and secure data engineering services.
Services we Provide
We deliver end-to-end data engineering and data analytics services. These include, but are not limited to, data collection, organization, data modelling, visualization, and data governance. We focus on descriptive and diagnostic analytics, which lays the groundwork for our customers to move on to predictive analytics.
Business Problems we Solved for our Customers
– For a customer in the digital advertising domain, we manage their analytics platform, which provides data engineering and analytics services to advertisers, ad agencies, and media companies.
The platform consists of numerous data pipelines delivering video, display, and keyword-search analytics. On average, ~2.5 TB of data is processed per day for 300+ customers, and various insights are delivered. Snowflake serves as the data warehouse; the big data platform consists of AWS EMR clusters for running jobs, AWS Data Pipeline, and Apache Spark. Apache Superset is used for analytics and dashboarding, and the messaging platform is built on Kafka. The programming languages used are Scala, Java, and Python, and Grafana and Nagios handle monitoring.
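As a rough illustration of the kind of per-customer aggregation these pipelines perform (in production this runs as Spark jobs on EMR consuming from Kafka; the event fields and customer names below are purely hypothetical):

```python
from collections import defaultdict

# Illustrative ad events; in production these arrive via Kafka and are
# processed by Spark jobs on EMR. Field names here are made up.
events = [
    {"customer": "acme", "type": "impression"},
    {"customer": "acme", "type": "click"},
    {"customer": "acme", "type": "impression"},
    {"customer": "globex", "type": "impression"},
]

def aggregate(events):
    """Count impressions and clicks per customer and derive a CTR."""
    stats = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    for e in events:
        stats[e["customer"]][e["type"] + "s"] += 1
    return {
        cust: {**s, "ctr": s["clicks"] / s["impressions"] if s["impressions"] else 0.0}
        for cust, s in stats.items()
    }

report = aggregate(events)
print(report["acme"])  # {'impressions': 2, 'clicks': 1, 'ctr': 0.5}
```

The same group-by-and-derive shape scales to billions of events once expressed as a Spark aggregation rather than a Python loop.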
– For one of our customers, we redesigned and built a scalable recommendation engine for pay-per-click online advertising. More than 100,000 small businesses were consuming their services, but the legacy relational database solution could no longer scale up to support the rapidly growing data sources and new dimensions.
Our data engineering services provided a solution: we implemented Apache Kafka as the broker and buffer for all data streams, wrote Python-based connectors to the providers' endpoints, and used PySpark scripts to extract and transform the data from Kafka.
The data offsets, with timestamps, were maintained in a PostgreSQL database. Cassandra was chosen as the data warehousing solution, and the transformed data was saved in Cassandra tables.
Careful design of the row key in the Cassandra tables provided an efficient way to fetch large data sets from the cluster.
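The project's actual key layout is proprietary, but the idea can be sketched in plain Python. A common Cassandra pattern is to bucket the partition key by day, so partitions stay bounded and one customer-day can be fetched in a single partition read, while a timestamp clustering column keeps rows ordered within the partition. All names below are illustrative:

```python
from datetime import datetime, timezone

def row_key(entity_id: str, ts: datetime, bucket: str = "%Y%m%d") -> tuple:
    """Hypothetical composite key: (partition key, clustering column).

    Bucketing the partition key by day bounds partition size and lets a
    reader fetch one entity-day with a single partition read; the ISO
    timestamp clustering column keeps rows sorted within the partition.
    """
    return (f"{entity_id}:{ts.strftime(bucket)}", ts.isoformat())

k1 = row_key("advertiser-42", datetime(2020, 5, 1, 9, 30, tzinfo=timezone.utc))
k2 = row_key("advertiser-42", datetime(2020, 5, 1, 17, 0, tzinfo=timezone.utc))

# Same partition (same entity-day), ordered by the clustering column:
assert k1[0] == k2[0] == "advertiser-42:20200501"
assert k1[1] < k2[1]
```

In a real schema the two parts would map to `PRIMARY KEY ((entity_id, day), ts)` in CQL rather than a Python tuple.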
– One of our customers, a major player in the Oil & Gas industry, was exploring options to implement data governance for the millions of proprietary files being added to Azure Data Lake. The end goal of the data governance solution was to enable effective search and security layers to protect the data.
Our data engineering team solved this problem. We developed data governance and metadata management solutions using industry-standard tools such as Apache Atlas and Azure Data Catalog Gen 2. We evaluated Apache Atlas for data catalogue management and explored its functionality, since Azure Data Catalog Gen 2 is built on top of Apache Atlas.
We designed and implemented various types, entities, and processes for data governance in Apache Atlas, using attribute lists to search entities and consuming them through glossaries.
We also demonstrated how attributes and classifications propagate from parent entities to child entities, as well as metadata extraction from an Oracle RDBMS through Sqoop and Hive using Atlas hooks and the bridge mechanism. Overall, the solution was developed using Hadoop, Hive, Sqoop, Oracle, and Apache Atlas.
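To illustrate the propagation behaviour, here is a deliberately simplified, hypothetical model of Atlas-style entities in plain Python — not the Atlas API itself — showing how a classification marked as propagating flows from a parent table down to its child columns:

```python
# Simplified, hypothetical Atlas-style entities: each has a typeName,
# attributes, classifications, and contained child entities.
table = {
    "typeName": "hive_table",
    "attributes": {"qualifiedName": "sales.orders@prod", "name": "orders"},
    "classifications": [{"typeName": "PII", "propagate": True}],
    "children": [
        {"typeName": "hive_column",
         "attributes": {"name": "customer_email"},
         "classifications": [],
         "children": []},
    ],
}

def propagate(entity):
    """Copy propagating classifications from a parent to all descendants,
    mimicking how Atlas flows classifications down containment/lineage."""
    inherited = [c for c in entity["classifications"] if c.get("propagate")]
    for child in entity["children"]:
        present = {c["typeName"] for c in child["classifications"]}
        child["classifications"] += [c for c in inherited
                                     if c["typeName"] not in present]
        propagate(child)

propagate(table)
col = table["children"][0]
assert [c["typeName"] for c in col["classifications"]] == ["PII"]
```

In Atlas itself this propagation is configured per classification and applied by the server over lineage; the sketch only conveys the parent-to-child flow.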
– We partnered with a digital wealth management company with operations in Hong Kong, Singapore, and India to design a scalable and secure real-time analytics solution. The solution was built using Google services such as Data Studio, and it enables customer representatives to gain insights into their clients' portfolios, study trends, and compare multiple instruments to reduce portfolio risk and thus deliver better growth.
The Team Composition
Our Data Architects have 15+ years of overall experience. They are well versed in designing complex, scalable, and secure data analytics solutions using Azure Databricks, Apache Spark, Azure Data Factory, and Azure Data Lake.
Our Data Analysts are well versed in various data modelling techniques and have expertise in data analytics and visualization tools such as Power BI, Tableau, and Google Data Studio.
Tools and Technologies we Use
It is vital to choose a technology stack that supports scalability and security. Each customer needs a carefully devised data strategy based on various business factors.
Making significant changes once a solution is in place is often very costly, so we ensure we use the most suitable analytics solutions from AWS, Azure, and Google Cloud Platform.
Data warehouse/Storage – Azure SQL, Amazon Redshift, Hive, HDFS, Amazon S3, Azure Blob
Data Platforms – Azure Databricks, HDInsight, Amazon EMR
Data Ingestion – Kafka, RabbitMQ, Azure Event Hub
Data Analytics – Apache Spark, Azure Data Lake
Data Visualization – Power BI Desktop, Power BI Embedded, Tableau, Google Data Studio, Apache Superset
Cluster Management – Hadoop Yarn, Kubernetes
Languages & Libraries – U-SQL, Python, Scala, PySpark, NumPy, pandas, scikit-learn