
Practical Azure Databricks & Practical Azure Data Factory (May 2020, Lyngby, Denmark)

We have partnered up again with our good friends at Orange Man in Denmark to deliver our Azure Databricks Training and our Azure Data Factory Training. Both sessions sold out last year, so we are pleased to offer the same courses again, significantly updated with the latest and greatest features.

We are running both courses back to back in Denmark, from the 25th to the 28th of May 2020.

We will kick off with two days on Azure Data Factory, taking a practical approach. By that we mean an applied approach to teaching: we work with customers across all verticals delivering bespoke advanced analytics solutions, and from that work we have learned where people struggle most and what they really need to know to get started. We continuously evolve our courses to ensure we are teaching the latest advances in the platform.


Practical Data Factory

In the first two days we will look at how to load data with Azure Data Factory, covering the basics of loading right through to metadata-driven approaches that accelerate moving data from source into a data lake.
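To give a feel for the metadata-driven idea, here is a minimal sketch (not course material) that drives a single parameterised Data Factory copy pipeline from a small control list of tables using the azure-mgmt-datafactory SDK. The resource names, the "ParameterisedCopy" pipeline and its parameters are hypothetical placeholders; in a real factory the same loop would more often live inside the pipeline itself, using Lookup and ForEach activities over a control table.

```python
# Hedged sketch: trigger one parameterised copy pipeline run per source table.
# All names (resource group, factory, pipeline, parameters) are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
factory_name = "<data-factory-name>"

# Control metadata: in practice this would sit in a database or config file,
# read by a Lookup activity rather than hard-coded here.
tables_to_load = [
    {"schema": "sales", "table": "Orders"},
    {"schema": "sales", "table": "Customers"},
]

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

for entry in tables_to_load:
    run = adf_client.pipelines.create_run(
        resource_group,
        factory_name,
        "ParameterisedCopy",  # hypothetical pipeline wrapping a parameterised copy activity
        parameters={
            "sourceSchema": entry["schema"],
            "sourceTable": entry["table"],
            "sinkFolder": f"raw/{entry['schema']}/{entry['table']}",
        },
    )
    print(f"Started run {run.run_id} for {entry['schema']}.{entry['table']}")
```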

“As cloud platforms expand in scale and breadth, there is a growing need for an orchestration tool that can bridge the gaps between distributed services. Azure Data Factory provides this glue, pulling together services into a coherent data preparation and transformation pipeline. However, many people make the leap from on-premises SSIS and use Data Factory in the same way – this will get you so far, but successful Data Factory developers write less code, reuse components and harness the emerging Data Flow technologies.

This two-day course takes the Data Factory novice and runs them through the fundamentals before taking them on a journey to building code-efficient, agile orchestration solutions. We will look at some of the most common scenarios, including pulling on-premises data into the cloud, hosting SSIS packages and communicating with Web APIs.”


Practical Databricks

Once data has been loaded into the cloud, we need a way to process it at scale, and Azure Databricks gives us the flexibility to do exactly that. We will take you through the basics: how to read and write data in Apache Spark, how to query it, how to apply common data warehouse patterns, how to performance tune, and much, much more.
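As a taster, here is a minimal PySpark sketch of that read / write / query pattern; the lake paths and column names are purely illustrative.

```python
# Hedged sketch: read raw CSV from the lake, aggregate it, write curated Parquet.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Azure Databricks a SparkSession is already provided as `spark`;
# getOrCreate() simply returns it (and lets the snippet run locally too).
spark = SparkSession.builder.getOrCreate()

# Read raw CSV files from the data lake.
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/datalake/raw/sales/orders/")
)

# A simple transformation and aggregation (the querying part).
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Write the curated result back to the lake as Parquet, partitioned by date.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/datalake/curated/sales/daily_revenue/")
)
```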

“On the first day we will introduce Azure Databricks, then discuss how to develop in-memory, elastic-scale data engineering pipelines. We will talk about shaping and cleaning data, the languages, notebooks, ways of working, design patterns and how to get the best performance. You will build an engineering pipeline with Python, then with Scala via Azure Data Factory, and then we will put it into context in a full solution. We will also talk about Data Lakes – how to structure and manage them over time in order to maintain an effective data platform.

On the second day we will shift gears, taking the data we prepared and enriching it with additional data sources before modelling it in a relational warehouse. We will look at various patterns of data engineering that cater for scenarios such as real-time streaming, decentralised reporting, rapidly evolving data science labs and huge data warehouses in specialised storage such as Azure SQL Data Warehouse. By the end of the day, you will understand how Azure Databricks sits at the core of data engineering workloads and is a key component in Modern Azure Warehousing.”

We hope to see you there in May!