BECOME A DATA ENGINEER WITH THE RETRAINING PROGRAM

Big Data is taking the world by storm! Countless benefits, emerging trends, and promising opportunities—inspiring, isn’t it?
Want to join the Big Data Team at SoftServe?


WE OFFER

  • Salary during the retraining period
  • 2 months of tailor-made learning designed to help you obtain valuable skills in a short time. On completion of the training, you will join a real project as a Data Engineer
  • Support from domain experts—2 one-hour meetings each week to raise issues of concern and seek advice
  • An opportunity to pass any of the GCP Data Engineering, Azure DP-203, or AWS Data Analytics certifications for free
  • Access to Qwiklabs, so you can learn the cloud using the real thing—no simulations
  • Access to Udemy—an online learning and teaching marketplace
  • Access to platforms with e-books necessary for your professional development
  • An opportunity to practice on a real Big Data project
  • An opportunity to take part in the OpenTech initiative and apply the knowledge you obtain to tech solutions in the social sphere

YOUR JOURNEY WITH SOFTSERVE STARTS NOW

1. Meet the requirements and apply for the program

2. Pass the interview

3. Accept the offer and join the SoftServe team

4. Complete the Retraining Program under the guidance of SoftServe Big Data experts

5. Pass any of the following certifications: GCP Data Engineering, Azure DP-203, or AWS Data Analytics

6. Start working on your first Big Data project as a Data Engineer

GCP for Big Data

Within this program, practical tasks are completed on Qwiklabs. During your studies, you will explore the Big Data world—data warehousing, batch processing, stream processing, NoSQL and GCP storage, and much more! You will get familiar with the big data capabilities of Google Cloud, consider use cases for each type of data storage, survey the data lake and warehouse solutions available on GCP, and practice storing data in Cloud Datastore. You will learn how to create and run an Apache Airflow workflow in Cloud Composer and how to transform your data warehouse using BigQuery.

As for data storage and processing, you will create a storage bucket, upload objects to it, create folders and subfolders, and make objects publicly accessible using the Google Cloud command line. In addition, you will get hands-on experience building data pipeline components on GCP and executing Spark on Cloud Dataproc. Serverless data processing with Cloud Dataflow, building streaming data pipelines, and including machine learning in data pipelines on GCP are also important parts of the learning path.
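The pipeline ideas above can be sketched without touching GCP at all. The following is a minimal, library-free illustration of what a batch pipeline does (filter, map, group-and-aggregate); it is plain Python for intuition only, not Cloud Dataflow or Apache Beam code:

```python
# A minimal, library-free sketch of the "data pipeline" idea behind
# Cloud Dataflow: a source, a chain of transforms, and an aggregation.
# Plain Python for illustration only, not GCP code.

def run_pipeline(records):
    """Filter, transform, and aggregate a batch of page-view records."""
    # Transform 1: keep only valid records (non-empty user field)
    valid = (r for r in records if r.get("user"))
    # Transform 2: map each record to a (user, 1) pair
    pairs = ((r["user"], 1) for r in valid)
    # Transform 3: group-and-sum, like a word-count aggregation
    counts = {}
    for user, n in pairs:
        counts[user] = counts.get(user, 0) + n
    return counts

events = [
    {"user": "ada", "page": "/home"},
    {"user": "ada", "page": "/docs"},
    {"user": "", "page": "/home"},      # invalid: dropped by the filter
    {"user": "grace", "page": "/home"},
]
print(run_pipeline(events))  # {'ada': 2, 'grace': 1}
```

The real services add distribution, fault tolerance, and streaming semantics on top, but the shape of the computation is the same.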

Experts:

Oleksandr Zhukov, Big Data Architect

Oleksandr is an expert-level Python Software Engineer with a strong algorithmic background and more than 8 years of commercial experience in IT. His areas of interest are distributed systems and distributed computations, system programming, Big Data, SaaS/PaaS/IaaS services, algorithms, and data structures. Oleksandr has proven his expertise by passing numerous certifications—AWS Certified Developer Associate, Google Certified Professional Data Engineer, Certified Cloud Architect, and Certified Kubernetes Application Developer.
Taras Kloba, Big Data Engineering Manager

Taras has more than 10 years of experience in the IT sector across the banking, online advertising, and gambling domains. His main goal is to help companies make effective, data-driven business decisions. By getting the most out of Big Data, companies better understand their industry and customers, analyze customer behavior, and use it as a trend detection tool, making them more competitive. In 2016, Taras joined the international PASS community, which organizes SQL Saturday conferences in many countries around the world. He later began to help organize similar conferences in Lviv. The following year, Taras became one of the leaders of the PostgreSQL Ukraine community. He organizes meetings with world-renowned experts in Ukraine and takes an active part in the development of the Lviv data community.

AWS for Big Data

Within this program, you will learn how to collect, store, process, analyze, visualize, and protect data using Amazon Web Services. You will get familiar with the Amazon Kinesis Data Streams service and with Amazon S3. In addition, you will learn more about Amazon DynamoDB, a fully managed NoSQL database service.
Speaking of data processing, you will become acquainted with AWS Glue, a serverless ETL service that crawls your data, builds a data catalog, and performs data cleansing, transformation, and ingestion to make your data immediately queryable. In addition, you will learn the basics of setting up a secure data lake and of running and scaling Apache Spark, Hive, Presto, and other big data frameworks with Amazon EMR.
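To make the "crawling" idea concrete, here is a hedged, library-free sketch of its core concept: scanning raw records and inferring a schema, the kind of entry a data catalog stores. This is plain Python for illustration, not the AWS Glue API:

```python
# A library-free sketch of what an ETL crawler does conceptually:
# scan raw records and infer a field -> type mapping (a "catalog" entry).
# Plain Python for illustration; the real service does far more.

def infer_schema(records):
    """Infer a simple schema: each field mapped to the list of types seen."""
    schema = {}
    for record in records:
        for field, value in record.items():
            schema.setdefault(field, set()).add(type(value).__name__)
    # A field with a single observed type is "clean";
    # a field with several types is a candidate for data cleansing.
    return {field: sorted(types) for field, types in schema.items()}

raw = [
    {"id": 1, "amount": 9.99},
    {"id": 2, "amount": "12.50"},  # inconsistent type: flagged for cleansing
]
print(infer_schema(raw))  # {'id': ['int'], 'amount': ['float', 'str']}
```

Once such a catalog entry exists, downstream queries can treat the raw files as tables, which is what makes the data "immediately queryable."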
Moreover, you will get hands-on experience with real-time data transformation and analysis, and learn how to use Amazon Elasticsearch Service to deploy, manage, and scale Elasticsearch easily and cost-effectively.
Data visualization with Amazon QuickSight, as well as server-side and client-side data encryption, is also a compulsory part of the curriculum.

Experts:

Yuliia Sholohon, Lead Big Data Engineer

Yuliia is a Lead Big Data Engineer in the AWS Big Data & Analytics Team at SoftServe and has been working in IT since 2013. She is experienced in various data technologies: relational databases, NoSQL, and Big Data. For the last 5 years, Yuliia has been working closely with various AWS services, Spark, Hadoop, and other Big Data technologies. Yuliia is an AWS certified specialist who has passed all the data-centric certifications—AWS Solutions Architect Associate, AWS Data Analytics Specialty, and AWS Database Specialty. She is also a Microsoft Certified SQL Server Professional, an Oracle Certified Associate, and a Java SE 8 Programmer.

Power BI for Big Data

During your studies, you will get familiar with the different tasks of a Data Analyst. You will explore Power Query by learning how to extract data from different data sources, choose a storage mode and connectivity type, and clean and prepare your data for analysis.
The program provides handy tips on how to retrieve data from a wide variety of data sources, including Microsoft Excel, relational databases, and NoSQL data stores. You will also learn how to improve performance while retrieving data.
You'll walk away from this course with knowledge of the DAX language and the ability to create a wide variety of analytic solutions. Additionally, you'll learn how to improve the performance of your Power Query data retrieval tasks.
As for visualization, you will get helpful advice on which visuals suit certain solutions. In addition, you will learn how to design a data model that is intuitive, high-performing, and simple to maintain. Moreover, you will learn the basics of sorting data and presenting reports in a cohesive manner. You will learn how to get a statistical summary of your data and export data from Power BI. You will also apply advanced analytics to your reports for deeper, more meaningful data insights.
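The statistical summary mentioned here is the familiar count/min/max/mean/median profile of a numeric column. Outside Power BI, the same profile can be sketched with standard-library Python (an illustration of the concept, not Power BI functionality):

```python
# A quick statistical summary of a numeric column, akin to the profile
# a BI tool shows for a field. Standard-library Python, illustration only.
import statistics

def summarize(values):
    """Return a basic descriptive-statistics profile of a numeric column."""
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": round(statistics.stdev(values), 2),
    }

# Hypothetical daily sales figures; note the outlier (410) that a large
# max-vs-median gap would reveal at a glance.
sales = [120, 135, 128, 410, 119, 131]
print(summarize(sales))
```

Profiles like this are the first step of data cleansing: a large gap between mean and median, or an extreme min/max, points at outliers or bad records before any visual is built.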
You will dive deeper into report preparation. Furthermore, you will figure out how a Data Analyst uses the suite of Power BI tools and services to tell a compelling story through reports and dashboards. You will get hands-on experience creating workspaces in the Power BI service and connecting Power BI reports to on-premises data sources.
Both theoretical and practical parts of the learning path can help you prepare for the Microsoft Certified: Data Analyst Associate certification.


Experts:

Taras Kloba, Big Data Engineering Manager

Taras has more than 10 years of experience in the IT sector across the banking, online advertising, and gambling domains. His main goal is to help companies make effective, data-driven business decisions. By getting the most out of Big Data, companies better understand their industry and customers, analyze customer behavior, and use it as a trend detection tool, making them more competitive. In 2016, Taras joined the international PASS community, which organizes SQL Saturday conferences in many countries around the world. He later began to help organize similar conferences in Lviv. The following year, Taras became one of the leaders of the PostgreSQL Ukraine community. He organizes meetings with world-renowned experts in Ukraine and takes an active part in the development of the Lviv data community.

Iryna Znak, Senior DW/BI Engineer

Iryna is a Business Intelligence Engineer with over seven years of IT experience in data visualization, data storage design, and data mining. Her core skill is leveraging BI visualization tools and SQL to support strategic business decisions. In addition, she is experienced in implementing improved data analytics, data structures, and other data-related strategies, with a successful track record in data visualization.
Iryna has proven her expertise by passing the following certifications: Tableau Desktop Specialist; 70-461: Querying Microsoft SQL Server; and 70-778: Analyzing and Visualizing Data with Microsoft Power BI.

Iryna Vilchynska, Senior DW/BI Engineer

Iryna is a certified Power BI and Azure expert experienced in DWH development, ETL, data analysis, introducing self-service capabilities, and implementing holistic analytical solutions for different business domains. In addition, Iryna has hands-on experience with all market-leading BI tools. She was responsible for the BI bootcamp and currently leads the Power BI team on one of the largest SoftServe projects.


Viktoriia Belmeha, Senior DW/BI Engineer

Viktoriia is a Business Intelligence Engineer with about 6 years of experience in data analysis, data storage design, and data mining. She leverages BI visualization tools to support strategic business decisions. Viktoriia has extensive experience working with Power BI, Tableau, Grafana, and other BI tools to identify, analyze, and interpret trends and patterns in complex data sets. Moreover, she has successfully passed the Power BI DA-100 exam.

Azure for Big Data

Within this program, you will get familiar with the basics of Azure database services. You will dive deeper into Azure Data Factory, learn how to integrate data, monitor data, and execute data pipelines.
The program provides handy tips on how to analyze large data sets and gives you an understanding of the inner workings of Apache Spark. Furthermore, you will learn the basics and best practices of working with the Databricks analytics platform and the Azure Databricks analytics service. Moreover, you will get familiar with Azure Synapse Analytics—building integrated analytical solutions, working with data warehouses, performing data engineering, importing data, and carrying out other activities that will improve your expertise.
You'll work with Azure Data Lake Storage and learn how to process your Big Data efficiently and easily, and how to configure container-, directory-, and file-level access to your data.
Implementing data streaming solutions with Azure Event Hubs and Azure Stream Analytics is also a compulsory part of the curriculum.
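Streaming solutions like these usually reduce to windowed aggregations over timestamped events. Below is a minimal, library-free sketch of a tumbling-window count in plain Python; it illustrates the concept only and uses none of the Azure APIs:

```python
# A library-free sketch of a tumbling-window count, the core operation in
# stream processing (what Azure Stream Analytics expresses in a SQL dialect).
# Plain Python for illustration only.

def tumbling_window_counts(events, window_seconds):
    """Count events per fixed, non-overlapping time window."""
    counts = {}
    for ts, _payload in events:
        # Every timestamp maps to exactly one window, keyed by its start time.
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] = counts.get(window_start, 0) + 1
    return counts

# (timestamp_seconds, payload) pairs from a hypothetical sensor stream
stream = [(0, "a"), (3, "b"), (9, "c"), (12, "d"), (14, "e")]
print(tumbling_window_counts(stream, 10))  # {0: 3, 10: 2}
```

The managed services add ingestion at scale, checkpointing, and late-arrival handling, but a tumbling window itself is exactly this bucketing of timestamps.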

Experts:

Andriy Zabavskyy, Solutions Architect

Andriy is a Data Architect and data warehouse expert with over 15 years of experience in designing and implementing intensive data processing and large-scale decision support systems, including operational intelligence and advanced analytics solutions. He has extensive expertise in establishing and implementing enterprise-level data architecture for different business domains, applying proven design methodologies and adopting cutting-edge technologies and approaches such as NoSQL, massively parallel processing, streaming, and cloud computing.
Gleb Smolnik, Lead Big Data Engineer

Gleb has about 7 years of commercial experience working on data-related projects. He has extensive experience with the Azure cloud platform and is a Certified Azure Data Engineer Associate. Having successfully passed the 70-473, DP-200, DP-201, and DP-203 exams, he has used the best of his knowledge to compile this course. Azure, Spark, Python, and managed services are among his areas of interest.
Taras Ozarkiv, Lead DW/BI Engineer

Taras has 7 years of experience in IT. His main areas of expertise are the design and development of Business Intelligence solutions, Power BI, ETL, and the modern data warehouse. Taras has implemented many large enterprise projects using the Azure cloud platform and Microsoft BI tools. His work spans full application design from scratch, requirements analysis and specification, issue detection and resolution, testing, performance assessment, code improvement, and technical leadership. Taras leads the Lviv Data Platform User Group (formerly the Lviv SQL Server UG) community and is one of the organizers of SQL Saturday in Lviv.



REQUIREMENTS FOR CANDIDATES

  • A Software Engineer—Python, Java, or Scala developer—with two or more years of back-end development experience
  • Skilled in SQL
  • An understanding of key data engineering principles and concepts
  • Exposure to cloud services
  • Good communication skills
  • Upper-Intermediate English level

General information

Duration

Two-month course

Cost

Free, with a salary while learning

Support

Experts and a tutor

Certifications

Cloud certification of your choice for free

FEEDBACK

Oleksiy Fedorchuk

"I was one of the first students on the GCP retraining program.
Overall, I got a comprehensive understanding of data storage theory (what a data lake and a data warehouse are, where to use normalization, where we shouldn't, etc.) and the basics of how exactly this maps to GCP. I went through the major GCP services and learned in short workshops why each service exists and what its benefits and drawbacks are.
The dominant part of the program is Google training courses on GCP and Udemy courses on theory. You will not be a guru after the course, but you will get a solid core that allows you to understand whether data engineering fits you or not.
To sum up, I do recommend this course as your first step towards data engineering."

Tina Palii

"I am studying in the Big Data retraining program now, and I can only say good things. Our architects compiled an extremely informative and flexible program that allows you to obtain the necessary knowledge for further work as a Big Data Engineer. The ability to walk this path with a mentor makes it less thorny and more interesting. This is especially important if you are moving from a completely different area and must cover a lot of information."

Ivan Pokhvalit

"The program includes various cloud-related topics and covers a lot of material (both theoretical and practical). So, it takes a little more time than it may seem at first glance. Having a mentor is an excellent bonus of this retraining. The mentor answers all your questions and corrects and directs you in your studies."

Igor Bulenko

"By now, I have completed half of the course. So far, I can say that the program is well thought-out and agile. My mentor has been very helpful, supports my studies, and answers all my questions. I am glad I enrolled in this course."

Sergii Vanzha

"In general, everything is cool: a lot of material that covers theory and practice. Access to the sandbox is a nice bonus as well. It is very convenient to have a mentor who can answer all your accumulated questions. So, for me, this is the best format for someone who would like to start working with the cloud and learn at their own pace and schedule."

FAQs

How long do the studies take?

It takes 2 months to complete the Big Data Retraining Program.

What is the price of the Retraining Program?

The Big Data Retraining Program is free of charge. In addition, you receive a salary while learning.

Will I work as a Big Data Engineer upon completion of the Retraining Program?

Yes. First, you join SoftServe, complete the Retraining Program, and get certified; then you pass project interviews for the Big Data Engineer position. We currently have more than 50 open vacancies—Vacancies at SoftServe (softserveinc.com)

What is the curriculum?

The program covers Big Data fundamentals—theoretical aspects as well as practical tasks to be completed on Qwiklabs.
During your studies, you will dive deep into the Big Data world—data warehousing, batch processing, stream processing, NoSQL, and GCP storage. You will also have two one-hour meetings each week with SoftServe Big Data experts to raise issues of concern and seek advice.
Upon completing each module on Udemy and the practical tasks on Qwiklabs, you will have a chance to take a test that prepares you for the GCP Professional Data Engineer certification.

Do I need to come to the office?

The studies are online. You can complete the course from any location. SoftServe offers remote work possibilities as well.


Meet your mentors

Dmytro Minochkin

SoftServe Academy Mentor

Ukraine

10+ years of professional experience in IT
Certificates: Google Cloud Professional Cloud Architect, Google Cloud Professional Data Engineer
Technologies: DevOps, Google Cloud Platform, Amazon Web Services, Java/Kotlin, SwiftUI, HTML/CSS/JS, TypeScript
LinkedIn: https://www.linkedin.com/in/dmytro-minochkin-019580142/

Nataliia Romanenko

SoftServe Academy Mentor

Ukraine

9 years of professional experience in IT
Technologies: Java, QC, ATQC
LinkedIn: https://www.linkedin.com/in/natali-romanenko-04430118/

Sign up to get informed about the course launch:
