
IBM: Big Data, Hadoop, and Spark Basics

4.5 stars
42 ratings

This course provides foundational big data practitioner knowledge and analytical skills using popular big data tools, including Hadoop and Spark. Learn and practice your big data skills hands-on.

Big Data, Hadoop, and Spark Basics
6 weeks
2–3 hours per week
Self-paced
Progress at your own speed
Free
Optional upgrade available

There is one session available:

15,137 already enrolled! Once the course session is finished, it will be archived.
Starts Nov 12

About this course


Organizations need skilled, forward-thinking Big Data practitioners who can apply their business and technical skills to unstructured data, such as tweets, posts, pictures, audio files, videos, sensor data, and satellite imagery, to identify the behaviors and preferences of prospects, clients, competitors, and others.

This course introduces you to Big Data concepts and practices. You will understand the characteristics, features, benefits, and limitations of Big Data and explore some of the Big Data processing tools. You'll explore how Hadoop, Hive, and Spark can help organizations overcome Big Data challenges and reap the rewards of acquiring that data.

Hadoop, an open-source framework, enables distributed processing of large data sets across clusters of computers using simple programming models. Each computer, or node, offers local computation and storage, allowing datasets to be processed faster and more efficiently. Hive, a data warehouse software, provides an SQL-like interface to efficiently query and manipulate large data sets in various databases and file systems that integrate with Hadoop.
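The "simple programming model" Hadoop is best known for is MapReduce: a map phase emits key–value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The sketch below illustrates that flow in plain Python with the classic word-count example; it is a conceptual illustration only (the function names are ours, not Hadoop's), since real Hadoop distributes these phases across cluster nodes and performs the shuffle for you.

```python
from collections import defaultdict

# Conceptual sketch of the MapReduce programming model in plain
# Python -- illustrative only; real Hadoop runs these phases in
# parallel across cluster nodes and handles the shuffle itself.

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in a line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data tools", "big data skills"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle_phase(pairs))
print(counts)  # {'big': 2, 'data': 2, 'tools': 1, 'skills': 1}
```

Because each map call depends only on its own line and each reduce call only on its own key, both phases parallelize naturally, which is what lets Hadoop scale the same program from one node to thousands.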

Open-source Apache Spark is a processing engine built around speed, ease of use, and analytics that provides users with newer ways to store and use big data.

You will discover how to leverage Spark to deliver reliable insights. The course provides an overview of the platform, going into the different components that make up Apache Spark. In this course, you will also learn how Resilient Distributed Datasets, known as RDDs, enable parallel processing across the nodes of a Spark cluster.
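The core idea behind RDDs is data parallelism: the dataset is split into partitions, each partition is processed independently by a task on some node, and the partial results are combined. The following minimal sketch imitates that pattern in plain Python, with threads standing in for Spark executors; the helper names and the sum-of-squares workload are our own illustration, not Spark's API.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of RDD-style data parallelism in plain Python.
# A real Spark cluster ships each partition to an executor on some
# node; here worker threads stand in for those executors.

def partition(data, num_partitions):
    """Split a dataset into roughly equal partitions."""
    size = -(-len(data) // num_partitions)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_partition(part):
    """The task each executor runs on its own partition."""
    return sum(x * x for x in part)

data = list(range(1, 101))
parts = partition(data, 4)

# Each partition is processed independently and in parallel,
# then the partial results are combined into a final answer.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(process_partition, parts))

total = sum(partial_sums)
print(total)  # sum of squares of 1..100 = 338350
```

Note that the per-partition work never touches another partition, which is exactly the independence that lets a Spark cluster scale this computation out across nodes.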

You'll gain practical skills when you learn how to analyze data in Spark using PySpark and Spark SQL and how to create a streaming analytics application using Spark Streaming, and more.

At a glance

  • Language: English
  • Video Transcript: English
  • Associated skills: Apache Hive, Resilience, PySpark, Nodes (Networking), Parallel Processing, Big Data, Unstructured Data, Analytical Skills, Spark Streaming, SQL (Programming Language), Satellite Imagery, Apache Hadoop, Apache Spark, File Systems, Data Warehousing

What you'll learn


After completing this course, you will be able to:

  • Describe Big Data, its impact, processing methods and tools, and use cases.
  • Describe Hadoop architecture, ecosystem, practices, and applications, including the Hadoop Distributed File System (HDFS), HBase, Spark, and MapReduce.
  • Describe Spark programming basics, including parallel programming basics, for DataFrames, data sets, and SparkSQL.
  • Describe how Spark uses RDDs, creates data sets, and uses Catalyst and Tungsten to optimize SparkSQL.
  • Apply Apache Spark development and runtime environment options.

Syllabus


Module 1 – What is Big Data?

Introduction to Big Data

o What is Big Data?

o Impact of Big Data

o Parallel Processing, Scaling, and Data Parallelism

o Tools of Big Data

o Beyond the Hype

o Big Data Use Cases

o Viewpoints about Big Data

Module 2 – Introduction to the Hadoop Ecosystem

Introduction to the Hadoop Ecosystem

o What is Hadoop?

o An introduction to MapReduce

o The Hadoop Ecosystem and Common Components: Introducing HDFS, Hive, HBase, Spark, and other modules

o Working with HDFS

o Working with HBase

o Lab: MapReduce

Module 3 – Introduction to Apache Spark

Introduction to Apache Spark

o Why use Apache Spark?

o Functional Programming Basics

o Parallel Programming using Resilient Distributed Datasets

o Scale-out / Data Parallelism in Apache Spark

o DataFrames and SparkSQL

o Lab: Practical examples with PySpark

Module 4 – DataFrames and SparkSQL

DataFrames and SparkSQL

o Introduction to DataFrames & SparkSQL

o RDDs in Parallel Programming and Spark

o DataFrames and Datasets

o Catalyst and Tungsten

o ETL with DataFrames

o Lab: ETL with DataFrames

o Real-world usage of SparkSQL

o Lab: SparkSQL

Module 5 – Development and Runtime Environment options

Development and Runtime Environment options

o Apache Spark architecture

o Overview of Apache Spark Cluster Modes

o How to Run an Apache Spark Application

o Using Apache Spark on IBM Cloud

o Lab: Scale-out on IBM Spark Environment in Watson Studio

o Setting Apache Spark Configuration

o Running Spark on Kubernetes

o Lab: Spark on Kube

Module 6 – Monitoring & Tuning

Monitoring and tuning Apache Spark

o The Apache Spark User Interface

o Monitoring Jobs

o Debugging of parallel jobs

o Understanding Memory resources

o Understanding Processor resources

o Lab: Monitoring and Performance tuning

Module 7 – Final Quiz

This course is part of the Data Engineering Professional Certificate program

Learn more
Expert instruction
14 skill-building courses
Self-paced
Progress at your own speed
1 year 2 months
3–4 hours per week

Interested in this course for your business or team?

Train your employees in the most in-demand topics, with edX For Business.