Enroll Here: Spark Fundamentals II Cognitive Class Exam Quiz Answers
Spark Fundamentals II Cognitive Class Certification Answers
Module 1: Introduction to Notebooks Quiz Answers – Cognitive Class
Question 1: Which of the following statements about Zeppelin Notebook is NOT true?
- Zeppelin is open-source.
- With Zeppelin, you can run code and create visualizations through a web interface.
- Zeppelin comes configured with Scala, Spark, and Julia.
- Zeppelin is an interactive data analytics tool started by NFLabs.
Question 2: Jupyter Notebook and Data Scientist Workbench are both open-source projects. True or false?
- False
- True
Question 3: Which notebook will you use in the lab section of this course?
- Databricks
- Zeppelin
- Watson Studio
- Jupyter Notebook
Module 2: RDD Architecture Quiz Answers – Cognitive Class
Question 1: Which of the following statements is NOT true?
- Partitioning is what enables parallel execution of Spark jobs.
- An RDD is made up of multiple partitions.
- Spark normally determines the number of partitions based on the size of the hard drives in your cluster.
- Spark is able to read from many different data stores in addition to HDFS, including the local file system and cloud services like Cloudant, AWS, Google, and Azure.
Question 2: In the example of an RDD with 3 partitions and no partitioner, which of the following statements is true?
- It is better not to partition an RDD if you need to join it multiple times.
- Joining RDDs with no partitioner will cause each executor to shuffle all values with the same key to a single machine.
- Repeatedly joining on the same RDD is highly efficient.
- The keys are co-located.
Question 3: Speculative execution handles slow tasks by re-launching them as necessary. True or false?
- False
- True
Module 3: Optimizing Transformations and Actions Quiz Answers – Cognitive Class
Question 1: Which of the following statements is true?
- MapValues applies a map function to each value and performs repartitioning.
- GroupByKey groups all values by key from all partitions into memory.
- GroupByKey shuffles everything and it operates efficiently on large datasets.
- CountByKey is designed to be used in production.
Question 2: AggregateByKey is better than GroupByKey when we want to calculate the average value for each key in an RDD. True or false?
- False
- True
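For context, here is a minimal sketch of the aggregateByKey approach to per-key averages (it assumes an existing SparkContext named sc and made-up data). Each partition pre-aggregates a (sum, count) pair, so only one small record per key, per partition is shuffled:

// Per-key averages with aggregateByKey: combine within partitions first, then across partitions
val scores = sc.parallelize(Seq(("a", 1.0), ("a", 5.0), ("b", 6.0), ("b", 3.0)))

val sumCounts = scores.aggregateByKey((0.0, 0))(
  (acc, value) => (acc._1 + value, acc._2 + 1),           // within a partition
  (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)  // merging partition results
)

val averages = sumCounts.mapValues { case (sum, count) => sum / count }
averages.collect().foreach(println)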
Question 3: Which of the following statements is NOT true?
- MapValues tells Spark that the hashed keys will remain in their partitions and we can keep the same partitioner across operations.
- In the example of a pair RDD with 2 partitions, running a map operation over all records will leave the keys of each record unchanged.
- AggregateByKey splits the calculation into two steps. Only one pair per key, per partition is shuffled.
- GroupByKey causes a shuffle of all values across the network, even if they are already co-located within a partition.
Module 4: Caching and Serialization Quiz Answers – Cognitive Class
Question 1: Which of the following statements is true?
- When you no longer need the persisted RDD, Spark will automatically make room for new RDDs.
- Persisting to disk would allow us to reconstitute the RDD in the event a partition is lost, instead of re-computing all the expensive operations for the lost partitions.
- Ideally we want to persist before any pruning, filtering, or other transformations needed for downstream processing.
- Persisting RDDs can help us save time re-computing partitions, and persistence is in-memory only.
Question 2: Which of the following statements is NOT true?
- Serialization has the added benefit of helping with garbage collection, as you’ll be storing 1 object versus many small objects for each record.
- The records of an RDD will be stored as one large byte array.
- There is almost no CPU usage to deserialize the data.
- Serialization helps by saving space that persisting RDDs occupy in memory.
Question 3: The Java serializer can store the entire RDD in less space than the original file. True or false?
- False
- True
Module 5: Development and Testing Quiz Answers – Cognitive Class
Question 1: Which of the following statements is true?
- We cannot use sbt for an Eclipse project.
- We cannot create builds directly from the console using sbt.
- sbt automatically finds source and library files using a conventional directory structure.
- Maven is more powerful and customizable than sbt.
Question 2: IntelliJ fully supports sbt build files with no conversions required. True or false?
- False
- True
Question 3: Which of the following statements is NOT correct during unit testing?
- The spark-testing-base package is handy for testing.
- We want to test the code that is actually used in our application.
- We should not use unit testing tools like scalatest.
- We should put transformations for a given RDD in its own object or class.
Spark Fundamentals II Final Exam Answers – Cognitive Class
Question 1: Which of the following web-based notebooks is built around Jupyter and IPython?
- Data Scientist Workbench
- Spark Notebook
- Databricks Cloud
- Zeppelin
Question 2: What defines a stage boundary?
- Repartition
- Action
- Transformation
- Shuffle dependency
Question 3: What does RDD stand for?
- Resilient Distributed Dataset
- Reusable Distributed Dataset
- Reusable Data Directory
- None of the above
Question 4: Coalesce can reduce the number of partitions without causing a shuffle. True or false?
- True
- False
Question 5: Which operation should you use to map the values in a pair RDD without affecting the keys or partitions?
- map
- mapValues
- map or mapValues
- You cannot map a pair RDD without affecting the keys or partitions.
Question 6: How can you view the lineage of an RDD?
- showLineage()
- toDebugString()
- printHistory()
- printGraph()
Question 7: Adding a key to an RDD will automatically repartition it so that the keys are co-located. True or false?
- True
- False
Question 8: How can you reference an external class in a closure without serializing it?
- define it as transient
- define it as lazy
- Both of the above
- None of the above
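As a brief illustration of the answer ("Both of the above"), here is a sketch that assumes an existing SparkContext sc and a hypothetical, non-serializable ExpensiveParser class; marking the field both @transient and lazy keeps it out of the serialized closure and rebuilds it on each executor when first used:

// Hypothetical helper that we do not want to (or cannot) serialize
class ExpensiveParser { def parse(line: String): Int = line.split(" ").length }

class TextProcessor extends Serializable {
  // Excluded from serialization; re-created lazily on each executor
  @transient lazy val parser = new ExpensiveParser
  def process(line: String): Int = parser.parse(line)
}

val processor = new TextProcessor
val counts = sc.textFile("SomeText.txt").map(line => processor.process(line))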
Question 9: What does Spark do during speculative execution?
- Spark looks for tasks it expects to be short and runs them first
- Spark dynamically allocates more resources to large tasks
- Spark identifies slow-running tasks and restarts them
- None of the above
Question 10: What does the following code do?
val text = sc.textFile("SomeText.txt")
val counts = text.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).collectAsMap()
- Counts the total number of words in the document
- Counts the number of distinct words in the document
- Maps every word in the document to the number of times it occurs
- None of the above
Question 11: Which operation has the highest chance of causing out-of-memory errors if the dataset is really large?
- countByValue
- groupByKey
- reduceByKey
- map
Question 12: What is the result of this code?
val pairs = sc.parallelize(List(("a", 1), ("a", 5), ("b", 6), ("b", 3), ("c", 2)))
val results = pairs.reduceByKey((a, b) => {
  a > b match {
    case true => a
    case false => b
  }
}).collectAsMap()
- ("a" -> 5, "b" -> 6, "c" -> 2)
- ("a" -> 6, "b" -> 9, "c" -> 2)
- (5, 6, 2)
- None of the above.
Question 13: You can execute asynchronous actions with the default FIFO scheduler. True or false?
- True
- False
Question 14: Which of the following statements about broadcast variables is true?
- They are read-only
- They can eliminate shuffles
- They are shared between workers via the peer-to-peer protocol
- All of the above
- None of the above
Question 15: With the MEMORY_ONLY storage level, what happens when an RDD can’t fit in memory?
- Spark will automatically change the storage level to MEMORY_AND_DISK
- Some of the partitions will not be cached
- Some of the partitions will be spilled to disk
- Spark will throw an OOM error
- None of the above
Question 16: How can you reduce the amount of memory used by persisted RDDs?
- Use primitive types instead of Java or Scala collections and nested classes
- Enable compression
- Use Kryo serialization instead of Java
- All of the above
- None of the above
Question 17: Which point in an RDD lineage is the best to persist?
- Before a reduceByKey operation
- After outputting to disk
- After a lot of transformations for downstream computations, such as filtering or joining
- At the root RDD
- None of the above
Question 18: A pool can have its own scheduler. True or false?
- True
- False
Question 19: In the event of a failure, how can Spark recover a lost partition?
- Find the last good state in the RDD lineage and recompute the lost partition.
- Restart from the root RDD
- Find the last good state in the RDD lineage and recompute every task.
- Spark’s fail-safes ensure that failures will never occur.
- None of the above.
Question 20: Which of the following IDEs fully supports SBT?
- Eclipse
- IntelliJ
- Both Eclipse and IntelliJ
- None of the above
Introduction to Spark Fundamentals II
Spark Fundamentals II moves beyond the basics into the more advanced features of Apache Spark, a fast, general-purpose cluster computing framework for big data processing. Below are some of the advanced concepts and features it builds toward:
1. DataFrames and Datasets:
- DataFrames: DataFrames in Spark represent distributed collections of data organized into named columns. Understanding DataFrame operations, optimizations, and transformations is crucial. You can perform operations similar to SQL queries on DataFrames.
- Datasets: Datasets are a more type-safe and object-oriented version of DataFrames. They provide the benefits of static typing and compile-time safety while allowing powerful functional programming constructs.
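A minimal sketch of both APIs (assuming a SparkSession named spark; "people.json" is a placeholder path for a small JSON file of people records):

import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Long)

val spark = SparkSession.builder().appName("DataFrameExample").getOrCreate()
import spark.implicits._

// Read a JSON file into an untyped DataFrame
val peopleDF = spark.read.json("people.json")

// SQL-like operations on the DataFrame
peopleDF.select("name", "age").filter($"age" > 21).show()

// Convert to a strongly typed Dataset for compile-time safety
val peopleDS = peopleDF.as[Person]
peopleDS.filter(_.age > 21).map(_.name).show()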
2. Structured Streaming:
- Spark Structured Streaming is a high-level stream processing API built on Spark SQL. It lets you express streaming computations, including complex aggregations, with the same DataFrame and SQL operations used for batch queries, and the engine runs them incrementally as new data arrives.
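As a minimal sketch, a streaming word count looks almost identical to its batch equivalent (the socket source on localhost:9999 is just an illustrative choice):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("StreamingWordCount").getOrCreate()
import spark.implicits._

// Read a stream of lines from a socket (host and port are assumptions for the example)
val lines = spark.readStream.format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// The same DataFrame operations used in batch queries apply here
val wordCounts = lines.as[String]
  .flatMap(_.split(" "))
  .groupBy("value")
  .count()

// Print the running counts to the console as the stream is processed
val query = wordCounts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()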
3. Window Operations:
- Window operations in Spark are used for calculations over a sliding window of data. Understanding window functions and their application in time-series analysis and streaming scenarios is essential.
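For example, a sliding event-time window aggregation might look like the following sketch (the data and column names are made up; it assumes a SparkSession named spark):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{window, avg, col}

val spark = SparkSession.builder().appName("WindowExample").getOrCreate()
import spark.implicits._

// A small, made-up event stream: (event time, measured value)
val events = Seq(
  ("2024-01-01 10:00:00", 1.0),
  ("2024-01-01 10:04:00", 3.0),
  ("2024-01-01 10:12:00", 5.0)
).toDF("eventTime", "value")
  .withColumn("eventTime", col("eventTime").cast("timestamp"))

// Average value per 10-minute window, sliding every 5 minutes
val windowedAvg = events
  .groupBy(window(col("eventTime"), "10 minutes", "5 minutes"))
  .agg(avg("value").as("avgValue"))

windowedAvg.show(truncate = false)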
4. Broadcast Variables and Accumulators:
- Broadcast Variables: These are read-only variables cached on each machine rather than being sent over the network with tasks. They are useful for efficiently sharing large, read-only data structures.
- Accumulators: These are variables that can only be added through an associative and commutative operation and can be efficiently supported in parallel. They are used to accumulate results across the workers.
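A minimal sketch of both, assuming an existing SparkContext named sc and a made-up lookup table:

// Broadcast a small lookup table once per executor instead of shipping it with every task
val countryNames = sc.broadcast(Map("us" -> "United States", "de" -> "Germany"))

// Accumulator that counts records missing from the lookup table
val unknownCodes = sc.longAccumulator("unknownCodes")

val codes = sc.parallelize(Seq("us", "de", "fr", "us"))
val resolved = codes.map { code =>
  countryNames.value.get(code) match {
    case Some(name) => name
    case None =>
      unknownCodes.add(1)   // executor-side updates are merged on the driver after the action completes
      "unknown"
  }
}

resolved.collect().foreach(println)
println(s"Unknown country codes: ${unknownCodes.value}")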
5. Caching and Persistence:
- Understanding how to efficiently cache and persist RDDs and DataFrames in Spark can significantly improve performance, especially in iterative algorithms.
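For instance, this sketch persists an RDD after the expensive transformations and before it is reused by multiple actions (it assumes an existing SparkContext sc; "input.txt" is a placeholder path):

import org.apache.spark.storage.StorageLevel

val parsed = sc.textFile("input.txt")
  .map(_.split(","))
  .filter(_.length > 1)            // expensive pruning we do not want to repeat

// Keep the pruned RDD around for the multiple actions below
parsed.persist(StorageLevel.MEMORY_AND_DISK)

println(parsed.count())            // first action materializes and caches the partitions
println(parsed.first().mkString)   // second action reuses the cached partitions

parsed.unpersist()                 // release the storage when it is no longer needed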
6. Spark Job Tuning:
- Tuning Spark jobs involves optimizing the configuration parameters, partitioning strategies, and memory management to achieve better performance. Knowledge of the Spark UI and monitoring tools is crucial for identifying bottlenecks.
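A few commonly tuned settings, shown here as an illustrative (not prescriptive) SparkSession configuration; the numbers are example values only and depend on your cluster and data sizes:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("TunedJob")
  .config("spark.sql.shuffle.partitions", "200")   // partitions used for SQL/DataFrame shuffles
  .config("spark.executor.memory", "4g")           // memory per executor
  .config("spark.default.parallelism", "200")      // default partition count for RDD shuffles
  .getOrCreate()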
7. Optimizing Data Serialization:
- Choosing the right data serialization format (e.g., Avro, Parquet) can impact performance. Optimizing serialization is important, especially in scenarios with large-scale data processing.
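As a small sketch of the on-disk side of this, the following writes an example DataFrame as Parquet (a compact, columnar format) and reads it back; the output path is a placeholder and the SparkSession setup is assumed:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("SerializationExample").getOrCreate()
import spark.implicits._

val df = Seq(("alice", 34), ("bob", 29)).toDF("name", "age")

// Write to a placeholder path in Parquet, then read it back
df.write.mode("overwrite").parquet("people.parquet")
val reread = spark.read.parquet("people.parquet")
reread.show()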
8. GraphX:
- GraphX is Spark’s API for graphs and graph-parallel computation. Understanding how to model and process graphs using GraphX can be valuable for certain types of analytics and machine learning tasks.
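A tiny, made-up social graph as a sketch (assuming an existing SparkContext sc):

import org.apache.spark.graphx.{Edge, Graph}

// Vertices carry a name; edges carry a relationship label
val vertices = sc.parallelize(Seq(
  (1L, "Alice"), (2L, "Bob"), (3L, "Carol")
))
val edges = sc.parallelize(Seq(
  Edge(1L, 2L, "follows"),
  Edge(2L, 3L, "follows"),
  Edge(3L, 1L, "follows")
))

val graph = Graph(vertices, edges)

// Count incoming edges per vertex
graph.inDegrees.collect().foreach { case (id, deg) => println(s"$id has in-degree $deg") }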
9. MLlib (Machine Learning Library):
- MLlib is Spark’s machine learning library. Understanding advanced concepts such as feature engineering, hyperparameter tuning, and pipeline construction is crucial for building sophisticated machine learning models.
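A minimal pipeline sketch using the DataFrame-based spark.ml API (the training data is made up and the SparkSession setup is assumed):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("PipelineExample").getOrCreate()
import spark.implicits._

// A tiny, made-up training set: text documents with a binary label
val training = Seq(
  (0L, "spark is fast", 1.0),
  (1L, "hadoop mapreduce", 0.0),
  (2L, "spark rdd dataframe", 1.0)
).toDF("id", "text", "label")

// Feature engineering stages followed by an estimator, chained into a single pipeline
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
val lr = new LogisticRegression().setMaxIter(10)

val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)

model.transform(training).select("text", "prediction").show()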
These advanced Spark concepts and features are essential for mastering Spark and leveraging its capabilities to address more complex data processing and analytics challenges. Continuously exploring Spark documentation, tutorials, and real-world use cases will contribute to a deeper understanding of these advanced topics.