What is Python?
Python is a popular programming language. It was created in 1991 by Guido van Rossum, and it is open-source software.
It is used for:
- Data Analysis
- Web Development (server-side)
- Software Development,
- Mathematics,
- System Scripting
Features:
What can Python do?
- Python can be used on a server to create web applications.
- Python can be used alongside software to create workflows.
- Python can connect to database systems. It can also read and modify files. (A short sketch of both follows this list.)
- Python can be used to handle big data and perform complex mathematics.
- Python can be used for rapid prototyping, or for production-ready software development.
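As a minimal sketch of the file-handling and database points above, here is an example that uses only Python's standard library; the file name notes.txt and the demo.db table are invented for the example:

```python
import sqlite3

# Write and then read back a file (creates "notes.txt" for the demo).
with open("notes.txt", "w") as f:
    f.write("hello\nworld\n")

with open("notes.txt") as f:
    lines = [line.strip().upper() for line in f]

# Connect to a database system (SQLite ships with Python).
conn = sqlite3.connect("demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS notes (text TEXT)")
conn.executemany("INSERT INTO notes VALUES (?)", [(l,) for l in lines])
conn.commit()
print(conn.execute("SELECT text FROM notes").fetchall())
conn.close()
```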
Why Python?
- Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc).
- Python has a simple syntax similar to the English language.
- Python has syntax that allows developers to write programs with fewer lines than some other programming languages.
- Python runs on an interpreter system, meaning that code can be executed as soon as it is written. This means that prototyping can be very quick.
- Python can be treated in a procedural way, an object-oriented way or a functional way, as sketched below.
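To illustrate that last point, here is a tiny sketch of the same task (summing squares) written in each of the three styles; the names are invented for the example:

```python
# Procedural style: mutate an accumulator in a loop.
total = 0
for n in [1, 2, 3]:
    total += n * n

# Object-oriented style: wrap the data and behavior in a class.
class SquareSummer:
    def __init__(self, numbers):
        self.numbers = numbers

    def sum_squares(self):
        return sum(n * n for n in self.numbers)

# Functional style: fold over the sequence without mutation.
from functools import reduce
total_fn = reduce(lambda acc, n: acc + n * n, [1, 2, 3], 0)

print(total, SquareSummer([1, 2, 3]).sum_squares(), total_fn)  # 14 14 14
```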
As of March 2018, the Python Package Index (PyPI), the official repository for third-party Python software, contained over 130,000 packages with a wide range of functionality, including:
- Graphical user interfaces
- Web frameworks
- Multimedia
- Databases
- Networking
- Test frameworks
- Automation
- Web scraping
- Documentation
- System administration
- Scientific computing
- Text processing
- Image processing
PySpark:
Introduction to PySpark:
Apache Spark is generally known as a fast, general-purpose, open-source engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. It can run analytic applications up to 100 times faster than comparable technologies on the market today.
You can interface Spark with Python through "PySpark", the Spark Python API, which exposes the Spark programming model to Python.
Apache Spark comes with an interactive shell for Python, as it does for Scala. The Python shell is known as "PySpark". To use PySpark, you must have Python installed on your machine.
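For instance, once the shell is running, a SparkContext named sc is already defined, so you can experiment interactively; the numbers below are just sample data:

```python
# Inside the PySpark shell, `sc` (a SparkContext) is predefined.
rdd = sc.parallelize([1, 2, 3, 4, 5])
rdd.map(lambda x: x * 2).collect()   # [2, 4, 6, 8, 10]
```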
Evolution of PySpark
Python is a powerful programming language for handling complex data analysis and data munging tasks. It has several built-in libraries and frameworks for doing data mining tasks efficiently. However, no programming language alone can handle big data processing efficiently; there is always a need for a distributed computing framework like Hadoop or Spark.
Apache Spark supports three powerful programming languages:
- Scala
- Java
- Python
Apache Spark is written in the Scala programming language, which compiles program code into bytecode for the JVM, and that is what powers Spark's big data processing. The open-source community has developed a wonderful utility for Spark big data processing in Python, known as PySpark. PySpark helps data scientists interface with Resilient Distributed Datasets (RDDs) in Apache Spark from Python. Py4J is a popular library integrated within PySpark that lets Python interface dynamically with JVM objects such as RDDs.
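Below is a minimal sketch of a standalone PySpark script that builds an RDD and runs a word count; it assumes the pyspark package is installed locally, and every PySpark call here is proxied to the JVM via Py4J:

```python
from pyspark import SparkContext

# Create a local Spark context; Py4J brokers calls to the JVM behind the scenes.
sc = SparkContext("local[*]", "rdd-demo")

# Build a Resilient Distributed Dataset and run a distributed word count.
rdd = sc.parallelize(["spark is fast", "spark is general"])
counts = (rdd.flatMap(lambda line: line.split())
             .map(lambda word: (word, 1))
             .reduceByKey(lambda a, b: a + b))
print(counts.collect())  # e.g. [('spark', 2), ('is', 2), ('fast', 1), ('general', 1)]

sc.stop()
```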
Some of the key features that make Spark a strong big data engine are:
- Equipped with MLlib library for machine learning algorithms
- Good for Java and Scala developers as Spark imitates Scala's collection API and functional style
- A single library can perform SQL, graph analytics and streaming (see the SQL sketch after this list).
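As a small illustration of the SQL piece of that single library, here is a minimal PySpark DataFrame example; the people table and its columns are invented for the example, and a local Spark installation is assumed:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()

# Build a small DataFrame and query it with plain SQL.
df = spark.createDataFrame([("alice", 34), ("bob", 45)], ["name", "age"])
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 40").show()

spark.stop()
```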
Developers and analysts admire Spark for many reasons, chiefly that it lets them quickly query, analyze and transform data at scale. In simple words, you can call Spark a competent alternative to Hadoop, with its own characteristics, strengths and limitations.
Spark processes data in memory, with more speed and sophistication than complementary disk-based approaches like Hadoop MapReduce. It can handle several terabytes of data at a time and process them efficiently.
One of the key benefits of Spark is that it works on top of Hadoop's data storage layer, HDFS, and integrates well with other big data frameworks like HBase, MongoDB and Cassandra. It is one of the best big data choices for learning and applying machine learning algorithms in real time, since it can run repeated queries on large datasets efficiently.
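For instance, pointing Spark at HDFS is just a matter of the path scheme. A one-line sketch, reusing the SparkSession from the earlier SQL example; the host, port and file path are placeholders:

```python
# Hypothetical HDFS location; adjust host, port and path to your cluster.
logs = spark.read.text("hdfs://namenode:9000/data/logs.txt")
print(logs.count())
```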
Spark versus Hadoop MapReduce
Despite having similar functionality, the two technologies differ considerably. Let's take a quick look at this comparative analysis:
| Criteria | Spark | Hadoop MapReduce |
|---|---|---|
| Processing location | In-memory | Persists on disk after map and reduce functions |
| Ease of use | Easy, as it is based on Scala | Difficult, as it is based on Java |
| Speed | Up to 100 times faster than Hadoop MapReduce | Slower |
| Latency | Lower | Higher |
| Computation | Iterative computation possible | Single computation only |
| Task scheduling | Schedules tasks itself | Requires external schedulers |
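The in-memory and iterative-computation rows translate directly into code: caching keeps an RDD in memory, so repeated passes avoid recomputation. A small sketch, assuming a SparkContext sc as in the earlier examples:

```python
# cache() keeps the RDD in memory, so iterative passes skip re-reading from disk.
data = sc.parallelize(range(1_000_000)).cache()

for _ in range(3):  # each iteration reuses the cached partitions
    print(data.filter(lambda x: x % 7 == 0).count())
```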
Authors:
- A.Yoga Sai Satwik
- Noble John Paul