* Edit Jan 2021: I recently completed a YouTube video covering the topics in this post.
Numpy's ndarrays are well-suited for performing math operations on one- and two-dimensional arrays of numeric values, but they fall short when it comes to dealing with heterogeneous data sets. To store data from an external source like an Excel workbook or database, we need a data structure that can hold different data types. It is also desirable to be able to refer to rows and columns in the data by custom labels rather than numbered indexes.
The pandas library offers data structures designed with this in mind: the series and the DataFrame. Series are 1-dimensional labeled arrays similar to numpy's ndarrays, while DataFrames are labeled 2-dimensional structures that essentially function as spreadsheet tables.
Pandas Series
Before we get into DataFrames, we'll take a brief detour to explore pandas series. Series are very similar to ndarrays: the main difference between them is that with series, you can provide custom index labels and then operations you perform on series automatically align the data based on the labels.
To create a new series, first load the numpy and pandas libraries (pandas comes preinstalled with the Anaconda Python distribution).
In [1]:
import numpy as np
import pandas as pd
*Note: It is common practice to import pandas with the shorthand "pd".
Define a new series by passing a collection of homogeneous data, such as an ndarray or a list, along with a list of associated indexes, to pd.Series():
In [2]:
my_series = pd.Series(data = [2,3,5,4],              # Data
                      index = ['a', 'b', 'c', 'd'])  # Indexes
my_series
Out[2]:
You can also create a series from a dictionary, in which case the dictionary keys act as the labels and the values act as the data:
In [3]:
my_dict = {"x": 2, "a": 5, "b": 4, "c": 8}
my_series2 = pd.Series(my_dict)
my_series2
Out[3]:
Similar to a dictionary, you can access items in a series by the labels:
In [4]:
my_series["a"]
Out[4]:
Numeric indexing also works:
In [5]:
my_series[0]
Out[5]:
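(A quick aside: in pandas versions newer than the one used for this post, position-based lookups with square brackets on a labeled series are deprecated. The explicit accessors below do the same thing and make the intent clear; this is just a sketch reusing the my_series defined above.)
my_series.iloc[0]   # Access by integer position
my_series.loc["a"]  # Access by label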
If you take a slice of a series, you get both the values and the labels contained in the slice:
In [6]:
my_series[1:3]
Out[6]:
As mentioned earlier, operations performed on two series align by label:
In [7]:
my_series + my_series
Out[7]:
If you perform an operation with two series that have different labels, the unmatched labels will return a value of NaN (not a number).
In [8]:
my_series + my_series2
Out[8]:
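(If you would rather treat a label missing from one of the series as zero instead of getting NaN, the Series.add() method accepts a fill_value argument. A minimal sketch, reusing the two series defined above:)
my_series.add(my_series2, fill_value=0)  # Unmatched labels are filled with 0 before adding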
Other than labeling, series behave much like numpy's ndarrays. A series is even a valid argument to many of the numpy array functions we covered last time:
In [9]:
np.mean(my_series) # numpy array functions generally work on series
Out[9]:
In [10]:
np.dot(my_series, my_series)
Out[10]:
DataFrame Creation and Indexing
A DataFrame is a 2D table with labeled columns that can each hold different types of data. DataFrames are essentially a Python implementation of the types of tables you'd see in an Excel workbook or SQL database. DataFrames are the de facto standard data structure for working with tabular data in Python; we'll be using them a lot throughout the remainder of this guide.
You can create a DataFrame out of a variety of data sources like dictionaries, 2D numpy arrays and series using the pd.DataFrame() function. Dictionaries provide an intuitive way to create DataFrames: when passed to pd.DataFrame(), a dictionary's keys become the column labels and the values become the columns themselves:
In [11]:
# Create a dictionary with some different data types as values
my_dict = {"name" : ["Joe","Bob","Frans"],
"age" : np.array([10,15,20]),
"weight" : (75,123,239),
"height" : pd.Series([4.5, 5, 6.1],
index=["Joe","Bob","Frans"]),
"siblings" : 1,
"gender" : "M"}
df = pd.DataFrame(my_dict) # Convert the dict to DataFrame
df # Show the DataFrame
Out[11]:
Notice that the values in the dictionary you use to make a DataFrame can be a variety of sequence objects, including lists, ndarrays, tuples and series. If you pass in a singular value like a single number or string, that value is duplicated for every row in the DataFrame (in this case gender is set to "M" for all records and siblings is set to 1).
Also note that in the DataFrame above, the rows were automatically given indexes that align with the indexes of the series we passed in for the "height" column. If we did not use a series with index labels to create our DataFrame, it would be given numeric row index labels by default:
In [12]:
my_dict2 = {"name" : ["Joe","Bob","Frans"],
"age" : np.array([10,15,20]),
"weight" : (75,123,239),
"height" :[4.5, 5, 6.1],
"siblings" : 1,
"gender" : "M"}
df2 = pd.DataFrame(my_dict2) # Convert the dict to DataFrame
df2 # Show the DataFrame
Out[12]:
You can provide custom row labels when creating a DataFrame by adding the index argument:
In [13]:
df2 = pd.DataFrame(my_dict2,
                   index = my_dict["name"])
df2
Out[13]:
A DataFrame behaves like a dictionary of Series objects that each have the same length and indexes. This means we can get, add and delete columns in a DataFrame the same way we would when dealing with a dictionary:
In [14]:
# Get a column by name
df2["weight"]
Out[14]:
Alternatively, you can get a column by label using "dot" notation:
In [15]:
df2.weight
Out[15]:
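(One caveat worth keeping in mind: dot notation only works when the column label is a valid Python identifier and does not collide with an existing DataFrame attribute or method, and it cannot be used to create new columns. Bracket notation is the safer general-purpose form:)
df2["weight"]  # Bracket notation works for any column label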
In [16]:
# Delete a column
del df2['name']
In [17]:
# Add a new column
df2["IQ"] = [130, 105, 115]
df2
Out[17]:
Inserting a single value into a DataFrame causes it to populate across all the rows.
In [18]:
df2["Married"] = False
df2
Out[18]:
When inserting a Series into a DataFrame, rows are matched by index. Unmatched rows will be filled with NaN:
In [19]:
df2["College"] = pd.Series(["Harvard"],
index=["Frans"])
df2
Out[19]:
You can select both rows and columns by label with df.loc[row, column]:
In [20]:
df2.loc["Joe"] # Select row "Joe"
Out[20]:
In [21]:
df2.loc["Joe","IQ"] # Select row "Joe" and column "IQ"
Out[21]:
In [22]:
df2.loc["Joe":"Bob" , "IQ":"College"] # Slice by label
Out[22]:
Select rows or columns by numeric index with df.iloc[row, column]:
In [23]:
df2.iloc[0] # Get row 0
Out[23]:
In [24]:
df2.iloc[0, 5] # Get row 0, column 5
Out[24]:
In [25]:
df2.iloc[0:2, 5:8] # Slice by numeric row and column index
Out[25]:
Select rows or columns based on a mixture of both labels and numeric indexes with df.ix[row, column]:
In [26]:
df2.ix[0] # Get row 0
Out[26]:
In [27]:
df2.ix[0, "IQ"] # Get row 0, column "IQ"
Out[27]:
In [28]:
df2.ix[0:2, ["age", "IQ", "weight"]] # Slice rows and get specific columns
Out[28]:
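(A note for readers running a newer version of pandas: the .ix indexer has since been deprecated and removed, so mixed label/position indexing has to be expressed with .loc and .iloc instead. A rough equivalent of the cell above, chaining the two accessors on the same df2:)
df2.iloc[0:2].loc[:, ["age", "IQ", "weight"]]  # Slice rows by position, then pick columns by label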
You can also select rows by passing in a sequence of boolean (True/False) values. Rows where the corresponding boolean is True are returned:
In [29]:
boolean_index = [False, True, True]
df2[boolean_index]
Out[29]:
This sort of logical True/False indexing is useful for subsetting data when combined with logical comparisons. For example, say we wanted to get a subset of our DataFrame containing all persons over 12 years old. We can do it with boolean indexing:
In [30]:
# Create a boolean sequence with a logical comparison
boolean_index = df2["age"] > 12
# Use the index to get the rows where age > 12
df2[boolean_index]
Out[30]:
You can do this sort of indexing all in one operation without assigning the boolean sequence to a variable:
In [31]:
df2[ df2["age"] > 12 ]
Out[31]:
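You can also combine several comparisons with the element-wise logical operators & (and), | (or) and ~ (not); just remember to wrap each comparison in parentheses. For example:
df2[ (df2["age"] > 12) & (df2["IQ"] > 100) ]  # Rows where age > 12 and IQ > 100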
Exploring DataFrames
Exploring data is an important first step in most data analyses. DataFrames come with a variety of functions to help you explore and summarize the data they contain.
First, let's load in a data set to explore: the mtcars data set. The mtcars data set comes with the ggplot library, a Python port of the popular R plotting library ggplot2. ggplot does not come with Anaconda, but you can install it by opening a console (cmd.exe) and running "pip install ggplot" (close Spyder and other programs before installing new libraries).
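(If you have trouble installing ggplot, which is no longer actively maintained, one alternative is to pull the same mtcars data from R's built-in data sets via the statsmodels package. This is only a sketch and assumes statsmodels is installed; note that mtcars loaded this way has the car names as the row index rather than in a "name" column:)
from statsmodels.datasets import get_rdataset
mtcars = get_rdataset("mtcars").data  # Downloads the R mtcars data set as a DataFrame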
Now we can import the mtcars data from ggplot:
In [32]:
from ggplot import mtcars
type(mtcars)
Out[32]:
Notice that mtcars is loaded as a DataFrame. We can check the dimensions and size of a DataFrame with df.shape:
In [33]:
mtcars.shape # Check dimensions
Out[33]:
The output shows that mtcars has 32 rows and 12 columns.
We can check the first n rows of the data with the df.head() function:
In [34]:
mtcars.head(6) # Check the first 6 rows
Out[34]:
Similarly, we can check the last few rows with df.tail():
In [35]:
mtcars.tail(6) # Check the last 6 rows
Out[35]:
With large data sets, head() and tail() are useful to get a sense of what the data looks like without printing hundreds or thousands of rows to the screen. Since each row specifies a different car, let's set the row indexes equal to the car name. You can access and assign new row indexes with df.index:
In [36]:
print(mtcars.index, "\n") # Print original indexes
mtcars.index = mtcars["name"] # Set index to car name
del mtcars["name"] # Delete name column
print(mtcars.index) # Print new indexes
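(As an aside, the two assignment steps above can be collapsed into a single call with the DataFrame's set_index() method, which moves a column into the row index and drops it from the columns by default:)
mtcars = mtcars.set_index("name")  # Equivalent to the two steps above (run instead of them, not after)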
You can access the column labels with df.columns:
In [37]:
mtcars.columns
Out[37]:
Use the df.describe() command to get a quick statistical summary of your data set. The summary includes the mean, median, min, max and a few key percentiles for numeric columns:
In [38]:
mtcars.ix[:,:6].describe() # Summarize the first 6 columns
Out[38]:
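(On pandas versions without .ix, the same positional column slice can be taken with .iloc:)
mtcars.iloc[:, :6].describe()  # Summarize the first 6 columns by position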
Since the columns of a DataFrame are series and series are closely related to numpy's arrays, many ndarray functions work on DataFrames, operating on each column of the DataFrame:
In [39]:
np.mean(mtcars,
        axis=0)   # Get the mean of each column
Out[39]:
In [40]:
np.sum(mtcars,
       axis=0)    # Get the sum of each column
Out[40]:
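(DataFrames also have their own built-in summary methods that produce the same results and are generally the preferred way to compute them on newer versions of pandas:)
mtcars.mean()  # Mean of each column
mtcars.sum()   # Sum of each column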
Wrap Up
Pandas DataFrames are the workhorse data structure for data analysis in Python. They provide an intuitive structure that mirrors the sorts of data tables we're used to seeing in spreadsheet programs, along with indexing functionality that follows the same patterns as other Python data structures. This brief introduction only scratches the surface; DataFrames offer a host of other indexing options and functions, many of which we will see in future lessons.