Python for Data Analysis
Getting Started
A friend recently asked for advice on getting started with data analysis in Python, and I thought it might benefit others if published here. This is for someone new to Python who’s looking for the easiest path from zero to one.
- Download the Python 3.X version of the Anaconda distribution for your operating system. You will avoid a lot of install-related headaches by choosing this pre-bundled distribution. It comes with most of the important data analysis packages pre-installed.
- Once you have it installed, test to make sure that the default Python interpreter is the one you’ve just installed. This is important because your system may already have a version of Python installed, but it won’t have all the good stuff in the Anaconda bundle, so you need to make sure the new one is the default. On Mac/Linux this might mean typing
which python
into the terminal. Or you can just run the Python interpreter and make sure the version matches what you downloaded (there’s a snippet after this list that checks this from inside Python). If all went well, the installer should have taken care of this. If not, you’ll need to stop here and fix it.
- Issue the
jupyter notebook
command in your shell. This should open a browser window. If not, open a browser and navigate to http://localhost:8888. Once there, create a new Python notebook.
- Go to the kernels section of kaggle.com and filter to Python kernels. These are mostly Jupyter notebooks of other people doing analysis or building models on data sets that are freely available on Kaggle’s website. Look for titles with things like EDA (Exploratory Data Analysis), as opposed to those building predictive models. Find one that’s interesting and start recreating it in your notebook.
Note: You’ll find that when you try to recreate some of these analyses, you get import errors. This is likely because they’ve installed packages that are not bundled in the Anaconda distribution. You’ll eventually need to learn how to interact with the conda package manager, and this will be one of many rabbit holes you’ll go down. Usually it’s as easy as
conda install <package_name>
but you’ll need to find the right package name, and sometimes you’ll need to specify other details. And other times you’ll need to use
pip install <other_package_name>
but you’ll learn all that later.
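As promised above, here’s a quick way to check which interpreter you’re actually running, from inside Python itself (it works the same on any OS). The paths in the comments are just illustrative; yours will differ.
import sys
# The path should point inside your Anaconda install,
# e.g. something like /Users/you/anaconda3/bin/python
print(sys.executable)
# The version should match the one you downloaded
print(sys.version)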
High Level Library Summary
Here’s a quick summary of the important libraries you’ll interact with frequently. (Two short sketches follow the list to show them in action.)
- NumPy: has a lot of the core functionality for scientific computing. Under the hood it calls C-compiled code, so it’s much faster than the same functions written in pure Python. Not the most user-friendly.
- SciPy: similar to NumPy, but with more tools for sampling from distributions, calculating test statistics, etc.
- Matplotlib: The main plotting framework. A necessary evil.
- Seaborn: import it after Matplotlib and it will make your plots a lot prettier by default. It also has its own functionality, but I find the coolest stuff runs too slowly.
- Pandas: mostly a thin wrapper around NumPy/SciPy that makes them more user-friendly. Ideal for interacting with tables of data, which it calls a DataFrame. Also has wrappers around plotting functionality to enable quick plotting while avoiding the complications of Matplotlib. I use Pandas more than anything else for manipulating data.
- Scikit-learn: Has a lot of supervised and unsupervised machine learning algorithms. Also has many metrics for model selection, plus utilities for preprocessing (like encoding categorical variables) and dimensionality reduction (like Principal Component Analysis).
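To make those descriptions concrete, here’s a minimal sketch of the first five libraries working together. The data and group names are made up purely for illustration, and this assumes the versions bundled with Anaconda:
import numpy as np
import scipy.stats as stats
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns  # newer Seaborn versions also need sns.set_theme() to apply their styles

# NumPy: fast, C-backed array math
group_a = np.random.normal(loc=170, scale=10, size=1000)

# SciPy: sampling from distributions and test statistics
group_b = stats.norm.rvs(loc=172, scale=10, size=1000)
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Pandas: wrap the arrays in a DataFrame and summarize
df = pd.DataFrame({"group_a": group_a, "group_b": group_b})
print(df.describe())

# Quick plotting through the Pandas wrapper around Matplotlib
df.plot(kind="hist", bins=30, alpha=0.5)
plt.show()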
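And a minimal Scikit-learn sketch of the preprocessing and dimensionality-reduction pieces mentioned above: one-hot encoding a categorical column, then running PCA. Again, the tiny table here is invented just for illustration:
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import PCA

# A tiny made-up table with one categorical and two numeric columns
df = pd.DataFrame({
    "color": ["red", "blue", "red", "green"],
    "x": [1.0, 2.0, 3.0, 4.0],
    "y": [4.0, 3.0, 2.0, 1.0],
})

# Encode the categorical column as one-hot indicator columns
# (.toarray() converts the default sparse output to a dense array)
encoder = OneHotEncoder()
onehot = encoder.fit_transform(df[["color"]]).toarray()

# Stack everything into one numeric matrix and reduce it to 2 components
X = np.hstack([onehot, df[["x", "y"]].to_numpy()])
X_reduced = PCA(n_components=2).fit_transform(X)
print(X_reduced.shape)  # (4, 2)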
Quick Tips
- When in a jupyter notebook, put a question mark in front of any object before running the cell and it will open up the documentation for it. This is really handy when you’ve forgotten the details of what the function you’re trying to call is expecting you to pass. e.g.
?my_dataframe.apply
will explain the apply method of the pandas.DataFrame object, represented here by my_dataframe. (There’s a runnable version of this in the snippet after this list.)
- You will likely always need to refer to the documentation for whatever library you’re using, so just keep it open in your browser. There are just too many optional arguments and nuances.
- When it comes to the inevitable task of troubleshooting, Stack Overflow probably has the answer.
- Accept the fact that you’ll be doing things you don’t fully understand for a while, or you’ll get bogged down by details that aren’t that important. Some day you’ll probably need to understand virtual environments, and it’s really not that hard, but there are many detours like that which add unnecessary pain for someone getting started.
- Read other people’s code. It’s the best way to learn conventions and best practices. That’s where the Kaggle kernels really help. GitHub also supports the display of Jupyter notebooks in the browser, so there are tons of examples on the internet.
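Here’s the documentation trick from the first tip as a runnable notebook cell. The DataFrame is just a stand-in so there’s an object to inspect, and this syntax only works in Jupyter/IPython, not the plain Python interpreter. Two question marks are the related IPython shortcut that also shows the source code when it’s available.
import pandas as pd

# A stand-in DataFrame, just so there's an object to inspect
my_dataframe = pd.DataFrame({"a": [1, 2, 3]})

# In a Jupyter/IPython cell, this opens the documentation for DataFrame.apply
?my_dataframe.apply

# Two question marks also show the source code, when it's available
??my_dataframe.apply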