This will make you understand how hard Data Science really is

rabbi khan
5 min read · Dec 12, 2020

--

Every day, people try to measure up to computer science savants and break into the field of Data Science.

How hard is it? What does it take? Where do I start?

In this blog I’ll summarize the 3 hardest challenges I faced doing my first Data Science project in this Kaggle Notebook:

  1. You know nothing
  2. Data preparation is critical and time-consuming
  3. Interpret your results

Opinions are my own.


Before getting into any details, there is quite an essential part that people seem to gloss over in their explanations, or that is simply buried in their small code snippets. In order to use any of the advanced libraries, you have to import them into your workspace. It is best to collect them at the top of your notebook.

My example below:
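Something along these lines; the exact list depends on your project, but these are the libraries that show up later in this post:

# Collect all imports at the top of the notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import MinMaxScaler, StandardScaler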

You know nothing

For my first Data Science project, I created a short blog on starting an Airbnb in Amsterdam. I only used basic data analysis methods and regression models.

Regression models are probably the most basic parts of data science. A typical linear regression function will look like this:
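(A minimal sketch with sklearn, assuming X and y already hold your features and target.)

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hold out a test set so the model is evaluated on unseen data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Fit an ordinary least squares regression
lm_model = LinearRegression()
lm_model.fit(X_train, y_train)

# Predict on the test set and score the fit
y_test_preds = lm_model.predict(X_test)
print(r2_score(y_test, y_test_preds))
print(lm_model.coef_, lm_model.intercept_)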

This will return several outputs that you will then use to evaluate your model performance.

There are more than 40 techniques used by data scientists. This means I only used 2.5% of all the models out there.

I’ll be generous. Given my statistics course in University 8 years ago, going into this project I knew about 10% of that regression model already. That means I knew 0.25% of the entire body of knowledge that I know is out there. Then add a very large amount of things I don’t know I don’t know.

My knowledge universe in Data Science looks something like this:

Image by Author

As if that isn’t bad enough, you will find articles like these, describing exactly all of your shortcomings.

This current project took me about 4 weeks, and let’s say that’s a pretty average rate for learning new data science models. At that pace it will take me about 4 / 0.25% = 1,600 weeks to learn all the models I have heard of so far, and probably another 5 times that to learn (probably not even close to) everything in the data science field.

Between 30 and 150 years of learning ahead.

Me after my learning:

Photo by Donald Teel on Unsplash

Data preparation is critical and time-consuming

I’ve worked with data for over 5 years at Google.

Too bad all my previous experience is in SQL, and Data Scientists are big fans of Pandas. They’re just such animal lovers.

The challenge here is two-fold: 1) knowing what to do, and 2) knowing how to do it.

Even with the help described below, the data preparation part takes about 80% of your time or more.

Knowing what to do

The ways to manipulate your data so it’s ready for ingestion by your models are endless. They go into the deep underbelly of Statistics, and you will need to understand them thoroughly if you want to be a great Data Scientist.

Be prepared to run through these steps many times. I’ll give a couple of examples that have worked for me for each step.

Clean data quality issues

Your data sample size permitting, you should probably get rid of any NaN values in your data, since the regression model cannot ingest them. To find the ratio of NaN values per column, use this:

np.sum(df.isnull())/df.shape[0]

To drop all rows with NaN values use this:

df = df.dropna()

Another data quality problem I ran into was having True and False data stored as strings instead of a Boolean data type. I solved it using the distutils.util.strtobool function:
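Something like this, assuming a column such as host_is_superhost that holds truthy/falsy strings (the column name is illustrative):

from distutils.util import strtobool

# strtobool maps strings like 'True'/'False' or 't'/'f' to 1/0,
# and bool() turns that into a proper Boolean
df['host_is_superhost'] = df['host_is_superhost'].apply(
    lambda x: bool(strtobool(str(x))))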

Please don’t assume I actually knew how to use Lambda functions before starting this project.

It took me a lot of reading to understand them a little bit. I really liked this article on “What are lambda functions in python and why you should start using them right now”.

Finally, my price data was a string with $ signs and commas. I couldn’t find an elegant way to code the solution, so I botched my way into this:
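It boiled down to something like stripping the offending characters and casting the result:

# Strip the dollar sign and thousands separators, then cast to float
df['price'] = (df['price']
               .str.replace('$', '', regex=False)
               .str.replace(',', '', regex=False)
               .astype(float))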

Cut outliers

First, check whether there are any outliers at all; a boxplot can be very helpful:

sns.boxplot(x=df['price'])

Get fancy and use a modified z-score (an answer from StackOverflow) to cut your outliers:
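(A sketch based on the usual MAD-based formula; the 0.6745 constant and the 3.5 cutoff come with that recipe.)

def modified_z_score(series):
    # Modified z-score based on the median absolute deviation (MAD);
    # 0.6745 rescales the MAD to be comparable to a standard deviation
    median = series.median()
    mad = (series - median).abs().median()
    return 0.6745 * (series - median) / mad

# Keep only rows within the commonly recommended cutoff of 3.5
df = df[modified_z_score(df['price']).abs() <= 3.5]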

Or enforce hardcoded conditions to your liking, like so:

df = df[df['price'] < 300]

Normalization & combining variables

Not to be confused with a normal distribution.

Normalization is the process of scaling individual columns to the same order of magnitude, e.g. 0 to 1.

Therefore, you should only consider this step if you want to combine certain variables or you know it otherwise affects your model.

There is a preprocessing library available as part of Sklearn. It can be instrumental in delivering on some of these data preparation aspects.

I found it hard to normalize certain columns and then neatly put them back into my DataFrame. Therefore I’ll share my own full example of combining beds/bedrooms/accommodates variables below:
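In outline: scale each column to the 0-1 range with MinMaxScaler, then average them into one feature (the name of the combined column is mine for illustration):

from sklearn.preprocessing import MinMaxScaler

# Scale the three size-related columns to the 0-1 range
size_cols = ['beds', 'bedrooms', 'accommodates']
df[size_cols] = MinMaxScaler().fit_transform(df[size_cols])

# Average the normalized columns into a single combined feature
df['size'] = df[size_cols].mean(axis=1)
df = df.drop(columns=size_cols)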

Create normal distributions for your variables

It’s useful to look at all the data input variables individually and decide how to transform them to better fit a normal distribution.

I’ve used this for loop:
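Something like this, plotting a histogram for every numeric column so you can eyeball each distribution:

import matplotlib.pyplot as plt

# Plot each numeric column's distribution to judge how close it is
# to a normal distribution and whether it needs transforming
for col in df.select_dtypes(include='number').columns:
    df[col].hist(bins=50)
    plt.title(col)
    plt.show()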

Once you know which variables you want to transform, consider using a Box-Cox transformation like this:
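(A sketch with scipy; the column names are illustrative.)

from scipy import stats

# Box-Cox needs strictly positive input, hence the +1 shift.
# Keep the fitted lambdas so the transformation can be inverted later.
lambdas = {}
for col in ['price', 'size']:
    df[col], lambdas[col] = stats.boxcox(df[col] + 1)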

Note how it is important to also keep the lambdas used by the Box-Cox transformation. You’ll need them to invert the transformation on your coefficients when you are interpreting the results.

Standardization

The StandardScaler assumes your data is already normally distributed within each feature (see the Box-Cox transformation example above) and will scale it such that the distribution is centered around 0, with a standard deviation of 1.
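A minimal sketch on a single column:

from sklearn.preprocessing import StandardScaler

# Standardize a feature to mean 0 and standard deviation 1
df[['price']] = StandardScaler().fit_transform(df[['price']])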
