Week 7 Thursday#
Plan:#
Overfitting
Warm-up: Most common values#
import seaborn as sns
import altair as alt
import pandas as pd
df_pre = sns.load_dataset("taxis")
Here is a reminder of how the dataset looks.
df_pre
| | pickup | dropoff | passengers | distance | fare | tip | tolls | total | color | payment | pickup_zone | dropoff_zone | pickup_borough | dropoff_borough |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2019-03-23 20:21:09 | 2019-03-23 20:27:24 | 1 | 1.60 | 7.0 | 2.15 | 0.0 | 12.95 | yellow | credit card | Lenox Hill West | UN/Turtle Bay South | Manhattan | Manhattan |
1 | 2019-03-04 16:11:55 | 2019-03-04 16:19:00 | 1 | 0.79 | 5.0 | 0.00 | 0.0 | 9.30 | yellow | cash | Upper West Side South | Upper West Side South | Manhattan | Manhattan |
2 | 2019-03-27 17:53:01 | 2019-03-27 18:00:25 | 1 | 1.37 | 7.5 | 2.36 | 0.0 | 14.16 | yellow | credit card | Alphabet City | West Village | Manhattan | Manhattan |
3 | 2019-03-10 01:23:59 | 2019-03-10 01:49:51 | 1 | 7.70 | 27.0 | 6.15 | 0.0 | 36.95 | yellow | credit card | Hudson Sq | Yorkville West | Manhattan | Manhattan |
4 | 2019-03-30 13:27:42 | 2019-03-30 13:37:14 | 3 | 2.16 | 9.0 | 1.10 | 0.0 | 13.40 | yellow | credit card | Midtown East | Yorkville West | Manhattan | Manhattan |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
6428 | 2019-03-31 09:51:53 | 2019-03-31 09:55:27 | 1 | 0.75 | 4.5 | 1.06 | 0.0 | 6.36 | green | credit card | East Harlem North | Central Harlem North | Manhattan | Manhattan |
6429 | 2019-03-31 17:38:00 | 2019-03-31 18:34:23 | 1 | 18.74 | 58.0 | 0.00 | 0.0 | 58.80 | green | credit card | Jamaica | East Concourse/Concourse Village | Queens | Bronx |
6430 | 2019-03-23 22:55:18 | 2019-03-23 23:14:25 | 1 | 4.14 | 16.0 | 0.00 | 0.0 | 17.30 | green | cash | Crown Heights North | Bushwick North | Brooklyn | Brooklyn |
6431 | 2019-03-04 10:09:25 | 2019-03-04 10:14:29 | 1 | 1.12 | 6.0 | 0.00 | 0.0 | 6.80 | green | credit card | East New York | East Flatbush/Remsen Village | Brooklyn | Brooklyn |
6432 | 2019-03-13 19:31:22 | 2019-03-13 19:48:02 | 1 | 3.85 | 15.0 | 3.36 | 0.0 | 20.16 | green | credit card | Boerum Hill | Windsor Terrace | Brooklyn | Brooklyn |
6433 rows Ă— 14 columns
I don’t think it’s realistic to perform logistic regression directly on the “pickup_zone” column, because there are so many values. Here are those values.
df_pre['pickup_zone'].unique()
array(['Lenox Hill West', 'Upper West Side South', 'Alphabet City',
'Hudson Sq', 'Midtown East', 'Times Sq/Theatre District',
'Battery Park City', 'Murray Hill', 'East Harlem South',
'Lincoln Square East', 'LaGuardia Airport', 'Lincoln Square West',
'Financial District North', 'Upper West Side North',
'East Chelsea', 'Midtown Center', 'Gramercy',
'Penn Station/Madison Sq West', 'Sutton Place/Turtle Bay North',
'West Chelsea/Hudson Yards', 'Clinton East', 'Clinton West',
'UN/Turtle Bay South', 'Midtown South', 'Midtown North',
'Garment District', 'Lenox Hill East', 'Flatiron',
'TriBeCa/Civic Center', nan, 'Upper East Side North',
'West Village', 'Greenwich Village South', 'JFK Airport',
'East Village', 'Union Sq', 'Yorkville West', 'Central Park',
'Meatpacking/West Village West', 'Kips Bay', 'Morningside Heights',
'Astoria', 'East Tremont', 'Upper East Side South',
'Financial District South', 'Bloomingdale', 'Queensboro Hill',
'SoHo', 'Brooklyn Heights', 'Yorkville East', 'Manhattan Valley',
'DUMBO/Vinegar Hill', 'Little Italy/NoLiTa',
'Mott Haven/Port Morris', 'Greenwich Village North',
'Stuyvesant Heights', 'Lower East Side', 'East Harlem North',
'Chinatown', 'Fort Greene', 'Steinway', 'Central Harlem',
'Crown Heights North', 'Seaport', 'Two Bridges/Seward Park',
'Boerum Hill', 'Williamsburg (South Side)', 'Rosedale', 'Flushing',
'Old Astoria', 'Soundview/Castle Hill',
'Stuy Town/Peter Cooper Village', 'World Trade Center',
'Sunnyside', 'Washington Heights South', 'Prospect Heights',
'East New York', 'Hamilton Heights', 'Cobble Hill',
'Long Island City/Queens Plaza', 'Central Harlem North',
'Manhattanville', 'East Flatbush/Farragut', 'Elmhurst',
'East Concourse/Concourse Village', 'Park Slope', 'Greenpoint',
'Williamsburg (North Side)', 'Long Island City/Hunters Point',
'South Ozone Park', 'Ridgewood', 'Downtown Brooklyn/MetroTech',
'Queensbridge/Ravenswood', 'Williamsbridge/Olinville', 'Bedford',
'Gowanus', 'Jackson Heights', 'South Jamaica', 'Bushwick North',
'West Concourse', 'Queens Village', 'Windsor Terrace', 'Flatlands',
'Van Cortlandt Village', 'Woodside', 'East Williamsburg',
'Fordham South', 'East Elmhurst', 'Kew Gardens',
'Flushing Meadows-Corona Park', 'Marine Park/Mill Basin',
'Carroll Gardens', 'Canarsie', 'East Flatbush/Remsen Village',
'Jamaica', 'Marble Hill', 'Bushwick South', 'Erasmus',
'Claremont/Bathgate', 'Pelham Bay', 'Soundview/Bruckner',
'South Williamsburg', 'Battery Park', 'Forest Hills', 'Maspeth',
'Bronx Park', 'Starrett City', 'Brighton Beach', 'Brownsville',
'Highbridge Park', 'Bensonhurst East', 'Mount Hope',
'Prospect-Lefferts Gardens', 'Bayside', 'Douglaston', 'Midwood',
'North Corona', 'Homecrest', 'Westchester Village/Unionport',
'University Heights/Morris Heights', 'Inwood',
'Washington Heights North', 'Flatbush/Ditmas Park', 'Rego Park',
'Riverdale/North Riverdale/Fieldston', 'Jamaica Estates',
'Borough Park', 'Sunset Park West', 'Belmont', 'Auburndale',
'Schuylerville/Edgewater Park', 'Co-Op City',
'Crown Heights South', 'Spuyten Duyvil/Kingsbridge',
'Morrisania/Melrose', 'Hollis', 'Parkchester', 'Coney Island',
'East Flushing', 'Richmond Hill', 'Bedford Park', 'Highbridge',
'Clinton Hill', 'Sheepshead Bay', 'Madison', 'Dyker Heights',
'Cambria Heights', 'Pelham Parkway', 'Hunts Point',
'Melrose South', 'Springfield Gardens North', 'Bay Ridge',
'Elmhurst/Maspeth', 'Crotona Park East', 'Bronxdale',
'Briarwood/Jamaica Hills', 'Van Nest/Morris Park',
'Murray Hill-Queens', 'Kingsbridge Heights', 'Whitestone',
'Saint Albans', 'Allerton/Pelham Gardens', 'Howard Beach',
'Norwood', 'Bensonhurst West', 'Columbia Street', 'Middle Village',
'Prospect Park', 'Ozone Park', 'Gravesend', 'Glendale',
'Kew Gardens Hills', 'Woodlawn/Wakefield',
'West Farms/Bronx River', 'Hillcrest/Pomonok'], dtype=object)
len(df_pre["pickup_zone"].unique())
195
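As a quick aside (this cell is an added illustration, not part of the original lecture): the 195 above includes the missing value nan, which we saw in the array. The standard pandas method nunique counts distinct values while ignoring nan.

# nunique ignores nan, so it reports one fewer than len(unique()).
df_pre["pickup_zone"].nunique()
194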
In the taxis dataset from Seaborn, keep only the rows with a pickup zone that occurs at least 200 times in the dataset. Store the resulting DataFrame as df.
This is an example of working with a pandas Series. Most of our examples of pandas Series are columns in a DataFrame, but in this case, the value_counts method also returns a pandas Series. (Notice its length is 194 rather than 195, because value_counts does not count the missing value nan.)
vc = df_pre["pickup_zone"].value_counts()
vc
Midtown Center 230
Upper East Side South 211
Penn Station/Madison Sq West 210
Clinton East 208
Midtown East 198
...
Pelham Bay 1
Hollis 1
Battery Park 1
Columbia Street 1
Howard Beach 1
Name: pickup_zone, Length: 194, dtype: int64
This particular Series has some very convenient properties. Its index contains the pickup zones, ordered from most frequent to least frequent, and its values are the corresponding counts.
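For instance (a small added illustration), the most frequent zone and its count can be read off from the first entry.

# The index holds the zone names; the values hold the counts.
vc.index[0], vc.iloc[0]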
Here is a Boolean Series indicating which zones occur at least 200 times.
vc >= 200
Midtown Center True
Upper East Side South True
Penn Station/Madison Sq West True
Clinton East True
Midtown East False
...
Pelham Bay False
Hollis False
Battery Park False
Columbia Street False
Howard Beach False
Name: pickup_zone, Length: 194, dtype: bool
Here we perform Boolean indexing to keep only those entries where the value is at least 200.
Since we only care about the index, we can use the following. Read this as, “Keep only those Series entries for which the value is at least 200, and then extract the index from that Series.”
vc[vc>=200].index
Index(['Midtown Center', 'Upper East Side South',
'Penn Station/Madison Sq West', 'Clinton East'],
dtype='object')
Here is an alternative approach. Read this as, “Keep only those index terms for which the corresponding value is at least 200.”
Let’s store this pandas Index (the zones corresponding to at least 200 rows) with the variable name pz200 (short for “pickup zone 200”).
pz200 = vc.index[vc>=200]
pz200
Index(['Midtown Center', 'Upper East Side South',
'Penn Station/Madison Sq West', 'Clinton East'],
dtype='object')
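As a quick sanity check (an added aside), the two approaches select the same zones.

# Elementwise comparison of the two Index objects; .all() confirms they agree.
(vc[vc >= 200].index == vc.index[vc >= 200]).all()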
df_pre["pickup_zone"].isin(pz200)
0 False
1 False
2 False
3 False
4 False
...
6428 False
6429 False
6430 False
6431 False
6432 False
Name: pickup_zone, Length: 6433, dtype: bool
We can turn any list-like object (like the above pz200
, which is a pandas Index, not a list) into a Boolean Series by passing it as an argument to the isin
method. For example, the following is showing which of the pickup zones are in our pz200
variable.
df_pre["pickup_zone"].isin(pz200)
We can use Boolean indexing with this isin method to get only the rows for which the pickup zone is one of the entries that occurs at least 200 times.
df = df_pre[df_pre["pickup_zone"].isin(pz200)]
df
| | pickup | dropoff | passengers | distance | fare | tip | tolls | total | color | payment | pickup_zone | dropoff_zone | pickup_borough | dropoff_borough |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17 | 2019-03-23 20:50:49 | 2019-03-23 21:02:07 | 1 | 2.60 | 10.5 | 2.00 | 0.0 | 16.30 | yellow | credit card | Midtown Center | East Harlem South | Manhattan | Manhattan |
20 | 2019-03-21 03:37:34 | 2019-03-21 03:44:13 | 1 | 1.07 | 6.5 | 1.54 | 0.0 | 11.84 | yellow | credit card | Penn Station/Madison Sq West | Kips Bay | Manhattan | Manhattan |
21 | 2019-03-25 23:05:54 | 2019-03-25 23:11:13 | 1 | 0.80 | 5.5 | 2.30 | 0.0 | 11.60 | yellow | credit card | Penn Station/Madison Sq West | Murray Hill | Manhattan | Manhattan |
27 | 2019-03-16 20:30:36 | 2019-03-16 20:46:22 | 1 | 2.60 | 12.5 | 3.26 | 0.0 | 19.56 | yellow | credit card | Clinton East | Lenox Hill West | Manhattan | Manhattan |
31 | 2019-03-01 02:55:55 | 2019-03-01 02:57:59 | 3 | 0.74 | 4.0 | 0.00 | 0.0 | 7.80 | yellow | cash | Clinton East | West Chelsea/Hudson Yards | Manhattan | Manhattan |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
5403 | 2019-03-15 08:42:56 | 2019-03-15 08:47:27 | 1 | 0.64 | 5.0 | 1.00 | 0.0 | 9.30 | yellow | credit card | Upper East Side South | Lenox Hill East | Manhattan | Manhattan |
5404 | 2019-03-20 21:03:10 | 2019-03-20 21:11:12 | 1 | 1.79 | 8.5 | 0.00 | 0.0 | 12.30 | yellow | cash | Midtown Center | Upper East Side North | Manhattan | Manhattan |
5423 | 2019-03-22 13:47:56 | 2019-03-22 14:01:32 | 1 | 1.00 | 9.5 | 2.00 | 0.0 | 14.80 | yellow | credit card | Midtown Center | Union Sq | Manhattan | Manhattan |
5444 | 2019-03-05 21:39:03 | 2019-03-05 21:49:12 | 5 | 1.31 | 8.0 | 0.00 | 0.0 | 11.80 | yellow | cash | Clinton East | Midtown East | Manhattan | Manhattan |
5445 | 2019-03-13 10:57:06 | 2019-03-13 11:03:29 | 1 | 0.83 | 6.0 | 1.86 | 0.0 | 11.16 | yellow | credit card | Upper East Side South | Upper East Side North | Manhattan | Manhattan |
859 rows Ă— 14 columns
How many rows are in df?
len(df)
859
df.shape[0]
859
Draw a scatter plot in Altair encoding the “distance” in the x-channel, the “total” in the y-channel, and the “pickup_zone” in the color.
Now there are only four pickup zones. Even so, this data does not seem like a good candidate for making predictions, because there are no clear patterns separating the zones.
alt.Chart(df).mark_circle().encode(
x = "distance",
y = "total",
color = "pickup_zone:N"
)
Do you expect to be able to predict the pickup zone from this data?
No, there’s no clear pattern to the pickup zones in terms of distance and total fare.
Overfitting#
One of the most important concepts in machine learning is overfitting. The basic idea is that a very flexible model (for example, a model with many parameters) may perform very well on the training data, yet fail to generalize to new data.
When the model is too flexible, it may simply be memorizing random noise within the data, rather than learning the true underlying structure.
Import a Decision Tree model from sklearn.tree. We will be using “distance” and “total” as our input features, and “pickup_zone” as our target. So should this be a DecisionTreeClassifier or a DecisionTreeRegressor?
Our inputs are numeric and our outputs are discrete classes. For choosing between classification and regression, all that matters is the output, not the inputs, so this is a classification task and we use a DecisionTreeClassifier.
We will discuss what a DecisionTreeClassifier actually does next week. For now, just know that it is another model for classification, like logistic regression.
from sklearn.tree import DecisionTreeClassifier
Usually we will pass at least one keyword argument to the following constructor (putting some constraint on clf), but here we just use the default values. Because we are not placing any constraints on the complexity of the decision tree, we are very much at risk of overfitting.
clf = DecisionTreeClassifier()
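For contrast, here is a sketch of how we could constrain the tree’s complexity. The keyword arguments max_depth and max_leaf_nodes are genuine DecisionTreeClassifier parameters; the particular values below are illustrative choices, not ones we tuned.

# A shallow tree with few leaves has far less capacity to memorize noise.
clf_small = DecisionTreeClassifier(max_depth=5, max_leaf_nodes=10)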
Fit the model to the data.
cols = ['distance', 'total']
Here we fit the model to the data. Even though we don’t know what a decision tree is, we can do this fitting easily, because it is the same syntax as for other models in scikit-learn.
clf.fit(df[cols], df["pickup_zone"])
DecisionTreeClassifier()
What is the model’s accuracy? Use the score method.
Wow, we’ve gotten 93%. Is that good? No, it is almost certainly bad! Random guessing would give us about 25% accuracy (since there are four classes), and this is so much higher. Do you really think there is any model that takes the fare and the distance as input and returns the correct pickup zone (from among these four options) 93% of the time? Would you even expect a human expert to be able to do that?
clf.score(df[cols], df["pickup_zone"])
0.9324796274738067
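For a point of comparison (a quick added sketch), scikit-learn’s DummyClassifier gives a baseline: a “model” that always guesses the most frequent zone, ignoring the input features entirely.

from sklearn.dummy import DummyClassifier

# Always predicts the most common class, regardless of the inputs.
baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(df[cols], df["pickup_zone"])
baseline.score(df[cols], df["pickup_zone"])

Since the most frequent zone, “Midtown Center”, accounts for 230 of the 859 rows, this baseline accuracy is roughly 0.27, far below our suspicious 93%.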
Detecting overfitting using a test set#
Import the train_test_split function from sklearn.model_selection. Divide the data into a training set and a test set. Use 20% of the rows for the test set.
The input (df[cols]) gets divided into two components, a training set and a test set, named X_train and X_test. Similarly, the output (df["pickup_zone"]) gets divided into y_train and y_test.
We specify that the test set should be 20% of the data by using test_size=0.2. If we instead pass an integer, the test set gets that exact number of rows; for example, test_size=80 would give a test set with 80 rows.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df[cols], df["pickup_zone"], test_size=0.2, random_state=10)
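As a quick check (an added cell), we can confirm the sizes of the two pieces. With 859 rows and test_size=0.2, we should see roughly an 80/20 split.

# Roughly 80% of the 859 rows for training and 20% for testing.
len(X_train), len(X_test)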
Fit a decision tree model using the training set.
clf.fit(X_train, y_train)
DecisionTreeClassifier()
The whole point is to keep the test set secret during the fitting process.
What is the accuracy of the model on the training set?
Here we’ve achieved even slightly better performance than above.
clf.score(X_train, y_train)
0.9461426491994177
What is the accuracy of the model on the test set?
The following result is the most important part of this notebook. Notice how we have crashed from about 94% accuracy on the training set to barely better than random guessing on the test set. This is a very strong sign that our model has been overfitting: it learned the training data very well, but there is no evidence that it will perform well on new, unseen data.
clf.score(X_test, y_test)
0.29069767441860467
How does this result suggest overfitting?
The much higher score on the training set than on the test set is a strong sign of overfitting.
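As a final sketch (an added cell; the exact scores will vary), we could refit the constrained tree from earlier and compare its training and test accuracies. A restricted tree typically scores lower on the training set but shows a much smaller gap between the two numbers.

# Refit the depth-limited tree on the same train/test split and compare scores.
clf_small = DecisionTreeClassifier(max_depth=5, max_leaf_nodes=10)
clf_small.fit(X_train, y_train)
print(clf_small.score(X_train, y_train), clf_small.score(X_test, y_test))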