{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# TP1 - Some Python exercices to get back into the swing of things\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1. Basic Python (extra libs are prohibited)\n", "---" ] }, { "cell_type": "markdown", "metadata": { "solution2": "hidden", "solution2_first": true }, "source": [ "### Exercise 1.\n", "\n", "Write a python function that:\n", "1. Reads the file `users.csv` line per line\n", "2. Prints the sentence ` has years old.` for each line\n", "\n", "Here is a demo about how to read a file line per line\n", "\n", " with open(\"path to the file\", \"r\") as finput: # \"r\" means read mode and finput is a variable (its name is free)\n", " for line in f :\n", " print(l)\n", "\n", "To split a string `s` according to a separator `sep` you should use the `split` function (`s.split(sep)`). This function returns a list." ] }, { "cell_type": "markdown", "metadata": { "solution2": "hidden", "solution2_first": true }, "source": [ "### Exercise 2.\n", "\n", "Write a python function that:\n", "1. Loads the user information from `users.csv` into a dictionary `dUsers`\n", "2. Returns `dUsers`\n", "\n", "\n", "`dUsers` must follow the following format:\n", "\n", " dUsers = {\n", " id: {\n", " \"name\" : name,\n", " \"age\" : age,\n", " \"sex\" : sex,\n", " \"interests\" : [\"interest1\", \"interest2\"]\n", " }\n", " }" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "solution2": "hidden", "solution2_first": true }, "source": [ "### Exercise 3.\n", "\n", "Using the data structure you just have created, write two python functions that: \n", "1. Returns the name of the oldest user\n", "2. Returns then number of users who like python" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "solution2": "hidden", "solution2_first": true }, "source": [ "### Exercise 4.\n", "\n", "We now consider the links between users and thus manipulate the `links.csv` file. Create a python function that returns an adjacency list `dRel` with the following format.\n", "\n", " dRel = {\n", " id : [list of connected users]\n", " }" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true, "solution2": "hidden", "solution2_first": true }, "source": [ "### Exercise 5.\n", "\n", "Write two Python functions that:\n", "1. Return the name of the user who has the biggest number of friends\n", "2. Return the name of the user who has the biggest number of friends of the opposite sex" ] }, { "cell_type": "markdown", "metadata": { "solution2": "hidden", "solution2_first": true }, "source": [ "### Exercise 6 (optional)\n", "\n", "The purpose of this last exercise it to write a basic recommender system that implements the following principle: for each user, the system should recommend the majority interest of his/her friend. Obviously, if this majority interest is shared by the user, the second most majority is recommended." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2. An introduction to Python for data scientist\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**There are 5 major steps in any data science / machine learning project :** \n", "\n", "- **Data exploration**\n", "- **Data formatting**\n", "- **Model validation**\n", "- **Prediction**\n", "- **Result submission**\n", "\n", "A brief introduction about how these steps can be handled in Python (>= 3.6) is given below." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Essential libraries\n", "_______\n", "**Pandas**\n", "\n", "Pandas is a library written for the Python programming language that allows data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical vectors and time series. \n", "\n", "- The `DataFrame` object to manipulate data easily and efficiently with indexes that can be strings;\n", "- Tools to read and write structured data in memory from and to different formats: CSV files, text files, Microsoft Excel spreadsheet file, SQL database...;\n", "- intelligent data alignment and missing data management (NaN = not a number). label-based data alignment (character strings). sorting according to various totally disordered data criteria;\n", "- Resizing and pivot table;\n", "- Merging and joining of large volumes of data;\n", "- Time series analysis.\n", "\n", "\n", "Documentation link: https://pandas.pydata.org/pandas-docs/stable/\n", "\n", "_______\n", "**Numpy**\n", "\n", "NumPy is an extension of the Python programming language, designed to manipulate multidimensional matrices or tables as well as mathematical functions operating on these tables.\n", "It offers much more efficient types and operations than the standard lib, and has shortcuts for mass processing.\n", "\n", "Documentation link: https://docs.scipy.org/doc/\n", "____\n", "\n", "**Matplotlib**\n", "\n", "Matplotlib is a library of the Python programming language designed to plot and visualize data in graphical form. It can be combined with the NumPy and SciPy python scientific computation libraries.\n", "\n", "Documentation link: https://matplotlib.org/contents.html\n", "\n", "\n", "____\n", "\n", "**Scikit-learn**\n", "\n", "Scikit-learn is a free Python library dedicated to automatic learning. It is developed by many contributors, particularly in the academic world, by French institutes of higher education and research such as Inria and Télécom ParisTech. It includes functions for estimating random forests, logistic regressions, classification algorithms, and support vector machines. It is designed to harmonize with other free Python libraries, including NumPy and SciPy.\n", "\n", "Documentation link: http://scikit-learn.org/stable/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### Let's start coding!\n", "\n", "#### Headers\n", "\n", "Here are the first lines of almost all data scientist Python scripts. It aims at importing the libraries you will use in the following. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "-" } }, "outputs": [], "source": [ "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns # Seaborn is a Python data visualization library based on matplotlib\n", "import numpy as np\n", "%matplotlib inline " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Import the data\n", "In machine learning competitions, two files are usually given. A training file that is used to learn the machine learning algorithm and a test file that is used to measure the performance of the algorithm.\n", "\n", "**Instructions: read the pandas documentation and find how to read the two csv files. 
Then, print the first ten lines of the train data frame using the** `head` **function.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Data exploration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's talk about the context: we have to predict house prices.\n", "As you should know, this is a **SUPERVISED** machine learning problem, because a target variable (`SalePrice`) has to be predicted.\n", "Since we have to predict a continuous value, it is a regression problem, so you will use regression algorithms.\n", "\n", "**Instructions:** \n", "- **Print the column names of the** `training` **data frame using the** `columns` **attribute;**\n", "- **Print the number of lines and columns of the** `training` **and** `test` **data frames using the** `shape` **attribute**\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Analysis of the target variable\n", "**Instructions:**\n", "- **Apply the** `describe` **function on the** `SalePrice` **column**\n", "- **Call the seaborn** `distplot` **function on the** `SalePrice` **column**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Relationship between numerical features and the target variable\n", "\n", "The piece of code below shows how to plot a scatter plot of the two numerical variables `GrLivArea` and `SalePrice` (the target variable).\n", "\n", "**Instructions.** Modify this piece of code to display the relationship between every numerical feature and the target variable (you should use a loop).\n", "\n", "**Hint.** To determine whether a variable (column of the data frame) is numerical, you can have a look at the following [stack overflow post](https://stackoverflow.com/questions/19900202/how-to-determine-whether-a-column-variable-is-numeric-or-not-in-pandas-numpy)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "# scatter plot grlivarea/saleprice\n", "var = 'GrLivArea'\n", "\n", "# A new data frame is created with only the desired columns (the two we would like to display)\n", "price_surface = pd.concat([train['SalePrice'], train[var]], axis=1) \n", "price_surface.plot.scatter(x=var, y='SalePrice', ylim=(0,800000))\n", "plt.ylabel(\"Price\")\n", "plt.xlabel(\"Living area\")\n", "plt.show()\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Relationship between categorical features and the target variable\n", "\n", "The piece of code below shows how to plot a boxplot of the categorical variable `SaleCondition` w.r.t. the target variable.\n", "\n", "**Instructions.** Modify this piece of code to display the relationship between every categorical feature and the target variable (you should use a loop).\n", "\n", "**Hint.** To determine whether a variable (column of the data frame) is categorical, you can have a look at the following [stack overflow post](https://stackoverflow.com/questions/19900202/how-to-determine-whether-a-column-variable-is-numeric-or-not-in-pandas-numpy)."
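] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, one way to separate numerical columns from categorical ones is pandas' `select_dtypes`. This is a minimal sketch, assuming the training data frame is called `train` as in the snippets above; other approaches from the Stack Overflow posts also work:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: split the columns of `train` by dtype\n", "num_cols = train.select_dtypes(include=[np.number]).columns\n", "cat_cols = train.select_dtypes(exclude=[np.number]).columns\n", "print(len(num_cols), 'numerical features')\n", "print(len(cat_cols), 'categorical features')"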
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "var = 'SaleCondition'\n", "pair = pd.concat([train['SalePrice'], train[var]], axis=1)\n", "f, ax = plt.subplots(figsize=(16, 8))\n", "fig = sns.boxplot(x=var, y=\"SalePrice\", data=pair)\n", "fig.axis(ymin=0, ymax=800000);\n", "plt.xticks(rotation=90);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Calculate correlations between variables\n", "The best way to get a complete view of your dataset fairly quickly is to make a heatmap representing the correlations between variables.\n", "The code below shows how to do that very quickly. Have a look to the documentation to determine which method has been used as default to calculate the correlations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#correlation matrix\n", "corrmat = train.corr()\n", "\n", "f, ax = plt.subplots(figsize=(12, 9))\n", "sns.heatmap(corrmat, vmax=1, vmin=-1, square=True);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now focus on the 10 features that are the most correlated with the target feature." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "k = 10 #Number of features to consider\n", "\n", "# We keep only the k most (negatively or positively) correlated features\n", "cols = abs(corrmat).nlargest(k, 'SalePrice')['SalePrice'].index\n", "\n", "cm = np.corrcoef(train[cols].values.T)\n", "\n", "sns.set(font_scale=1.25)\n", "hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Data preparation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Most machine learning algorithms do not deal with missing data (NaN). One of the first challenges to adresse is to manage these missing values by replacing them with estimates.\n", "\n", "We first check the ratio of missing values per feature." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#missing data\n", "# the isnull method outputs a matrix of the same format as the train and for each element of this matrix\n", "# sends a booleen: True if the value is a missing value (NaN), False if not\n", "# Then we add the number of null values\n", "total = train.isnull().sum().sort_values(ascending=False)\n", "percent = (train.isnull().sum()/train.isnull().count()).sort_values(ascending=False)\n", "missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])\n", "missing_data.head(20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the first 5 variables contain too many missing values, it is better not to use them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The train and the test are merged in order to do the same formatting for the training and test game. This process is very classic." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pd.concat([train, test],axis = 'rows', sort=False) # merge the two datasets\n", "data.reset_index(drop= True)\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Instructions. 
Remove features:**\n", "- Whose correlation with the target (`SalePrice`) is low (between -0.4 and 0.4)\n", "- With too many missing values (more than 40%)\n", "\n", "Pandas method for this task: \n", "https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Replace NaN values\n", "\n", "Now you have to replace the remaining missing values with sensible estimates.\n", "\n", "Pandas method for this task: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html\n", "\n", "For example, you can replace missing values with the most frequent value, the mean, the median...\n", "\n", "**Instructions. Replace the NaN values of the other variables.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Replace NaN values in LotFrontage with the mean\n", "data['LotFrontage'] = data['LotFrontage'].fillna(data['LotFrontage'].mean())\n", "\n", "# Replace NaN values in Alley with a new category\n", "data['Alley'] = data['Alley'].fillna('NOACCESS')\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Converting categorical features into numerical features\n", "\n", "Very few machine learning algorithms can take categorical variables as inputs: most of them need numerical values.\n", "It is thus necessary to convert categorical features into numerical ones.\n", "\n", "Several methods exist for this:\n", " - Label encoding (for example, replace the values [right, left, walkers] with [0, 1, 2])\n", " - One-hot encoding (for example, replace the values [right, left, walkers] with 3 binary variables)\n", " - One advanced method: target encoding (look it up)\n", " \n", "Resources: \n", " - Sklearn method for label encoding: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html\n", " - Pandas method for one hot encoding: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html\n", " - Video on target encoding: https://fr.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv\n", " \n", "**Instructions.** Apply one-hot encoding to all categorical features." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Data normalization\n", "\n", "Now that the DataFrame is ready, it is a good habit to **normalize the data** if you use machine learning algorithms such as SVM or KNN.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Instructions.** Apply the MinMaxScaler to normalize the data, as sketched below. \n", "\n", "**Resources.** This Stack Overflow entry should be of interest: https://stackoverflow.com/questions/26414913/normalize-columns-of-pandas-data-frame"
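] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is one possible sketch. It assumes the formatted frame is still called `data` and that `Id` and `SalePrice` should be left out of the scaling:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import MinMaxScaler\n", "\n", "# Sketch: scale every feature to [0, 1], leaving Id and SalePrice untouched\n", "# (assumes the one-hot-encoded frame is still called `data`)\n", "scaler = MinMaxScaler()\n", "feature_cols = [c for c in data.columns if c not in ('Id', 'SalePrice')]\n", "data[feature_cols] = scaler.fit_transform(data[feature_cols])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Split the data into train and test."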
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "is_test = data['SalePrice'].isnull() # Masque afin de séparer la base d'entrainement et de test\n", "# car dans le test nous ne connaissons pas la valeur de la variable cible donc ils ont comme valeur NaN\n", "train = data[~is_test] # la tilde est la négation\n", "test = data[is_test].drop('SalePrice', axis = 'columns')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Do some sanity check\n", "Always check your code before training \n", "\n", "`assert` returns an error if the condition is wrong" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "assert len(train) == 1460 # Check the size of the training set\n", "assert len(test) == 1459 # Check the size of the test set\n", "assert train.isnull().sum().sum() == 0 # Check if there still exists NaN values\n", "assert test.isnull().sum().sum() == 0 # Check if there still exists NaN values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Model validation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# X are the training data and Y the prices to predict\n", "X_train = train.drop(['SalePrice','Id'], axis = 'columns')\n", "Y_train = train['SalePrice']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The problem is measured using the RMSE, which is the average square deviation between the predicted value and the true value.\n", "$$\\sqrt{\\frac{1}{n} \\sum^n_{i=1}(\\overline{y_i} - y_i)^2}$$\n", "The goal is to minimize this evaluation metric.\n", "\n", "After the data formatting, the evaluation of the model is the most important. It is **necessary** to evaluate your model. \n", "\n", "Validation of the model provides us with information on its performance, if new additions or modifications to the data have enable the model to better predict. Also it informs us if there is overfit (the worst enemy in machine learning)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ " def rmse(predictions,targets):\n", " \"\"\"Implementation of RMSE\n", " \n", " Arguments:\n", " predictions {np array} -- Predicted value\n", " targets {np array} -- True value\n", " \n", " Returns:\n", " float -- RMSE score\n", " \"\"\"\n", " return np.sqrt(np.mean((predictions-targets)**2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The validation method we will use is **cross-validation**.\n", "\n", "**Cross-validation** is, in machine learning, **a method of estimating the reliability of a model** based on a sampling technique.\n", "\n", "Suppose you have a statistical model with one or more unknown parameters, and a set of learning data on which you can train the model. The learning process optimizes the model parameters to match the data as closely as possible. If an independent validation sample is then taken from the same training population, it will generally turn out that the model does not respond as well to validation as it did during training: sometimes it is called overlearning. Cross-validation is a way to predict the effectiveness of a model on a hypothetical validation set when an independent and explicit validation set is not available.\n", "\n", "\n", "\n", "**k-fold cross-validation**: the original sample is divided into k samples, then one of k samples is selected as the validation set and the other k-1 samples will constitute the learning set. 
The performance score is calculated on the validation set; the operation is then repeated, selecting another validation sample from among the k-1 sub-samples that have not yet been used for validation. The operation is repeated k times so that, in the end, each sub-sample has been used exactly once as a validation set. The mean of the k root mean square errors is finally calculated to estimate the prediction error." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Declaration of the model**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LinearRegression\n", "model = LinearRegression() # Try to use some others!!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Cross-validation**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import KFold\n", "\n", "\n", "# Split the dataset into 5 shuffled folds using a predefined seed (for reproducibility purposes)\n", "# http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html\n", "CV = KFold(n_splits=5, shuffle=True, random_state=42) \n", "\n", "\n", "# Lists to save the scores of the model for each fold\n", "fit_score = [] \n", "val_score = []\n", "\n", "verbose = False # Set it to True if you want additional info to be displayed\n", "\n", "# enumerate is a built-in function in Python: https://docs.python.org/3/library/functions.html#enumerate\n", "for i, (fit_index, val_index) in enumerate(CV.split(X_train, Y_train)):\n", "    \n", "    X_fit = X_train.iloc[fit_index]\n", "    Y_fit = Y_train.iloc[fit_index]\n", "    X_val = X_train.iloc[val_index]\n", "    Y_val = Y_train.iloc[val_index]\n", "    \n", "    \n", "    model.fit(X_fit, Y_fit)\n", "    \n", "    pred_fit = model.predict(X_fit)\n", "    pred_val = model.predict(X_val)\n", "    \n", "    if verbose:\n", "        print(f'RMSE fit for fold {i+1}: {rmse(pred_fit, Y_fit):.3f}')\n", "        print(f'RMSE val for fold {i+1}: {rmse(pred_val, Y_val):.3f}')\n", "    \n", "    fit_score.append(rmse(pred_fit, Y_fit))\n", "    val_score.append(rmse(pred_val, Y_val))\n", "\n", "fit_score = np.array(fit_score)\n", "val_score = np.array(val_score)\n", "\n", "print(f'RMSE score for fit: {np.mean(fit_score):.3f} ± {np.std(fit_score):.3f}')\n", "print(f'RMSE score for val: {np.mean(val_score):.3f} ± {np.std(val_score):.3f}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Instructions.** Some areas for improvement: \n", "- To start with, try other scikit-learn models such as random forest and SVM (see the sketch below)\n", "- Try to train a model with the LightGBM regressor and early stopping (out of the scope of this course)\n", "- Try out-of-fold bagging, which will improve your final prediction (out of the scope of this course)"
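] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, trying a random forest only requires changing the model declaration and rerunning the cross-validation loop above. This is a minimal sketch; the hyperparameters are illustrative, not tuned:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestRegressor\n", "\n", "# Illustrative, untuned hyperparameters: rerun the cross-validation loop above\n", "# with this model to compare its RMSE against the linear regression baseline\n", "model = RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Prediction and generation of the output file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At this point, you have trained k models and have an idea of how effective your solution is (the features used, the algorithm and its parameters).\n", "We now train the model on all the training data because, during cross-validation, each model only saw $\frac{4}{5}$ of it."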
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.fit(X_train,Y_train)\n", "pred = model.predict(test.drop(['Id'],axis = 'columns'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to participate to a machine learning competition (e.g., Kaggle), you need to submit to prediction and thus to first write it in a file. You will find below some piece of code to achieve this goal." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "submission = pd.DataFrame()\n", "submission['Id'] = np.array(test['Id'])\n", "submission['SalePrice'] = pred\n", "submission.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filename = f'submission_{np.mean(val_score):.3f}_{np.std(val_score):.3f}'\n", "submission.to_csv(f'submission/{filename}',index =False)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "toc": { "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "toc_cell": false, "toc_position": {}, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }