
Test your understanding of Pandas, which we learned in the previous lesson, Day 12 of Learning Python for Data Science, with these targeted practice questions.
Welcome back to Day 12 of the Learning Python for Data Science journey! In the last article, we explored:
✅ Creating and inspecting DataFrames
✅ Selecting and filtering rows and columns
✅ Handling missing values
✅ Grouping, merging, and reshaping data
Now, it’s time to solve the practice questions given in the previous article.
Each question is followed by a worked answer in code, with comments where the reasoning needs them.
Create a DataFrame from a dictionary.
import pandas as pd  # pandas is needed for all of the answers below

data = {
'Name': ['Alice', 'Bob', 'Charlie'],
'Age': [25, 30, 28],
'City': ['New York', 'London', 'Paris']
}
# Create a DataFrame from the dictionary
df = pd.DataFrame(data)
# Print the DataFrame
print(df)
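Running the snippet above should print a table like this (formatting may vary slightly by pandas version):
      Name  Age      City
0    Alice   25  New York
1      Bob   30    London
2  Charlie   28     Paris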
Display the first 5 rows of a DataFrame.
df.head()  # returns the first 5 rows by default
Retrieve a single column as a Series.
df['Name']
Find the number of missing values in each column.
df.isnull().sum()
Sort the DataFrame by a specific column.
df.sort_values(by='Status')
Select rows where a column’s value is greater than 50.
df[df['Patient_ID'] > 10]  # same pattern for any threshold, e.g. df[df['Billed_Amount'] > 50]
Rename a column.
df.rename(columns = {'Fraud_Flag':'Is_fraud'}, inplace=True)
Drop a column from the DataFrame.
df.drop(columns = 'Paid_Amount', inplace=True)
Fill missing values with the mean.
df['Billed_Amount'] = df['Billed_Amount'].fillna(df['Billed_Amount'].mean())  # fill gaps in Billed_Amount with its own mean
Add a new column to the DataFrame.
df['New_column'] = 0  # every row gets the constant value 0
Select a subset of rows and columns using loc and iloc.
df.loc[1, :]    # row with index label 1, all columns (label-based selection)
df.iloc[1:2, :] # second row, all columns (position-based selection)
Convert a column to a different data type.
df['Provider_ID'] = df['Provider_ID'].astype(float)  # astype() returns a new Series, so assign it back
Find the average of all numerical columns.
df.mean(numeric_only=True)
Count the number of unique values in a column.
df['Provider_ID'].nunique()
Use groupby() to find the mean of a column by category.
df.groupby('Status')['Billed_Amount'].mean()
Merge two DataFrames using merge().
df.merge(df2, on = 'Claim_ID')
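Since df2 is not defined in this article, here is a minimal self-contained sketch with two hypothetical claim tables that share the Claim_ID key:
import pandas as pd

claims = pd.DataFrame({'Claim_ID': [1, 2, 3], 'Billed_Amount': [100, 250, 400]})
payments = pd.DataFrame({'Claim_ID': [1, 2, 4], 'Paid_Amount': [80, 200, 150]})

# merge() performs an inner join by default, keeping only Claim_IDs present in both tables
merged = claims.merge(payments, on='Claim_ID')
print(merged)  # two rows: Claim_ID 1 and 2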
Find the most frequent value in a column.
df['Provider_ID'].mode()  # returns a Series, since several values can tie for most frequent
Compute cumulative sum of a column.
df['Billed_Amount'].cumsum()  # running total; sum() would return only the grand total
Apply a custom function to transform a column.
df['Paid_Amount'] = df['Paid_Amount'].apply(lambda x: x ** 2)  # square every value
Use pivot_table() to summarize data.
pd.pivot_table(df, index=['Provider_ID'], values=['Billed_Amount', 'Paid_Amount'], columns=['Status'], aggfunc='sum', fill_value=0)
Reshape a DataFrame using melt().
df.melt(id_vars=['Claim_ID'], value_vars=['column_to_unpivot'])
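As a concrete sketch, assuming the claims data has Billed_Amount and Paid_Amount columns (as used in earlier answers), melting turns those wide columns into a long variable/value format:
# one row per Claim_ID per amount column
long_df = df.melt(id_vars=['Claim_ID'],
                  value_vars=['Billed_Amount', 'Paid_Amount'],
                  var_name='Amount_Type', value_name='Amount')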
Use applymap() to apply a function to every element in the DataFrame.
df.applymap(lambda x : x + 1 if isinstance(x, (int, float)) else x)
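Note that applymap() was deprecated in pandas 2.1 in favor of DataFrame.map(), which takes the same function. If you are on a recent pandas version, the equivalent call is:
df.map(lambda x: x + 1 if isinstance(x, (int, float)) else x)  # element-wise; non-numeric values pass through unchanged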
Use map() to apply a function to a Series.
df['Billed_Amount'].map(lambda x : x*2)
Create a scatter plot using Pandas.
df.plot.scatter(x='Provider_ID', y='Billed_Amount', color='blue', title='Provider ID vs. Billed Amount')
Compute rolling averages using rolling().
df['Rolling'] = df['Paid_Amount'].rolling(window=2).mean()  # rolling average over a 2-row window
Rank values in a column.
df['Rolling'].rank()
Filter rows based on multiple conditions.
df[(df['Provider_ID']>504) & (df['Status'] == 'Approved')]
Use .xs() to slice data from a multi-index DataFrame.
df_2024 = df.xs('2024', level='Year')
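The call above assumes df already has a MultiIndex with a 'Year' level. A minimal sketch that builds such an index from hypothetical data and then takes the cross-section:
import pandas as pd

# Hypothetical multi-index frame with Year and Claim_ID as index levels
idx = pd.MultiIndex.from_tuples(
    [('2023', 101), ('2024', 102), ('2024', 103)],
    names=['Year', 'Claim_ID'])
df_multi = pd.DataFrame({'Billed_Amount': [100, 250, 400]}, index=idx)

# Select every row whose Year level is '2024'; that level is dropped from the result
df_2024 = df_multi.xs('2024', level='Year')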
Implement a lambda function inside apply() for custom transformations.
df['Sales'] = df['Sales'].apply(lambda x: x * 2)  # double every value in the Sales column
We hope this article was helpful and that you learned a lot about data science from it. If you have friends or family members who would find it useful, please share it with them or on social media.
Join our social media for more.
Also Read:
- Practice day 12 of Learning Python for Data Science
- Day 12 of Learning Python for Data Science – Pandas
- Day 10 Of Learning Python for Data Science – NumPy Array In Python
- Day 9 of Learning Python for Data Science – Queries Related To Functions In Python
- Practice day 8 of Learning Python for Data Science
Hi, I am Vishal Jaiswal. I have about a decade of experience working in MNCs like Genpact, Savista, and Ingenious. Currently, I am working at EXL as a senior quality analyst. Using my writing skills, I want to share the experience I have gained and help as many people as I can.