This list of data analyst interview questions is based on the responsibilities handled by data analysts. However, the questions in a data analyst job interview may vary based on the nature of work expected by an organization. If you are planning to appear for a data analyst job interview, these interview questions for data analysts will help you land a top gig as a data analyst at one of the top tech companies.
A Robert Half Technology survey of 1,400 CIOs revealed that 53% of companies were actively collecting data but lacked sufficiently skilled data analysts to access the data and extract insights. Data analysts are in great demand, with many novel data analyst job positions emerging in business domains like healthcare, fintech, transportation, and retail. The job role of a data analyst involves collecting data and analyzing it using various statistical techniques, with the end goal of providing organizations with reports that contribute to a faster and better decision-making process. As data analyst salaries continue to rise, with entry-level data analysts earning an average of $50,000 to $75,000 and experienced data analyst salaries ranging from $65,000 to $110,000, many IT professionals are embarking on a career as a data analyst.
If you are aspiring to be a data analyst, the core competencies you should be familiar with are distributed computing frameworks like Hadoop and Spark; programming languages like Python, R, and SAS; data munging; data visualization; math; statistics; and machine learning. When being interviewed for a data analyst job role, candidates want to do everything they can to let the interviewer see their communication skills, analytical skills, and problem-solving abilities. These data analyst interview questions and answers will help newly minted data analyst job candidates prepare for analyst-specific interview questions.
Table of Contents
1) What is the difference between Data Mining and Data Analysis?
| Data Mining | Data Analysis |
| --- | --- |
| Usually does not require any hypothesis. | Begins with a question or an assumption. |
| Depends on clean and well-documented data. | Involves data cleaning. |
| Results are not always easy to interpret. | Data analysts interpret the results and convey them to the stakeholders. |
| Algorithms automatically develop equations. | Data analysts have to develop their own equations based on the hypothesis. |
2) Explain the typical data analysis process.
Data analysis deals with collecting, inspecting, cleansing, transforming and modelling data to glean valuable insights and support better decision making in an organization. The various steps involved in the data analysis process include –
Data Exploration
Having identified the business problem, a data analyst has to go through the data provided by the client to analyse the root cause of the problem.
Data Preparation
This is the most crucial step of the data analysis process, wherein any anomalies in the data (such as missing values or outliers) have to be identified and handled appropriately.
Data Modelling
The modelling step begins once the data has been prepared. Modelling is an iterative process wherein the model is run repeatedly for improvements. Data modelling ensures that the best possible result is found for a given business problem.
Validation
In this step, the model provided by the client and the model developed by the data analyst are validated against each other to determine whether the developed model will meet the business requirements.
Implementation of the Model and Tracking
This is the final step of the data analysis process wherein the model is implemented in production and is tested for accuracy and efficiency.
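The steps above can be sketched end to end. This is only an illustration: the dataset below is synthetic, and a simple linear fit stands in for whatever model the business problem actually requires.

```python
import numpy as np

# Data preparation: synthetic dataset with one anomaly to clean
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3 * x + 2 + rng.normal(0, 1, 100)
y[5] = np.nan                      # simulate a missing value
mask = ~np.isnan(y)                # drop rows with missing targets
x, y = x[mask], y[mask]

# Data modelling: fit on a training split
train_x, test_x = x[:70], x[70:]
train_y, test_y = y[:70], y[70:]
slope, intercept = np.polyfit(train_x, train_y, 1)

# Validation: check error on held-out data before implementation
pred = slope * test_x + intercept
rmse = np.sqrt(np.mean((pred - test_y) ** 2))
print(slope, rmse)
```

In a real project the validation step would also involve the client's own model or business criteria, as described above.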
3) What is the difference between Data Mining and Data Profiling?
Data Profiling, also referred to as Data Archaeology, is the process of assessing the data values in a given dataset for uniqueness, consistency, and logic. Data profiling cannot identify incorrect or inaccurate data; it can only detect business-rule violations or anomalies. The main purpose of data profiling is to find out whether the existing data can be used for various other purposes.
Data Mining refers to the analysis of datasets to find relationships that have not been discovered earlier. It focuses on sequenced discoveries, identifying dependencies, bulk analysis, finding various types of attributes, and so on.
4) How often should you retrain a data model?
A good data analyst understands how changing business dynamics will affect the efficiency of a predictive model. You must be a valuable consultant who can use analytical skills and business acumen to find the root cause of business problems.
The best way to answer this question is to say that you would work with the client to define a time period in advance. However, you would also refresh or retrain the model when the company enters a new market, completes an acquisition, or faces emerging competition. As a data analyst, you would retrain the model as quickly as possible to adjust to the changing behaviour of customers or changing market conditions.
5) What is data cleansing? Mention a few best practices that you have followed while data cleansing.
From a given dataset, it is extremely important to sort out the information required for analysis. Data cleansing is a crucial step in the analysis process wherein data is inspected to find anomalies, remove repetitive entries, eliminate incorrect information, and so on. Data cleansing does not involve deleting existing information from the database; it enhances the quality of the data so that it can be used for analysis.
Some of the best practices for data cleansing include developing a data quality plan to identify where most errors occur, standardizing data at the point of entry, validating the accuracy of the data, removing duplicate records, and handling missing values consistently.
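A minimal pandas sketch of common cleansing steps (the records below are made up for illustration):

```python
import pandas as pd

# Hypothetical sales records with common data-quality issues
df = pd.DataFrame({
    "customer": ["Ann", "Ann", "Bob", "Cat", None],
    "amount":   [100.0, 100.0, None, 250.0, 80.0],
})

df = df.drop_duplicates()                                   # remove repeated rows
df = df.dropna(subset=["customer"])                         # drop rows missing a key field
df["amount"] = df["amount"].fillna(df["amount"].median())   # impute numeric gaps
print(df)
```

Note that cleansing here improves the data in place rather than discarding usable records, in line with the definition above.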
6) How will you handle the QA process when developing a predictive model to forecast customer churn?
Data analysts require inputs from business owners and a collaborative environment to operationalize analytics. To create and deploy predictive models in production, there should be an effective, efficient, and repeatable process. Without feedback from the business owner, the model will just be a one-and-done model.
The best way to answer this question is to say that you would first partition the data into three different sets: training, testing, and validation. You would then show the results of the validation set to the business owner, eliminating biases from the first two sets. The input from the business owner or the client will give you an idea of whether your model predicts customer churn accurately and provides the desired results.
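The three-way partition can be sketched with NumPy (the dataset size and 60/20/20 split below are hypothetical choices):

```python
import numpy as np

# Split 1,000 hypothetical customer records into train/test/validation (60/20/20)
rng = np.random.default_rng(42)
indices = rng.permutation(1000)   # shuffle record indices to avoid ordering bias

train_idx = indices[:600]
test_idx = indices[600:800]
valid_idx = indices[800:]         # held out for the business-owner review

print(len(train_idx), len(test_idx), len(valid_idx))
```

Shuffling before splitting ensures no segment of customers (e.g. the most recent sign-ups) is concentrated in one set.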
7) Mention some common problems that data analysts encounter during analysis.
Common problems include duplicate entries and spelling mistakes, poor-quality data collected from unreliable sources, missing values, illegal or out-of-range values, values represented differently across sources, and the challenge of identifying overlapping data.
8) What are the important steps in the data validation process?
Data validation is performed in two steps:
Data Screening – In this step, various algorithms are used to screen the entire dataset for erroneous or questionable values. Such values need to be examined and handled.
Data Verification – In this step, each suspect value is evaluated on a case-by-case basis, and a decision is made whether the value should be accepted as valid, rejected as invalid, or replaced with a corrected value.
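Data screening is often automated with a simple rule. Below is one illustrative sketch (the values are made up) that flags anything more than two standard deviations from the mean for case-by-case verification:

```python
import numpy as np

# Data screening: flag values more than 2 standard deviations from the mean
values = np.array([52.0, 49.5, 50.1, 51.2, 48.8, 250.0, 50.7])  # 250.0 looks suspect
z_scores = (values - values.mean()) / values.std()
suspects = values[np.abs(z_scores) > 2]   # passed on to the verification step
print(suspects)
```

The verification step would then decide, per flagged value, whether it is a genuine reading or an error to correct.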
9) How will you create a classification to identify key customer trends in unstructured data?
A model does not hold any value if it cannot produce actionable results; an experienced data analyst will have a varying strategy based on the type of data being analysed. For example, if a customer complaint was retweeted, should that data be included or not? Also, any sensitive customer data needs to be protected, so it is advisable to consult with the stakeholders to ensure that you are following all the compliance regulations of the organization and disclosure laws, if any.
You can answer this question by stating that you would first consult with the stakeholder of the business to understand the objective of classifying this data. Then, you would use an iterative process by pulling new data samples and modifying the model accordingly and evaluating it for accuracy. You can mention that you would follow a basic process of mapping the data, creating an algorithm, mining the data, visualizing it and so on. However, you would accomplish this in multiple segments by considering the feedback from stakeholders to ensure that you develop an enriching model that can produce actionable results.
10) What are the criteria to say whether a developed data model is good or not?
A good data model performs predictably, so outcomes can be estimated with reasonable accuracy; adapts easily to changing business requirements; scales in proportion to changes in the data; and produces results that the business can act upon.
11) According to you, what are the qualities/skills that a data analyst must possess to be successful at this position?
Problem solving and analytical thinking are the two most important skills for success as a data analyst. One needs to be skilled at formatting data so that the gleaned information is available in an easy-to-read manner. Not to forget, technical proficiency is of significant importance. You can also talk about other skills that the interviewer expects in an ideal candidate for the job position, based on the given job description.
12) You are assigned a new data analytics project. How will you begin, and what are the steps you will follow?
The purpose of asking this question is that the interviewer wants to understand how you approach a given data problem and the thought process you follow to ensure that you are organized. You can start answering this question by saying that you will begin by finding the objective of the given problem and defining it, so that there is a solid direction on what needs to be done. The next step is data exploration: familiarising yourself with the entire dataset, which is very important when working with new data. The next step is to prepare the data for modelling, which includes finding outliers, handling missing values, and validating the data. Having validated the data, you would start data modelling until you discover meaningful insights. The final step is to implement the model and track the output results.
This is the generic data analysis process; the answer to this question might change slightly based on the kind of data problem and the tools available at hand.
13) What do you know about the interquartile range as a data analyst?
A measure of the dispersion of data that is shown in a box plot is referred to as the interquartile range. It is the difference between the upper and the lower quartile.
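For example, the interquartile range can be computed with NumPy (using its default linear interpolation between data points):

```python
import numpy as np

# Interquartile range: difference between the 75th and 25th percentiles
data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
print(iqr)   # 4.5 for this sample (q1 = 3.25, q3 = 7.75)
```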
14) Differentiate between overfitting and underfitting.

| Overfitting | Underfitting |
| --- | --- |
| Occurs when a model fits the training data too closely, learning details and noise rather than the general pattern. | Occurs when a model is too simple, or is trained with too little data, to capture the underlying pattern. |
| Often occurs when the model is excessively complex relative to the data. | Often occurs when we try to build a linear model using non-linear data. |
| When overfitting occurs, the model is influenced by the noise and inaccuracies in the dataset. | When underfitting occurs, the model is not able to capture the underlying trends of the data. |
| An overfitted model has low bias and high variance. | An underfitted model has high bias and low variance. |
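The contrast can be demonstrated numerically. In the sketch below (synthetic noisy quadratic data), a degree-1 polynomial underfits while a degree-9 polynomial is flexible enough to chase the noise:

```python
import numpy as np

# Noisy quadratic data: a degree-1 fit underfits; a degree-9 fit can overfit
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 30)
y = x**2 + rng.normal(0, 0.5, 30)

def train_error(degree):
    """Mean squared error of a polynomial fit on the training data itself."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return np.mean(residuals**2)

# The flexible model always fits the training data more closely; low training
# error alone does not mean it will generalize to new data.
print(train_error(1), train_error(9))
```

This is exactly the low-bias/high-variance trade-off in the table: the degree-9 fit has the lower training error but would vary much more across new samples.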
15) How can you handle missing values in a dataset?
Here are some ways in which missing values can be handled in a dataset:
Deleting rows with missing values: Rows or columns which have null values can be deleted from the dataset that is to be used for analysis. In cases where some columns have more than half of the rows recorded as null or with no data, the entire column can simply be dropped. Similarly, rows with more than half the columns as null can also be dropped. This may however work poorly if a large number of values are missing.
Using Mean/Medians for missing values: Columns of the dataset which contain data of numeric data type which have missing values can be filled by calculating the mean, median or mode of the remaining values available for that particular column.
Imputation method for categorical data: when the data missing is from categorical columns, the missing value can be replaced with the most frequent category in the column. If there is a large number of missing values, a new categorical variable can be used for each of the missing values.
Last Observation Carried Forward (LOCF) method: for data variables which exhibit longitudinal behavior, the last valid observation can be used to fill in the missing value.
Using algorithms that support missing values: some algorithms, such as k-NN, can be designed to ignore a column when a value is missing. Naive Bayes is another such algorithm. The RandomForest algorithm works well on non-linear and categorical data.
Predicting the missing values: the regression or classification methods can be used to predict the missing data values, based on the nature of the missing value.
Using the Deep Learning Library ‘Datawig’ : Datawig is a library that can learn ML models using Deep Neural Networks to impute missing values into the dataset. Datawig works well with categorical, continuous and nonnumerical data.
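Several of the strategies above can be sketched with pandas (the dataset below is made up; mean imputation, most-frequent-category imputation, and LOCF are shown on one column each):

```python
import pandas as pd

# Hypothetical dataset illustrating several missing-value strategies
df = pd.DataFrame({
    "age":   [25, None, 31, 40],
    "city":  ["NY", "NY", None, "LA"],
    "score": [10.0, 12.0, None, None],
})

df["age"] = df["age"].fillna(df["age"].mean())        # numeric: mean imputation
df["city"] = df["city"].fillna(df["city"].mode()[0])  # categorical: most frequent value
df["score"] = df["score"].ffill()                     # LOCF: last observation carried forward
print(df)
```

Which strategy is appropriate depends on the column's type and behavior, as the list above explains.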
16) Differentiate between data mining and data profiling.
| Data Mining | Data Profiling |
| --- | --- |
| The process of identifying patterns and correlations in very large datasets. | The process of analyzing data from existing datasets to determine the actual content of the data. |
| Computer-based methodologies and mathematical algorithms are applied to extract information hidden in the data. | Involves analysing raw data from existing datasets. |
| The goal is to find actionable information in the data. | The goal is to create a knowledge base of accurate information about the data. |
| Examples of data mining techniques: clustering, classification, forecasting, and regression. | Involves structure discovery, content discovery, relationship discovery, and analytical techniques. |
17) What is meant by A/B testing?
A/B testing is a randomized experiment performed on two variants, 'A' and 'B.' It involves applying statistical hypothesis testing, also known as "two-sample hypothesis testing," to compare two different versions of a single variable. In this process, the subjects' response to variant A is evaluated against their response to variant B to determine which of the variants is more effective in achieving a particular outcome.
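As a sketch, a two-proportion z-test is one common way to analyze A/B conversion data (the visitor and conversion counts below are made up):

```python
import math

# Two-proportion z-test for an A/B test (hypothetical conversion counts)
conversions_a, visitors_a = 200, 4000   # variant A: 5.0% conversion
conversions_b, visitors_b = 260, 4000   # variant B: 6.5% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)

# Standard error of the difference under the null hypothesis (no difference)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
print(round(z, 2))
```

A |z| above 1.96 indicates a statistically significant difference at the 5% level (two-sided), so here variant B's higher conversion rate would be judged significant.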
18) Differentiate between univariate, bivariate, and multivariate analysis.
Univariate analysis: Analysis of data that contains only one variable is performed. Univariate analysis is a simple form of analysis since it does not deal with relationships between variables and fluctuations that trigger each other. In univariate analysis, the purpose is to find patterns that exist within the data based on only one variable. The analysis generally involves reaching conclusions by performing central tendency measures (mean, median, and mode) and studying the distribution of the data by examining the minimum and maximum values, range, quartiles, standard deviation, and variance, using visualization methods including charts and tables.
Bivariate analysis: Bivariate analysis involves data that contains two different variables. The bivariate analysis aims to find the relationship between two variables and how they affect each other. Bivariate data analysis usually involves plotting data points using an Xaxis and a Yaxis on graphs to visualize and understand the data involved.
Multivariate analysis: Analysis of data that contains three or more different variables is referred to as multivariate analysis. Similar to bivariate analysis, the causes and relationships between variables have to be determined. However, multivariate analysis contains more than one dependent variable. The method to perform multivariate analysis depends on the outcome expected. Some techniques used in multivariate analysis are regression analysis, factor analysis, MANOVA (multivariate analysis of variance), and path analysis.
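As an illustration of bivariate analysis, the Pearson correlation between two variables can be computed with NumPy (the two variables below are hypothetical):

```python
import numpy as np

# Bivariate analysis: correlation between two hypothetical variables
hours_studied = np.array([1, 2, 3, 4, 5, 6])
exam_score = np.array([52, 55, 61, 64, 70, 74])

# Pearson correlation coefficient: strength of the linear relationship
r = np.corrcoef(hours_studied, exam_score)[0, 1]
print(round(r, 3))
```

A coefficient near +1 indicates a strong positive linear relationship; plotting the pairs on an x/y graph, as described above, would show the same pattern visually.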
19) What are the different types of hypothesis testing?
Hypothesis testing is a procedure used by statisticians or researchers to verify the accuracy of a particular hypothesis.
There are basically two types of hypotheses involved in hypothesis testing:
Null Hypothesis: The null hypothesis states the exact opposite of what the investigator predicts or expects, i.e., that there is no actual relationship between the variables involved.
E.g., if the hypothesis is that "climate change is caused by global warming", the null hypothesis states that "climate change is not caused by global warming".
Alternative Hypothesis: The alternative hypothesis is the statement the investigator actually expects to support, i.e., that there is a relationship between the variables involved. It is accepted only when there is enough evidence to reject the null hypothesis.
20) What is time series analysis, and when is it used?
Time series analysis is a technique used to analyze a sequence of data points collected over a period of time. In a time series analysis, analysts record data points at regular intervals rather than recording the data intermittently or randomly. Time series analysis aims to identify trends in how variables change over time. For the results of time series analysis to be consistent and reliable, the analysis requires a large number of data points. Extensive datasets represent the data more accurately and make it easier to identify and eliminate noise. They also ensure that patterns or trends that are identified are not outliers but reflect genuine seasonal variance.
Time series analysis is used to identify the underlying causes that influence the trends seen in the data. It may also be used to perform predictive analysis on the observed data points and find the likelihood of future events. Time series analysis is performed on non-stationary data, where the data points fluctuate over a period of time. Some cases where time series analysis can be used are in the finance, retail, and economic sectors, where prices, such as stock prices, constantly fluctuate with time. Meteorologists, too, make use of time series analysis in weather forecasting.
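As a small illustration (the monthly sales figures below are hypothetical), a rolling mean is one simple way to smooth a time series and expose its trend using pandas:

```python
import pandas as pd

# Smoothing a short monthly series with a rolling mean to expose the trend
sales = pd.Series(
    [100, 120, 90, 130, 150, 110, 160, 180, 140, 190],
    index=pd.date_range("2023-01-01", periods=10, freq="MS"),  # month-start dates
)
trend = sales.rolling(window=3).mean()   # 3-month moving average
print(trend.dropna().round(1))
```

The regular, date-indexed intervals matter here: the rolling window assumes evenly spaced observations, which is why analysts record time series data at fixed intervals.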
21) What is an ROC curve?
A Receiver Operating Characteristic curve, or ROC curve, is a tool used to evaluate the probability predictions of a binary classifier. It is a plot where the x-axis corresponds to the false positive rate and the y-axis represents the true positive rate for a number of different threshold values of a candidate model. In simple terms, it plots the false alarm rate against the hit rate. The true positive rate is calculated by the formula:
True Positive Rate = True Positives / (True Positives + False Negatives)
The true positive rate is also referred to as sensitivity.
The false positive rate is calculated by the formula:
False Positive Rate = False Positives / (False Positives + True Negatives)
The false positive rate is also referred to as the false alarm rate.
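The two formulas above can be checked directly; the sketch below computes one (FPR, TPR) point of an ROC curve from made-up confusion-matrix counts at a single threshold:

```python
# Computing one (FPR, TPR) point of an ROC curve from a confusion matrix
# (hypothetical counts at a single classification threshold)
true_positives, false_negatives = 80, 20
false_positives, true_negatives = 10, 90

tpr = true_positives / (true_positives + false_negatives)   # sensitivity / hit rate
fpr = false_positives / (false_positives + true_negatives)  # false alarm rate
print(tpr, fpr)   # 0.8 0.1
```

Repeating this at every threshold and plotting the (FPR, TPR) pairs traces out the full ROC curve.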
These are just some of the interview questions for a data analyst that are likely to be asked in an analytics job interview. Apart from these, there could be several other interview questions around regression, correlation, probability, statistics, design of experiments, programming in Python, R, or SAS, and distributed computing frameworks like Hadoop or Spark. With the help of industry experts at ProjectPro, we have formulated a list of analytics interview questions around statistics, Python, R, Hadoop, and Spark that will help you prepare for your next data analyst job interview.
1) Write a code snippet to print 10 random integers between 1 and 100 using NumPy.
import numpy as np
random_numbers = np.random.randint(1, 101, 10)  # upper bound is exclusive, so use 101
print(random_numbers)
Running print(random_numbers) outputs an array of 10 random integers between 1 and 100; the exact values vary from run to run.
2) Explain how you can plot a sine graph using NumPy and Matplotlib libraries in Python.
NumPy has the sin() function, which takes an array of values and provides the sine value for them.
Using the NumPy sin() function and the Matplotlib plot() function, a sine wave can be drawn.
Given below is the code which can be used to plot a sine wave:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)   # 100 evenly spaced points over one period
plt.plot(x, np.sin(x))
plt.show()
The above code displays one full period of a sine wave.
1) How will you design a lift (elevator) for a 100-floor building? (Asked at Credit Suisse)
2) How will you find the nth number from the last in a singly linked list? (Asked at BlackRock)
3) How would you go about finding the differences between two sets of data? (Asked at EY)
4) What is the angle between the hour and the minute hand at 3:15? (Asked at EY)
1. In Excel, what is a waterfall chart, and when is it used?
In Excel, a waterfall chart is a type of column chart used to highlight how the starting position of a value can either increase or decrease through a series of changes to reach a final value. In a typical waterfall chart, only the first and last columns represent total values. The intermediate columns have a floating appearance and only show the positive and negative changes from one period to another. Waterfall charts are also known as 'bridge charts' since the floating columns appear similar to a bridge connecting the endpoints. In waterfall charts, columns are recommended to be color-coded so that the starting and ending points and the intermediate columns can be more easily distinguished.
Waterfall charts are primarily used for analytical purposes, to understand how an initial value is affected by other factors. They can be used to evaluate a company's profit and product earnings, track budget changes within a particular project, perform an inventory analysis, or track specific value updates over a period of time.
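Outside Excel, the floating-column geometry described above can be computed directly: each intermediate column starts where the running total left off. A minimal sketch with made-up quarterly figures:

```python
# Computing the floating-column positions for a simple waterfall chart:
# each intermediate column starts where the running total left off
changes = [("Start", 1000), ("Q1", 300), ("Q2", -150), ("Q3", 200)]

total = 0
bars = []                                 # (label, bottom, height) per column
for label, delta in changes:
    bottom = min(total, total + delta)    # negative changes hang downward
    bars.append((label, bottom, abs(delta)))
    total += delta

print(bars)
print(total)   # final running total: 1350
```

These (bottom, height) pairs are exactly what a plotting library needs to draw the floating columns and the closing total bar.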