
Tuesday 17 April 2018

Machine learning and deep learning interview questions and answers

Q: What is Bias-Variance Tradeoff?
A: The bias-variance tradeoff is the balance between error due to overly simple model assumptions (high bias, which leads to underfitting) and error due to excessive sensitivity to the particular training data (high variance, which leads to overfitting); reducing one typically increases the other. You can find an excellent explanation in this link: LINK
Q: What is an epoch?
A: An epoch is one forward pass and one backward pass over all of the training samples.

Q: What is feed-forward pass?
A: The forward pass refers to propagating the training data through the network, from the input layer through the hidden layers to the output layer, to produce the predicted outputs.

Q: What is loss function/objective function/cost function?
A: It is a function that measures how close (or far) the predicted labels are to the ground-truth labels. Examples of loss functions include mean squared error (MSE) and mean absolute error (MAE).
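
As a minimal sketch of two such loss functions (using NumPy, which is an assumption since the post does not name a library):

```python
import numpy as np

def mse(y_true, y_pred):
    # mean squared error: average of the squared differences
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # mean absolute error: average of the absolute differences
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])
print(mse(y_true, y_pred))  # 0.375
print(mae(y_true, y_pred))  # 0.5
```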

Q: What is backward pass?
A: The backward pass refers to propagating the loss gradient back through the network and updating the weights so that the loss computed in the forward pass is reduced.
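
As an illustrative sketch of a single backward-pass update (a lone linear neuron trained with plain gradient descent on a squared-error loss; all of these choices are assumptions, not the post's):

```python
def backward_step(w, b, x, y_true, lr=0.1):
    # forward pass for one sample: y_pred = w * x + b
    y_pred = w * x + b
    error = y_pred - y_true
    # gradients of the squared-error loss (y_pred - y_true)**2
    grad_w = 2 * error * x
    grad_b = 2 * error
    # move the weights against the gradient to reduce the loss
    return w - lr * grad_w, b - lr * grad_b

w, b = backward_step(w=0.5, b=0.0, x=2.0, y_true=3.0)
print(w, b)  # updated weight and bias
```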

Q: What is batch?
A: When the number of training samples is huge, all of them cannot be used at once to train the neural network because of memory constraints. To avoid this problem, the training samples are divided into small chunks (batches) and the network is trained over iterations. The number of samples in one batch is called the batch size.

Q: What are iterations?
A: iterations per epoch = (total number of samples) / (batch size)

Example: If we have 50,000 samples and the batch size is 200, then the number of iterations = 50000 / 200 = 250.
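
A small illustrative sketch of how epochs, batches, and iterations relate (dummy data and a placeholder training step; the names here are hypothetical):

```python
import numpy as np

num_samples, batch_size, num_epochs = 50000, 200, 3
data = np.random.randn(num_samples, 10)           # dummy feature vectors
iterations_per_epoch = num_samples // batch_size  # 50000 / 200 = 250

for epoch in range(num_epochs):                   # one epoch = one pass over all samples
    for it in range(iterations_per_epoch):        # one iteration = one batch
        batch = data[it * batch_size:(it + 1) * batch_size]
        # forward pass, loss computation, and backward pass on `batch` would go here

print(iterations_per_epoch)  # 250
```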

Q: What is a model in machine/deep learning?
A: A model is a set of weights and biases that maps the input data/features to the desired output (labels).

Q: What is supervised learning?
A: Supervised learning is a technique that uses training samples together with their ground-truth labels to learn a mapping function from the input space to the label space.

Q: What is unsupervised learning?
A: Unsupervised learning is a technique that uses training samples without labels to discover the underlying distribution or structure of the data.

Q: What is classification?
A: In machine learning, classification is mapping the given input data/features to one of a finite set of classes.
Eg: classifying an e-mail as spam or not-spam

Q: What is regression?
A: In machine learning, regression is mapping the input data/features to a continuous real-valued output.
Eg: predicting the stock price
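
As a minimal sketch contrasting the two (using scikit-learn, which is an assumption; the models and toy data below are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])

# classification: predict a discrete class (e.g., 0 = not-spam, 1 = spam)
clf = LogisticRegression().fit(X, np.array([0, 0, 1, 1]))
print(clf.predict([[2.5]]))  # a class label, 0 or 1

# regression: predict a continuous value (e.g., a stock price)
reg = LinearRegression().fit(X, np.array([10.0, 20.0, 30.0, 40.0]))
print(reg.predict([[2.5]]))  # a real-valued prediction (about 25.0)
```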

Q:  What is activation function?
A: It is a non-linear function (e.g., ReLU, sigmoid) applied to the weighted sum of all of the inputs coming from the previous layer; its output determines how strongly the neuron "fires". Without a non-linear activation, a stack of layers would collapse into a single linear mapping.
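
A minimal NumPy sketch of two common activation functions:

```python
import numpy as np

def relu(z):
    # ReLU: passes positive values through, zeros out negative values
    return np.maximum(0.0, z)

def sigmoid(z):
    # sigmoid: squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))     # [0.  0.  3.]
print(sigmoid(z))  # [~0.119  0.5  ~0.953]
```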

Q: What is over fitting?
A: Overfitting is when a model fits the training samples too closely, capturing their noise and idiosyncrasies rather than the general pattern, so it performs well on the training data but fails on unseen/test samples (in effect, the model memorizes the training samples instead of generalizing).

Q: What is a perceptron?
A: A perceptron is the basic building block of a neural network: an artificial neuron that computes a weighted sum of its inputs and applies a non-linear activation function (classically, a step function).
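
A minimal sketch of a single perceptron with a step activation (the weights below are hand-picked to implement a logical AND, purely for illustration):

```python
import numpy as np

def perceptron(x, w, b):
    # weighted sum of the inputs plus a bias, followed by a step activation
    z = np.dot(w, x) + b
    return 1 if z > 0 else 0

w, b = np.array([1.0, 1.0]), -1.5  # hand-picked: fires only when both inputs are 1
print(perceptron(np.array([1, 1]), w, b))  # 1
print(perceptron(np.array([1, 0]), w, b))  # 0
```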

Q: What is a multilayer perceptron?
A: A multilayer perceptron (MLP) is a class of feedforward artificial neural network consisting of at least three layers of nodes: an input layer, one or more hidden layers, and an output layer. Except for the input nodes, each node is a neuron that uses a non-linear activation function, and the network is trained with backpropagation.
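
As a minimal sketch, such a three-layer MLP could be defined with Keras (an assumption; the post does not name a framework, and the layer sizes here are arbitrary):

```python
from tensorflow import keras

# input layer (10 features) -> one hidden layer -> output layer
model = keras.Sequential([
    keras.Input(shape=(10,)),                     # input layer
    keras.layers.Dense(32, activation="relu"),    # hidden layer with a non-linear activation
    keras.layers.Dense(1, activation="sigmoid"),  # output layer for a binary problem
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```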

Q: What are model parameters?
A: Model parameters are the variables of a machine learning model that are learned from the training data over iterations. For example, the weights and biases of a neural network are model parameters; they are adjusted over iterations to best fit the training data.

Q: What are hyperparameters?
A: Hyperparameters are different from model parameters: they are not learned from the training samples. Instead, they are set before training and tweaked across successive training runs.
Examples: learning rate, regularization parameters, etc.

Q: What is binary classification?
A: It is a two-class classification problem: each input data/feature vector is mapped to one of two possible classes, whereas multiclass classification has more than two classes.

For binary classification, a classifier's outcomes on labeled samples are commonly summarized with four counts:
True Positive (TP): the sample (feature vector) is positive and is classified as positive.
True Negative (TN): the sample (feature vector) is negative and is classified as negative.
False Positive (FP): the sample (feature vector) is negative but is classified as positive.
False Negative (FN): the sample (feature vector) is positive but is classified as negative.
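
A minimal NumPy sketch that computes these four counts from toy labels:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # classifier predictions

tp = np.sum((y_true == 1) & (y_pred == 1))  # 3
tn = np.sum((y_true == 0) & (y_pred == 0))  # 3
fp = np.sum((y_true == 0) & (y_pred == 1))  # 1
fn = np.sum((y_true == 1) & (y_pred == 0))  # 1
print(tp, tn, fp, fn)
```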


Q: What is accuracy?
A: Accuracy = $\frac{TP+TN}{\text{total no. of examples}}$

Q: What is precision?
A: Precision = $\frac{TP}{TP+FP}$

Q: What is recall?
A: Recall = $\frac{TP}{TP+FN}$
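
Continuing with the toy counts from the sketch above, the three metrics can be computed as:

```python
tp, tn, fp, fn = 3, 3, 1, 1  # counts from the previous sketch

accuracy = (tp + tn) / (tp + tn + fp + fn)  # 6 / 8 = 0.75
precision = tp / (tp + fp)                  # 3 / 4 = 0.75
recall = tp / (tp + fn)                     # 3 / 4 = 0.75
print(accuracy, precision, recall)
```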

Q: What is batch normalization?
A: Batch normalization normalizes a layer's activations over each mini-batch to zero mean and unit variance, followed by a learnable scale and shift. It stabilizes and speeds up training and allows larger learning rates.

Q: What is the exploding and vanishing gradient problem?
A: During backpropagation, the gradient at early layers is a product of many per-layer terms. When those terms are repeatedly large the gradient grows exponentially (explodes); when they are repeatedly small it shrinks toward zero (vanishes), so the early layers train very slowly or unstably. Common remedies include careful weight initialization, ReLU-family activations, batch normalization, residual connections, and gradient clipping.

Q: What are the different optimizers and their advantages and disadvantages?
A: Common choices include SGD (simple and often generalizes well, but can converge slowly and needs careful learning-rate tuning), SGD with momentum (faster convergence, at the cost of one more hyperparameter), and adaptive methods such as AdaGrad, RMSProp, and Adam (per-parameter learning rates and fast convergence, but more memory use and sometimes slightly worse generalization).

Q: What are the different activation functions and their advantages and disadvantages?
A: Sigmoid and tanh are smooth and bounded but saturate for large inputs, which causes vanishing gradients; ReLU is cheap to compute and does not saturate for positive inputs, but neurons can "die" and output zero permanently; leaky ReLU and ELU mitigate dying neurons at a small extra cost; softmax is typically used in the output layer for multiclass classification.


