Find NPTEL Deep Learning – IIT Ropar Week 6 answers 2025 at nptel.answergpt.in. Our expert-verified solutions simplify complex topics, strengthen your understanding of key concepts, and help you complete your assignments accurately and with confidence.
Subject: Deep Learning – IIT Ropar
Week: 6
Session: NPTEL 2025
Course Link: Click Here
Reliability: Verified and expert-reviewed answers
Use these answers as a reference to cross-check your solutions. For complete, step-by-step solutions across all weeks, explore [Week 1-12] NPTEL Deep Learning – IIT Ropar Assignment Answers 2025
Stay ahead in your NPTEL journey with fresh, updated solutions every week!
| Week-by-Week NPTEL Deep Learning – IIT Ropar Assignment Answers in One Place |
| --- |
| Deep Learning – IIT Ropar Week 1 Answers |
| Deep Learning – IIT Ropar Week 2 Answers |
| Deep Learning – IIT Ropar Week 3 Answers |
| Deep Learning – IIT Ropar Week 4 Answers |
| Deep Learning – IIT Ropar Week 5 Answers |
| Deep Learning – IIT Ropar Week 6 Answers |
| Deep Learning – IIT Ropar Week 7 Answers |
| Deep Learning – IIT Ropar Week 8 Answers |
| Deep Learning – IIT Ropar Week 9 Answers |
| Deep Learning – IIT Ropar Week 10 Answers |
| Deep Learning – IIT Ropar Week 11 Answers |
| Deep Learning – IIT Ropar Week 12 Answers |
NPTEL Deep Learning – IIT Ropar Week 6 Assignment Answers 2025
1. What is/are the primary advantages of Autoencoders over PCA?
- Autoencoders are less prone to overfitting than PCA.
- Autoencoders are faster and more efficient than PCA.
- Autoencoders require fewer input data than PCA.
- Autoencoders can capture nonlinear relationships in the input data.
Answer :- For Answers Click Here
2. Which of the following is a potential advantage of using an overcomplete autoencoder?
- Reduction of the risk of overfitting
- Faster training time
- Ability to learn more complex and nonlinear representations
- To compress the input data
Answer :-
3. We are given an autoencoder A. The average activation value of neurons in this network is 0.015. The given autoencoder is
- Contractive autoencoder
- Sparse autoencoder
- Overcomplete neural network
- Denoising autoencoder
Answer :- For Answers Click Here
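For context on question 3: a sparse autoencoder adds a penalty that pushes the average activation of each hidden neuron toward a small target value, which is why a very low average activation (such as 0.015) is the telltale sign asked about here (the 2024 paper below asks the same question with 0.06). The snippet below is a minimal sketch, assuming NumPy; the target sparsity, hidden size, and random activations are illustrative values, not taken from the assignment.

```python
# Minimal sketch (assuming NumPy) of the KL-divergence sparsity penalty used in
# sparse autoencoders. The average activation rho_hat of each hidden neuron is
# pushed toward a small target rho; the numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
H = rng.uniform(0, 0.05, size=(100, 32))   # hypothetical hidden activations: 100 examples, 32 neurons

rho = 0.01                                  # target sparsity level (illustrative)
rho_hat = H.mean(axis=0)                    # average activation of each hidden neuron

# KL(rho || rho_hat) summed over hidden neurons; added to the reconstruction loss.
kl = np.sum(rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
print(rho_hat.mean(), kl)
```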
4. Suppose we build a neural network for a 5-class classification task. Suppose for a single training example, the true label is [0 1 0 0 1] while the predictions by the neural network are [0.4 0.25 0.2 0.1 0.6]. What would be the value of cross-entropy loss for this example? (Answer up to two decimal places; use base 2 for log-related calculations.)
Answer :-
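For question 4, the number follows directly from the cross-entropy formula. Below is a minimal sketch, assuming NumPy and the convention used in the course lectures that the loss is -sum(y_i * log2(p_i)); if your quiz expects a different convention, adjust accordingly and treat this only as a cross-check.

```python
# Minimal sketch (assuming NumPy): cross-entropy with base-2 logs,
# computed as -sum(y_i * log2(p_i)) over the classes.
import numpy as np

y = np.array([0, 1, 0, 0, 1])                # true label from the question
p = np.array([0.4, 0.25, 0.2, 0.1, 0.6])     # network predictions from the question

loss = -np.sum(y * np.log2(p))               # only the positions where y_i = 1 contribute
print(round(loss, 2))
```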
5. If an under-complete autoencoder has an input layer with a dimension of 5, what could be the possible dimension of the hidden layer?
- 5
- 4
- 2
- 0
- 6
Answer :-
6. Which of the following networks represents an autoencoder?
(The answer options are network diagrams; see the figures in the assignment.)
Answer :- For Answers Click Here
7. What is the primary reason for adding corruption to the input data in a denoising autoencoder?
- To increase the complexity of the model.
- To improve the model's ability to generalize to unseen data.
- To reduce the size of the training dataset.
- To increase the training time.
Answer :-
8. Suppose for one data point we have features x1, x2, x3, x4, x5 as −4, 6, 2.8, 0, 17.3. Then, which of the following functions should we use on the output layer (decoder)?
- Linear
- Logistic
- Relu
- Tanh
Answer :-
9. Which of the following statements about overfitting in overcomplete autoencoders is true?
- Reconstruction error is very high while training
- Reconstruction error is very low while training
- Network fails to learn good representations of input
- Network learns good representations of input
Answer :-
10. What is the purpose of a decoder in an autoencoder?
- To reconstruct the input data
- To generate new data
- To compress the input data
- To extract features from the input data
Answer :- For Answers Click Here
Deep Learning – IIT Ropar Week 6 NPTEL Assignment Answers 2024
1. Which of the following networks represents an autoencoder?
(The answer options are network diagrams; see the figures in the assignment.)
Answer :- b
2. If an under-complete autoencoder has an input layer with a dimension of 7, what could be the possible dimension of the hidden layer?
- 6
- 8
- 0
- 7
- 2
Answer :- a, e
3. What type of autoencoder is it when the hidden layer's dimensionality is less than that of the input layer?
- Under-complete autoencoder
- Complete autoencoder
- Overcomplete autoencoder
- Sparse autoencoder
Answer :- a
4. Which of the following statements about regularization in autoencoders is always true?
- Regularisation reduces the search space of weights for the network.
- Regularisation helps to reduce the overfitting in overcomplete autoencoders.
- Regularisation shrinks the size of weight vectors learned.
- All of these.
Answer :- a, b
5. We are using the following autoencoder with a linear encoder and a linear decoder. The eigenvectors associated with the covariance matrix of our data X are (V1, V2, V3, V4, V5). What are the representations most likely to be learned by our hidden layer H? (Eigenvectors are written in decreasing order of the eigenvalues associated with them.)
(The autoencoder architecture is shown as a figure in the assignment.)
- V1,V2
- V4,V5
- V1,V3
- V1,V2,V3,V4,V5
Answer :- a
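Question 5 rests on the standard connection between linear autoencoders and PCA: with a linear encoder, a linear decoder, and squared-error loss, the hidden layer ends up spanning the subspace of the top eigenvectors of the data covariance. The sketch below illustrates this, assuming NumPy; the synthetic data, learning rate, and iteration count are illustrative assumptions, not part of the assignment.

```python
# Minimal sketch (assuming NumPy): a 2-unit linear autoencoder trained with
# squared-error loss learns the subspace spanned by the top-2 eigenvectors
# (V1, V2) of the data covariance matrix.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, zero-mean data whose variance is concentrated along two directions.
X = rng.normal(size=(500, 5)) @ np.diag([5.0, 3.0, 0.5, 0.3, 0.1])
X = X - X.mean(axis=0)

# Top-2 eigenvectors of the covariance matrix, sorted by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
V_top2 = eigvecs[:, ::-1][:, :2]

# Linear autoencoder: encoder W_enc (5x2), decoder W_dec (2x5), plain gradient descent.
W_enc = rng.normal(scale=0.1, size=(5, 2))
W_dec = rng.normal(scale=0.1, size=(2, 5))
lr = 1e-3
for _ in range(5000):
    H = X @ W_enc                               # linear encoder: hidden representation
    err = H @ W_dec - X                         # linear decoder: reconstruction error
    grad_dec = (H.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# The learned hidden subspace should essentially coincide with span{V1, V2}.
Q, _ = np.linalg.qr(W_enc)                      # orthonormal basis of the learned subspace
print(np.linalg.norm(Q.T @ V_top2, axis=0))     # both values close to 1.0
```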
6. What is the purpose of a decoder in an autoencoder?
- To reconstruct the input data
- To generate new data
- To compress the input data
- To extract features from the input data
Answer :- a
7. What are the advantages of using a denoising autoencoder?
- Robustness to noisy input data
- Reduction of the risk of overfitting
- Faster training time
- It promotes sparsity in the hidden layer
Answer :- a, b
8. We are given an autoencoder A. The average activation value of neurons in this network is 0.06. The given autoencoder is:
- Contractive autoencoder
- Overcomplete neural network
- Sparse autoencoder
- Denoising autoencoder
Answer :- c
9. What are the possible applications of autoencoders?
- Data Compression
- Extraction of important features
- Reducing noise
- All of these
Answer :- d
10. Which of the following is a potential disadvantage of using autoencoders for dimensionality reduction over PCA?
- Autoencoders are computationally expensive and may require more training data than PCA.
- Autoencoders are bad at capturing complex relationships in data
- Autoencoders may overfit the training data and generalize poorly to new data.
- Autoencoders are unable to handle linear relationships between data.
Answer :- a, c
Conclusion:
In this article, we have uploaded the Deep Learning – IIT Ropar Week 6 NPTEL Assignment Answers. These expert-verified solutions are designed to help you understand key concepts, simplify complex topics, and enhance your assignment performance. Stay tuned for weekly updates and visit www.answergpt.in for the most accurate and detailed solutions.