manpreet
Best Answer
3 years ago
The main purported benefits of deep learning:
(1) No need to hand-engineer features for non-linear learning problems. This saves time and scales into the future, since hand-engineering is seen by some as a short-term band-aid. (A small sketch contrasting learned and hand-engineered features follows this list.)
(2) The learnt features are sometimes better than the best hand-engineered features, and can be so complex (e.g. face-like features in computer vision) that it would take far too much human time to engineer them.
(3) Unlabeled data can be used to pre-train the network. Suppose we have 1,000,000 unlabeled images and only 1,000 labeled images; pre-training on the 1,000,000 unlabeled images with deep learning can drastically improve the supervised learner. In some domains unlabeled data is abundant while labeled data is hard to find, so an algorithm that can exploit unlabeled data to improve classification is valuable. (The second sketch after this list illustrates the pre-train-then-fine-tune pattern.)
(4) Empirically, deep learning methods smashed many benchmarks that had been seeing only incremental improvements before their introduction.
(5) The same algorithm works in multiple areas on raw inputs (perhaps with minor pre-processing).
(6) Performance keeps improving as more data is fed to the network (assuming stationary distributions, etc.).
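A minimal sketch of point (1), using a toy dataset and model sizes I have chosen for illustration (they are not from the answer above): on a non-linear problem, a linear model on raw inputs struggles, a linear model on hand-engineered quadratic features does well, and a small neural network reaches similar accuracy directly from the raw inputs, with no feature engineering.

```python
# Sketch: learned features vs hand-engineered features on a non-linear problem.
# Dataset (two moons), feature choices, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Linear model on raw inputs: limited by its linear decision boundary.
linear = LogisticRegression().fit(X_tr, y_tr)

# Hand-engineered quadratic features fix this, but a human had to design them.
def quadratic_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

linear_feat = LogisticRegression().fit(quadratic_features(X_tr), y_tr)

# A small MLP learns an equivalent non-linear mapping from the raw inputs.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("linear, raw features:        ", linear.score(X_te, y_te))
print("linear, engineered features: ", linear_feat.score(quadratic_features(X_te), y_te))
print("MLP, raw features:           ", mlp.score(X_te, y_te))
```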
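A second sketch for point (3), written in PyTorch as my own illustration rather than code from the answer: pre-train an encoder as an autoencoder on plentiful unlabeled data, then fine-tune it with a classifier head on a much smaller labeled set. The synthetic data, architecture, and training schedule are all assumptions made for the example.

```python
# Sketch: unsupervised pre-training (autoencoder) followed by supervised fine-tuning.
import torch
from torch import nn

torch.manual_seed(0)
n_unlabeled, n_labeled, dim = 10000, 100, 20

# Synthetic stand-ins: many unlabeled vectors, few labeled ones.
X_unlabeled = torch.randn(n_unlabeled, dim)
X_labeled = torch.randn(n_labeled, dim)
y_labeled = (X_labeled.sum(dim=1) > 0).long()

encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, dim))
head = nn.Linear(16, 2)

# Stage 1: unsupervised pre-training, i.e. reconstruct the unlabeled inputs.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X_unlabeled)), X_unlabeled)
    loss.backward()
    opt.step()

# Stage 2: supervised fine-tuning of the encoder plus a classifier head
# on the small labeled set.
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(X_labeled)), y_labeled)
    loss.backward()
    opt.step()

acc = (head(encoder(X_labeled)).argmax(dim=1) == y_labeled).float().mean()
print(f"training accuracy after pre-training + fine-tuning: {acc:.2f}")
```

The design choice the sketch is meant to show is the two-stage split: the expensive representation learning only needs reconstruction targets (free from the unlabeled pool), and the scarce labels are spent only on the final supervised stage.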