manpreet
3 years ago
I am interested in current state-of-the-art methods for using quick greedy heuristics to speed up learning in a Deep Q-Network (DQN).

In classical (tabular) RL, I would initialize the Q-value of a state-action pair (S,a) with the return of a greedy-heuristic rollout started from state S with action a. Is this still a good idea when a neural network approximates the Q-function, and if so, what are the best ways to do it? What other ways are there to aid a DQN with knowledge from the greedy heuristics?
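For concreteness, here is a rough sketch (PyTorch; my own illustration rather than an established recipe) of the kind of thing I have in mind: pre-training the Q-network by regressing Q(S,a) onto the return of a greedy-heuristic rollout, before ordinary DQN training starts. `heuristic_rollout_return` is a placeholder for whatever domain-specific heuristic is available.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Plain feed-forward Q-function approximator: state -> Q(s, .)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def heuristic_rollout_return(state: torch.Tensor, action: int) -> float:
    """Placeholder: take `action` in `state`, then follow the greedy
    heuristic to episode end and report the (discounted) return.
    Substitute the actual domain heuristic here."""
    return 0.0

def pretrain_on_heuristic(q_net: QNetwork, states: torch.Tensor,
                          n_actions: int, epochs: int = 10,
                          lr: float = 1e-3) -> None:
    """Supervised warm start: regress Q(s, a) onto heuristic rollout
    returns, mirroring the tabular initialization described above."""
    # One heuristic rollout per (state, action) pair gives the targets.
    targets = torch.tensor([
        [heuristic_rollout_return(s, a) for a in range(n_actions)]
        for s in states
    ])
    opt = torch.optim.Adam(q_net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(q_net(states), targets)
        loss.backward()
        opt.step()
```

After such a warm start, the network would be handed to the usual DQN loop (replay buffer, target network, epsilon-greedy exploration); part of my question is whether this initialization survives the subsequent bootstrapped updates, or whether the heuristic is better injected in some other way.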
References to state-of-the-art papers would be highly appreciated.