General Tech Technology & Software 3 years ago
manpreet
Best Answer
3 years ago
To understand how much GPU memory TensorFlow can save, I ran some experiments. My network is VGG-16, the dataset is MNIST, and the batch size is 128. I estimate my GPU memory usage at 2.8 GB. I disabled all the RewriterConfig options, and then gradually reduced the amount of GPU memory available.
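The original post omits the exact configuration snippet. As a sketch of how Grappler's rewriter options can be switched off (and the available GPU memory capped) in the TF1-style session API — which specific options the author disabled is an assumption:

```python
# Sketch: controlling Grappler's memory optimizer via RewriterConfig
# (TF1-style API; the exact options the author toggled are not shown
# in the original post).
import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2

config = tf.compat.v1.ConfigProto()
rewrite_options = config.graph_options.rewrite_options

# Turn the memory optimizer off entirely ...
rewrite_options.memory_optimization = rewriter_config_pb2.RewriterConfig.NO_MEM_OPT
# ... or force a specific strategy to compare them:
# rewrite_options.memory_optimization = (
#     rewriter_config_pb2.RewriterConfig.RECOMPUTATION_HEURISTICS)
# rewrite_options.memory_optimization = (
#     rewriter_config_pb2.RewriterConfig.SWAPPING_HEURISTICS)

# Cap visible GPU memory to reproduce the "gradually reduce memory" experiment:
config.gpu_options.per_process_gpu_memory_fraction = 0.5

with tf.compat.v1.Session(config=config) as sess:
    pass  # build and run the VGG-16 / MNIST graph here
```

Lowering `per_process_gpu_memory_fraction` step by step is one way to observe at which memory budget the optimizer starts rewriting the graph.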
I observed that when GPU memory runs short, TensorFlow's strategy is recomputation. Why use recomputation instead of swapping? Is recomputation faster than swapping?
In memory_optimizer.cc:1226 there is the RecomputationRewritingPass function. After disabling that function to test, I found that TensorFlow has another code path that does the same thing. Why does TensorFlow have two code paths doing the same thing? Thanks.