General Tech · Technology & Software · 2 years ago
Posted on 16 Aug 2022, this text provides information on Technology & Software related to General Tech.
manpreet
Best Answer
2 years ago
To understand how much GPU memory TensorFlow can save, I ran some experiments. My network is VGG-16, the dataset is MNIST, and the batch size is 128. I estimate my GPU memory usage at about 2.8 GB. I disabled all of the RewriterConfig optimizations, and then gradually reduced the amount of GPU memory available.
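The snippet showing how the rewrites were turned off did not survive the copy. A minimal sketch of disabling Grappler's optimizations through RewriterConfig (a TF 1.x-style session config; an assumed reconstruction, not the poster's exact code) could look like:

```python
# Hypothetical reconstruction: disable Grappler rewrites, including the
# memory optimizer, via RewriterConfig (not the poster's original snippet).
import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2

rewrite_options = rewriter_config_pb2.RewriterConfig(
    disable_meta_optimizer=True,  # turn off all Grappler rewrite passes
    memory_optimization=rewriter_config_pb2.RewriterConfig.NO_MEM_OPT)
graph_options = tf.compat.v1.GraphOptions(rewrite_options=rewrite_options)
config = tf.compat.v1.ConfigProto(graph_options=graph_options)

sess = tf.compat.v1.Session(config=config)
```

With the memory optimizer off, any out-of-memory behavior observed afterwards comes from the allocator rather than from recomputation or swapping rewrites.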
I observed that TensorFlow's strategy when GPU memory runs short is recomputation. Why use recomputation rather than swapping? Is recomputation faster than swapping?
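Intuitively, the answer depends on arithmetic cost versus transfer cost: recomputation re-runs a forward op when the backward pass needs its output, while swapping copies the saved tensor to host RAM and back over PCIe. A rough back-of-the-envelope model (hypothetical numbers, not TensorFlow's actual heuristic) might look like:

```python
# Toy cost model contrasting the two memory-saving strategies.
# All numbers below are illustrative assumptions, not measured values.

def recompute_cost(flops, flops_per_sec):
    """Extra time spent re-running the forward computation once."""
    return flops / flops_per_sec

def swap_cost(tensor_bytes, pcie_bytes_per_sec):
    """Extra time spent copying the tensor out to host RAM and back."""
    return 2 * tensor_bytes / pcie_bytes_per_sec

# A cheap-to-recompute op (e.g. an activation) with a large output tensor:
redo = recompute_cost(flops=1e6, flops_per_sec=1e13)              # ~0.1 us
move = swap_cost(tensor_bytes=64 * 2**20, pcie_bytes_per_sec=16e9)  # ~8 ms

print(redo < move)  # recomputation wins for cheap ops with big outputs
```

Under assumptions like these, recomputing a cheap op is orders of magnitude faster than round-tripping its large output over PCIe, which is one plausible reason a framework would prefer recomputation for such ops; for expensive ops with small outputs, the comparison can flip.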
In memory_optimizer.cc:1226 there is the RecomputationRewritingPass function. After disabling that function to test, I found that TensorFlow has another code path that does the same thing. Why does TensorFlow have two code paths that do the same thing? Thanks.