I have a server that receives backupname.tar.gz files in the /home/my_user/drop directory every hour.
I installed the incron utility and added an incrontab -e entry to run a script whenever a new file shows up in /drop.
Here is the script:
#!/bin/sh
#
# First clear the 2 immediate use directories
rm /home/my_user/local_ready/*
wait
sleep 1
rm /home/my_user/local_restore/*
wait
sleep1
# Copy the file from the /drop /local_ready
cp /home/my_user/drop/*.tar.gz /home/my_user/local_ready/
wait
sleep 5
# Now move the file to the /current folder
mv /home/my_user/drop/*.tar.gz /home/my_user/current/
wait
sleep 1
# Next we delete any stray files dropped that are not
# of the target type so we can keep /drop clean.
rm /home/my_user/drop/*
wait
sleep 1
# Un-Tar the files into the /local_restore directory
tar -xzf /home/my_user/local_ready/*.tar.gz -C /home/my_user/local_restore/
wait
sleep 1
# This should complete the movement of files
The problem I have been running into is that the file that gets copied to the /local_restore directory is truncated, as if the next command in the script is interrupting the cp command.
At first I put sleep commands in the script to try to get it to work; then I added wait commands after each command, thinking that would force everything to wait until the cp command had finished copying the file to the next location.
I cannot even tell if the tar command is working at all, because it depends on the success of the cp command further up the chain to have the file in place. Based on a test I ran with only a command to un-tar one of the files, I suspect it will not complete before the script exits either; at least that is what occurred in a different 3-line test I used to check my timing theory.
BTW... the mv command works just fine and the whole file gets moved as it should.
Can anyone identify why the commands run in the script seem to be unable to complete their task?
I have been asked to show the contents of the incrontab entry so here it is:
/home/my_user/drop/ IN_CREATE /home/my_user/bin/cycle_backups
(cycle_backups is obviously the name of the script file)
This is a KVM-type VPS cloud server running Ubuntu 16.04 LTS with 10 GB of memory and over 100 GB of disk space. When the file is dropped, this is the only thing the server has to do other than sit idle!
I will admit that my server is a bit slow, so copying a 200 MB file to another directory takes a second or two to complete even when I do it right at the command line.
I am at a loss to explain the problem, which makes it even harder to identify a solution.
Fair Warning: I am not the best at any of this, but I didn't think this should be such an impossible thing to accomplish.
None of the calls to wait will do anything in your script as there are no background tasks. You may safely delete these.
I would delete the calls to sleep as well. They will only delay the script execution at those points. A command will not start until the previous one has properly finished anyway. Also sleep1 is likely to generate a "command not found" error.
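For reference, wait only waits for background jobs started with &; in a purely sequential script like yours there is nothing for it to wait on. A minimal sketch of where it would matter (using one of your paths as a placeholder):

#!/bin/sh
# Start the copy in the background with "&", then block until it finishes.
cp /home/my_user/drop/backupname.tar.gz /home/my_user/local_ready/ &
wait    # without the "&" above, wait would have nothing to wait for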
The only real issue that I can see with your script is the last call to tar:
tar -xzf /home/my_user/local_ready/*.tar.gz -C /home/my_user/local_restore/
If there are multiple archives in /home/my_user/local_ready, then this command would extract the first one and try to extract the names of the other archives from that archive. The -f flag takes a single archive, and you can't really extract multiple archives at once.
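For illustration (hypothetical file names, and the exact message may differ between tar implementations), if two archives are present the glob expands to something like this:

# What the shell actually runs:
tar -xzf /home/my_user/local_ready/first.tar.gz /home/my_user/local_ready/second.tar.gz -C /home/my_user/local_restore/
# GNU tar reads first.tar.gz as the archive and treats the second path as a
# member name to extract from it, typically failing with something like:
#   tar: /home/my_user/local_ready/second.tar.gz: Not found in archive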
Instead, use a loop:
for archive in /home/my_user/local_ready/*.tar.gz; do
    tar -xzf "$archive" -C /home/my_user/local_restore/
done
I've ignored considerations of what happens if this script is run concurrently with itself. You mention that you have some facility to execute the script when a new file shows up, but it's unclear what would happen if two or more files showed up at about the same time. Since the script handles all files in a single invocation, I'm pretty sure that two concurrently running scripts may well step on each other's toes.
Personally, I might instead run the script at a regular five-minute interval. Alternatively, use some form of locking to make sure that the script does not start while another copy of it is already in progress (see e.g. "Correct locking in shell scripts?").
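As a rough sketch of that locking idea, assuming flock(1) from util-linux is available and using an arbitrary lock file path:

#!/bin/sh -e
# Cron alternative: run every five minutes instead of via incron, e.g.
#   */5 * * * * /home/my_user/bin/cycle_backups

# Open a lock file on file descriptor 9 and try to take an exclusive,
# non-blocking lock; if another instance already holds it, just exit.
exec 9>/tmp/cycle_backups.lock
if ! flock -n 9; then
    echo "cycle_backups is already running, skipping this run" >&2
    exit 0
fi

# ... the rest of the backup handling goes here, protected by the lock ...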
Here's my own rewrite of your code (not doing any form of locking):
#!/bin/sh -e

cd /home/my_user

# clear directories
rm -f local_ready/*
rm -f local_restore/*

# Alternatively, remove directories completely
# to also get rid of hidden files etc.:
#
# rm -rf local_ready;   mkdir local_ready
# rm -rf local_restore; mkdir local_restore

# handle the archives, one by one
for archive in drop/*.tar.gz; do
    tar -xzf "$archive" -C local_restore
    cp "$archive" current
    mv "$archive" local_ready
done
This would clear out the directories of non-hidden names and then extract each archive. Once an archive has been extracted it would be copied to the current directory, and then the archive would also be moved from drop to local_ready.
I'm using sh -e to make the script terminate on errors, and I cd to the /home/my_user directory to avoid having long paths in the script (this also makes it easier to move the whole operation to a subdirectory or elsewhere later). I'm using rm -f for clearing out those directories as rm would complain if the * glob did not expand to anything.
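As a quick illustration of that last point (the exact wording of the error may vary), if local_ready happens to be empty the unmatched glob stays literal in sh, so plain rm fails, and with -e in effect the whole script would abort there:

$ rm local_ready/*
rm: cannot remove 'local_ready/*': No such file or directory
$ rm -f local_ready/*     # no complaint, exit status 0, the script carries on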
You could also obviously handle archive copying and extraction separately:
cp drop/*.tar.gz current
mv drop/*.tar.gz local_ready

for archive in local_ready/*.tar.gz; do
    tar -xzf "$archive" -C local_restore
done
To save space, you may want to look into hard-linking the files in local_ready and current:
mv drop/*.tar.gz local_ready

for archive in local_ready/*.tar.gz; do
    ln "$archive" current
    tar -xzf "$archive" -C local_restore
done
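If you want to confirm that the two directory entries really share one copy of the data after hard-linking, you can compare inode numbers (the file name below is just an example):

$ ls -li current/backupname.tar.gz local_ready/backupname.tar.gz
# Both lines should show the same inode number (first column) and a link
# count of 2, i.e. two names for one file, so no extra disk space is used.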