Save desktop background images to files in Windows 10

Save the following lines to a batch file (e.g. screensaver.bat) anywhere you like, then run it:

 


rem Create a folder on the desktop to hold the images
mkdir "%userprofile%\desktop\CK_DESK"

rem The Windows Spotlight images live here (the files have no extension)
cd /d "%LocalAppData%\Packages\Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy\LocalState\Assets"

rem Copy everything to the new folder, then give the copies a .jpg extension
copy *.* "%userprofile%\desktop\CK_DESK"
cd /d "%userprofile%\desktop\CK_DESK"
ren *.* *.jpg

 


Git-related stuff

#gitpull.sh


#!/bin/bash

source /etc/profile
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:~/bin
export PATH
export TERM=${TERM:-dumb}

#----------------------------------------
# Please set the variable in the following section
# Set the working directories, separated by commas
# e.g. path="/root/test/path1,/root/test/path2"
path=""
#----------------------------------------

# Do not edit the following section

# Check if user is root
# Check that the user is root
[ "$(id -u)" != "0" ] && { echo "Error: You must run this script as root." >&2; exit 1; }

# Check that the directory path has been set
if [[ "${path}" = "" ]]; then
    echo "Error: You must set the 'path' variable to the correct directory path. Exiting." >&2
    exit 1
fi

# Check that the git command exists
if ! [ -x "$(command -v git)" ]; then
    echo "Error: git does not seem to be installed. Exiting." >&2
    exit 1
fi

# Locate the git binary
git_path=$(command -v git)

# Split the comma-separated path list into an array
OLD_IFS="$IFS"
IFS=","
dir=($path)
IFS="$OLD_IFS"

echo "Starting to execute this script."

for every_dir in "${dir[@]}"
do
    cd "${every_dir}" || { echo "Cannot cd into ${every_dir}, skipping." >&2; continue; }
    work_dir=$(pwd)
    echo "---------------------------------"
    echo "Pulling in ${work_dir}"
    ${git_path} pull
    echo "---------------------------------"
done

echo "All done, thanks for using this script."

 

#gitwatch.sh


#!/usr/bin/env bash
#
# gitwatch - watch file or directory and git commit all changes as they happen
#
# Copyright (C) 2013 Patrick Lehner
# with modifications and contributions by:
# - Matthew McGowan
# - Dominik D. Geyer
#
#############################################################################
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#############################################################################
#
# Idea and original code taken from http://stackoverflow.com/a/965274
# (but heavily modified by now)
#
# Requires the command 'inotifywait' to be available, which is part of
# the inotify-tools (See https://github.com/rvoicilas/inotify-tools ),
# and (obviously) git.
# Will check the availability of both commands using the `which` command
# and will abort if either command (or `which`) is not found.
#

REMOTE=""
BRANCH=""
SLEEP_TIME=2
DATE_FMT="+%Y-%m-%d %H:%M:%S"
COMMITMSG="Scripted auto-commit on change (%d) by gitwatch.sh"

shelp () { # Print a message about how to use this script
echo "gitwatch - watch file or directory and git commit all changes as they happen"
echo ""
echo "Usage:"
echo "${0##*/} [-s ] [-d ] [-r  [-b ]]"
echo " [-m ] "
echo ""
echo "Where  is the file or folder which should be watched. The target needs"
echo "to be in a Git repository, or in the case of a folder, it may also be the top"
echo "folder of the repo."
echo ""
echo " -s  after detecting a change to the watched file or directory,"
echo " wait  seconds until committing, to allow for more"
echo " write actions of the same batch to finish; default is 2sec"
echo " -d  the format string used for the timestamp in the commit"
echo " message; see 'man date' for details; default is "
echo " \"+%Y-%m-%d %H:%M:%S\""
echo " -r  if defined, a 'git push' to the given  is done after"
echo " every commit"
echo " -b  the branch which should be pushed automatically;"
echo " - if not given, the push command used is 'git push ',"
echo " thus doing a default push (see git man pages for details)"
echo " - if given and"
echo " + repo is in a detached HEAD state (at launch)"
echo " then the command used is 'git push  '"
echo " + repo is NOT in a detached HEAD state (at launch)"
echo " then the command used is"
echo " 'git push  :' where"
echo "  is the target of HEAD (at launch)"
echo " if no remote was define with -r, this option has no effect"
echo " -m  the commit message used for each commit; all occurences of"
echo " %d in the string will be replaced by the formatted date/time"
echo " (unless the  specified by -d is empty, in which case %d"
echo " is replaced by an empty string); the default message is:"
echo " \"Scripted auto-commit on change (%d) by gitwatch.sh\""
echo ""
echo "As indicated, several conditions are only checked once at launch of the"
echo "script. You can make changes to the repo state and configurations even while"
echo "the script is running, but that may lead to undefined and unpredictable (even"
echo "destructive) behavior!"
echo "It is therefore recommended to terminate the script before changin the repo's"
echo "config and restarting it afterwards."
echo ""
echo "By default, gitwatch tries to use the binaries \"git\" and \"inotifywait\","
echo "expecting to find them in the PATH (it uses 'which' to check this and will"
echo "abort with an error if they cannot be found). If you want to use binaries"
echo "that are named differently and/or located outside of your PATH, you can define"
echo "replacements in the environment variables GW_GIT_BIN and GW_INW_BIN for git"
echo "and inotifywait, respectively."
}

stderr () {
echo "$1" >&2
}

while getopts b:d:hm:p:r:s: option # Process command line options
do
case "${option}" in
b) BRANCH=${OPTARG};;
d) DATE_FMT=${OPTARG};;
h) shelp; exit;;
m) COMMITMSG=${OPTARG};;
p|r) REMOTE=${OPTARG};;
s) SLEEP_TIME=${OPTARG};;
esac
done

shift $((OPTIND-1)) # Shift the input arguments, so that the input file (last arg) is $1 in the code below

if [ $# -ne 1 ]; then # If no command line arguments are left (that's bad: no target was passed)
shelp # print usage help
exit # and exit
fi

is_command () { # Tests for the availability of a command
which $1 &>/dev/null
}

# if custom bin names are given for git or inotifywait, use those; otherwise fall back to "git" and "inotifywait"
if [ -z "$GW_GIT_BIN" ]; then GIT="git"; else GIT="$GW_GIT_BIN"; fi
if [ -z "$GW_INW_BIN" ]; then INW="inotifywait"; else INW="$GW_INW_BIN"; fi

# Check availability of selected binaries and die if not met
for cmd in "$GIT" "$INW"; do
is_command $cmd || { stderr "Error: Required command '$cmd' not found." ; exit 1; }
done
unset cmd

# Expand the path to the target to absolute path
IN=$(readlink -f "$1")

if [ -d "$1" ]; then # if the target is a directory
    TARGETDIR=$(sed -e "s/\/*$//" <<< "$IN") # dir to CD into before using git commands: trim trailing slash, if any
    INCOMMAND="$INW -qmr -e close_write,move,delete,create $TARGETDIR" # construct inotifywait command: watch the whole directory tree
    GIT_ADD_ARGS="." # add "." (CWD) recursively to index
    GIT_COMMIT_ARGS="-a" # add -a switch to "commit" call just to be sure
elif [ -f "$1" ]; then # if the target is a single file
    TARGETDIR=$(dirname "$IN") # dir to CD into before using git commands: extract from file name
    INCOMMAND="$INW -qm -e close_write,move,delete $IN" # construct inotifywait command: watch only the single file
    GIT_ADD_ARGS="$IN" # add only the selected file to index
    GIT_COMMIT_ARGS="" # no need to add anything more to "commit" call
else
    stderr "Error: The target is neither a regular file nor a directory."
    exit 1
fi

# construct the push command, if a remote was given
if [ -n "$REMOTE" ]; then # are we pushing to a remote?
    if [ -z "$BRANCH" ]; then # was a branch given?
        PUSH_CMD="$GIT push $REMOTE" # branch not set: do a default push
    else
        HEADREF=$($GIT symbolic-ref HEAD 2> /dev/null) # check whether HEAD is detached
        if [ $? -eq 0 ]; then # HEAD is not detached
            PUSH_CMD="$GIT push $REMOTE $(sed "s_^refs/heads/__" <<< "$HEADREF"):$BRANCH"
        else # HEAD is detached
            PUSH_CMD="$GIT push $REMOTE $BRANCH"
        fi
    fi
else
    PUSH_CMD="" # if no remote is selected, make sure the push command is empty
fi

# main program loop: wait for changes and commit them
while true; do
    $INCOMMAND # wait for changes
    sleep $SLEEP_TIME # wait some more seconds to give apps time to write out all changes
    if [ -n "$DATE_FMT" ]; then
        FORMATTED_COMMITMSG="$(sed "s/%d/$(date "$DATE_FMT")/" <<< "$COMMITMSG")" # splice the formatted date-time into the commit message
    fi
    cd "$TARGETDIR" # CD into the right dir
    STATUS=$($GIT status -s)
    if [ -n "$STATUS" ]; then # only commit if status shows tracked changes.
        $GIT add $GIT_ADD_ARGS # add file(s) to index
        $GIT commit $GIT_COMMIT_ARGS -m"$FORMATTED_COMMITMSG" # construct commit message and commit

        if [ -n "$PUSH_CMD" ]; then $PUSH_CMD; fi
    fi
done
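
A typical invocation (the repository path, remote and branch names below are only examples) would be:

# watch a working copy, auto-commit every change, and push to origin/master
./gitwatch.sh -r origin -b master ~/projects/my-notes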

 

 

 

Ref.:

https://stackoverflow.com/questions/4414140/git-auto-pull-using-cronjob

https://github.com/gitwatch/gitwatch/blob/master/gitwatch.sh

 

Using the gdrive command line tool to upload files to Google Drive

I’ve been looking at several Linux projects recently, and you’ll need to be sure you are backing them up. I wanted a quick and easy way to upload a compressed copy of my project install to Google Drive, and I found it with gdrive.

gdrive, not to be mistaken for Google Drive itself, is a command line tool by Petter Rasmussen for Linux, Windows and OSX. Just what I needed. It’s proved itself so useful that I can’t imagine how I lived without it.

Linux

  1. SSH onto your Linux box and download the Linux version of gdrive from GitHub.
cd ~
wget "https://docs.google.com/uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE&export=download"


2. You should see a file in your home directory called something like uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE. Rename this file to gdrive.

# mv uc\?id\=0B3X9GlR6EmbnWksyTEtCM0VfaFE gdrive

3. Assign this file executable rights.

# chmod +x gdrive

4. Install the file to your usr folder.

# sudo install gdrive /usr/local/bin/gdrive

5. You’ll need to tell Google Drive to allow this program to connect to your account. To do this, run the gdrive program with any parameter, copy the URL it gives you into your browser, and then paste the response code that Google gives you back into your SSH window.

Run the following line:

# gdrive about

6. YOU ARE DONE! Now you can upload files as required.

# gdrive upload backups.tar.gz

 

Tips.


# gdrive upload --parent <folder-id> -r <directory>
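
Putting the pieces together, a nightly backup job could look roughly like the sketch below; the project path, archive name and the Google Drive folder ID are placeholders, not real values.

#!/bin/sh
# backup_to_gdrive.sh - compress a project and upload it to Google Drive (sketch)
DATE=$(date +%Y%m%d)
ARCHIVE=/tmp/project-$DATE.tar.gz
tar -czf "$ARCHIVE" /home/me/project               # placeholder project path
gdrive upload --parent 0Bxxxxxxxxxxxx "$ARCHIVE"   # placeholder folder ID
rm -f "$ARCHIVE"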

 

Ref.:

https://olivermarshall.net/how-to-upload-a-file-to-google-drive-from-the-command-line/

https://github.com/prasmussen/gdrive

 

Add periodic tasks to Ubuntu using cron/crontab

Cron

This file is an introduction to cron; it covers the basics of what cron does
and how to use it.

What is cron?

Cron is the name of a program that enables Unix users to execute commands or
scripts (groups of commands) automatically at a specified time/date. It is
normally used for sysadmin commands, like makewhatis, which builds a
search database for the man -k command, or for running a backup script,
but it can be used for anything. A common use for it today is connecting to
the internet and downloading your email.

This file will look at Vixie Cron, a version of cron authored by Paul Vixie.

How to start Cron

Cron is a daemon, which means that it only needs to be started once, and will
lie dormant until it is required. A web server is a daemon: it stays dormant
until it gets asked for a web page. The cron daemon, or crond, stays dormant
until a time specified in one of the config files, or crontabs.

On most Linux distributions crond is automatically installed and entered into 
the start up scripts. To find out if it's running do the following:

cog@pingu $ ps aux | grep crond
root       311  0.0  0.7  1284  112 ?        S    Dec24   0:00 crond
cog       8606  4.0  2.6  1148  388 tty2     S    12:47   0:00 grep crond

The top line shows that crond is running; the bottom line is the search
we just ran.

If it's not running then either you killed it since the last time you rebooted,
or it wasn't started.

To start it, just add the line crond to one of your start up scripts. The
process automatically goes into the background, so you don't have to force
it with &. Cron will be started the next time you reboot. To run it without
rebooting, just type crond as root:

root@pingu # crond

Many daemons (e.g. httpd and syslogd) need to be restarted after their
config files have been changed, so that the program has a chance to reload
them. Vixie Cron will automatically reload the files after they have been
edited with the crontab command. Some cron versions reload the files every
minute, and some require restarting, but Vixie Cron just reloads the files
if they have changed.

Using cron

There are a few different ways to use cron (surprise, surprise). 

In the /etc directory you will probably find some sub directories called 
'cron.hourly', 'cron.daily', 'cron.weekly' and 'cron.monthly'. If you place 
a script into one of those directories it will be run either hourly, daily, 
weekly or monthly, depending on the name of the directory. 

If you want more flexibility than this, you can edit a crontab (the name 
for cron's config files). The main config file is normally /etc/crontab.
On a default RedHat install, the crontab will look something like this:

root@pingu # cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

The first part is almost self explanatory; it sets the variables for cron.

SHELL is the 'shell' cron runs under. If unspecified, it will default to 
the entry in the /etc/passwd file.

PATH contains the directories which will be in the search path for cron,
e.g. if you've got a program 'foo' in the directory /usr/cog/bin, it might
be worth adding /usr/cog/bin to the path, as it will save you having to use
the full path to 'foo' every time you want to call it.

MAILTO is who gets mailed the output of each command. If a command cron is 
running has output (e.g. status reports, or errors), cron will email the output 
to whoever is specified in this variable. If no one is specified, then the
output will be mailed to the owner of the process that produced the output.

HOME is the home directory that is used for cron. If unspecified, it will 
default to the entry in the /etc/passwd file.

Now for the more complicated second part of a crontab file.
An entry in cron is made up of a series of fields, much like the /etc/passwd
file is, but in the crontab they are separated by a space. There are normally
seven fields in one entry. The fields are:

minute hour dom month dow user cmd

minute	This controls what minute of the hour the command will run on,
	 and is between '0' and '59'
hour	This controls what hour the command will run on, and is specified in
         the 24 hour clock, values must be between 0 and 23 (0 is midnight)
dom	This is the Day of Month, that you want the command run on, e.g. to
	 run a command on the 19th of each month, the dom would be 19.
month	This is the month a specified command will run on; it may be specified
	 numerically (1-12), or as the name of the month (e.g. May)
dow	This is the Day of Week that you want a command to be run on, it can
	 also be numeric (0-7) or as the name of the day (e.g. sun).
user	This is the user who runs the command.
cmd	This is the command that you want run. This field may contain 
	 multiple words or spaces.

If you don't wish to specify a value for a field, just place a * in the 
field.

e.g.
01 * * * * root echo "This command is run at one min past every hour"
17 8 * * * root echo "This command is run daily at 8:17 am"
17 20 * * * root echo "This command is run daily at 8:17 pm"
00 4 * * 0 root echo "This command is run at 4 am every Sunday"
00 4 * * Sun root echo "So is this"
42 4 1 * * root echo "This command is run 4:42 am every 1st of the month"
01 * 19 07 * root echo "This command is run hourly on the 19th of July"

Notes:

Under dow 0 and 7 are both Sunday.

If both the dom and dow are specified, the command will be executed when
either of the events happen. 
e.g.
* 12 16 * Mon root cmd
Will run cmd at midday every Monday and every 16th, and will produce the 
same result as both of these entries put together would:
* 12 16 * * root cmd
* 12 * * Mon root cmd

Vixie Cron also accepts lists in the fields. Lists can be in the form, 1,2,3 
(meaning 1 and 2 and 3) or 1-3 (also meaning 1 and 2 and 3).
e.g.
59 11 * * 1,2,3,4,5 root backup.sh
Will run backup.sh at 11:59 Monday, Tuesday, Wednesday, Thursday and Friday,
as will:
59 11 * * 1-5 root backup.sh 

Cron also supports 'step' values.
A value of */2 in the dom field would mean the command runs every two days
and likewise, */5 in the hours field would mean the command runs every 
5 hours.
e.g. 
* 12 10-16/2 * * root backup.sh
is the same as:
* 12 10,12,14,16 * * root backup.sh

*/15 9-17 * * * root connection.test
Will run connection.test every 15 mins between the hours of 9am and 5pm

Lists can also be combined with each other, or with steps:
* 12 1-15,17,20-25 * * root cmd
Will run cmd every midday between the 1st and the 15th as well as the 20th 
and 25th (inclusive) and also on the 17th of every month.
* 12 10-16/2 * * root backup.sh
is the same as:
* 12 10,12,14,16 * * root backup.sh

When using the names of weekdays or months, it isn't case sensitive, but only
the first three letters should be used, e.g. Mon, sun or Mar, jul.

Comments are allowed in crontabs, but they must be preceded with a '#', and
must be on a line by themselves.


Multiuser cron

As Unix is a multiuser OS, some of its programs have to be able to support
multiple users; cron is one of these. Each user can have their own crontab
file, which can be created/edited/removed by the command crontab. This
command creates an individual crontab file and although this is a text file,
as the /etc/crontab is, it shouldn't be edited directly. The crontab file is
often stored in /var/spool/cron/crontabs/ (Unix/Slackware/*BSD), 
/var/spool/cron/ (RedHat) or /var/cron/tabs/ (SuSE), 
but might be kept elsewhere depending on what Un*x flavor you're running.

To edit (or create) your crontab file, use the command crontab -e, and this
will load up the editor specified in the environment variables EDITOR or 
VISUAL, to change the editor invoked on Bourne-compliant shells, try: 
cog@pingu $ export EDITOR=vi
On C shells:
cog@pingu $ setenv EDITOR vi
You can of course substitute vi for the text editor of your choice.

Your own personal crontab follows exactly the same format as the main
/etc/crontab file does, except that you need not specify the MAILTO 
variable, as this entry defaults to the process owner, so you would be mailed
the output anyway, but if you so wish, this variable can be specified.
You also need not have the user field in the crontab entries. e.g.

min hr dom month dow cmd

Once you have written your crontab file and exited the editor, the crontab
command will check the syntax of the file and give you a chance to fix any errors.

If you want to write your crontab without using the crontab command, you can
write it in a normal text file, using your editor of choice, and then use the
crontab command to replace your current crontab with the file you just wrote.
e.g. if you wrote a crontab called cogs.cron.file, you would use the cmd

cog@pingu $ crontab cogs.cron.file

to replace your existing crontab with the one in cogs.cron.file.

You can use 

cog@pingu $ crontab -l 

to list your current crontab, and

cog@pingu $ crontab -r

will remove (i.e. delete) your current crontab.

Privileged users can also change other users' crontabs with:

root@pingu # crontab -u username

and then following it with either the name of a file to replace the 
existing user's crontab, or one of the -e, -l or -r options.

According to the documentation the crontab command can be confused by the
su command, so if you are running a su'ed shell, it is recommended that you
use the -u option anyway.

Controlling Access to cron

Cron has a built-in feature that allows you to specify who may, and who
may not, use it. It does this by the use of the /etc/cron.allow and /etc/cron.deny
files. These files work the same way as the allow/deny files for other
daemons do. To stop a user from using cron, just put their name in cron.deny; to
allow a user, put their name in cron.allow. If you wanted to prevent all
users from using cron, you could add the line ALL to the cron.deny file:

root@pingu # echo ALL >>/etc/cron.deny

If you want user cog to be able to use cron, you would add the line cog 
to the cron.allow file:

root@pingu # echo cog >>/etc/cron.allow

If there is neither a cron.allow nor a cron.deny file, then the use of cron
is unrestricted (i.e. every user can use it).  If you were to put the name of
some users into the cron.allow file, without creating a cron.deny file, it
would have the same effect as creating a cron.deny file with ALL in it.
This means that any subsequent users that require cron access should be 
put in to the cron.allow file.  

Output from cron

As I've said before, the output from cron gets mailed to the owner of the
process, or the person specified in the MAILTO variable, but what if you
don't want that? If you want to mail the output to someone else, you can
just pipe the output to the command mail.
e.g.
 
cmd | mail -s "Subject of mail" user

If you wish to mail the output to someone not located on the machine, then in
the above example replace user with the email address of the person who
should receive the output.

If you have a command that is run often, and you don't want to be emailed 
the output every time, you can redirect the output to a log file (or 
/dev/null, if you really don't want the output).
e.g.

cmd >> log.file

Notice we're using two > signs so that the output appends to the log file and
doesn't clobber previous output.
The above example only redirects the standard output, not the standard error;
if you want all output stored in the log file, this should do the trick:

cmd >> logfile 2>&1

You can then set up a cron job that mails you the contents of the file at
specified time intervals, using the cmd:

mail -s "logfile for cmd"  

Now you should be able to use cron to automate things a bit more.
A future file going into more detail, explaining the differences between 
the various different crons and with more worked examples, is planned.


Additional Reference:

Man pages: cron(8) crontab(5) crontab(1)

Book: _Running Linux_ (O'Reilly ISBN: 1-56592-469-X)

Managing processes with cron

cron is a program that can run jobs on a wide variety of schedules, down to one-minute granularity. Schedules are registered through crontab, using the five-field syntax described in the previous section. If you need a schedule that runs every minute, just write five asterisks: * * * * *.

The basic idea is this: every minute, check whether the target process is alive, and if it has died, restart it. For that we need a way to check whether a process is running, and the most suitable tool is the pgrep command.

Create a shell script like the following:

#!/bin/sh
MYPROC="/path/to/process -i options"
/usr/bin/pgrep -f -x "$MYPROC" > /dev/null 2>&1 || $MYPROC

Save it as myproc.sh. The script uses pgrep to check whether the process defined in MYPROC exists, and if not, runs MYPROC to start it. pgrep's -f and -x options mean that the full command line (-f, full) must match exactly (-x, exact).

The "> /dev/null 2>&1" part redirects stderr (2) to stdout (1) and sends everything to the null device, discarding any output. If the managed process does its own logging to a file, this is all you need.

The "||" operator in the script runs the following command only if the preceding command returned a non-zero (i.e. failure) status. If the process is dead, pgrep fails with a non-zero exit code, so the command after || is executed. The counterpart of || is the && operator.

If you manage several processes, keep adding lines of the same form to the script.

Finally, register the script in crontab. Run "crontab -e" and add the following schedule:

*  *  *  *  *   /bin/sh  /home/me/myproc.sh

That's all there is to it. Wait a minute to confirm the process comes up, then kill it by hand and check that it is restarted within a minute.

If you want to kill the process every day at 02:00, you can register a similar crontab entry using the pkill command, as sketched below.
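
A minimal sketch of that entry, assuming the same process string used in myproc.sh above:

# stop the managed process every day at 02:00 (process string is a placeholder)
0 2 * * * /usr/bin/pkill -f -x "/path/to/process -i options"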

Since almost every Linux and Unix system supports cron, this gives you a simple, uniform way to keep processes running unattended.

1. Crontab basics (crontab basic)

First, the basic crontab usage. In a Linux shell, type:

$ crontab -e

An editor opens; this is where the crontab is configured. Enter your cron entries here, then save and install the crontab (in vi, type a colon (:) followed by wq).

To see what is currently in your crontab, type:

$ crontab -l

The crontab contents are printed to standard output, just as if you had read a file with cat. And if (rarely) you want to delete your whole crontab:

$ crontab -r

That covers setting up, listing and removing a crontab. Now let's register one cron job as an example.

Run crontab -e and enter the following; save with :wq as in vi.

* * * * * ls -al

Five asterisks, followed by a command. That is the basic form. You can use plain Linux commands as well as shell scripts; the examples here use shell scripts.

Five asterisks means "run every minute". Let's look at what each asterisk stands for. (Before that, remove the cron job you just entered.)

2. Choosing the schedule

*             *            *          *             *
minute(0-59)  hour(0-23)   day(1-31)  month(1-12)   weekday(0-7)

Each position sets a different part of the schedule, in the order minute, hour, day of month, month, day of week. Instead of an asterisk you can enter any value within the range shown in parentheses.

In the weekday field, 0 and 7 are both Sunday; 1 is Monday and 6 is Saturday.

5. Cron logging (cron logging)

Running jobs periodically with crontab is convenient, but sometimes you also want a log of what each run did. In that case, try this:

* * * * * /home/script/test.sh > /home/script/test.sh.log 2>&1

With this entry, test.sh.log is rewritten every minute, so you can see how the last run went. If you drop the 2>&1, only the script's standard output is captured; see the earlier cron notes for what 2>&1 means.

If the job runs frequently and its history has to be kept, append to the log instead:

* * * * * /home/script/test.sh >> /home/script/test.sh.log 2>&1

The log will then keep accumulating. An ever-growing log eventually affects performance, so empty it or start a new file once in a while.

Conversely, for a cron job that needs no log at all:

* * * * * /home/script/test.sh > /dev/null 2>&1

6. Crontab backup (crontab backup)

If you ever remove your crontab by mistake (crontab -r) or wipe the crontab directory, your existing cron entries are gone, which is painful. So back up your crontab periodically, for example like this:

crontab -l > /home/bak/crontab_bak.txt

This saves the crontab contents to a text file. Can this be automated as well?

50 23 * * * crontab -l > /home/bak/crontab_bak.txt

does exactly that: it backs up the crontab every day at 11:50 pm. 🙂

That was a quick look at crontab.

Verify that a cron job has completed:

# grep scriptname /var/log/syslog
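
On Ubuntu, cron also logs every job it starts to syslog with the tag CRON, so a slightly broader check is possible (the script name below is a placeholder):

# show the last few cron invocations of a particular script
grep CRON /var/log/syslog | grep myscript.sh | tail -n 5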

For git/repo jobs, you need to add the full path of the git and repo binaries:

*/1 * * * * su -s /bin/sh nobody -c 'cd ~heilee/www && git pull -q origin master' >> ~/git.log

01 9 * * 1,2,3,4,5 /home/jiafei427/proteus_work/ck_pivi_slave.sh

Periodically delete log files:

  1. Delete log files older than seven days (a variant using find's -delete action is sketched after these steps):
    # find /usr/local/apache-tomcat-7.0.34/logs -mtime +7 -name catalina\* -exec rm {} \;
    
  2. Create a shell script to delete all related logs:
    #!/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin
    
    find /usr/local/apache-tomcat-7.0.34/logs -mtime +7 -name catalina\* -exec rm {} \;
    find /usr/local/apache-tomcat-7.0.34/logs -mtime +31 -name localhost_access_log\* -exec rm {} \;
    find /usr/local/apache-tomcat-7.0.34/logs -mtime +14 -name host-manager\* -exec rm {} \;
    find /usr/local/apache-tomcat-7.0.34/logs -mtime +14 -name manager\* -exec rm {} \;
    
  3. Add it to Crontab
    # crontab -e
    # 0   1  *  *  *    /root/del_logs.sh
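
For reference, the same cleanup can be written a bit more compactly with find's -delete action; this is only a sketch, using the same paths and retention periods as above:

#!/bin/sh
# del_logs.sh - remove old Tomcat logs (sketch using find -delete)
LOGDIR=/usr/local/apache-tomcat-7.0.34/logs
find "$LOGDIR" -mtime +7  -name 'catalina*'             -delete
find "$LOGDIR" -mtime +31 -name 'localhost_access_log*' -delete
find "$LOGDIR" -mtime +14 -name 'host-manager*'         -delete
find "$LOGDIR" -mtime +14 -name 'manager*'              -delete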
    

Ref.:

http://www.unixgeeks.org/security/newbie/unix/cron-1.html

https://soooprmx.com/archives/6786

http://btsweet.blogspot.com/2014/06/cron-manage-process.html

https://jdm.kr/blog/2

https://docs.oracle.com/cd/E24846_01/html/E23088/sysrescron-24589.html

https://docs.oracle.com/cd/E38901_01/html/E23088/sysrescron-1.html

http://btsweet.blogspot.com/2014/01/delete-old-logs.html

Configuring SendMail for Gmail relay in Ubuntu 16.04

  • Pre-req: I am using Ubuntu 16.04 LTS.
  • Pre-req: Google Gmail account. I am using Google Apps for Business with a dedicated mail relay account.

# apt-get install sendmail mailutils
# nano /etc/mail/gmailauth

In the auth file that you just nano’d, include this line, replacing the address and password with your own Gmail relay account details:

AuthInfo: "U:root" "I:your.account@gmail.com" "P:password"

CTRL+O to write changes (save file) and CTRL+X to exit.

cd to /etc/mail, then:


# makemap hash gmailauth < gmailauth
# cd ..
# nano sendmail.mc

In the sendmail.mc file, CTRL+W to find “MAILER”. Press enter. Immediately above “MAILER_DEFINITIONS”, copy and paste the following:

define(`SMART_HOST',`[smtp.gmail.com]')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
define(`confAUTH_OPTIONS', `A p')dnl
TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
FEATURE(`authinfo',`hash -o /etc/mail/gmailauth.db')dnl

Then:

# make -C /etc/mail
# sudo /etc/init.d/sendmail reload

Configure /etc/hosts file: vi /etc/hosts
Make sure the line looks like this: 127.0.0.1 localhost yourhostname

Run Sendmail’s config and answer ‘Y’ to everything:

# sudo sendmailconfig

Switch away from Linux for a minute, fire up your browser, and switch to your email relay account on Gmail.com. Once logged in, go to this URL: https://www.google.com/settings/security/lesssecureapps. You’ll need to enable less secure mode (or at least I had to in order for the relay to work with Google Apps for Business). Once enabled, switch back over to your Linux config.

Run the following and modify appropriately the email address you want to send to:

# echo "[email protected]" | mail -s "This is just a test" someone@gmail.com
OR
# sendmail user@example.com  < /tmp/email.txt

At this point, you should have an email in your inbox (if you sent it to yourself) from your Gmail relay account. If not, check /var/log/mail.log (e.g. with nano). I nailed this on my first attempt, so feel free to send me your logs for troubleshooting.

 

How to Test the Sendmail Command On Linux

To quickly test whether the sendmail command is working correctly, so that you can then use it in shell scripts, from the command line, or even from PHP scripts (PHP supports sendmail for sending email; you can set the sendmail path in your php.ini), you can issue the command below on your UNIX or Linux system:

echo "Subject: sendmail test" | sendmail -v my@email.com

my@email.com is obviously the e-mail address you want the test email to be sent to. This sendmail command line example will send a blank email with the subject “sendmail test” to my@email.com if the test is successful. You can also send longer e-mails containing a subject, body and additional headers if you want to, but just to test if sendmail works that’s usually not required. Still, here is how you can do that:

1.) Create a file called mail.txt (or anything you like) in ~/mail.txt with vim or nano or your preferred text editor

2.) Paste the following content to it, but of course adjusting the email addresses, as those are just sendmail command examples:

To: my@email.com
Subject: sendmail test two
From: me@myserver.com

And here goes the e-mail body, test test test..

3.) At last we send the e-mail template we just created with: sendmail -vt < ~/mail.txt
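
If you want to do this from a shell script (as mentioned above) without a temporary file, a here-document works too; the addresses below are placeholders:

#!/bin/sh
# send a small test mail without a temporary file (addresses are placeholders)
sendmail -t <<EOF
To: my@email.com
Subject: sendmail test from a script
From: me@myserver.com

Test body sent from a shell script.
EOF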

That’s it – you can now test sendmail from the command line and even send full e-mails, including headers, from a Linux/UNIX shell. Below is an example of how the simple sendmail test might look on a live system:

[Screenshot: sample output of the sendmail test]

 

 

 

 

Ref.:

https://developernote.com/2017/10/configuring-sendmail-with-gmail-relay-on-ubuntu-16-04/
https://www.johndball.com/configuring-sendmail-for-gmail-relay/

https://tecadmin.net/ways-to-send-email-from-linux-command-line/

https://clients.javapipe.com/knowledgebase/132/How-to-Test-Sendmail-From-Command-Line-on-Linux.html

Ubuntu partition

 

 

swap   32 GB (double the size of memory)
/      50 GB, primary, ext4
/boot  500 MB, primary, ext4 (booting is faster if this partition is kept small)
/var   5 GB, logical, ext4
/tmp   5 GB, logical, ext4
/home  rest of the hard disk, logical, ext4

* Create an EFI partition (about 300 MB) if the BIOS supports EFI only, or if you have Windows 8 installed.

 

Print out the libraries a binary was linked against

ldd binary-exec

Example:

~$ ldd /bin/bash
    linux-gate.so.1 =>  (0x00606000)
    libncurses.so.5 => /lib/libncurses.so.5 (0x00943000)
    libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0x00c5d000)
    libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0x003e9000)
    /lib/ld-linux.so.2 (0x00a41000)
but this won’t work if you are running ldd on a different host than the one the binary was built for.
In that case you can try the following.

What it directly needs:

readelf -d APP | grep NEEDED

ldd as mentioned elsewhere will show all direct and indirect libs – everything it needs at runtime. This may not be a complete list, since you may dynamically open things with dlopen(), but this should work 99% of the time.
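
To also catch libraries pulled in at runtime via dlopen(), one rough option (assuming strace is installed and you can run the program) is to watch which .so files it actually opens:

# list shared objects the program opens while it runs (Ctrl-C to stop)
strace -f -e trace=open,openat ./your-app 2>&1 | grep '\.so'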

ld and libtool are used at compile/link time. They aren’t useful once you have an app.

EDIT: I can see by later answers that you were asking about OSX, but I want to add to my answer on Linux tools:

One thing I forgot to mention quite a while ago: you asked about versions. Neither ldd nor readelf will answer the "what version" question. They will tell you the filename of the library the binary is looking for, and the naming convention may carry some version info, but nothing enforces this. Symbols may be versioned, and you would have to muck about at an even lower level, with nm, to see these.
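
For a rough look at the version requirements recorded in the binary itself, readelf and objdump (both part of binutils) can help; /bin/bash is just an example target:

# direct library dependencies only
readelf -d /bin/bash | grep NEEDED

# symbol-version requirements (e.g. GLIBC_2.4), grouped per library
objdump -p /bin/bash | sed -n '/Version References/,$p'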

 

Ref.

https://superuser.com/questions/239590/find-libraries-a-binary-was-linked-against