SCP-related tips and issue resolution

Today, I had trouble sending a file to a remote target board with the following command:


jiafei427@CKUBU:~/tmp$ scp /home/jiafei427/tmp/slrclub_cert.crt root@10.177.247.35:/var/tmp/usr/ck
sh: scp: cannot execute - No such file or directory
lost connection

Damn, I had no clue what that meant, and googled for what felt like forever.

I used the "-vvv" option, but saw nothing I could recognize.

 

Finally I found the solution myself, and it was actually a bit too easy.

The solution is simply to put an "scp" binary onto the target board.

scp needs the binary on both the client and the server side. 😦

(I didn't know that…)
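A quick way to see what the error means: the shell spawned on the remote side simply cannot find an scp binary. The same presence check, run locally (the remote address in the comment is just the one from my session):

```shell
# Check whether an scp binary is on the PATH of this machine
if command -v scp >/dev/null 2>&1; then
    status=present
else
    status=missing
fi
echo "scp is $status on this machine"

# Against the target board it would be something like:
#   ssh root@10.177.247.35 'command -v scp || echo "scp missing on target"'
```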

 

Here are more examples of scp usage:

What is Secure Copy?

scp allows files to be copied to, from, or between different hosts. It uses ssh for data transfer and provides the same authentication and same level of security as ssh.

Examples

Copy the file “foobar.txt” from a remote host to the local host

$ scp your_username@remotehost.edu:foobar.txt /some/local/directory

Copy the file “foobar.txt” from the local host to a remote host

$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy the directory “foo” from the local host to a remote host’s directory “bar”

$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar

Copy the file “foobar.txt” from remote host “rh1.edu” to remote host “rh2.edu”

$ scp your_username@rh1.edu:/some/remote/directory/foobar.txt \
your_username@rh2.edu:/some/remote/directory/

Copy the files "foo.txt" and "bar.txt" from the local host to your home directory on the remote host

$ scp foo.txt bar.txt your_username@remotehost.edu:~

Copy the file “foobar.txt” from the local host to a remote host using port 2264

$ scp -P 2264 foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy multiple files from the remote host to your current directory on the local host

$ scp your_username@remotehost.edu:/some/remote/directory/\{a,b,c\} .
$ scp your_username@remotehost.edu:~/\{foo.txt,bar.txt\} .

scp Performance

Older versions of scp defaulted to the Triple-DES cipher, and switching to the Blowfish cipher was a common way to increase speed, done with the option -c blowfish on the command line. (Note that modern OpenSSH defaults to an AES cipher and has removed Blowfish entirely, so this tip only applies to legacy systems.)

$ scp -c blowfish some_file your_username@remotehost.edu:~

It is often suggested that the -C option for compression should also be used to increase speed. The effect of compression, however, will only significantly increase speed if your connection is very slow. Otherwise it may just be adding extra burden to the CPU. An example of using blowfish and compression:

$ scp -c blowfish -C local_file your_username@remotehost.edu:~


Ref.

http://www.hypexr.org/linux_scp_help.php



Understanding Libraries

In Unix and Linux environments, libraries fall broadly into three categories.

– static libraries
– shared libraries
– dynamically loaded libraries

A static library is a file ending in .a that gets included in the executable at build time. To build a program that uses a static library, you need the library itself plus a header file declaring the functions it exports.

A shared library is a file ending in .so that is not included in the executable at build time. So when functionality shared by several programs is packaged as a shared library, disk space is saved, and if the shared library contains a bug, the problem can be fixed by redistributing only the shared library. Also, once a shared library has been loaded into memory by a process that uses it, any other process that starts later and uses the same shared library shares the already-loaded copy, so memory usage is saved as well.
To build a program that uses a shared library, you need the shared library itself plus a header file declaring the functions it exports. When the executable actually runs on the system, the program linker/loader checks for the library in /lib, /usr/lib, the paths in LD_LIBRARY_PATH, and so on, and uses it. As mentioned above, at link time (when building an application that uses the shared library) the shared library's header and the library (.so) file are needed. Why is the library file needed when it is not included in the executable? Because the name of the library that the application must look up at run time is recorded inside the shared library file. This is called the soname, and since the soname can differ from the library's file name, the two must be distinguished. When compiling a shared library, the -fPIC (Position Independent Code) option is required, as below.

gcc -fPIC -c MyLibrary.c
gcc -shared -Wl,-soname,libMyLibrary.so.1 -o libMyLibrary.so.1.2.3 MyLibrary.o

A dynamically loaded library is loaded dynamically during execution; it is pulled in not at program start but at the moment a function contained in the library is used. A program that uses dynamic loading therefore has the advantage of starting faster than one using static or shared libraries. A dynamically loaded library is also queried directly at run time for the information needed. So at the time you build an application that uses dynamic loading, you need neither a header file for the library nor even the library itself. On Linux an ordinary shared library can be loaded dynamically, so shared libraries and dynamically loaded libraries are effectively the same thing, distinguished only by how they are used. That is, a .so file can be used either as a shared library or as a dynamically loaded library; to use it as the latter, the program calls dlopen, dlsym, and friends to look up the symbols it wants.

LD_LIBRARY_PATH works on most Unix/Linux systems, but not on all of them. On Linux LD_LIBRARY_PATH is available, but the recommended approach is to use the /etc/ld.so.conf file and ldconfig to generate /etc/ld.so.cache.

Another option is to specify an rpath, which you set when compiling the executable.

gcc -Wall -o myexefile -Wl,-rpath,. main.o -L. -lMyLibrary

In the example above the rpath is set to the current directory [.], which adds the current directory to the places searched for MyLibrary at run time. The -L option specifies where to look for MyLibrary at link time, here also set to the current directory [.]. Instead of naming the library with -l, you may also give the file name with its path directly.

gcc -Wall -o myexefile -Wl,-rpath,. main.o ./libMyLibrary.so.1.2.3

 

A shared library is usually named like this:

libMyLibrary.so.1.2.3

A library has several different names depending on how it is used, and they are easy to confuse, so some care is needed.

1. Linker name

The linker name is the file name up to and including .so, with the version numbers removed; in the example above, libMyLibrary.so is the linker name. The linker name is the name used when building a program that uses the library. More precisely, when a program is built, a file with the linker name must exist on the build system so the linker can find the shared library the program uses. For example, if an application is compiled against libMyLibrary.so.1.2.3, you add -lMyLibrary to the build options, and a file named libMyLibrary.so (i.e. a library file with the linker name) must exist in a default library path or in a directory given with -L. (Usually this file is made as a symbolic link to libMyLibrary.so.1.2.3; this is done so that multiple versions of the library can be managed in the host development environment.) In a cross-compiling environment, the file with the linker name is needed only at build time; it is not needed on the target at run time. In other words, on the development host the library file must either be named with the linker name or have a symbolic link with that name. The compiler/linker reads the library's soname from the file with the linker name. So although a shared library is not embedded in the executable at build time, the build fails if the shared library is absent from the build host, because the soname must be read from it.

2. soname

This is the name the loader looks for when the program runs, so a file with the soname must exist on the target. Sometimes this file is a symbolic link to another file.

Normally the soname is formed by appending a single number after .so; for example, libMyLibrary.so.1 would be a suitable soname for the library above. The soname can be set with a command like the following.

gcc -shared -Wl,-soname,libMyLibrary.so.1 -o libMyLibrary.so.1.2.3 MyLibrary.o

The first number after .so is the library's major version number, normally bumped when compatibility changes. After building a library this way and then building a program that uses it, the loader searches the library path for a file with the soname when the program runs. That is, at run time a library file with the soname must exist. In a cross-compiling environment, at build time you need the file with the linker name (usually the file ending in .so), and at run time you need the library with the soname (or a symbolic link to it).

In the library's full name, the second number is the minor version number, changed when new functions are added without breaking compatibility, and the third is the release number, changed when the library is re-released for bug fixes and the like.

So in a cross-compiling environment, when you build the shared library libMyLibrary.so.1.2.3 on the host and copy it to the target system, you must create a file with the soname on the target, via a soft link like this:

ln -s libMyLibrary.so.1.2.3 libMyLibrary.so.1
or
mv libMyLibrary.so.1.2.3 libMyLibrary.so.1

Also, every developer building programs against the library must set up their build environment so the shared library carries the linker name, as below. (In other words, the .so.1 file is not needed on the host/build system.)

ln -s libMyLibrary.so.1.2.3 libMyLibrary.so
or
mv libMyLibrary.so.1.2.3 libMyLibrary.so

To check the soname of a particular library, you can use a command like this:

objdump -p libMyLibrary.so.1.2.3 | grep SONAME

For more details see:
http://wiki.kldp.org/wiki.php/DocbookSgml/Program-Library-HOWTO#DL-LIBRARIES


Ref.:

http://dooeui.blogspot.kr/search?updated-max=2009-06-07T08:26:00%2B09:00&max-results=10&start=30&by-date=false


How to extract the kernel configuration from an Android kernel or boot.img and build a matching kernel for the device

To build a new kernel for a specific device, you need to create a configuration:

The available options are defined in Kconfig files placed in their related folders throughout the source tree.

When you type the command:

make defconfig

or

make menuconfig

to configure the kernel.

Afterwards, all the configuration will be written to a single file, ".config", in <kernel_source_dir>
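The generated .config is a flat list of options, one per line; an illustrative fragment (the exact option names vary per kernel):

```
CONFIG_LOCALVERSION=""
CONFIG_SWAP=y
CONFIG_MODULES=y
CONFIG_NET=y
# CONFIG_DEBUG_KERNEL is not set
```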

 

So if you wanna build a specific kernel for a specific device, the simplest way to get the kernel configuration is to extract it from the original kernel.

Usually, you will not be able to download the kernel image itself,

but it's easy to get the boot.img, which consists of the kernel and ramdisk images.

 

After you get the boot.img

Decompress it:

$ unmkbootimg -i boot.img
kernel written to 'kernel' (6682776 bytes)
ramdisk written to 'ramdisk.cpio.gz' (913311 bytes)

To rebuild this boot image, you can use the command:
mkbootimg --base 0 --pagesize 2048 --kernel_offset 0x80208000 --ramdisk_offset 0x82200000 --second_offset 0x81100000 --tags_offset 0x80200100 --cmdline 'console=ttyHSL0,115200,n8 androidboot.hardware=flo user_debug=31 msm_rtb.filter=0x3F ehci-hcd.park=3 vmalloc=340M' --kernel kernel --ramdisk ramdisk.cpio.gz -o boot.img

You will have kernel and ramdisk in your current folder.

 

Then you need a script named extract-ikconfig

You can download it from the following link:

https://github.com/torvalds/linux/blob/master/scripts/extract-ikconfig

$ chmod +x extract-ikconfig

$ ./extract-ikconfig kernel > kernel_config

 

Then you can put kernel_config into <kernel_source_dir>, rename it to ".config", and compile the kernel for the device.

 

For the detailed steps of compiling the kernel, remaking the boot.img, and flashing it to the device, you can follow my previous articles.

 

^___________________^


Find a specified target in Linux with the command line

You can filter out messages to stderr. I prefer to redirect them to stdout like this:

#find / -name art  2>&1 | grep -v "Permission denied"

Explanation:

In short, all regular output goes to standard output (stdout). All error messages to standard error (stderr).

grep usually finds/prints the specified string, the -v inverts this, so it finds/prints every string that doesn’t contain “Permission denied”. All of your output from the find command, including error messages usually sent to stderr (file descriptor 2) go now to stdout(file descriptor 1) and then get filtered by the grep command.

#find ./ -type f | xargs grep shit

"-type f" restricts matches to regular files, so directories are not passed along as search targets.

 

If you want to grep something, the following will be pretty useful.

For BSD or GNU grep you can use -B num to set how many lines before the match and -A num for the number of lines after the match.

grep -B 3 -A 2 foo README.txt

If you want the same number of lines before and after you can use -C num.

grep -C 3 foo README.txt

This will show 3 lines before and 3 lines after.
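A quick self-contained check of the context option (the sample input here is made up on the spot):

```shell
# grep -C 1 prints one line of context on each side of the match
out=$(printf 'a\nb\nfoo\nc\nd\n' | grep -C 1 foo)
echo "$out"
```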

 

E.g.:

find / 2>/dev/null | xargs grep aoa 2>&1 | grep -v "Permission denied"

Port forwarding on Linux

SSH:

This is called GatewayPorts in SSH. An excerpt from ssh_config(5):

GatewayPorts
        Specifies whether remote hosts are allowed to connect to local
        forwarded ports.  By default, ssh(1) binds local port forwardings
        to the loopback address.  This prevents other remote hosts from
        connecting to forwarded ports.  GatewayPorts can be used to spec‐
        ify that ssh should bind local port forwardings to the wildcard
        address, thus allowing remote hosts to connect to forwarded
        ports.  The argument must be “yes” or “no”.  The default is “no”.

And you can use localhost instead of M in the forwarding, as you’re forwarding to the same machine as you’re SSH-ing to — if I understand your question correctly.

So, the command will become this:

ssh -g -L 8001:localhost:8000 -f -N user@remote-server.com
OR
ssh -g -L 2222:localhost:8888 -N -o GatewayPorts=yes hostname-of-M

(/usr/bin/ssh -g -vvvvv -L 0.0.0.0:29906:localhost:1337 localhost -N )

and will look like this in netstat -nltp:

tcp        0      0    0.0.0.0:2222   0.0.0.0:*  LISTEN  5113/ssh

Now anyone accessing this machine at port 2222 TCP will actually talk to localhost:8888 as seen in machine M. Note that this is not the same as plain forwarding to port 8888 of M.

OR

The command for forwarding port 80 from your local machine (localhost) to the remote host on port 8000 is:

ssh -R 8000:localhost:80 oli@remote-machine

This requires an additional tweak on the SSH server, add the lines to /etc/ssh/sshd_config:

Match User oli
   GatewayPorts yes

Next, reload the configuration on the server by executing sudo reload ssh (on systemd-based systems, sudo systemctl reload ssh).

The setting GatewayPorts yes causes SSH to bind port 8000 on the wildcard address, so it becomes available to the public address of remote-machine (remote-machine:8000).

If you need to have the option for not binding everything on the wildcard address, change GatewayPorts yes to GatewayPorts clientspecified. Because ssh binds to the loopback address by default, you need to specify an empty bind_address for binding the wildcard address:

ssh -R :8000:localhost:80 oli@remote-machine

The : before 8000 is mandatory if GatewayPorts is set to clientspecified and you want to allow public access to remote-machine:8000.

Relevant manual excerpts:

ssh(1)

-R [bind_address:]port:host:hostport
Specifies that the given port on the remote (server) host is to be forwarded to the given host and port on the local side. This works by allocating a socket to listen to port on the remote side, and whenever a connection is made to this port, the connection is forwarded over the secure channel, and a connection is made to host port hostport from the local machine. By default, the listening socket on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the server’s GatewayPorts option is enabled (see sshd_config(5)).

sshd_config(5)

GatewayPorts
Specifies whether remote hosts are allowed to connect to ports forwarded for the client. GatewayPorts can be used to specify that sshd should allow remote port forwardings to bind to non-loopback addresses, thus allowing other hosts to connect. The argument may be ‘no’ to force remote port forwardings to be available to the local host only, ‘yes’ to force remote port forwardings to bind to the wildcard address, or ‘clientspecified’ to allow the client to select the address to which the forwarding is bound. The default is ‘no’.


Change_IPTABLE:

There is another way. You may set up port forwarding from S:2222 to W:8888 with iptables. Single command:

iptables -t nat -A PREROUTING -p tcp --dport 2222 \
         -j DNAT --to-destination 1.2.3.4:8888

where 1.2.3.4 is M’s IP address. It is called NAT (Network Address Translation).

 

Another Sample:

iptables -t nat -I PREROUTING 1 -p 6 --dport 9222 -j DNAT --to 10.0.1.1:1337
iptables -t nat -I POSTROUTING 1 -p 6 -d 10.0.1.1 --dport 1337 -j SNAT --to-source 10.0.1.2

 

e.g.:

I would like to do some NAT with iptables, so that all the packets arriving at 192.168.12.87 on port 80 are forwarded to 192.168.12.77 on port 80.

 

#!/bin/sh

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -F
iptables -t nat -F
iptables -X

iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.12.77:80
iptables -t nat -A POSTROUTING -p tcp -d 192.168.12.77 --dport 80 -j SNAT --to-source 192.168.12.87

 

 

 

Using NetCat:

A naive try would be something like this:

$ nc -l 8082 | nc remote_host 80

Yes, it does forward the request from local port 8082 to remote_host:80, but the response is dumped to stdout, not routed back to the client as expected.

Using a named pipe makes it work:

$ mkfifo backpipe
$ nc -l 8082 0<backpipe | nc remote_host 80 1>backpipe

Use tee to get a glimpse of the response through the pipe (I wasn’t able to find a way to dump the request):

$ nc -k -l 8082 0<backpipe | nc localhost 80 | tee backpipe
HTTP/1.1 200 OK
Date: Fri, 30 Sep 2011 22:11:27 GMT
Server: Apache/2.2.16 (Unix)
Last-Modified: Sat, 20 Nov 2004 20:16:24 GMT
ETag: "2d0945-2c-3e9564c23b600"
Accept-Ranges: bytes
Content-Length: 44
Content-Type: text/html

<html><body><h1>It works!</h1></body></html>

The GNU netcat has a different syntax than the stock nc. It also supports different switches.

  1. To listen to port 1234:
    $ netcat -l -p 1234
    
  2. To make bash a server on port 1234:
    $ netcat -l -p 1234 -e /bin/bash
    
  3. Forward local port 8082 to remote port 80:
    $ ./netcat -L 192.168.80.143:80 -p 8082
    
  4. Port forwarding with hex dump:
    $ ./netcat -L 192.168.80.143:80 -p 8082 -x
    Received 174 bytes from the socket
    00000000  47 45 54 20  2F 20 48 54  54 50 2F 31  2E 31 0D 0A  GET / HTTP/1.1..
    00000010  55 73 65 72  2D 41 67 65  6E 74 3A 20  63 75 72 6C  User-Agent: curl
    00000020  2F 37 2E 32  31 2E 30 20  28 78 38 36  5F 36 34 2D  /7.21.0 (x86_64-
    00000030  72 65 64 68  61 74 2D 6C  69 6E 75 78  2D 67 6E 75  redhat-linux-gnu
    00000040  29 20 6C 69  62 63 75 72  6C 2F 37 2E  32 31 2E 30  ) libcurl/7.21.0
    00000050  20 4E 53 53  2F 33 2E 31  32 2E 31 30  2E 30 20 7A   NSS/3.12.10.0 z
    00000060  6C 69 62 2F  31 2E 32 2E  35 20 6C 69  62 69 64 6E  lib/1.2.5 libidn
    00000070  2F 31 2E 31  38 20 6C 69  62 73 73 68  32 2F 31 2E  /1.18 libssh2/1.
    00000080  32 2E 34 0D  0A 48 6F 73  74 3A 20 36  37 2E 31 33  2.4..Host: 67.13
    00000090  30 2E 36 39  2E 31 34 33  3A 38 30 38  32 0D 0A 41  0.69.143:8082..A
    000000A0  63 63 65 70  74 3A 20 2A  2F 2A 0D 0A  0D 0A        ccept: */*....  
    Sent 276 bytes to the socket
    00000000  48 54 54 50  2F 31 2E 31  20 32 30 30  20 4F 4B 0D  HTTP/1.1 200 OK.
    00000010  0A 44 61 74  65 3A 20 46  72 69 2C 20  33 30 20 53  .Date: Fri, 30 S
    00000020  65 70 20 32  30 31 31 20  32 32 3A 33  32 3A 30 35  ep 2011 22:32:05
    00000030  20 47 4D 54  0D 0A 53 65  72 76 65 72  3A 20 41 70   GMT..Server: Ap
    00000040  61 63 68 65  2F 32 2E 32  2E 31 36 20  28 55 6E 69  ache/2.2.16 (Uni
    00000050  78 29 0D 0A  4C 61 73 74  2D 4D 6F 64  69 66 69 65  x)..Last-Modifie
    00000060  64 3A 20 53  61 74 2C 20  32 30 20 4E  6F 76 20 32  d: Sat, 20 Nov 2
    00000070  30 30 34 20  32 30 3A 31  36 3A 32 34  20 47 4D 54  004 20:16:24 GMT
    00000080  0D 0A 45 54  61 67 3A 20  22 32 64 30  39 34 35 2D  ..ETag: "2d0945-
    00000090  32 63 2D 33  65 39 35 36  34 63 32 33  62 36 30 30  2c-3e9564c23b600
    000000A0  22 0D 0A 41  63 63 65 70  74 2D 52 61  6E 67 65 73  "..Accept-Ranges
    000000B0  3A 20 62 79  74 65 73 0D  0A 43 6F 6E  74 65 6E 74  : bytes..Content
    000000C0  2D 4C 65 6E  67 74 68 3A  20 34 34 0D  0A 43 6F 6E  -Length: 44..Con
    000000D0  74 65 6E 74  2D 54 79 70  65 3A 20 74  65 78 74 2F  tent-Type: text/
    000000E0  68 74 6D 6C  0D 0A 0D 0A  3C 68 74 6D  6C 3E 3C 62  html....<html><b
    000000F0  6F 64 79 3E  3C 68 31 3E  49 74 20 77  6F 72 6B 73  ody><h1>It works
    00000100  21 3C 2F 68  31 3E 3C 2F  62 6F 64 79  3E 3C 2F 68  !</h1></body></h
    00000110  74 6D 6C 3E                                         tml>           
    

Simply put, it'll be like this:

On the server:
mkfifo pipe_name
nc -l -p port_number < pipe_name | program_name > pipe_name

On the client:
nc server_machine_name port_number

1. (wrap the server side in a "while true" loop in a shell script if you want it to keep listening)
jiafei427@CKUBU:~/platform_external_netcat-master$ mkfifo nidie
jiafei427@CKUBU:~/platform_external_netcat-master$ netcat -v -l 8888 0<nidie | netcat -v 127.0.0.1 9999 1>nidie
Listening on [0.0.0.0] (family 0, port 8888)
Connection to 127.0.0.1 9999 port [tcp/*] succeeded!
Connection from [127.0.0.1] port 8888 [tcp/*] accepted (family 2, sport 48926)
wocao
nidaye

2.
jiafei427@CKUBU:~/platform_external_netcat-master$ netcat 127.0.0.1 8888
wocao
nidaye

 

Ref.:

http://askubuntu.com/questions/50064/reverse-port-tunnelling

http://www.xinotes.net/notes/note/1529/

http://superuser.com/questions/607783/can-i-pipe-redirect-a-console-application-through-netcat-so-it-can-be-used-remot

Linux bash tiny-tips

Killin’ the process via its id in Linux:
#ps aux | grep IVI_ConnectionManager | awk 'NR==1{print $2}' | xargs kill

 

Launch two different commands in one script:

#!/bin/bash

command_1 &
command_2 &

(Notice the & sign at the end of each line. This will cause the shell to fork that process into the background and continue execution. Note how it's different from &&, which is basically an "and" sign: command_1 && command_2 will execute command_1 and, only if it exits with success, then run command_2, while command_1 & command_2 will start the second right after the first.)
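A tiny runnable sketch of the difference:

```shell
# && runs the right-hand command only when the left-hand one succeeds
out1=$( { false && echo "ran"; } || echo "skipped" )
out2=$( true && echo "ran" )

# A single & backgrounds the first command and continues immediately
sleep 0.1 &
echo "continues while sleep runs in the background"
wait            # block until the backgrounded job finishes

echo "$out1 / $out2"
```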

 

Get input repeatedly from the shell:

A non-zero exit status (for example, 127 means "command not found") indicates that the command failed to execute. You can use the exit status in shell scripting too, and store it in a variable. Consider the following shell script:

#!/bin/bash
echo -n "Enter user name : "
read USR
cut -d: -f1 /etc/passwd | grep "$USR" > /dev/null
OUT=$?
if [ $OUT -eq 0 ];then
   echo "User account found!"
else
   echo "User account does not exists in /etc/passwd file!"
fi

Save and execute the script as follows:
$ chmod +x script.sh
$ ./script.sh

Output:

Enter user name : jradmin
User account does not exist in /etc/passwd file!

Try it one more time:
$ ./script.sh
Output:

Enter user name : vivek
User account found!

 

As in the find tip earlier, you can filter out "Permission denied" messages by redirecting stderr to stdout:

find / -name art  2>&1 | grep -v "Permission denied"

This assumes you are using the bash/sh shell.

Under tcsh/csh you would use

find / -name art |& grep ….

 

Shell:

jiafei427@CKUBU:~/workspace/kernel/modules/test$ cat infinity_touch.sh
i=0
while [ 1 ]
do
echo "wocao"
sleep 0.01
i=$((i+1))
done

That will repeatedly print "wocao", sleeping 0.01 seconds between iterations.

 

Create a folder named with a certain date format:

mkdir "$(date +"%d-%m-%Y")"
cd "$(date +"%d-%m-%Y")"

In the extreme case where a day passes between the first and the second statement, that won't work. Change it to:

d="$(date +"%d-%m-%Y")"
mkdir "$d"
cd "$d"

Explanation: The $(...) returns the output from the subcommands as a string, which we store in the variable d.

 

Getting shell variables into awk may be done in several ways. Some are better than others.


This is the best way to do it. It uses the -v option: (P.S. use a space after -v or it will be less portable. E.g., awk -v var= not awk -vvar)

variable="line one\nline two"
awk -v var="$variable" 'BEGIN {print var}'
line one
line two

This should be compatible with most awk and variable is available in the BEGIN block as well:

 

 

Read the full target path of a symbolic link:

readlink -f `which command`

This works if command is in your $PATH variable; otherwise you need to specify the path you know.

You can use awk with a system call readlink to get the equivalent of an ls output with full symlink paths. For example:

ls | awk '{printf("%s ->", $1); system("readlink -f " $1)}'

Will display e.g.

thin_repair ->/home/user/workspace/boot/usr/bin/pdata_tools
thin_restore ->/home/user/workspace/boot/usr/bin/pdata_tools
thin_rmap ->/home/user/workspace/boot/usr/bin/pdata_tools
thin_trim ->/home/user/workspace/boot/usr/bin/pdata_tools
touch ->/home/user/workspace/boot/usr/bin/busybox
true ->/home/user/workspace/boot/usr/bin/busybox

 

Capture awk system-command output in a variable:

To run a system command in awk you can either use system() or cmd | getline.

I prefer cmd | getline because it allows you to catch the value into a variable:

$ awk 'BEGIN {"date" |  getline mydate; close("date"); print "returns", mydate}'
returns Thu Jul 28 10:16:55 CEST 2016

More generally, you can set the command into a variable:

awk 'BEGIN {
       cmd = "date -j -f %s"
       cmd | getline mydate
       close(cmd)
     }'

Note it is important to use close() to prevent getting a “makes too many open files” error if you have multiple results (thanks mateuscb for pointing this out in comments).

 

Note: Coprocess is GNU awk specific. Anyway another alternative is using getline

cmd = "strip "$1
while ( ( cmd | getline result ) > 0 ) {
  print  result
} 
close(cmd)
or something like this:
awk 'BEGIN{"date"|getline d; print "Current date is:" , d }'

 

list all symbolic links in a directory:

Parsing ls is a Bad Idea®, prefer a simple find in that case:

find . -type l -ls

To only process the current directory:

find . -maxdepth 1 -type l -ls

 

 

How to find and list all the symbolic links created for a particular file?

Here is an example:

find -L /dir/to/start -xtype l -samefile ~/Pictures

or, maybe better:

find -L /dir/to/start -xtype l -samefile ~/Pictures 2>/dev/null

to get rid of errors like Permission denied, Too many levels of symbolic links, or File system loop detected, which find throws when it doesn't have the right permissions, among other situations.

  • -L – Follow symbolic links.
  • -xtype l – File is symbolic link
  • -samefile name – File refers to the same inode as name. When -L is in effect, this can include symbolic links.
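A self-contained sketch of the -samefile search (it creates a throwaway directory, so the file names here are made up):

```shell
# Make a file plus a symlink to it, then find every symlink resolving to it
tmp=$(mktemp -d)
touch "$tmp/original"
ln -s "$tmp/original" "$tmp/link1"

found=$(find -L "$tmp" -xtype l -samefile "$tmp/original" 2>/dev/null)
echo "$found"

rm -rf "$tmp"
```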

 

 

 

1. Replacing all occurrences of one string with another in all files in the current directory:

These are for cases where you know that the directory contains only regular files and that you want to process all non-hidden files. If that is not the case, use the approaches in 2.

All sed solutions in this answer assume GNU sed. If using FreeBSD or OS/X, replace -i with -i ''. Also note that the use of the -i switch with any version of sed has certain filesystem security implications and is inadvisable in any script which you plan to distribute in any way.

  • Non recursive, files in this directory only:
    sed -i -- 's/foo/bar/g' *
    perl -i -pe 's/foo/bar/g' ./* 

    (the perl one will fail for file names ending in | or space).

  • Recursive, regular files (including hidden ones) in this and all subdirectories
    find . -type f -exec sed -i 's/foo/bar/g' {} +

    If you are using zsh:

    sed -i -- 's/foo/bar/g' **/*(D.)

    (may fail if the list is too big, see zargs to work around).

    Bash can’t check directly for regular files, a loop is needed (braces avoid setting the options globally):

    ( shopt -s globstar dotglob;
        for file in **; do
            if [[ -f $file ]] && [[ -w $file ]]; then
                sed -i -- 's/foo/bar/g' "$file"
            fi
        done
    )

    The files are selected when they are actual files (-f) and they are writable (-w).

2. Replace only if the file name matches another string / has a specific extension / is of a certain type etc:

  • Non-recursive, files in this directory only:
    sed -i -- 's/foo/bar/g' *baz*    ## all files whose name contains baz
    sed -i -- 's/foo/bar/g' *.baz    ## files ending in .baz
  • Recursive, regular files in this and all subdirectories
    find . -type f -name "*baz*" -exec sed -i 's/foo/bar/g' {} +

    If you are using bash (braces avoid setting the options globally):

    ( shopt -s globstar dotglob
        sed -i -- 's/foo/bar/g' **baz*
        sed -i -- 's/foo/bar/g' **.baz
    )

    If you are using zsh:

    sed -i -- 's/foo/bar/g' **/*baz*(D.)
    sed -i -- 's/foo/bar/g' **/*.baz(D.)

    The -- serves to tell sed that no more flags will be given in the command line. This is useful to protect against file names starting with -.

  • If a file is of a certain type, for example, executable (see man find for more options):
    find . -type f -executable -exec sed -i 's/foo/bar/g' {} +

    zsh:

    sed -i -- 's/foo/bar/g' **/*(D*)

3. Replace only if the string is found in a certain context

  • Replace foo with bar only if there is a baz later on the same line:
    sed -i 's/foo\(.*baz\)/bar\1/' file

    In sed, using \( \) saves whatever is in the parentheses and you can then access it with \1. There are many variations of this theme, to learn more about such regular expressions, see here.

  • Replace foo with baz only if foo is found in the 3rd column (field) of the input file (assuming whitespace-separated fields):
    gawk -i inplace '{gsub(/foo/,"baz",$3); print}' file

    (needs gawk 4.1.0 or newer).

  • For a different field just use $N where N is the number of the field of interest. For a different field separator (: in this example) use:
    gawk -i inplace -F':' '{gsub(/foo/,"baz",$3);print}' file

    Another solution using perl:

    perl -i -ane '$F[2]=~s/foo/baz/g; $" = " "; print "@F\n"' foo 

    NOTE: both the awk and perl solutions will affect spacing in the file (remove the leading and trailing blanks, and convert sequences of blanks to one space character in those lines that match). For a different field, use $F[N-1] where N is the field number you want and for a different field separator use (the $"=":" sets the output field separator to :):

    perl -i -F':' -ane '$F[2]=~s/foo/baz/g; $"=":";print "@F"' foo 
  • Replace foo with bar only on the 4th line:
    sed -i '4s/foo/bar/g' file
    gawk -i inplace 'NR==4{gsub(/foo/,"bar")};1' file
    perl -i -pe 's/foo/bar/g if $.==4' file
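A quick check of the "baz later on the same line" rule from the first bullet in this section (sample input made up on the spot):

```shell
# foo is replaced only on the line where baz follows it;
# \1 restores the text captured by \( \)
out=$(printf 'foo keep baz\nfoo stays put\n' | sed 's/foo\(.*baz\)/bar\1/')
echo "$out"
```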

4. Multiple replace operations: replace with different strings

  • You can combine sed commands:
    sed -i 's/foo/bar/g; s/baz/zab/g; s/Alice/Joan/g' file

    Be aware that order matters (sed 's/foo/bar/g; s/bar/baz/g' will substitute foo with baz).

  • or Perl commands
    perl -i -pe 's/foo/bar/g; s/baz/zab/g; s/Alice/Joan/g' file
  • If you have a large number of patterns, it is easier to save your patterns and their replacements in a sed script file:
    #! /usr/bin/sed -f
    s/foo/bar/g
    s/baz/zab/g
  • Or, if you have too many pattern pairs for the above to be feasible, you can read pattern pairs from a file (two space separated patterns, $pattern and $replacement, per line):
    while read -r pattern replacement; do   
        sed -i "s/$pattern/$replacement/" file
    done < patterns.txt
  • That will be quite slow for long lists of patterns and large data files, so you might want to read the patterns and create a sed script from them instead. The following assumes a <space> delimiter separates a list of MATCH<space>REPLACE pairs occurring one-per-line in the file patterns.txt:
    sed 's| *\([^ ]*\) *\([^ ]*\).*|s/\1/\2/g|' <patterns.txt |
    sed -f- ./editfile >outfile

    The above format is largely arbitrary and, for example, doesn't allow for a <space> in either of MATCH or REPLACE. The method is very general though: basically, if you can create an output stream which looks like a sed script, then you can source that stream as a sed script by specifying sed's script file as - (stdin).

  • You can combine and concatenate multiple scripts in similar fashion:
    SOME_PIPELINE |
    sed -e'#some expression script'  \
        -f./script_file -f-          \
        -e'#more inline expressions' \
    ./actual_edit_file >./outfile

    A POSIX sed will concatenate all scripts into one in the order they appear on the command-line. None of these need end in a \newline.

  • grep can work the same way:
    sed -e'#generate a pattern list' <in |
    grep -f- ./grepped_file
  • When working with fixed-strings as patterns, it is good practice to escape regular expression metacharacters. You can do this rather easily:
    sed 's/[]$&^*\./[]/\\&/g
         s| *\([^ ]*\) *\([^ ]*\).*|s/\1/\2/g|
    ' <patterns.txt |
    sed -f- ./editfile >outfile

5. Multiple replace operations: replace multiple patterns with the same string

  • Replace any of foo, bar, or baz with foobar
    sed -Ei 's/foo|bar|baz/foobar/g' file
  • or
    perl -i -pe 's/foo|bar|baz/foobar/g' file

 

 

Skip lines containing a certain expression

#awk '$0 !~ /Makefile$/' file_name

That will print every line except those ending with "Makefile".

 

To remove the line and print the output to standard out:

sed '/pattern to match/d' ./infile

To directly modify the file:

sed -i '/pattern to match/d' ./infile

To directly modify the file (and create a backup):

sed -i.bak '/pattern to match/d' ./infile

For Mac OS X users:

sed -i '' '/pattern/d' ./infile
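A small demonstration of the d command (the input lines are invented):

```shell
# Every line matching /drop/ is deleted; the rest pass through.
printf 'keep\ndrop me\nkeep too\n' | sed '/drop/d'
# prints:
# keep
# keep too
```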

 

Print Text files only without binary files:

A very fast way to use find to list only non-binary (text) files:

find . -type f -exec grep -Iq . {} \; -and -print

The -I option tells grep to treat binary files as if they don’t match, and the pattern . together with -q makes it succeed as soon as it sees any text, so it runs very fast. You can change -print to -print0 for piping into xargs -0 if you are concerned about spaces in file names.

Also, the leading dot is only necessary for certain BSD versions of find, such as on OS X, but it doesn’t hurt to keep it there if you want to put this in an alias.
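A sketch you can run in a scratch directory (the file names are made up; the NUL byte is what makes grep classify blob.bin as binary):

```shell
mkdir -p /tmp/findtext && cd /tmp/findtext
printf 'hello\n' > note.txt        # plain text
printf 'ab\0cd' > blob.bin         # contains a NUL -> treated as binary
# grep -Iq . succeeds only for text files, so -print lists just those.
find . -type f -exec grep -Iq . {} \; -print
# prints: ./note.txt
```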

 

Replace certain string for all files in a directory:

# find ./ -type f -exec sed -i 's/string1/string2/g' {} \;
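A self-contained version of the same idea, using a throwaway directory (note that sed -i without a suffix is the GNU form; on macOS you need -i ''):

```shell
mkdir -p /tmp/repl && printf 'old text\n' > /tmp/repl/a.txt
# Replace the string in every regular file under the directory, in place.
find /tmp/repl -type f -exec sed -i 's/old/new/g' {} \;
cat /tmp/repl/a.txt
# prints: new text
```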

 

…TBD

Linux tiny Tips

(for myself…)

Transfer file to the destination pc:

option 1:

#scp mylocalfile.txt root@<destination_ip>:/recipient/directory/

option 2:

A simple option is to use netcat (nc). This is particularly useful on stripped down Linux systems where services like ssh and ftp are turned off.

On the destination machine, run: nc -l -p 1234 > out.file

On the source machine, run: nc -w 3 <destination_ip> 1234 < file_to_send

For more details, see the Stack Overflow link in the references below.

There are also netcat implementations for Windows, e.g. ncat.

 

How to Linux Terminal Split Screen With Screen:

pre-requirement:

#sudo apt-get install screen

How to split screen
a) Split the window
Horizontally
Ctrl + a, Then Press Shift + s
or
Vertically
Ctrl + a, Then Press Shift + \
b) Switch between split windows
Ctrl + a, Then Press Tab
or
Ctrl + a, Then Type :focus
* Here :focus is a command

c) In the split window, use one of the following to open an existing session
Ctrl + a, Then Press 0-9
or
Ctrl + a, Then Press n or p
or
Ctrl + a, Then Press Shift + '
or
Ctrl + a, Then Press c

d) Resize a split window/region
Ctrl + a, Then Type :resize 25
* Here :resize is a command

e) Remove the current split window/region
Ctrl + a, Then Type :remove
* Here :remove is a command
or
Ctrl + a, Then Press Shift + x

f) Remove all split windows/regions except the current one
Ctrl + a, Then Type :only
* Here :only is a command
or
Ctrl + a, Then Press Shift + q

g) Close the screen and all regions
Ctrl + a, Then Press \

Screen Terminal Multiplexer Commands

 

Editor:

Vim, Emacs, Sublime Text

Terminal:

Guake Terminal

Connection & File Transfer:

SecureCRT, SSH, PuTTY, FileZilla

 

Run Program in background and save the log to the file:

Redirect the output to a file like this:

./a.sh > somefile 2>&1 &

This will redirect both stdout and stderr to the same file. If you want to redirect stdout and stderr to two different files use this:

./a.sh > stdoutfile 2> stderrfile &
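The two redirections side by side (a { } command group stands in for a hypothetical ./a.sh):

```shell
# Both streams into one file: 2>&1 must come after the > redirection.
{ echo out; echo err >&2; } > /tmp/both.log 2>&1
cat /tmp/both.log                  # contains both "out" and "err"

# Streams into separate files.
{ echo out; echo err >&2; } > /tmp/o.log 2> /tmp/e.log
```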


Task: List or display loaded modules

Open a terminal or login over the ssh session and type the following command
$ less /proc/modules
Sample outputs:

sha1_generic 1759 4 - Live 0xffffffffa059e000
arc4 1274 2 - Live 0xffffffffa0598000
ecb 1841 2 - Live 0xffffffffa0592000
ppp_mppe 5240 2 - Live 0xffffffffa058b000
ppp_async 6245 1 - Live 0xffffffffa0584000
crc_ccitt 1323 1 ppp_async, Live 0xffffffffa057e000
ppp_generic 19291 6 ppp_mppe,ppp_async, Live 0xffffffffa0572000
slhc 4003 1 ppp_generic, Live 0xffffffffa056c000
ext3 106854 1 - Live 0xffffffffa0546000
jbd 37349 1 ext3, Live 0xffffffffa0533000
sha256_generic 8692 2 - Live 0xffffffffa0525000
aes_x86_64 7340 2 - Live 0xffffffffa0517000
aes_generic 25714 1 aes_x86_64, Live 0xffffffffa050b000
....
...
....
ahci 32950 20 - Live 0xffffffffa007b000
libata 133824 3 ata_generic,pata_jmicron,ahci, Live 0xffffffffa0045000
scsi_mod 126901 3 usb_storage,sd_mod,libata, Live 0xffffffffa0012000
thermal 11674 0 - Live 0xffffffffa0009000
thermal_sys 11942 3 video,processor,thermal, Live 0xffffffffa0000000

To see nicely formatted output, type:
$ lsmod
Sample outputs:

Module                  Size  Used by
sha1_generic            1759  4
arc4                    1274  2
ecb                     1841  2
ppp_mppe                5240  2
ppp_async               6245  1
crc_ccitt               1323  1 ppp_async
ppp_generic            19291  6 ppp_mppe,ppp_async
slhc                    4003  1 ppp_generic
.................

The first column is the module name and the second is the module size; i.e., the output format is: module name, size, use count, list of referring modules.
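If you only want names and sizes, the columns are easy to pull out with awk; the sample input below just mimics lsmod output:

```shell
# NR>1 skips the header row; $1 and $2 are the name and size columns.
printf 'Module Size Used by\nppp_async 6245 1\ncrc_ccitt 1323 1 ppp_async\n' |
awk 'NR>1 {print $1, $2}'
# prints:
# ppp_async 6245
# crc_ccitt 1323
```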

Finding more info about any module or driver

Type the following command:
# modinfo driver-Name-Here
# modinfo thermal_sys
# modinfo e1000e

Sample outputs:

filename:       /lib/modules/2.6.32-5-amd64/kernel/drivers/net/e1000e/e1000e.ko
version:        1.2.20-k2
license:        GPL
description:    Intel(R) PRO/1000 Network Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     AB58ACECA1618E521F58503
alias:          pci:v00008086d00001503sv*sd*bc*sc*i*

 

Disk Utility:

#du -sh file_path

Explanation

du command estimates file_path space usage
The options -sh are (from man du):

-s, --summarize
display only a total for each argument

-h, --human-readable
print sizes in human readable format (e.g., 1K 234M 2G)

To check more than one directory and see the total, use du -sch:

-c, --total
produce a grand total

You could extend this command to:

#du -h --max-depth=1 | sort -hr

which will give you the size of all sub-folders (level 1). The output will be sorted (largest folder on top).
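A runnable sketch with throwaway directories (du -s reports block counts, so sort -nr puts the larger directory first):

```shell
mkdir -p /tmp/du_demo/big /tmp/du_demo/small
dd if=/dev/zero of=/tmp/du_demo/big/f bs=1024 count=200 2>/dev/null
printf 'tiny\n' > /tmp/du_demo/small/f
du -s /tmp/du_demo/big /tmp/du_demo/small | sort -nr
# the /tmp/du_demo/big line comes first
```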

Print input / USB devices:

#cat /proc/bus/input/devices

OR
#cat /proc/bus/usb/devices

(The second file requires usbfs, which modern kernels no longer provide; lsusb is the usual alternative.)

Grep tip:

For BSD or GNU grep you can use -B num to set how many lines before the match and -A num for the number of lines after the match.

#grep -B 3 -A 2 foo README.txt

If you want the same number of lines before and after you can use -C num.

#grep -C 3 foo README.txt

This will show 3 lines before and 3 lines after.
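For example, with a made-up five-line input:

```shell
# -C 1 prints one line of context on each side of the match.
printf 'a\nb\nfoo\nc\nd\n' | grep -C 1 foo
# prints:
# b
# foo
# c
```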

 

REF.

http://stackoverflow.com/questions/15807122/telnet-file-transfer-between-two-linux-machines

http://fosshelp.blogspot.kr/2014/02/how-to-linux-terminal-split-screen-with.html

How to Split Terminal Screen in Linux Ubuntu 14.04

http://www.cyberciti.biz/faq/howto-display-list-of-modules-or-device-drivers-in-the-linux-kernel/