
lundi 14 octobre 2013

use the uniq command

I wanted to remove lines whose trailing fields are repeated. For example, my file is:

0  name0 2011 station
1  name1 2012 station
2  name2 2012 station
3  name3 2013 station

What I want is to keep only one line for each group of lines whose trailing fields repeat:

0  name0 2011 station
1  name1 2012 station
3  name3 2013 station

I used the command uniq:
cat file | uniq -f 2 
The -f 2 option makes uniq skip the first 2 fields and compare only the remaining fields; the first line of each group of adjacent duplicates is kept.

There are also other interesting options of uniq:
-w N: restricts the comparison to the first N characters of each line.

-s N: skips the first N characters of each line before comparing.
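
A quick sketch of both options (the file name here is just a placeholder):

# compare only the first 6 characters of each line
uniq -w 6 file

# ignore the first 3 characters of each line before comparing
uniq -s 3 file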



jeudi 29 août 2013

mount a new partition using fstab

I cleaned up old partitions on my Ubuntu 12.04 system using GParted.
1) for each useless partition: I chose 'delete' in the GParted interface,
   then clicked 'apply' to carry out the step. This gives unallocated space.
2) then I allocated space for the new partitions.
    In my case, I gave 4 GB to a swap (extended, linux-swap format) and 200 GB to a new partition (primary, ext4 format). This produces "new partition #1" and "new partition #2".
    Then click on "apply" (the OK symbol); the new partitions are created, and GParted names them sda2 and sda3.

/dev/sda3 holds a file system that I want mounted automatically at every boot, so I need to modify /etc/fstab.
1) cp /etc/fstab  /etc/fstab.backup
2) to list your devices by UUID, use blkid:  sudo blkid
    note the UUID of /dev/sda3
3) sudo emacs /etc/fstab:
UUID=myUUID   /media/data    ext4    users,defaults  0    2
(https://help.ubuntu.com/community/Fstab for more information)
/media/data is the mount point of /dev/sda3.
users,defaults are the mount options.
4) restart the computer and type: df
the file system on /dev/sda3 is now mounted. In this example root has read/write permission and other users have limited rights.
5) to give a particular user read/write permission:
for filesystems without Unix permissions (vfat, ntfs) this can be done in fstab by adding uid=1000,gid=1000 (if these are the uid and gid of the user); for an ext4 filesystem like this one, change the ownership of the mount point with chown instead.
6) to give all the current users write permission:
it seems one needs to create a group, add the current users to that group, and then give this group read/write permission on the filesystem, as sketched below.
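
A minimal sketch of the group approach (the group name "data" and the user names are only examples):

sudo groupadd data                    # create a new group
sudo usermod -aG data username1       # add each user to the group
sudo usermod -aG data username2
sudo chown -R root:data /media/data   # give the group ownership of the mount point
sudo chmod -R 775 /media/data         # group members get read/write, others read-only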


Go to recovery mode to change the password of a user

If the password of a user (especially an administrator) is forgotten, this is what I do. My system is Ubuntu 12.04.

1) start computer
2) press the left Shift key repeatedly, even before the message about pressing ESC to enter the BIOS appears, so that the GRUB menu shows up
3) choose the recovery mode of my Linux system
4) several choices then appear ("resume", ..., "root"); choose "root"
5) type: mount -o remount,rw /
to remount the root filesystem with read and write permission
6) type: passwd myusername
then enter the new password twice
7) type: exit
8) the menu of step 4) appears again; choose "resume"
and everything is back to normal.

jeudi 8 août 2013

ncftp and lftp

For downloading directories/subdirectories of files from a remote server, I tried:
1) ncftp remote_server
    get -R directory
   
     But I got the error message "Could not traverse directory: could not parse extended file or directory information." Perhaps this is because the -R option only works if the remote server is a Unix machine.

 2) Then I tried lftp for complex file transfers and it works well.
    lftp   -e  'mirror /pub/data/npt   . '     ftp://example.ftp.site
    -e: runs the given command (here mirror, which downloads the whole directory tree)
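
If the transfer is interrupted, mirror can usually resume it with the -c (continue) option; a sketch (check man lftp, as options vary between versions):
    lftp -e 'mirror -c /pub/data/npt . ; quit' ftp://example.ftp.site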

lundi 20 mai 2013

reinstall Kubuntu 12.04

I had Kubuntu 9.10 installed on my computer for a long time. Recently I wanted to update to a newer version, but it seems it is not possible to upgrade directly from 9.10 to 12.04, so I decided to reinstall the system. I selected Kubuntu 12.04 because it is a stable LTS release, unlike 12.10, and is supported for 5 years.

A summary of my computer and the list of partitions:
0) I have a 1 TB disk in total
1) I have 300 GB of data files in /home/, which is in the partition /dev/sda1
2) I have 8 GB for swap
3) the current /dev/sda1 contains the root (/), boot (/boot) and home (/home); /home is in the same partition as the root. sda1 takes 800 GB, but only 300 GB are used.

Before doing anything, I backed up everything in the /home directory to an external hard drive.

Then I used the partitioning tool in my existing Linux system to shrink sda1 from 800 GB to 400 GB.

Then I burned the Kubuntu 12.04 image to a write-once DVD, which is cheap. I tried a USB installation, but it was not easy and did not work well.

I decided to install Kubuntu in a new partition rather than in sda1, so that later I can copy all the old home files to the new partition/installation. Afterwards I can remove the old partition and merge its space with the new one.

I decided to delete the old swap and then make 3 new partitions:
1) /:  20G
2) swap: 8G
3) /home: the rest unused space

Then I started the installation by inserting the DVD into the computer.
1) I keep the old partition sda1, but do not use it
2) create a new primary partition sda2 of type ext4 for the root (/), because Kubuntu 12.04 recommends ext4. I choose to format this partition.
3) create a new primary partition sda4 of type ext4 for /home, so that if I need to update/reinstall the system next time, I will not need to touch /home. I choose to format this partition.
4) create a new logical partition sda5 for swap
5) choose /dev/sda as the location for the boot loader
6) then click on 'install'
7) choose the language, timezone and keyboard layout.

About 15 minutes later the installation was finished. I restarted the system and created the same user names as before.

When I log in, I can see in the file system:
1) 500 G for the new root and home using Kubuntu 12.04
2) old installation 400 G
3) swap

Now I copy the old home files to the new home directory.
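
A sketch of this copy step, assuming the old partition is mounted at /media/old (rsync -a preserves permissions, ownership and timestamps):

sudo mkdir -p /media/old
sudo mount /dev/sda1 /media/old
sudo rsync -a /media/old/home/username/ /home/username/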

After copying all the old home files, I decided to reformat the old sda1 partition and merge it with the new one (at the same time leaving a few GB free as an extended partition).

samedi 2 février 2013

terminal prompt directory length


  • PS1: The default prompt you see when you open a shell
    Its value is stored in an environment variable called PS1. To see its value, type
    echo $PS1
    This will give you something like
    \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$
    
    To change it, you can set a new value for the variable:
    export PS1="\u > "
    
    This will result in a prompt like this:
    stefano > 
    
  • PS2 is your secondary prompt. It is shown when a command is not finished. Type echo "asd and hit enter; the secondary prompt will let you enter more lines until you close the quotation marks.
  • PS3 is the prompt used by the select loop in shell scripts
  • PS4 is the prompt used for execution traces (set -x); the default is "+"
To make the changes permanent, append them to the end of .bash_profile (or .bashrc) in your home directory.
Here's a more or less complete list of shorthand that you can use when composing these:
  • \a     The 'bell' character
  • \A     24h Time
  • \d     Date (e.g. Tue Dec 21)
  • \e     The 'escape' character
  • \h     Hostname (up to the first ".")
  • \H     Hostname
  • \j     Number of jobs currently managed by the shell
  • \l     Current tty
  • \n     Line feed
  • \t     Time (hh:mm:ss)
  • \T     Time (hh:mm:ss, 12h format)
  • \r     Carriage return
  • \s     Shell (i.e. bash, zsh, ksh..)
  • \u     Username
  • \v     Bash version
  • \V     Full Bash release string
  • \w     Current working directory
  • \W     Last part of the current working directory
  • \!     Current index in history
  • \#     Command index
  • \$     A "#" if you're root, else "$"
  • \\     Literal Backslash
  • \@     Time (12h format with am/pm)
You can of course insert any literal string, and any command:
export PS1="\u \$(pwd) > "
Where $(pwd) stands in place of "the output of" pwd.
  • If the command substitution is escaped, as in \$(pwd), it's evaluated every time the prompt is displayed, otherwise, as in $(pwd), it's only evaluated once when bash is started.
If you want your prompt to feature colours, you can use bash's colour codes to do it. The code consists of three parts:
40;33;01
  • The first part before the semicolon represents the text style.
    • 00=none
    • 01=bold
    • 04=underscore
    • 05=blink
    • 07=reverse
    • 08=concealed
  • The second and third parts are the foreground and the background colour (30-37 select the foreground; add 10, i.e. 40-47, for the corresponding background):
    • 30=black
    • 31=red
    • 32=green
    • 33=yellow
    • 34=blue
    • 35=magenta
    • 36=cyan
    • 37=white
Parts can be omitted, e.g. "1" means bold and "1;31" means bold and red. You get the terminal to print in colour by starting the instruction with \33[ and ending it with an m. 33 (octal), or 1B in hexadecimal, is the ASCII "escape" character (a special character in the ASCII character set). Example:
"\33[1;31mHello World\33[m"
Prints "Hello World" in bright red.
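
For instance, a prompt with the username in bold green and the working directory in bold blue could be set like this (the \[ \] brackets tell bash that the colour codes take up no space on screen; \033 is the same escape character written with three octal digits):

export PS1='\[\033[1;32m\]\u@\h\[\033[m\]:\[\033[1;34m\]\w\[\033[m\]\$ '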

mercredi 30 janvier 2013

grep multiple-lines with keywords and the memory problem

1) In order to get several lines before or after a matching keyword, grep with the -A and -B options does the job: -A for lines after the match and -B for lines before it.

For example, to print the 5 lines before each line containing the keyword and the 20 lines after it:

set keyword = "Distance"
grep -B 5  -A 20   $keyword    filename

2) if we use grep for matching/processing files, sometimes it uses a lot of memory. This is especially true when grepping over a large disk containing a lot of files. On the Debian website I found some useful information:


grep uses a DFA algorithm to perform regexp matching. This DFA algorithm
is either implemented in grep, or in the libc (when re_search is used).
Which DFA algorithm is used depends on the version of grep and on the grep
options.

The DFA algorithm is a state machine and each time it is used, the
automaton which represents the regexp may use more memory because a new
transition is investigated.
The memory allocated for the automaton is not freed after each line is
parsed, but it is kept so that if a transition path is also used in a
later line, the processing will be faster. Thus more and more memory will be used.

To reduce the memory usage, we can use the -F option, which matches fixed strings instead of regular expressions.
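
For the example above the keyword is a plain string anyway, so -F can simply be added:

grep -F -B 5 -A 20 "Distance" filename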

lundi 5 novembre 2012

list/find/scp only files or only directories

----- to find only the files, not any directory:
ls -p | grep -v /

----- to copy only the files in the current path to a remote server, without copying any directory
set files =  `ls -p | grep -v / `
scp $files user@servername:path2copy/

----- to find only directories, not any file
ls -p | grep /
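
An alternative sketch with GNU find, which avoids parsing the output of ls:

find . -mindepth 1 -maxdepth 1 -type f     # only the files in the current directory
find . -mindepth 1 -maxdepth 1 -type d     # only the directories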



vendredi 25 mai 2012

paste: concatenate columns to form a new file

If file1:
0.0066
0.0357
0.0475
0.0609
0.0370
0.0443

file2:
0.0181
0.0176
0.0155
0.0152
0.0131
0.0137

Then, with the Linux command "paste file1 file2", we can concatenate the two columns as follows:

0.0066 0.0181
0.0357 0.0176
0.0475 0.0155
0.0609 0.0152
0.0370 0.0131
0.0443 0.0137

By default the delimiter is a tab. If we want to define our own delimiter, we can use -d:

paste -d " " file1 file2 

This will use a single space as the delimiter between the two columns.


If the two files each contain several columns, it seems to me that paste has difficulty merging them with clean alignment. In one case I had two files with 3 columns each. What finally worked well was:

pr  -t  -m   file1  file2

lundi 2 avril 2012

comm: compare two sorted text files

man comm:
  
Compare sorted files FILE1 and FILE2 line by line.

       With  no  options,  produce  three-column output.  Column one contains lines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files.

       -1     suppress lines unique to FILE1

       -2     suppress lines unique to FILE2

       -3     suppress lines that appear in both files
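
For example, to print only the lines common to both files (suppressing columns 1 and 2), with both files already sorted:

comm -12 file1 file2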

jeudi 15 mars 2012

test if a string contains a substring

1)
set exist_substring = `echo "$string" | grep -c "$substring"`
if ( $exist_substring == 1 ) then
      echo "$substring exists in the $string"
endif



2) --- to test if "str" exists in $mystring:
echo $mystring | awk '{print index($0, "str")}'
This searches the value of $mystring for an occurrence of "str".

index() returns the position of the first occurrence (a value >= 1) if "str" is found, otherwise 0.
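
Put together in csh, a small sketch:

set pos = `echo $mystring | awk '{print index($0, "str")}'`
if ( $pos > 0 ) then
    echo "str found at position $pos"
endif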

mardi 13 mars 2012

copy old files to a new dir

1)  The idea of the script:
We would like to find the files modified some days ago and move them to another server:
---- find the files and directories modified between $3 days ago and $4 days ago
---- copy them to the new server
---- remove the found files from the old server


2) Points to pay attention to
   a) to find the files/directories and remove them:
       to remove directories
      find . -type d -mtime $i -exec rm -rf {} \;
      to remove files
      find . -type f -mtime $i -exec rm -f {} \;
      The trailing \; is required to terminate the -exec clause.

   b) in csh, if we define a variable by:
       set foundfiles = `find . -type d -mtime $i`
       this variable $foundfiles may contain a word list (an array).
   c) In csh, it does not work to define a new variable by:
       set b = $foundfiles
      this will not produce the same string as $foundfiles !! (copying it as a list with set b = ($foundfiles) does work)
    d) The following does not work in csh:
       if ( $foundfiles != "") then
          something
       endif
     It produces an "if: Expression Syntax" error, because the test has difficulty dealing with a variable that holds a word list (see http://www.grymoire.com/Unix/Csh.html for some insights).
     e) What works is:
      if ( `find . -type d -mtime $i` != "") then
        something
      endif


3) Here is the complete code that I use to copy old files from one server to another.
#!/bin/csh
# Input:
# ---1) the directory where we want to find the folders  in the old server
# ---2) the directory where we want to move the files in the new server
# ---3) the initial date xx when the files are modified xx days ago
# ---4) the final date xx when the files are modified xx days ago
#

set oldserverpath = $1
set newserverpath = $2
set firstdaylimit = $3
set lastdaylimit = $4
#------------------------------------------------
#set newserverpath = /home/user/work/gin/eqna
if ( ! -e $oldserverpath) then
echo "The old directory is not found. Exit."
exit 1
endif
#------------------------------------------------
cd $oldserverpath
# copy folders
@ i = $firstdaylimit
while ( $i  <= $lastdaylimit )
set foundfiles = `find . -type d -mtime $i`
if ( `find . -type d -mtime $i` == "" ) then
echo "No directories found from $i days ago. Continue."
else if ( `find . -type d -mtime $i` != "" ) then 
echo "copy the directories found from $i days ago to gsat"
scp -r $foundfiles "user@gsat:"$newserverpath
find . -type d -mtime $i -exec rm -rf {} \;
endif
@ i += 1
end
# copy files 
@ i = $firstdaylimit
while ( $i  <= $lastdaylimit )
set foundfiles = `find . -type f -mtime $i`
if ( `find . -type f -mtime $i` == "" ) then
echo "No files found from $i days ago. Continue."
else if ( `find . -type f -mtime $i` != "" ) then 
echo "copy the files found from $i days ago to gsat"
scp  $foundfiles "user@gsat:"$newserverpath
find . -type f -mtime $i -exec rm -f {} \;
endif
@ i += 1
end






add in the end of line, sed

1) in sed, "^" represents the beginning of a line, "$" the end of a line, and "^$" matches a blank line

2) if we want to add a string at the end of each line of a text file and save the result to a new file:


#!/bin/csh
# find each file starting with "input" and apply the changes
foreach i ( `ls input*` )
    # get the 6th underscore-separated field of the filename (the last one for these filenames)
    set append = `echo $i | awk '{split($0,a, "_");print a[6]}' `     


    # add _elim_1fois at the end of each line and save it to a new file
    sed    's/$/_elim_1fois/'   $i  >> $i"_elim_1fois_"$append 
   
    sed   's/$/_elim_2fois/'    $i  >> $i"_elim_2fois_"$append


end
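
If the goal is just to modify a file in place instead of writing a new one, GNU sed's -i option can be used, e.g.:

sed -i 's/$/_elim_1fois/' input_file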

mercredi 7 mars 2012

some useful sites about gnuplot

A collection of useful websites about gnuplot.....
1) http://www.gnuplot.info/demo_canvas/
provides many demonstrations made with the latest gnuplot version

2) http://physicspmb.ukzn.ac.za/index.php/Gnuplot_tutorial
provides a simple tutorial for getting started

3) http://t16web.lanl.gov/Kawano/gnuplot/intro/basic-e.html
also provides some examples, but for version 4.0

4) http://www.phyast.pitt.edu/~zov1/gnuplot/html/index.html

plot in gnuplot4.4

Here are some often-used commands of gnuplot ........


0) help set xlabel
to get help

1) plot "datafile1"  using  index1_of_column_to_be_plotted  t  "titlename", "datafile2"  using  index2_of_column_to_be_plotted  t  "titlename",  ""  using  index3_of_column_to_be_plotted  t  "titlename"

This plots the column index1_of_column_to_be_plotted from datafile1 (where "t" is short for title), then the column index2_of_column_to_be_plotted from datafile2, then the column index3_of_column_to_be_plotted from datafile2 again (an empty filename "" reuses the previous data file).

2) set xtics Integer
where Integer sets the spacing between tics along the x axis

3) set xrange [Int1: Int2]

4) set title "string"

5) set xlabel "string"

6) set terminal jpeg medium
to save the figures in jpeg format (with a medium-size font)

7) set output "filename"
to set the name of the output file
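
Putting a few of these together, a minimal plot script (say myplot.plt; the data and output file names are only placeholders) could be:

set terminal jpeg medium
set output "figure.jpg"
set title "Example"
set xlabel "x"
plot "data.txt" using 2 t "column 2"

and it can be run from the shell with:  gnuplot myplot.plt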


A simple summary from http://physicspmb.ukzn.ac.za/index.php/Gnuplot_tutorial is copied below:

Make a title:                          set title 'Graph of velocity versus time'
Label the x axis:                      set xlabel 'Time(s)'
Label the y axis:                      set ylabel 'Velocity(m/s^2)'
Adjust the tick marks:                 set xtics 0.1
                                       set ytics 0.1
Adjust the number of minor tick marks: set mxtics 4
Move the key:                          set key bottom left
                                       set key top right
Change the key text:                   plot sin(x) title "new text"
Remove the key:                        unset key
Set the plot aspect ratio:             set size square
A wide plot:                           set size ratio 0.25
A narrow tall plot:                    set size ratio 2
Adjust the xrange:                     set xrange [-3:5]
Adjust the lower xrange, upper auto:   set xrange [-3:*]
Adjust the upper xrange, lower auto:   set xrange [*:5]
Use a logarithmic x axis:              set logscale x
Use a logarithmic y axis:              set logscale y
Draw a grid on major tics:             set grid y

install or update software on a server

I work on a server and use gnuplot for making figures. I found that the installed gnuplot does not support some newer functions.

First I checked the version of gnuplot on the server.
1) which gnuplot
/usr/bin/gnuplot
2) /usr/bin/gnuplot --version
4.0
 this is an old version and I want to install a newer one. Therefore I went to sourceforge, http://sourceforge.net/projects/gnuplot/files/gnuplot/4.4.0/, to download version 4.4, and put the downloaded file in a temporary directory in my home on the server. Then I installed it on the server as follows.

3) configure it:
     ./configure --prefix=$HOME/software
Because I do not have permission to install software in /usr/local, I use --prefix=$HOME/software to install it in this path.
4) build it:
     make
5) test it:
     make check
6) install it:
     make install

7) finally I add an alias in the .bashrc:
alias gnuplot4="/home/username/software/bin/gnuplot"
and in the .tcshrc (and the .tcshrc_profile):
alias gnuplot4 /home/username/software/bin/gnuplot
I also add GNUPLOT_PS_DIR in the .tcshrc (with setenv) and in the .bashrc (with export):
setenv GNUPLOT_PS_DIR /home/username/software/


8) to run a gnuplot script, I now call:
gnuplot4    histogram.plt
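
For reference, a sketch of the whole build sequence in one place (assuming the downloaded tarball is named gnuplot-4.4.0.tar.gz):

tar xzf gnuplot-4.4.0.tar.gz
cd gnuplot-4.4.0
./configure --prefix=$HOME/software
make
make check
make install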




mardi 6 mars 2012

define the path and name of a file with double quotes?

1) In csh, it does not work if we try to save the file tt from the current directory into a new directory ~/dire1/dire2/ under the new name "~/dire1/dire2/aa_temp" as below:

set appendix_name = temp
cat tt  > "~/dire1/dire2/aa_"$appendix_name  (does not work)

csh cannot find the file ~/dire1/dire2/aa_temp (No such file or directory).
The same thing happens if we:

ls "~/dir1/dir2"           (Not work)

2) It works if we remove the double quotes:
cat tt   >  ~/dire1/dire2/aa_temp   (works!)
ls ~/dir1/dir2

3) it also works if we give the complete path inside double quotes:
cat tt >   "/home/user/dire1/dire2/aa_temp"        (works!)
ls   "/home/user/dir1/dir2"




The double quotes define a literal string, and inside double quotes csh does not perform tilde (~) expansion, so "~/dire1/dire2/aa_temp" is passed on literally instead of being expanded to a path under the home directory, and no such path exists. With the full path spelled out there is nothing to expand, so the quotes do no harm.
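
If the quotes are needed, one workaround is to use $HOME instead of ~, since variable substitution does happen inside double quotes:

cat tt > "$HOME/dire1/dire2/aa_"$appendix_name
ls "$HOME/dir1/dir2"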

awk: NF; NR

1) get the last column (field) of the results of grep
grep ss filename | awk '{print $NF}' 

2) get the second last column:
grep ss filename | awk '{print $(NF-1)}' 

3) get the number of columns (if the array is uniform):

grep ss filename |  awk 'END {print NF}'

4) get the number of lines (if the array is uniform):
grep ss filename |  awk 'END {print NR}'

5) get the second last line (awk has to remember the previous line, since in the END block $(NR-1) would only refer to a field of the last line):
grep ss filename |  awk '{prev=last; last=$0} END {print prev}'

6) for an array in text file percentage_data, if we want to get the mean of each column:

# get the number of field in the array
set number_field = ` awk 'END{print NF}' percentage_data ` 

# for each column, find the mean and output the means in a new file 
# averaged_percentage_MII_MI

@ j = 1
while ( $j <= $number_field)
    set mean_percentage_temp = `awk -v fd=$j '{print $fd}' percentage_data | awk '{sum+=$1}END{print sum/NR}'`
    echo $mean_percentage_temp " " >> averaged_percentage_MII_MI

    @ j += 1

end





vendredi 2 mars 2012

"at" for schedule a job to be launched

1) at   now   +20 hours -f   lance_la_1year_elimobs
to launch the script 20 hours from now

2) at 2am  tomorrow -f lance_la_1year_elimobs
to launch the script at 2am tomorrow


3) atq
to list the scheduled jobs


4) atrm jobID
to remove a scheduled job
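
Instead of putting the commands in a file for -f, a command can also be piped to at (the command here is just a made-up example):

echo "cp results.tar.gz /backup/" | at 2am tomorrow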


http://www.techrepublic.com/blog/opensource/one-time-scheduling-of-tasks-with-at/260

http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/

numeric calculation with shell or awk

1) For processing two variables, each holding one number

Shell languages are usually not used for complex scientific calculation, but sometimes we may use them for simple calculations with text files. What can we do?

set  a = 1.25E-02
set b = 3.289E-02

There are two ways to do numeric calculations with these two variables in the shell: either echo + bc, or awk.
For example, to do the addition:
1) echo "$a+$b" | /usr/bin/bc 
2) echo "$a $b" | awk '{print $1+$2}'

For comparisons:
1) echo "$a > 0 && $b > 0" | /usr/bin/bc
2) echo "$a  $b" | awk '{if ($1 > 0 && $2 > 0) print 1; else print 0}'
3) echo "$a" | awk '{if ($1 > 1 || $1 < -1) print 1; else print 0}'

If we have to do such computations in the shell with numbers in scientific notation (e.g. 1.236778E-02), awk is the better choice: bc does not understand the E notation, while awk handles it natively.


2) For processing data from different files
---- if we want to subtract two columns of data that come from two files, file1 and file2 respectively:
cat file1   | awk '{column1=$which_column_in_file1; getline <"file2"; print column1 - $which_column_in_file2}'
(after getline <"file2", the field variables refer to the current line of file2)
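
A concrete sketch with real column numbers (the file names are placeholders): subtract column 1 of file2 from column 1 of file1:

awk '{a=$1; getline <"file2"; print a - $1}' file1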