Showing posts with label bash. Show all posts

2008-06-29

Testing environments

My development/test environments vary from plain Bash commands to whole sets of virtual machines.

Virtual machines can be just a copy of the live system, but then the interesting part is how to forward traffic (HTTP requests, mail messages) to them. Bash commands are a flexible tool for feeding the tested environment with all kinds of strings, connections and so on.

In my daily testing work I try to pick the proper tools and mimic the live environment's behaviour on my testing "machines". The most time-consuming task is to redirect "events" to my dev environment and to collect the results without mixing them with other events. For example, instead of a live log I can cat historical logs - fast and very similar to the production environment. I can forward reactions to my mailbox, but with a subject line or another feature that makes them easy to distinguish and delete.
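For example, replaying a historical log with a small delay between lines looks, to the tested tool, almost like live traffic. A minimal sketch (file names are placeholders; the demo input stands in for a copy of a production log):

```shell
#!/bin/bash
# Sketch: replay a historical log, line by line with a small delay,
# into the file the tested tool watches. Paths are placeholders.
HISTLOG=historical.log
TESTLOG=replayed.log

# demo input - in real use this is a copy of a production log
printf 'jun 29 host postfix: msg one\njun 29 host postfix: msg two\n' > "$HISTLOG"

: > "$TESTLOG"
while IFS= read -r line
do
    printf '%s\n' "$line" >> "$TESTLOG"
    sleep 0.1          # pace the replay so it resembles live traffic
done < "$HISTLOG"
```

The tested tool then tails $TESTLOG exactly as it would tail the live log.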

Testing SEC (Simple Event Correlator)



Let's assume we are testing mail bombing. SEC is prepared to generate a context with SingleWithThreshold (for an introduction, please visit Jim Brown's Working with SEC). We edit /etc/sec/mail_guard.sec and then start SEC without daemonizing:

sec -input=/var/log/syslog -conf=/etc/sec/mail_guard.sec


The mail log is usually in /var/log/mail.log, but in our case all logs from all servers are forwarded to a separate server, a log collector.

Now I have to feed my SEC. Actually, the testing conditions should be created BEFORE the environment we want to test, to ensure we focus on the final functionality and not on our hopes that it won't break ;-)

We use Bash to generate some mail traffic:

COUNTER=200
while [[ $COUNTER -gt 0 ]]
do
let COUNTER=COUNTER-1
tail -1 /var/log/mail.log | mail -s "It's not SPAM" destination@domain.com
done


Testing netcat


Another example: we are going to test a listener bound to a port, waiting for a command. Let's establish a server:

nc -l -p 3333 -s 127.0.0.1


How to test it? (here is a not-too-polite version ;-))

cat /var/log/mail.log | nc localhost 3333


Now we test triggering upon a particular string:

nc -l -p 3333 -s 127.0.0.1 | while read STRING
do
if [[ $STRING =~ "mx" ]]
then
echo "rm -rf / .... Please wait."
fi
done


Looks like ngrep :-)

2008-06-23

For BashWiki (if not there yet)

A Bash wiki with tips and tricks can be found at Wooledge.org


noneo:~$ man read
No manual entry for read

noneo:~$ help read

:-)

Other tricks:

for i in "$(< "$file")"; do echo $i; done

n=0; while :; do echo $n; let n=$n+1; done

sudo tail -f /var/log/auth.log | while read LINE
do
if [[ $LINE =~ "session opened" ]]
then
NEWUSER=`echo $LINE | awk '{ print $11; }'`
zenity --error --text="$NEWUSER logged in"
fi
done

: ${2?"Two parameters required. Please fix it."}
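The last trick is a one-line argument guard: expanding an unset parameter with ? aborts the (sub)shell and prints the message. A small demo (the check function is mine, just to show both cases):

```shell
#!/bin/bash
# Demo of the ${N?message} guard: expanding an unset positional
# parameter with ? aborts the (sub)shell and prints the message.
check() {
    : ${2?"Two parameters required. Please fix it."}
    echo "OK: $1 $2"
}
check one two                                     # both parameters present
( check only ) 2>/dev/null || echo "missing parameter detected"
```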


To Be Continued...

2008-06-09

Rotated backup

Years ago I 'invented' a wheel, I mean a rotated backup scheme. My idea was to store three types of copies:

  • Monthly, stored forever

  • Weekly, overwritten every month (a copy for the 1st, 2nd, 3rd, 4th and sometimes 5th Friday of the month)

  • Daily, overwritten every week (a copy for every weekday)


This scheme provides the 5 (7 including weekends) most recent copies, 5 weekly (Friday) copies covering the last month, and a copy from the beginning of every month. The advantage is that I preserve some space and still have quite a good stock of files from the past. The disadvantage is that I can't restore a file created 10 days ago and deleted 9 days ago (unless it was a Friday :-)).

Usage:
/usr/local/bin/rotated_backup.sh server directory
invoked from the backup server, assuming that the directory for copies is called /var/BACKUP. How it's moved onto a tape or DVD is another story.

If you prefer to follow symbolic links, add 'h' to tar command.


#!/bin/bash

. /usr/local/bin/password
LC_ALL=C
DATE=`date +%F`
DAYOFMONTH=`date +%d`
DAYOFMONTH=${DAYOFMONTH#0}
DAYOFWEEK=`date +%a`
let WEEKOFMONTH=$DAYOFMONTH/7+1
MONTH=`date +%m`

LOCALSTORAGE=/var/BACKUP
SERVER=$1
DIRTOBACKUP=$2
DIRNAME=${DIRTOBACKUP##*/}
PARENTDIR=${DIRTOBACKUP%/*}

# Don't write an underscore if it's a root subdirectory
# (PARENTDIR is /, means empty)
if [[ "$PARENTDIR" != "" ]]
then
PARENTDIR="${PARENTDIR##*/}_"
else
PARENTDIR="root_"
fi

if [[ "$DIRNAME" != "" ]]
then
DIRNAME="${DIRNAME##*/}_"
fi

DAILYARCHNAME=${LOCALSTORAGE}/${SERVER}_${PARENTDIR}${DIRNAME}${DAYOFWEEK}.tgz
WEEKLYARCHNAME=${LOCALSTORAGE}/${SERVER}_${PARENTDIR}${DIRNAME}WEEK_${WEEKOFMONTH}.tgz
MONTHLYARCHNAME=${LOCALSTORAGE}/${SERVER}_${PARENTDIR}${DIRNAME}MONTH_${MONTH}_${DATE}.tgz

# Check argument list. Two are required.
: ${2?"Usage: $0 servername dir_to_backup"}

# Check the order of arguments (no slashes in the server name,
# at least one slash in the directory name)
if [[ "$SERVER" == *"/"* ]]
then
echo "The first argument is a server and can't contain slashes."
exit 1
fi

if [[ "$DIRTOBACKUP" != "/"* ]]
then
echo "The second argument is a directory and must contain at least one slash, the leading one."
exit 1
fi

# ==== FUNCTIONS ====
function getbackup {
# Backup
if [[ -e $1 ]]
then
mv $1 ${1}_old
fi
# PASS is in /usr/local/bin/password
# sourced at the beginning of the script
ssh $SERVER tar zc $DIRTOBACKUP | openssl des3 -salt -k $PASS > $1

# Error checking
if [[ -e $1 ]]
then
if [[ -s $1 ]]
then
echo "Back up is OK (${1})"
rm -f ${1}_old
else
echo "Back up failed. File is empty (${1})"
[[ -e ${1}_old ]] && mv ${1}_old $1
exit 2
fi
else
echo "Backup failed. File not created (${1})"
[[ -e ${1}_old ]] && mv ${1}_old $1
exit 3
fi
}

# ==== EXECUTION ====
if [ "$DAYOFMONTH" == "1" ]; then # leading zero was stripped above
echo "Monthly backup"
getbackup $MONTHLYARCHNAME
elif [ "$DAYOFWEEK" == "Fri" ]; then
echo "Weekly backup"
getbackup $WEEKLYARCHNAME
else
echo "Daily backup"
getbackup $DAILYARCHNAME
fi

exit 0


Incremental backup


If you have a really huge amount of data to copy, it's good to shorten the daily backup and copy only changed files. The technique has some limits, as it relies on timestamps, but it is quite effective.


ssh $SERVER tar zc --listed-incremental=/var/log/daily.snar $DIRTOBACKUP | openssl des3 -salt -k $PASS > $1


If you use this scheme, the backup function for daily and other types will differ:

  • Daily backup: take /var/log/daily.snar and start copying

  • Weekly backup: delete /var/log/daily.snar and start with a new one (necessary for the Monday backup)

  • Monthly backup: simply omit this parameter. The next daily backup performs as usual - incremental against the last one.
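Following the list above, the choice of .snar handling can be factored into a small helper. A sketch (the function name and the commented usage line are mine; the path comes from the example above):

```shell
#!/bin/bash
# Pick the tar incremental option for a given backup type,
# following the three rules above. snar_option is an illustrative name.
SNAR=/var/log/daily.snar

snar_option() {
    case $1 in
        daily)   echo "--listed-incremental=$SNAR" ;;  # reuse the snapshot
        weekly)  rm -f "$SNAR"                         # start a fresh snapshot
                 echo "--listed-incremental=$SNAR" ;;
        monthly) : ;;                                  # full backup, no snapshot
    esac
}
# usage in getbackup would then be roughly:
#   ssh $SERVER tar zc $(snar_option daily) $DIRTOBACKUP | openssl des3 -salt -k $PASS > $1
```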






If you have another idea about good backup schemes, or you just felt inspired to write something, leave a comment, please.

(Figure: virtual tapes for every daily, weekly and monthly copy)

(Figure: timeline of the rotated backup)
Have you any impressions about it? Do something! :-)
Leave a comment or write me an e-mail, please.

2008-06-05

Nightly commits /etc into SVN

A simple script committing changes in /etc/postfix and /etc/network into an SVN repository. It can be scheduled every night - it checks for changes and doesn't try to commit if the system is untouched.

/etc -> SVN



#!/bin/bash

export LC_ALL=C
export EDITOR=/usr/bin/vim
tmpconf=/var/tmp/conf2svn
root=/etc

for srv in serverA serverB serverC
do
  for dir in network postfix
  do
    rm -rf ${tmpconf}/${srv} > /dev/null 2>&1
    mkdir -p ${tmpconf}/${srv} > /dev/null 2>&1
    cd ${tmpconf}/${srv}
    svn checkout --quiet svn://svn.mydomain.com/repos/conf/${srv}/${dir} ./${dir}
    scp -q -r ${srv}:${root}/${dir} .
    cd ${tmpconf}/${srv}/${dir}
    svn diff | grep "" > /dev/null
    if [[ $? -eq 0 ]]
    then
      svn commit -m "Committing last changes in live servers' configuration (/etc)."
    fi
  done
done

rm -rf $tmpconf > /dev/null 2>&1


If you import /etc/something into SVN, delete /etc/something (be careful) and check it out back into /etc, you won't need to make a temporary working copy (SVN workspace) in /var/tmp/conf2svn.

The script can't add new files; it only commits changes in existing ones.
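To pick up new files as well, something like this before the commit should work. A sketch: the `svn status` output is simulated with printf so the snippet is self-contained; in the real script replace the printf with `svn status`, and the echo with the actual `svn add`:

```shell
#!/bin/bash
# Files unknown to Subversion show up as '?' lines in `svn status`.
# Collect them and `svn add` each one before committing.
# The status output is simulated here (printf) to keep the sketch runnable.
NEWFILES=$(printf '?       new.cf\nM       main.cf\n' | awk '/^\?/ { print $2 }')

for f in $NEWFILES
do
    echo "svn add $f"      # in the real script: svn add "$f"
done
```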

MySQL schemas -> SVN



#!/bin/bash

export LC_ALL=C
export EDITOR=/usr/bin/vim
tmpconf=/var/tmp/schema2svn

rm -rf $tmpconf > /dev/null 2>&1
mkdir $tmpconf > /dev/null 2>&1

cd $tmpconf
svn checkout --quiet svn://svn.mydomain.com/repos/schemas .

for srv in dbA dbB
do
  mysqldump --host=$srv --password=nkjn39c134 --no-data --all-databases > ${tmpconf}/${srv}_schema_dump.txt
done

for file in *txt
do
  grep -v -E -- '(^--|ENGINE=MyISAM AUTO_INCREMENT)' $file > ${file}.new
  mv ${file}.new $file
done

svn diff | grep "" > /dev/null
if [[ $? -eq 0 ]]
then
  svn commit --quiet -m "Committing schema changes on live servers."
fi

rm -rf $tmpconf > /dev/null 2>&1


___
If you implemented your own schemas of backup/archiving, please let me know - send me an e-mail, drop a comment or meet me on IRC.

2008-06-02

Subversion + Tracd

Update 2008-08-07: In case of more than one svnserve daemon (how-to), the init script should contain more variables: project name, listening port, and an additional option to start-stop-daemon (the PID file). The last one is necessary, because without it start-stop-daemon behaves like killall.

Subversion server starting script


There are 3 main ways to access SVN repository (book):

  1. file system (file:///)

  2. svnserve server (svn:// on port 3690)

  3. regular Web server e.g. Apache2 (http://)


The following script (based on this one) starts the svnserve server (book) on a custom port, listening on IP 127.0.0.1 (so as not to expose the code to the external network). To publish the repository, remove the --listen-host directive, which binds the server to a non-local IP. Additionally, you can restrict access to the code at the Trac level (regardless of svnserve's IP): if you bind svnserve to 127.0.0.1, the code will still be visible to everybody with access to Tracd (the Trac standalone server). If you wish to limit access to the code at the Tracd level, you can

  1. bind Tracd to 127.0.0.1 or

  2. remove access to source for 'anonymous' user from Tracd (using trac-admin or web interface after installing TracWebAdmin)



The second option is much more flexible and gives you the ability to grant more granular access.

The code below is written with some assumptions:

  1. Home directories for Subversion and Trac are /home/subversion and /home/trac respectively

  2. The repository is in /home/subversion/docs, created by svnadmin create /home/subversion/docs

  3. Authentication is set up: .../repo/conf/passwd and .../repo/conf/svnserve.conf are updated

  4. We bind svnserve to localhost (127.0.0.1), but we expose the code through Trac, hence...

  5. the access rules are defined in trac-admin



To make the system completely private, add --hostname=$TRACD_HOST to the start-stop-daemon --start --chuid $TRAC_USER:$TRAC_GROUP line in the script /etc/init.d/tracd. The variable TRACD_HOST is defined in /etc/default/tracd as 127.0.0.1.

/etc/init.d/svnserve



#!/bin/bash
#
# svnserve - brings up the svn server so anonymous users
# can access svn
#

# Get LSB functions
. /lib/lsb/init-functions
. /etc/default/rcS

SVNSERVE=/usr/bin/svnserve
SVN_USER=subversion
SVN_GROUP=users
PROJECT=docs2
SVN_REPO_PATH=/home/${SVN_USER}/${PROJECT}
SVNSERVE_PORT=3700 # standard port is 3690
# Check that the package is still installed
[ -x $SVNSERVE ] || exit 0;
[ -d $SVN_REPO_PATH ] || exit 0;

case $1 in
start)
log_begin_msg "Starting svnserve..."
if start-stop-daemon --start --chuid $SVN_USER:$SVN_GROUP --umask 002 --exec $SVNSERVE -- -d --listen-host 127.0.0.1 --listen-port=$SVNSERVE_PORT --root $SVN_REPO_PATH --pid-file=$SVN_REPO_PATH/svnserve.pid
then
log_end_msg 0
else
log_end_msg $?
fi
;;

stop)
log_begin_msg "Stopping svnserve..."
if start-stop-daemon --stop --pidfile=$SVN_REPO_PATH/svnserve.pid --exec $SVNSERVE --retry 2
then
log_end_msg 0
else
log_end_msg $?
fi
;;

restart|force-reload)
$0 stop && $0 start
;;

*)
echo "Usage: /etc/init.d/svnserve {start|stop|restart|force-reload}"
exit 1
;;

esac

exit 0


Trac standalone web server starting scripts


Trac authentication


htdigest -c /home/trac/docs/conf/users.htdigest \
Docs yourusername


WebAdmin plugin



export PYTHONPATH=/home/trac/docs/plugins
easy_install --install-dir=/home/trac/docs/plugins \
http://svn.edgewall.com/repos/trac/sandbox/webadmin


/etc/default/tracd



TRACD=/usr/bin/tracd
TRACD_HOST=127.0.0.1
TRACD_PORT=8000
TRAC_USER=trac
TRAC_GROUP=users
TRAC_INITENV=docs
TRAC_PROJECT=docs
PROJECT_REALM=Docs
TRAC_HOME=/home/$TRAC_USER
TRAC_ENV=${TRAC_HOME}/$TRAC_INITENV
TRAC_PID=${TRAC_ENV}/tracd.pid


/etc/init.d/tracd



#!/bin/bash
#
# tracd - brings up the trac daemon
#

# Get LSB functions
. /lib/lsb/init-functions
. /etc/default/rcS
. /etc/default/tracd
# Check that the package is still installed
[ -x $TRACD ] || exit 0;
[ -d $TRAC_ENV ] || exit 0;

case $1 in
start)
log_begin_msg "Starting tracd..."
if start-stop-daemon --start --chuid $TRAC_USER:$TRAC_GROUP --chdir $TRAC_HOME --umask 002 --exec $TRACD -- --daemonize --pidfile=$TRAC_PID -p $TRACD_PORT -a ${TRAC_INITENV},${TRAC_ENV}/conf/users.htdigest,${PROJECT_REALM} ${TRAC_PROJECT}
then
  log_end_msg 0
else
  log_end_msg $?
fi
;;

stop)
log_begin_msg "Stopping tracd"
if start-stop-daemon --stop --pidfile=$TRAC_PID
then
  log_end_msg 0
else
  log_end_msg $?
fi
;;

restart|force-reload)
$0 stop && $0 start
;;

*)
echo "Usage: /etc/init.d/tracd {start|stop|restart|force-reload}"
exit 1
;;
esac

exit 0


This procedure gives you a Wiki ready for all sorts of notes and documentation, but Trac has one disadvantage - the site is flat. All pages are like sheets of paper lying on a table: until you link them, they are "unstructured". As a wiki for notes and documentation I can recommend TWiki.

2008-05-02

My custom MRTG collectors

The general idea of server statistics is that a collector (a server which gathers and stores values, and possibly does some kind of data mining) communicates with "clients". In a pure MRTG environment the collector asks every client for values using SNMP (Simple Network Management Protocol). Alternatively, a client can push a value over SNMP, or can expose a script and respond to the collector's requests in a much more customisable way. That is the case here. The collector in mrtg.cfg invokes commands like

ssh client /path/to/script.sh


That means there has to be an account with certificate-based authentication, to avoid password prompts. It's enough to copy id_rsa.pub from the collector to all clients and append it to ~/.ssh/authorized_keys.
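Setting that up is a one-off on the collector (a sketch, assuming OpenSSH defaults; the key file name and the client host are placeholders):

```shell
#!/bin/bash
# One-off setup on the collector: generate a passphrase-less key pair.
# The second line, shown commented because it needs a live client,
# appends the public key to the client's authorized_keys.
ssh-keygen -q -t rsa -N "" -f ./collector_key
# cat ./collector_key.pub | ssh client 'cat >> ~/.ssh/authorized_keys'
```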

Disk reads



#!/bin/sh
iostat | grep sda | awk '{ print $5; print $6; }'
uptime | awk -F, '{ print $1, $2; }'
hostname


HTTP requests



#!/bin/sh
LINES=`wc -l /var/log/apache2/access.log | awk '{ print $1; }'`
echo $LINES
echo $LINES
uptime | awk '{ gsub(/,/, ""); print $3, $4, $5; }'
hostname


Load (not CPU utilization)



#!/bin/sh
UPTIME=`uptime`
CURRENT=`echo "$UPTIME" | awk '{ a=NF-1; print $a*100; }'`
echo "$CURRENT"
echo "$CURRENT"
echo "$UPTIME" | awk '{ gsub(/,/, ""); print $3, $4, $5; }'
hostname


Note: some theoretical articles state that the best load indicator is the 15-minute load average. I use 5 minutes, but you can change it regardless of the MRTG update interval. Use the gauge keyword so that MRTG does not calculate a difference (as it does for network interfaces).

MySQL connections



#!/bin/bash

if [[ ! $1 ]]
then
echo "No arguments. Please provide a server name."
exit 1
fi

server=$1

if [[ $server == both ]]
then
SERVER1=`echo "SHOW STATUS LIKE 'connections%'" | \
mysql --host=server1 --password=dd13nr-314:-) | \
grep Connections | awk '{ print $2; }'`
SERVER2=`echo "SHOW STATUS LIKE 'connections%'" | \
mysql --host=server2 --password=dd13nr-314:-) | \
grep Connections | awk '{ print $2; }'`
echo "$SERVER1"
echo "$SERVER2"
uptime | awk '{ gsub(/,/, ""); print $3, $4; }'
echo "Server1 and Server2"
else
CURRENT=`echo "SHOW STATUS LIKE 'connections%'" | \
mysql --host=$server --password=dd13nr-314:-) | \
grep Connections | awk '{ print $2; }'`
echo "$CURRENT"
echo "$CURRENT"
uptime | awk '{ gsub(/,/, ""); print $3, $4, $5; }'
echo $server
fi


Mail and SPAM stats


A slightly modified version of Craig Sanders' mailstat:

mrtg-mailstats.pl


This script is called by MRTG:

Target[postfix]: `ssh mail_server /usr/local/bin/mrtg-mailstats.pl` / 5
Options[postfix]: gauge, growright
. . .




and


Target[spam]: `ssh mail_server /usr/local/bin/mrtg-spamstats.pl` / 5
Options[spam]: gauge, growright
. . .




The result is divided by 5 [minutes] because the script returns absolute values. The gauge option stops MRTG from calculating a per-second value, so we have to calculate the rate on our own. The result is in mail messages per minute.


#!/usr/bin/perl

$source = `hostname`;
chomp $source;

#$uptime = `uptime`;
#$uptime =~ s/^.*\s+up\s+//;
#$uptime =~ s/,\s+\d+\s+users,.*//;

$uptime = `uptime`;
chomp $uptime;
$uptime =~ s/.*up //;
$uptime =~ s/(\d),.*/$1/;

$datafile = "/var/lib/mrtg/mailstat.old";

open(OLD,"<$datafile");
while (<OLD>) {
chomp;
($key,$val) = split /=/;
$old{$key} = $val;
}
close(OLD);

$mailstats = "/usr/local/bin/mailstats.pl";
open(STAT,"$mailstats|") || die "couldn't open pipe to $mailstats: $!";
while (<STAT>) {
chomp;
($type,$count) = split;
($what,undef) = split(/:/, $type);
$new{$what} += $count;
}
close(STAT);

print $new{RECEIVED}-$old{RECEIVED},"\n",$new{SENT}-$old{SENT},"\n","$uptime\n$source\n" if ($old{RECEIVED});

# save old stats
open(OLD,">$datafile");
foreach (keys %new) {
print OLD "$_=$new{$_}\n";
}
close(OLD);


Daemon reading mail log file



This script assumes that there is a spam killer on 10.0.0.1 and that all mail messages are forwarded to the company's internal mail server on 100.100.1.1:




#!/usr/bin/perl

# update-mailstats.pl
#
# Copyright Craig Sanders 1999
#
# this script is licensed under the terms of the GNU GPL.

use DB_File;
use File::Tail;
$debug = 0;

$mail_log = '/var/log/mail.log';
$stats_file = '/tmp/stats.db';

$db = tie(%stats, "DB_File", "$stats_file", O_CREAT|O_RDWR, 0666, $DB_HASH)
|| die ("Cannot open $stats_file");

#my $logref=tie(*LOG,"File::Tail",(name=>$mail_log,tail=>-1,debug=>$debug));
my $logref=tie(*LOG,"File::Tail",(name=>$mail_log,debug=>$debug));

while (<LOG>) {
if (/status=sent/) {
# 10.0.0.1 is Amavisd with SpamAssassin
next unless /relay=10.0.0.1/;
# 100.100.1.1 is relay host where only legitimate mail messages are passed.
if (/relay=100.100.1.1/) {
# count received smtp messages
$stats{"RECEIVED:smtp"} += 1;
} else {
# count sent messages
if (/relay=([^,]+)/o) {
$relay = $1;
$stats{"SENT:$relay"} += 1;
} else {
$stats{"SENT:smtp"} +=1;
}
}
$db->sync;
}
}

untie $logref;
untie %stats;


DB viewer



#!/usr/bin/perl

# mailstats.pl
#
# Copyright Craig Sanders 1999
#
# this script is licensed under the terms of the GNU GPL.

use DB_File;

$|=1;

$stats_file = '/var/lib/mrtg/stats.db';

tie(%foo, "DB_File", "$stats_file", O_RDONLY, 0666, $DB_HASH) || die ("Cannot open $stats_file");

foreach (sort keys %foo) {
print "$_ $foo{$_}\n";
}
untie %foo;


SPAM statistics



The SPAM scripts are modified versions of the mail scripts; they count all incoming and rejected connections:

#!/usr/bin/perl

# update-mailstats.pl
#
# Copyright Craig Sanders 1999
#
# this script is licensed under the terms of the GNU GPL.

use DB_File;
use File::Tail;
$debug = 0;

$mail_log = '/var/log/mail.log';
$stats_file = '/var/lib/mrtg/spam_stats.db';

$db = tie(%stats, "DB_File", "$stats_file", O_CREAT|O_RDWR, 0666, $DB_HASH)
|| die ("Cannot open $stats_file");

my $logref=tie(*LOG,"File::Tail",(name=>$mail_log,debug=>$debug));

while (<LOG>) {
if (/ connect from/) {
# SMTP connections
$stats{"CONN"} += 1;
$db->sync;
} elsif (/(reject|warning)/) {
# count rejected smtp messages
$stats{"REJECT"} += 1;
$db->sync;
}
}

untie $logref;
untie %stats;


The database viewer is the same; only the database file name is changed.

The MRTG collector (mrtg-spamstats.pl) has the db name and hash keys changed:

15c15
< $datafile = "/var/lib/mrtg/mailstat.old";
---
> $datafile = "/var/lib/mrtg/spamstat.old";
25,26c25,26
< $mailstats = "/usr/local/bin/mailstats.pl";
< open(STAT,"$mailstats|") || die "couldn't open pipe to $mailstats: $!";
---
> $spamstats = "/usr/local/bin/spamstats.pl";
> open(STAT,"$spamstats|") || die "couldn't open pipe to $spamstats: $!";
35c35
< print $new{RECEIVED}-$old{RECEIVED},"\n",$new{SENT}-$old{SENT},"\n","$uptime\n$source\n" if ($old{RECEIVED});
---
> print $new{CONN}-$old{CONN},"\n",$new{REJECT}-$old{REJECT},"\n","$uptime\n$source\n" if ($old{CONN});


...

2007-09-02

Lyrics -> video clip subtitles converter

A note as a piece of memory.
I wanted to fit the lyrics to the video clip of the song "Beautiful Girls" performed by Sean Kingston. I downloaded the clip and the lyrics, and developed a script which, interacting with me, generated the subtitle file. It was a transformation from

your way to beautiful girl
that's why it will never work
you had me suicidal, suicidal
when you say it's over

to

{410}{514}your way to beautiful girl
{524}{575}that's why it will never work
{586}{691}you had me suicidal, suicidal
{699}{763}when you say it's over


Uh, the script is here:

#!/bin/sh
 
INTERVAL=0
cat /dev/null > /tmp/sean.txt
DIV=41708375
T=`date +%s%N`
 
TEXT="/home/user/Sean_Kingston-Beautiful_Girls.txt"
 
LINE=0
while true
do
  LINE=`expr $LINE + 1`
  LINETEXT=`head -${LINE} $TEXT | tail -1`
  echo $LINETEXT
  read
  TIMES=`date +%s%N`
  read
  TIMEE=`date +%s%N`

  START=`expr $TIMES - $T`
  START=`expr $START / $DIV`

  END=`expr $TIMEE - $T`
  END=`expr $END / $DIV`

  echo "{${START}}{${END}}${LINETEXT}" >> /tmp/sean.txt
done  
exit 0


Explanation:
DIV is a variable that converts the nanoseconds produced by date +%s%N into frame numbers for my particular video clip (23.976 fps).
Run the clip and this script at the same moment in non-overlapping windows. A line of text appears on the console. Press Enter when you hear the dialog/text, and Enter again when you want to clear the text from the screen. Every line of text needs two Enters.
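The value of DIV is simply 10^9 nanoseconds divided by the clip's frame rate (23.976 fps, which matches the constant in the script):

```shell
# one frame at 23.976 fps lasts 1e9 / 23.976 nanoseconds
awk 'BEGIN { printf "%d\n", 1000000000 / 23.976 }'   # prints 41708375
```

For a clip with a different frame rate, recompute DIV the same way.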

Well synchronized subtitles require some training.

2006-10-20

Bash

HISTSIZE=5000

!n - nth command
!-n - nth command counting from the end
!! - previous command (!-1)

M-C-y - place the second word from previous command