All notes
Linux

Usual scripts

build


#!/bin/bash

clear
[ ! -d build/release ] && mkdir -p build/release
./configure --prefix=$(pwd)/build/release
make -j 8 all
make install

Static libs

Using "ar" to bundle static libraries:


g++ -isystem ${GTEST_DIR}/include -I${GTEST_DIR} -pthread -c ${GTEST_DIR}/src/gtest-all.cc
# Archive .o into .a file. Now we have this static lib!
ar -rv libgtest.a gtest-all.o

Makefile

Out of source build with make.


OUT = lib/alib.a
CC = g++
ODIR = obj
SDIR = src
INC = -Iinc

_OBJS = a_chsrc.o a_csv.o a_enc.o a_env.o a_except.o \
    	a_date.o a_range.o a_opsys.o
OBJS = $(patsubst %,$(ODIR)/%,$(_OBJS))


$(ODIR)/%.o: $(SDIR)/%.cpp 
    $(CC) -c $(INC) -o $@ $< $(CFLAGS)

$(OUT): $(OBJS) 
    ar rvs $(OUT) $^

.PHONY: clean

clean:
    rm -f $(ODIR)/*.o $(OUT)

System directories/ Config files

System directories

TLDP on linux filesystem hierarchy.

'Mountable' directories are: '/home', '/mnt', '/tmp', '/usr' and '/var'. Essential for booting are: '/bin', '/boot', '/dev', '/etc', '/lib', '/proc' and '/sbin'.

/tmp

TLDP.

About clearing /tmp

ServerFault.

/var/tmp

Stackoverflow comparison between /tmp and /var/tmp.

Config files

/etc/passwd

The file entry has the following fields separated by colons (a sample entry follows the list):

  1. Username
  2. Password
  3. User ID (UID)
  4. Group ID (GID)
  5. User ID Info (the comment field, e.g. full name, phone number; used by the finger command).
  6. Home directory
  7. Command/shell
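For example, a typical entry (the values here are made up, not from a real system) maps onto those fields as follows:

alice:x:1001:1001:Alice Example,Room 101,555-1234:/home/alice:/bin/bash
# 1:username  2:password (x = stored in /etc/shadow)  3:UID  4:GID  5:comment  6:home directory  7:shell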

/etc/shadow

http://www.cyberciti.biz/faq/understanding-etcshadow-file/.

/etc/group

http://www.cyberciti.biz/faq/understanding-etcgroup-file/.

Format: group_name:password:GroupID:GroupList
GroupList: a comma-separated list of the users who are members of this group (a sample entry is shown below).
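For instance (an illustrative entry, not from a real system):

wheel:x:10:alice,bob
# group_name:password:GID:GroupList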

# Find out the groups a user is in:
groups username

# Display the group ID
id -gn username
id -Gn username

# Display all group names:
# cut: -d, delimiter. -f, fields.
cat /etc/group | cut -d: -f1

/etc/fstab

file_system    dir    type    options    dump    pass

UUID=96250b0e-ae8d-4121-a907-01a4e4c3550f /                       ext4    defaults        1 1
UUID=23e3cbcd-78d4-4b3b-9902-2dab93dff58b /boot                   ext4    defaults        1 2
UUID=66e52c5c-ec53-42bc-ae92-7d8a60f49975 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

Mount options

Universal:

Applied in some filesystem types:

/etc/sysctl.conf, /proc/sys/*

Wordpress. sysctl.conf is the file controlling every configuration under /proc/sys.


cat /proc/sys/net/ipv4/ip_forward
# Turn on IP forwarding.
echo 1 > /proc/sys/net/ipv4/ip_forward
# Make it permanent: modify "/etc/sysctl.conf"
net.ipv4.ip_forward = 1
# Make the modification take effect immediately.
sysctl -p /etc/sysctl.conf

Others

Devices

/dev/urandom

HowToGeek.


< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c${1:-32}; echo;

# Generate random string:
< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c8 && echo;
# eYp84GQA

# Generate random number:
< /dev/urandom tr -dc _0-9 | head -c6 && echo;
# 583600

strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 8 | tr -d '\n'; echo
# Z5HfCxQ7

File systems

Basics

Character devices
Character special files or character devices provide unbuffered, direct access to the hardware device. They do not necessarily allow you to read or write single characters at a time; that is up to the device in question. The character device for a hard disk, for example, will normally require that all reads and writes are aligned to block boundaries and most certainly will not let you read a single byte.
Character devices are sometimes known as raw devices to avoid the confusion surrounding the fact that a character device for a piece of block-based hardware will typically require you to read and write aligned blocks.
Block devices
Block special files or block devices provide buffered access to the hardware, such that "the hardware characteristics of the device are not visible." Unlike character devices, block devices will always allow you to read or write any sized block (including single characters/bytes) and are not subject to alignment restrictions. The downside is that because block devices are buffered, you do not know how long it will take before a write is pushed to the actual device itself, or indeed in what order two separate writes will arrive at the physical device; additionally, if the same hardware exposes both character and block devices, there is a risk of data corruption due to the clients using the character device being unaware of changes made in the buffers of the block device.
Most systems create both block and character devices to represent hardware like hard disks. FreeBSD and Linux notably do not; the former has removed support for block devices, while the latter creates only block devices. In Linux, to get a character device for a disk you must use the "raw" driver, though you can get the same effect as opening a character device by opening the block device with the Linux-specific O_DIRECT flag.
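The device type is visible as the first character of ls -l output: 'b' for block devices, 'c' for character devices. A quick check (device names and the major/minor numbers shown are just typical examples):

ls -l /dev/sda /dev/tty
# brw-rw---- 1 root disk 8, 0 Jan  1 00:00 /dev/sda   <- block device
# crw-rw-rw- 1 root tty  5, 0 Jan  1 00:00 /dev/tty   <- character device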

ntfs-3g

ARchWiki: NTFS-3G.


mount -t ntfs-3g /dev/your_NTFS_partition /mount/point
# Or
ntfs-3g /dev/your_NTFS_partition /mount/point

# Formatting
# As always, double check the device path.
# -Q speeds up the formatting by not zeroing the drive and not checking for bad sectors.
mkfs.ntfs -Q -L diskLabel /dev/sdXY

# Mount internal Windows partition with linux compatible permissions, i.e. 755 for directories (dmask=022) and 644 for files (fmask=133)
# fileSystem  dir  type  options  dump  pass
UUID=01CD2ABB65E17DE0 /run/media/user1/Windows ntfs-3g uid=user1,gid=users,dmask=022,fmask=133 0 0

Only root can mount

By default, ntfs-3g requires root rights to mount the filesystem, even with the "user" option in /etc/fstab. The reason is here.

wcfNote: ok, just sudo mount!

Unable to write

Metadata kept in Windows cache, refused to mount

When dual booting with Windows 8 or 10, trying to mount a partition that is visible to Windows may yield the following error:


The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Failed to mount '/dev/sdc1': Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.

The problem is due to a feature introduced in Windows 8 called "fast startup".
When fast startup is enabled, part of the metadata of all mounted partitions are restored to the state they were at the previous closing down. As a consequence, changes made on Linux may be lost.

Solutions:
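One workaround, suggested by the error message itself, is to mount the volume read-only (assuming the same /dev/sdc1 partition as in the error above):

mount -t ntfs-3g -o ro /dev/sdc1 /mount/point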

Checksum

archLinux download.

inotify, fanotify

ibm.com: fanotify.

Jobs


jobs -l # List all jobs and their process IDs.

# Remove the shell's notion of the current job from active jobs' list.
disown
# Mark all jobs so the SIGHUP is not sent to the job from the shell.
disown -h -a
# Mark all running jobs so that SIGHUP is not sent to them.
disown -h -r

Variables

Global vars

User vars

The dbus daemon and the user instance of systemd do not inherit any of the environment variables set in places like .bashrc etc.

Graphical applications

You can put your variables in xinitrc (or xprofile when using a display manager).

Locale

References

Help

The notion of locale is defined in the POSIX standard (see here). A program's locale defines:

Useful commands:


locale # check the current locale settings.

locale -a # Display all available locales. Find that there is zh_CN.utf8.
# Add in ~/.bashrc:
export LANG=zh_CN.utf8  # or en_US.utf8, in Linux.
# wcfNote: However, in Mac OS X, it's en_US.UTF-8. So just check with "locale -a" beforehand.
export LC_CTYPE=C

# http://stackoverflow.com/questions/30479607/explain-the-effects-of-export-lang-lc-ctype-lc-all
unset LC_ALL # unset the variable.
# Only run LC_ALL for testing some program.
LC_ALL=C program

locale -m # Display all available charmaps.
locale -c charmap # Display current charmap.

locale-gen

If there is any error "locale: Cannot set LC_CTYPE to default locale: No such file or directory"

  1. Uncomment the corresponding lines in the file /etc/locale.gen. For American-English uncomment en_US.UTF-8 UTF-8.
  2. Run locale-gen to generate the locales; it also runs with every update of glibc. (See the sketch below.)
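A minimal sketch of the two steps, assuming en_US.UTF-8 is the locale you want (adjust the sed pattern if your locale.gen uses different spacing):

# Uncomment the locale in /etc/locale.gen, then regenerate.
sudo sed -i 's/^#en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
sudo locale-gen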

LANG, LC_*

DebianWiki: Locale. Locale categories:

Warning! Using LC_ALL is strongly discouraged as it overrides everything. Please use it only when testing and never set it in a startup file.

Users and Groups

Pluggable Authentication Modules (PAM)

http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-pam.html. RedHat uses PAM for centralized authentication. PAM uses a pluggable, modular architecture, and provides significant flexibility and control over authentication for both system admins and application developers.

Pam.d config

Each PAM configuration file contains a group of directives formatted as follows:

<module interface>  <control flag>   <module name>   <module arguments>

Module interfaces:

auth: requests and verifies the validity of a password; sets credentials such as group memberships or Kerberos tickets.
account: checks whether a user account has expired or whether the user is allowed to log in.
password: used for changing user passwords.
session: configures and manages user sessions, e.g. mounting a user's home directory and making the user's mailbox available.
pam_unix.so provides all four module interfaces. This instructs PAM to use the pam_unix.so module's auth interface:
auth	required	pam_unix.so

Module interface directives can be stacked, so that multiple modules are used together for one purpose.
The order in which modules are stacked matters, particularly when a module's control flag is "sufficient" or "requisite".

password requisite pam_unix.so nullok obscure md5
The nullok option permits empty passwords; without it, pam_unix treats any passwordless account as locked.
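An illustrative stacked auth section (the module names are real, but the exact stack is just an example): if pam_unix succeeds, the sufficient flag ends the stack; otherwise pam_deny rejects the login.

auth    required    pam_env.so
auth    sufficient  pam_unix.so try_first_pass nullok
auth    required    pam_deny.so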

Control flags:

Man page is your friend to understand each PAM configurations:

man pam_unix
man pam_access
man pam_timestamp
man pam_securetty
man pam_nologin
man pam_cracklib
man pam_console
man pam_rootok
man pam_permit

pam_access

pam_access.so is a module for logdaemon-style login access control. Its config file defaults to /etc/security/access.conf. Use man pam_access to see the manual.

Look into /etc/security/access.conf, it has format:

permission : users : origins
Example:
# Disallow non-root logins on tty1
-:ALL EXCEPT root:tty1
# Disallow console logins to all but a few accounts.
-:ALL EXCEPT wheel shutdown sync:LOCAL
# Same, but make sure that really the group wheel and not the user wheel is used (use nodefgroup argument, too):
-:ALL EXCEPT (wheel) shutdown sync:LOCAL
# Disallow non-local logins to privileged accounts (group wheel).
-:wheel:ALL EXCEPT LOCAL .win.tue.nl

Example: only allow some user to login outside LAN

http://serverfault.com/questions/310459/allowgroups-and-match-address-for-ssh. To allow only remoteuser to login from outside LAN:

+ : root : 192.168.0.
+ : localonlygroup : 192.168.0.
+ : remoteuser: ALL
- : ALL : ALL
Then make sure /etc/pam.d/sshd enables pam_access, near the existing pam_nologin line:
account    required     pam_nologin.so
account    required     pam_access.so

NSS

NSS: Name Service Switch. Each call to a function which retrieves data from a system database like the password or group database is handled by the Name Service Switch implementation in the GNU C library.

nsswitch.conf

Here is an example /etc/nsswitch.conf file: (Ref)

passwd:         compat
group:          compat
shadow:         compat

hosts:          dns [!UNAVAIL=return] files
networks:       nis [NOTFOUND=return] files
ethers:         nis [NOTFOUND=return] files
protocols:      nis [NOTFOUND=return] files
rpc:            nis [NOTFOUND=return] files
services:       nis [NOTFOUND=return] files
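getent queries these databases through NSS, so it is a handy way to check what the configured sources actually return:

getent passwd root        # Look up a user through NSS (files, LDAP, ... as configured).
getent hosts example.com  # Resolve a host through the "hosts" line above.
getent group wheel        # Look up a group.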

Disable account

http://www.cyberciti.biz/faq/linux-disable-user-account-command/.


# chage - change user password expiry information.
# See current status of username:
chage -l username

# Lock the password, and set the account to expire in 1 day.
usermod -L -e 1 username
# Or use passwd to lock the password only
passwd -l username

# If the user has an SSH public key, you might want to disable it too:
emacs /home/clem/.ssh/authorized_keys*

A locked user's password field in /etc/shadow is preceded by a '!' mark, for example:

lockedBody:!$6$RqSFfGzD$QUqEzwWZsuKm21XgfTU2ZVRVtGcY70gbDs56Auia9Nq.IHQbkGJ9KQgkmK98F/bfHeA7SiMJ/3TCy9yPqEEV51:15901:0:99999:7::16354:

http:$6$v8AY2CVu$NfWV/iN4vbUS7L.zijLFcxjlpFrwqGEPPMUkdLAQuTwAfOI54QrfJAffNDW08rQIaJ35uO5h9PXMZUkQ2hp2O0:16211:0:99999:7:::

Set password at useradd?

This time, as root, I added a user to the admin group wheel with:


useradd -g wheel -p minime minime

However, after that I found that I could not use this new user name and password to ssh into the Linux machine; ssh -v reported "Permission denied...".
I went to see /etc/shadow and it says minime:minime:16289:0:99999:7:::, while the root row is root:$6$yozXZehD$.nxx/Gu/VKQlywn6ORTjahBakBmw3tovoXp2.:16289:0:99999:7:::.
This reference says /etc/shadow has the format Username:Password:... (for more, see man 5 shadow). Why is my password stored verbatim while root's is encrypted?
man usermod, it says:


-p, --password PASSWORD
	The encrypted password, as returned by crypt(3). The default is to disable the password.
	Note: This option is not recommended because the password (or encrypted password) will be visible by users listing the processes.
I already knew that this option is not recommended for setting the password, since the password shows up in the process list and in shell history. But I had neglected the previous line, "The encrypted password, as returned by crypt": the PASSWORD argument must be the crypted result, not the plain-text password! How would I produce that myself, given that crypt(3) is just a C function? (NOTE: pwconv and pwunconv convert the whole passwd/shadow files, e.g. to "/etc/shadow-"; they do not encrypt or decrypt individual strings.)
Finally, as root, I solved it with: passwd minime. passwd is the right way to add or modify a password.

As for the encrypted password, it may contain 13 to 24 characters from a-z, A-Z, 0-9, '.' and '/'. Traditionally it is encoded with DES. If it starts with '$d$', the number d indicates the algorithm used: for example, "$1$" means MD5, "$5$" is SHA-256, and "$6$" is SHA-512. If it starts with '!', the account is locked. If it is "*", the account has been disabled. For more info, see man 5 shadow.
This page provides in-depth information on the SHA encryption. The shadow password value has the algorithm ID, salt and encrypted password separated by '$', as in $ID$SALT$PWD.
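So to make useradd -p work, you have to hand it an already-crypted string. One way to produce one (a sketch assuming an OpenSSL new enough to support -6; the password and salt here are just examples):

# Print a SHA-512 crypt hash of the form $6$salt$hash.
openssl passwd -6 -salt mysalt 'MyPassword'
# Then, for example:
useradd -g wheel -p "$(openssl passwd -6 'MyPassword')" minime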

Ref for nologin.


# Check if /sbin/nologin is in /etc/shells.
sudo useradd -M -s /sbin/nologin -g image git
command useradd
	-c, --comment COMMENT. Used as the field for user's fullname.
	-r, --system. Create a system account.
	-s, --shell SHELL. For default setting, refer to SHELL var in /etc/default/useradd.
	-u, --uid UID. Default is to use the smallest ID value greater than or equal to UID_MIN and greater than every other user.

	-d, --home HOMEDIR. Use HOMEDIR as the home for new user. The default way is to use BASE_DIR/LOGIN_NAME. This will not create the dir if it doesn't exist.
	-k, --skel SKEL_DIR. The skel dir contains files and dirs to be copied in the user's home dir. Only valid if -m is on. As set as SKEL var in /etc/default/useradd, it defaults to /etc/skel.
	-m, --create-home
	-M. Don't create home.
	-b, --base-dir BASE_DIR.

	-N, --no-user-group. Don't create a group with the same name as the user.
	-U, --user-group. Create a group with same name as the user.
	-g, --gid GROUP
	-G, --groups GROUP1,GROUP2,...

List all users in a group


# List all users in a group
sudo lid -g wheel

# List a user's all groups
sudo lid username

Usual commands

System Query

ls



# -S: sort by file size.
# -t, sort by modification time.
# -r, --reverse

# -h, --human-readable

# -i, --inode: print the index number of each file.

ls -lShr

inode

theGeekStuff.com: linux inodes.

An "Inode" is a data structure that stores a lot of information about a file: file size, user/group ID, mode, timestamps, link counter to determine the number of hard links, and Pointers to the blocks storing file’s contents, etc.

But, the file name is not stored in Inodes. Why? Because by doing so, we can have various file names which point to same Inode. In other words, it is for maintaining hard-links to files.


touch a
ln a a1
ls -al
# -rw-r--r-- 2 me myGroup 0 2012-01-14 16:29 a
# -rw-r--r-- 2 me myGroup 0 2012-01-14 16:29 a1
# The second entry in the output specifies number of hard links to the file, e.g. 2 in this case.
Remove file by inode number


# rm fails on a file literally named "ab* because of the leading double quote.
rm "ab*

ls -i
# 1448239 "ab*
# "

# Remove it successfully!
find . -inum 1448239 -exec rm -i {} \;
Run out of inodes

A file system can run out of space in two ways: it can exhaust its data blocks, or it can exhaust its inodes.


df -i
# Filesystem            Inodes   IUsed   IFree IUse% Mounted on
# /dev/sda1            1875968  293264 1582704   16% /
# none                  210613     764  209849    1% /dev
# none                  213415       9  213406    1% /dev/shm
# none                  213415      63  213352    1% /var/run
# none                  213415       1  213414    1% /var/lock

namei


# Easily display all the permissions on a path:
namei -om /path/to/check

# -m, --modes
# -o, --owners

tree


# List contents of directories in a tree-like format:
tree /etc/systemd/system

locate

The locate command is often the simplest and quickest way to find the locations of files and directories on UNIX.


locate file1
locate "*.png"

# Suppress any error messages
locate "*.png" -q

# only 15 results
locate -n 15 "*.html"

# case-insensitive search
locate -i "*.HtmL"

sudo updatedb

# Find the DB file. It can only be read by root.
sudo locate locate.db

man


# Search in section 1 for command "ls"
man 1 ls
man 3 sprintf

Command sections

https://linux.die.net/man/
Section 1 user commands
Section 2 system calls
Section 3 library functions
Section 4 special files
Section 5 file formats
Section 6 games
Section 7 conventions and miscellany
Section 8 administration and privileged commands
Section L math library functions
Section N tcl functions

System Setting

sysctl

configure kernel parameters at runtime.


# When running ES on docker container, we need to make this bigger on linux host (e.g. Arch):
# -w, --write
# Useful for fess container.
sudo sysctl -w vm.max_map_count=262144

# -a, --all
/sbin/sysctl -a

# -n, --values: only print values, not keys.
/sbin/sysctl -n kernel.hostname # archlinux
/sbin/sysctl kernel.hostname # kernel.hostname = archlinux

/sbin/sysctl -w kernel.domainname="example.com"

# Reload
# -p[FILE], --load[=FILE]: Load in sysctl settings from the file specified or /etc/sysctl.conf if none given.
sysctl -p

# --system: Load settings from all system configuration files.
/sbin/sysctl --system --pattern '^net.ipv6'

#---------- /proc/sys

cat /proc/sys/fs/file-max
# 1219367

sysctl fs.file-max
# fs.file-max = 1219367

How do "ulimit -n" and /proc/sys/fs/file-max differ?

serverFault.com.

file-max is the maximum number of file descriptors (FD) enforced at the kernel level; the total across all processes cannot exceed it without raising it. ulimit is enforced per process and may be lower than file-max.

There is no performance impact risk by increasing file-max. Modern distributions have the maximum FD set pretty high, whereas in the past it required kernel recompilation and modification to increase past 1024. I wouldn't increase system-wide unless you have a technical need.

The per-process configuration often needs to be tuned for a particular daemon, be it a database or a web server. If you remove the limit entirely, that daemon could potentially exhaust all available system resources, meaning you would be unable to fix the problem except by pressing the reset button or power cycling. Of course, either of those is likely to result in corruption of any open files.

cat /proc/sys/fs/file-max
# 1219367

ulimit -n
# 1024
ulimit -Hn
# 4096
ulimit -Sn
# 1024

ulimit


ulimit -a

# Set open_file number
ulimit -n 65536

# "ulimit -a" can see all the options, e.g.
# core file size          (blocks, -c)
# open files                      (-n)

# https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
# To see the hard and soft values:
ulimit -Hn
ulimit -Sn

# display maximum number of open file descriptors:
cat /proc/sys/fs/file-max

System Administration

kill


# List the signal names
kill -l

Signals

linux.die.net: signals.

Common kill signals

Signal name	Signal value	Effect
SIGHUP	1	Hangup. ("signal hang up") is a signal sent to a process when its controlling terminal is closed, originally in RS-232.
SIGINT	2	Interrupt from keyboard
SIGKILL	9	Kill signal
SIGTERM	15	Termination signal. The least dangerous signal.
SIGSTOP	17,19,23	Stop the process

Note:
SIGKILL and SIGSTOP can not be caught, blocked or ignored.
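A small sketch of the difference in bash: SIGTERM can be trapped, SIGKILL cannot.

#!/bin/bash
trap 'echo "caught SIGTERM, cleaning up"; exit 0' TERM
echo "PID $$: try 'kill -TERM $$' (handler runs) vs 'kill -KILL $$' (cannot be caught)"
sleep 1000 &   # run sleep in the background so the trap can fire while we wait
wait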

Network Commands

curl


-v, --verbose # Prints the HTTP request/response headers.

-d/--data data
# (HTTP) Sends the specified data in a POST request to the HTTP server. This will cause curl to pass the data to the server using the content-type application/x-www-form-urlen-coded. Compare to -F/--form.

-F, --form name=content
Let curl emulate the form submission.
content could be:
  @file  Upload the file directly.
  <file  Read the content from the file and upload as text field.
Examples:
  curl -F password=@/etc/passwd www.mypass.com
  // "" must be quoted within '' or escaped by \.
  curl -F 'file=@"localfile";filename="hehe"' url.com
# -k, --insecure. Accepts insecure SSL.
# -s, --silent
# -S, --show-error. When used with -s it makes curl show an error message if it fails.
# -L, --location. Follow the new location/direction.
curl -ksSL URL

curl -X/--request COMMAND
# Common additional HTTP requests include PUT and DELETE. Defaults to GET.

curl --digest
# (HTTP) Enables HTTP Digest authentication, which prevents the password from being sent in clear text. Use this in combination with the normal -u/--user option to set user name and password. See also --ntlm, --negotiate and --anyauth for related options.

curl -u/--user user:password

curl -H/--header HEADER # (HTTP) Extra header to use when getting a web page.

# StackOverflow.
# -d sends the Content-Type "application/x-www-form-urlencoded", as form data.
# Note: there must not be spaces in posted content.
curl -X POST -d '{"username":"admin","password":"admin"}' -H 'Content-Type:application/json' 192.168.1.1/accounts

curl --cacert CA certificate
# (SSL) Tells curl to use the specified certificate file to verify the peer. The file may contain multiple CA certificates. The certificate(s) must be in PEM format.

-s, --silent
# When output directed to a file, curl will output progress bar. This suppresses that.
-S, --show-error
# When used with -s it makes curl show an error message if it fails.

-f/--fail: In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why, and more). This flag prevents curl from outputting that and makes it return error 22 instead. This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407). Wcf note: it is very important when checking curl's exit value for an HTTP success code.
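# A hedged example of the above: with -f, curl's exit status reflects HTTP failures,
# so you can branch on it (the URL is just a placeholder).
curl -fsS -o /dev/null http://example.com/health || echo "request failed, curl exit code $?"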

-O/--remote-name: Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.) The remote file name to use for saving is extracted from the given URL, nothing else.

# It is recommended to add "&& echo" after every curl command to enforce line break.
curl http://ipecho.net/plain && echo

# Use curl to get the web content from a virtual host,
# by telling it the hostname. Reference.
curl -H 'Host: mydomain.com' myIP

# StackOverflow.
# -g, --globoff. Switches off the "URL globbing parser", in order to prevent curl interpreting bracket letters: {}[].
# curl is trying to interpret the square brackets as a globbing pattern.
curl -g "http://192.168.1.1:12345/info?sort=[(_updated,-1)]&page=1"

Escape in curl URL


# -G is necessary! Use "-X GET" will NOT work.
# -G is to tell curl that the data are used in an HTTP GET request instead of the POST request that otherwise would be used.
curl -G -v "http://localhost/data" --data-urlencode "msg=hello world"

# Trace headers
# GET /data?msg=hello%20world HTTP/1.1

curl -G "http://127.0.0.1/python/eve" --data-urlencode 'sort=[("_updated",-1)]&page=1' && echo

wget

########## Download webpage for offline reading
# https://askubuntu.com/questions/373047/i-used-wget-to-download-html-files-where-are-the-images-in-the-file-stored
-E, --adjust-extension: Append .html to the file name if it is an HTML file but doesn't end in .html or similar
-H: Download files from other hosts, too
-k, --convert-links: After downloading convert any link in it so they point to the downloaded files
-p, --page-requisites: Download anything the page needs for proper offline viewing
# https://stackoverflow.com/questions/11124292/why-does-wget-only-download-the-index-html-for-some-websites
-e robots=off: Tell wget not to obey the robots.txt file
-U mozilla: acts as Mozilla browser.
--random-wait: work around server downloading detection, to wait a few random seconds after every download.
wget -EHkp -e robots=off -U mozilla --random-wait http://example.com/a/

wget --random-wait -r -p -k -e robots=off -U mozilla -O index.html http://www.example.com

# Best used for testing web server.
# -q: quiet
# -O: output. -O- means output to stdout.
# https://www.vagrantup.com/intro/getting-started/provisioning.html
wget -qO- 127.0.0.1

-d, --debug

-c, --continue
	For example, wget -c ftp://a.b.com/dd.zip will first check dd.zip in pwd and continue download based on its size.
	Without -c, wget will download dd.zip.1 instead if there is already a dd.zip.

--user=user
--password=password
--ask-password
	For both ftp and http.

--local-encoding=encoding
--remote-encoding=encoding

#---------- Recursive
# https://stackoverflow.com/questions/273743/using-wget-to-recursively-fetch-a-directory-with-arbitrary-files-in-it
# -r, --recursive
# -l depth, --level=depth
# 	Specify recursion maximum depth level depth. The default maximum depth is 5.
# -np, --no-parent
#     Do not ever ascend to the parent directory when retrieving recursively.
# -nH, --no-host-directories
#     Disable generation of host-prefixed directories.  By default, invoking Wget with -r http://fly.srk.fer.hr/ will create a structure of directories beginning with fly.srk.fer.hr/.  This option disables such behavior.
# --cut-dirs=number
#     Ignore number directory components. "wget -r ftp://ftp.xemacs.org/pub/xemacs/" it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the -nH option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. "--cut-dirs=2" solves this.

wget -r -np -nH --cut-dirs=2 --reject="index.html*" -e robots=off https://vpnac.com/ovpnchinausers/AES-256-TCP/

-N, --timestamping

--no-remove-listing
	Don't remove the temporary .listing files generated by FTP retrievals.  Normally, these files contain the raw directory listings received from FTP servers.  Not removing them can be useful for debugging purposes, or when you want to be able to easily check on the contents of remote server directories (e.g. to verify that a mirror you're running is complete).

-m, --mirror
	It is currently equivalent to -r -N -l inf --no-remove-listing.

dhcpcd

archLinux: dhcpcd.


# If there's any problem, first kill the dhcpcd:
dhcpcd -k
# Then restart:
dhcpcd

ip

TecMint.


ip addr show
ip route show

sudo ip link set eth1 up
sudo ip link set eth1 down

sudo ip addr add 192.168.50.5 dev eth1
sudo ip addr del 192.168.50.5/24 dev eth1

# "via" denotes gateway.
sudo ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0
sudo ip route del 10.10.20.0/24
sudo ip route add default via 192.168.50.100

lokkit

About.


sudo lokkit -s http -s ssh

netstat


# -a, --all. Show both listening and non-listening sockets.
# -t: tcp, -u: udp.
# -p, --program. Show the PID and program name.
# -l: listening
# -e, --extend (display additional info).
# -n, --numeric (don't look up for host, port and user names)
netstat -tuplen
netstat -lnp # Check all the open ports.
netstat -ap | grep 192.168.1.1: #Check all connections to 192.168.1.1 and which program makes these.

# lsof: list open files.
# -i, listing of all Internet and HP-UX network files.
# -a, causes list selection options to be ANDed.
# -p s. Selects or excludes process. s could be: "123,^456", '^' means negation.
# -r, repeat. -r1 repeat every second.
lsof -i -a -r1 -p $(pidof firefox)
# To list open IPv4 connections use the lsof command:
lsof -Pnl +M -i4

Both netstat and lsof are less reliable than nmap for checking network status.

nc


# Listen on localhost 60001.
nc -kl 60001

# In another computer, send Hello to the server.
echo "Hello\!" | nc serverIP 60001

# To install nc
apk add -U netcat-openbsd

Otherwise, you could run "nc -kl port" on one machine and "nc serverIP port" on another, and talk to each other by typing on stdin: the simplest possible chat app.

nmap


nmap -sT -O localhost

nmap -sP 192.168.0.*        # Ping scan the whole 192.168.0.x subnet and report which IPs are up.
nmap 192.168.0.1-3          # Scan the three hosts .1, .2 and .3.
nmap 192.168.0.*            # Scan the whole subnet.
nmap -p 22,23,80 ip         # Scan the specified ports.
nmap -p 22-80 192.168.0.*
nmap -sU 192.168.0.*        # UDP scan.
nmap -sT 192.168.0.*        # TCP connect scan.
nmap -sS 192.168.0.3        # Half-open (SYN) scan.
nmap -O 192.168.0.3         # Overall info (OS type, port status, ...).
nmap -v 192.168.0.3         # Verbose mode.

# Check if the port is associated with the official list of known services
cat /etc/services | grep portnum

nslookup


nslookup HOSTNAME [DNS_SERVERNAME]

# "hinfo" includes more detailed host info:
nslookup -query=hinfo  -timeout=10 HOSTNAME

resolvconf

The configuration is done in /etc/resolvconf.conf and running resolvconf -u will generate /etc/resolv.conf.

pivotal.io.


#---------- /etc/resolvconf.conf:
search_domains="nono.com"
name_servers="127.0.0.1"

sudo resolvconf -u

#---------- The new /etc/resolv.conf:
# Generated by resolvconf
search nono.com hsd1.ca.comcast.net.
nameserver 127.0.0.1
nameserver 75.75.75.75
nameserver 75.75.76.76

Use drill to test


drill www5.yahoo.com
# To test Google's name servers:
drill @8.8.8.8 www5.yahoo.com
drill @127.0.0.1 www5.yahoo.com

Stop NetworkManager to modify DNS

To stop NetworkManager from modifying /etc/resolv.conf, edit /etc/NetworkManager/NetworkManager.conf and add the following in the [main] section archWiki:


dns=none

route

See route note.

siege

linux.die.net: siege.

Siege is a multi-threaded http load testing and benchmarking utility.



-C, --config: print the current configuration (read from $HOME/.siegerc). To use another config file, `export SIEGERC=/home/jeff/haha`.

-i, --internet: INTERNET mode. Generates user simulation by randomly hitting the URLs read from the urls.txt file. This option is viable only with the urls.txt file.
-b, --benchmark: BENCHMARK mode, runs the test with NO DELAY for throughput benchmarking. By default each simulated user is invoked with at least a one second delay.

# Post
host.domain.xxx/file POST field=value&field2=value2
# Or you can POST the contents of a file using the line input operator, the "<" character:
host/file POST </home/jeff/haha.txt
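A small usage sketch (the URL is a placeholder): benchmark mode, 10 concurrent users, for 30 seconds.

siege -b -c 10 -t 30S http://localhost/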

ss

ss is used to dump Socket Statistics.


# Display all TCP sockets.
ss -t -a

# Display all UDP sockets.
ss -u -a

# Display all established ssh connections.
ss -o state established '( dport = :ssh or sport = :ssh )'

# Find all local processes connected to X server.
# -x, --unix: display only unix domain sockets.
ss -x src /tmp/.X11-unix/*

# List all the tcp sockets in state FIN-WAIT-1 for our apache to network 193.233.7/24 and look at their timers.
ss -o state fin-wait-1 '( sport = :http or sport = :https )' dst 193.233.7/24

# Watch all tcp connections to 192.168.1.1
watch -n 1 --difference=cumulative 'ss -est | grep 192.168.1.1'

telnet


# Always set an escape character so that you can quit the session by typing it.
telnet -e escapeChar ip port

# Check if the port is open.
# If "connected", it is open. "refused", closed. "timeout", firewalled.
telnet mydomain.com portNum

tcpdump


# To print traffic between helios and either hot or ace:
tcpdump host helios and \( hot or ace \)

# To print all IP packets between ace and any host except helios:
tcpdump ip host ace and not helios

# To print all traffic between local hosts and hosts at Berkeley:
tcpdump net ucb-ether

# To print all ftp traffic through internet gateway snup: (note that the expression is quoted to prevent the shell from (mis-)interpreting the parentheses):
tcpdump 'gateway snup and (port ftp or ftp-data)'

# To print traffic neither sourced from nor destined for local hosts (if you gateway to one other net, this stuff should never make it onto your local net).
tcpdump ip and not net localnet

tcptraceroute

By sending out TCP SYN packets instead of UDP or ICMP ECHO packets, tcptraceroute is able to bypass the most common firewall filters.

It is worth noting that tcptraceroute never completely establishes a TCP connection with the destination host. If the host is not listening for incoming connections, it will respond with an RST indicating that the port is closed. If the host instead responds with a SYN|ACK, the port is known to be open, and an RST is sent by the kernel tcptraceroute is running on to tear down the connection without completing the three-way handshake. This is the same half-open scanning technique that nmap uses when passed the -sS flag.


# listening for connections on port 80:
tcptraceroute webserver

tcptraceroute mailserver 25

iptables

A more accurate name is iptables/netfilter. iptables is the userspace tool: as a user, it is what you run on the command line to put firewall rules into the default tables. netfilter is the kernel module, built into the kernel, that performs the actual filtering.

iptables places rules into the default chains (INPUT, OUTPUT and FORWARD). All traffic (IP packets) is checked against the relevant chain, and its rules decide how to handle each packet, e.g. accept or drop it. These actions are called targets; the two most common built-in targets are DROP and ACCEPT.

The 3 default chains:

iptables options:

-t table
	filter: input, forward, output.
		Default table.
	nat: prerouting, output, postrouting.
		This table is consulted when a packet that creates a new connection is encountered.
	mangle: prerouting, output, input, forward, postrouting.
		This table is used for specialized packet alteration.
	raw: prerouting, output.
		This table is used mainly for configuring exemptions from connection tracking in combination with the NOTRACK target.

Commands:
-A, --append chain rule
-D, --delete chain rule/rulenum
-I, --insert chain [rulenum] rule. If the rule number is 1, the rule or rules are inserted at the head of the chain. This is also the default if no rule number is specified.
-R, --replace chain rulenum rule
-L, --list [chain]. Usually used with -n to suppress DNS lookups. If no chain is selected, all chains are listed.
-S, --list-rules [chain]
-F, --flush [chain]. This is equivalent to deleting all the rules one by one. All the chains in the table will be removed if none is given.
-Z, --zero [chain [rulenum]]. Zero the packet and byte counters in all chains, or only the given chain, or only the given rule in a chain. It is legal to specify the -L, --list (list) option as well, to see the counters immediately before they are cleared.
-N, --new-chain chain
-X, --delete-chain [chain]
-P, --policy chain target
-E, --rename-chain oldChain newChain

Parameters
-s, --source address/mask
-d, --destination address/mask
-j, --jump target. The target can be a user-defined chain, one of the special builtin targets which decide the fate of the packet immediately, or an extension.
-g, --goto chain.
-i, --in-interface name.
-o, --out-interface name.

Other
-n, --numeric. IP and port numbers will be in numeric format.
-p, --protocol
-m, --match module.
	iptables can use extended packet matching modules. These are loaded in two ways: implicitly, when -p is specified, or explicitly with the -m; various extra command line options become available, depending on the specific module. You can specify multiple extended match modules in one line, and you can use the -h or --help options after the module has been specified to receive help specific to that module.

State module
--state state
	INVALID: the  packet could  not  be identified for some reason.
	ESTABLISHED: the packet is associated with a connection which has seen  packets  in  both  directions.
	NEW: the packet has started a new connection, or otherwise associated with a connection  which  has  not  seen packets  in both directions.
	RELATED: the packet is starting a new connection, but is associated with an existing connection, such as an FTP data transfer, or an ICMP error.
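For example, these states are typically combined into a minimal stateful ruleset (illustrative only, not a complete firewall):

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP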

Reference. Ref on disabling iptables.


# Check if iptables installed.
rpm -q iptables

# Check if it is running as modules.
lsmod | grep ip_tables

# List all active rules.
iptables -L --line-numbers

# Start iptables
system-config-securitylevel

# Add rule.
iptables -I INPUT -p tcp --dport 5000 -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp --dport 2345 -j ACCEPT
iptables -I INPUT 5 -m state --state NEW -m tcp -p tcp --dport 62085 -j ACCEPT

# Blacklist IP
iptables -I INPUT -s 61.153.104.170 -j DROP

# Delete the first rule.
iptables -D INPUT 1

# Reference.

## verify new firewall settings 
/sbin/iptables -L INPUT -n -v

# Stop iptables, disable iptables
# Save newly added firewall rules, and disable iptables.
# iptables: Saving firewall rules to /etc/sysconfig/iptables.
service iptables save
service iptables stop
# If you are using IPv6 firewall, enter:
service ip6tables save
service ip6tables stop

## Open port 80 and 443 for 192.168.1.0/24 subnet only ##
/sbin/iptables -A INPUT -s 192.168.1.0/24  -m state --state NEW -p tcp --dport 80 -j ACCEPT
/sbin/iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 443 -j ACCEPT

# Accept packets from trusted IP addresses by MAC
iptables -A INPUT -s 192.168.0.4 -m mac --mac-source 00:50:8D:FD:E6:32 -j ACCEPT
Choosing match patterns

# TCP packets from 192.168.1.2:
iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.2 [...]

# UDP packets to 192.168.1.2:
iptables -t nat -A POSTROUTING -p udp -d 192.168.1.2 [...]

# all packets from 192.168.x.x arriving at eth0:
iptables -t nat -A PREROUTING -s 192.168.0.0/16 -i eth0 [...]

# all packets except TCP packets and except packets from 192.168.1.2:
iptables -t nat -A PREROUTING ! -p tcp ! -s 192.168.1.2 [...]

# packets leaving at eth1:
iptables -t nat -A POSTROUTING -o eth1 [...]

# TCP packets from 192.168.1.2, port 12345 to 12356 to 123.123.123.123, Port 22
# (a backslash indicates contination at the next line)
iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.2 --sport 12345:12356 -d 123.123.123.123 --dport 22 [...]

# Source-NAT: Change sender to 123.123.123.123
iptables [...] -j SNAT --to-source 123.123.123.123

# Mask: Change sender to outgoing network interface
iptables [...] -j MASQUERADE

# Destination-NAT: Change receipient to 123.123.123.123, port 22
iptables [...] -j DNAT --to-destination 123.123.123.123:22

# Redirect to local port 8080
iptables [...] -j REDIRECT --to-ports 8080

Example: allow some user to login outside LAN

http://serverfault.com/questions/310459/allowgroups-and-match-address-for-ssh. This presumes you have the inside sshd listening on port 2200 and the outside sshd listening on port 2201, and that each one is using an appropriately configured sshd_config file.


# Connect inside users to "inside" sshd.
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 22 -j REDIRECT --to-ports 2200

# Connect outside users to "outside" sshd.
iptables -t nat -A PREROUTING ! -s 192.168.1.0/24 -p tcp --dport 22 -j REDIRECT --to-ports 2201

iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 2200 -j ACCEPT
iptables -A INPUT -p tcp --dport 2201 -j ACCEPT

FAQ

Why there is an accept all rule but connections are still blocked

http://unix.stackexchange.com/questions/60953/incoming-accept-all-iptables-rule-still-appearing
Type iptables -vL instead. You will find the accept-all rule only applies to the lo interface:

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out  source    destination
55651   48M ACCEPT  all  --  any any  anywhere  anywhere     state RELATED,ESTABLISHED
  109  5255 ACCEPT  icmp --  any any  anywhere  anywhere
    1    35 ACCEPT  all  --  lo  any  anywhere  anywhere

NAT

This table is consulted when a packet that creates a new connection is encountered. It consists of three built-ins: PREROUTING (for altering packets as soon as they come in), OUTPUT (for altering locally-generated packets before routing), and POSTROUTING (for altering packets as they are about to go out). In short, "PREROUTING - DNAT for incoming traffic, OUTPUT - DNAT for outgoing traffic, POSTROUTING - SNAT for outgoing traffic" Ref.

Ref. This command can be explained in the following way:

-t nat	 	select table "nat" for configuration of NAT rules.
-A POSTROUTING	 	Append a rule to the POSTROUTING chain (-A stands for "append").
-o eth1	 	this rule is valid for packets that leave on the second network interface (-o stands for "output")
-j MASQUERADE	 	the action that should take place is to 'masquerade' packets, i.e. replacing the sender's address by the router's address.

Using the MASQUERADE target, every packet receives the IP of the router's outgoing interface. The advantage over SNAT is that dynamically assigned IP addresses from the provider do not affect the rule, so there is no need to adapt it. For ordinary SNAT you would have to change the rule every time the IP of the outgoing interface changes. As with SNAT, MASQUERADE is meaningful within the POSTROUTING chain only.
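Putting it together, a minimal sketch of a NAT router (assuming the LAN is on eth0 and the uplink on eth1):

# Enable forwarding, then masquerade everything that leaves on eth1.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT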


# Transparent proxying:
# (local net at eth0, proxy server at port 8080)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 8080 

SNAT, DNAT, masquerade

http://server.zdnet.com.cn/server/2008/0317/772069.shtml.

The following command SNATs all packets from the 10.8.0.0 subnet to one of the addresses 192.168.5.3/192.168.5.4/192.168.5.5 before sending them out:


iptables -t nat -A POSTROUTING -s 10.8.0.0/255.255.255.0 -o eth0 -j SNAT --to-source 192.168.5.3-192.168.5.5

With the following configuration you do not need to specify a SNAT target IP at all: whatever dynamic IP eth0 currently has, MASQUERADE reads eth0's current address and SNATs outgoing packets with it.


iptables -t nat -A POSTROUTING -s 10.8.0.0/255.255.255.0 -o eth0 -j MASQUERADE

User management

usermod


#-a, --append.
# Without -a, the new list will override old completely.
usermod -aG www,wheel user

id


#-g, -gid.
#-G, --groups.

#-n, --name. Print name instead of a number.
# Print username
id -nu
# Print all group names
id -nG
# Print main group name
id -ng

sudo, su


# -, -l, --login: Start the shell as a login shell with an environment similar to a real login. Clears all the environment variables except TERM, initializes the environment vars HOME/SHELL/USER/LOGNAME/PATH, changes to the target user's home directory, and sets argv[0] of the shell to '-'.
# -c, --command=command

su - mySelf -c whoami
# output: mySelf

# Note the difference between single/double quotes:
su - root -c "echo $HOME"
# /home/mySelf
su - root -c 'echo $HOME'
# /root

# -E, --preserve-env
#   Indicates to the security policy that the user wishes to preserve their existing environment variables.

If you need to use Heredoc, then you have to resort to sudo.


# -i, --login: Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile or .login will be read by the shell.

# Note: use the "-" after bash so it reads the heredoc from stdin.
sudo -u mySelf bash - <<'EOF'
nohup python ${HOME}/local.py &> ~/a.log &
EOF

sudo -u nobody whoami
# Output: nobody

Add to sudoer


# NOTE: NOPASSWD is not NOPASSWORD
echo "me  ALL=(ALL)  NOPASSWD:ALL" | sudo tee /etc/sudoers.d/me

unrar



# https://linux.die.net/man/1/unrar
# e Extract files to current directory.
# l List archive content.
# p Print file to stdout.
# t Test archive files.
# v Verbosely list archive.
# x Extract files with full path.

# wcfNote: don't use 'e', use 'x' instead.
unrar x a.rar

zip

Serverfault on zip/rar/tar preserving symbolic links.


# The * needs to be escaped once so it will not expand in shell but in zip.
# The backslash avoids the shell filename substitution, so that the name matching is performed by zip at all directory levels.
zip -r master.zip master/conf.ori master/libexec master/makeZip.sh master/sh --exclude=\*~ --exclude=\*.bak --exclude=\*.bak/\*

# To preserve symbolic links: "--symlinks", "-y"
zip -y -r foo.zip foo/
The pattern matching includes the path, and so patterns like \*.o match names that end in ".o", no matter what the path prefix is. Note that the backslash must precede every special character (i.e. ?*[]), or the entire argument must be enclosed in double quotes ("").

cp


##### Overwrite or not?
# -i, --interactive: prompt before overwrite (overrides a previous -n option).
# -n, --no-clobber: do not overwrite an existing file (overrides a previous -i option).
# -f, --force: if an existing destination file cannot be opened, remove it and try again (this option is ignored when the -n option is also used).

Perserve symbolic links

SuperUser.


# Copy symbolic links as is:
cp --preserve=links
# Or, preserve everything - even recursively - and see the output.
cp -av

mv



--backup[=CONTROL]
       make a backup of each existing destination file
-b     like --backup but does not accept an argument
-S, --suffix=SUFFIX
       override the usual backup suffix
The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX.
The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable.

If you specify more than one of -i, -f, -n, only the final one takes effect.
-f, --force
       do not prompt before overwriting
-i, --interactive
       prompt before overwrite
-n, --no-clobber
       do not overwrite an existing file

-u, --update
       move only when the SOURCE file is newer than the destination file or when the destination file is missing
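A small example of the backup behaviour (the file names are made up):

mv --backup=numbered report.txt archive/
# If archive/report.txt already existed, it is kept as archive/report.txt.~1~.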

rename


# fix the extension of your html files.
rename .htm .html *.htm

ls
# apache2  rails.debug  rails.debug-20170825.bz2  rails.debug-20170826  rails.debug-20170914-1505372222.bz2  rails.debug-20170914-1505372281  tomcat6
touch rails.info-20170528-1506992130.xz
rename rails. myProj. rails.*
ls
# apache2  myProj.debug  myProj.debug-20170825.bz2  myProj.debug-20170826  myProj.debug-20170914-1505372222.bz2  myProj.debug-20170914-1505372281  myProj.info-20170528-1506992130.xz  tomcat6

ldd

-v --verbose
	Print all information, including, for example, symbol versioning information.

-u --unused
	Print unused direct dependencies.  (Since glibc 2.3.4.)

-d --data-relocs
	Perform relocations and report any missing objects (ELF only).

-r --function-relocs
	Perform relocations for both data objects and functions, and report any missing objects or functions (ELF only).
ldd -r /usr/lib64/libGLw.so
#		linux-vdso.so.1 =>  (0x00007fff24ff4000)
#		libGL.so.1 => /usr/lib64/libGL.so.1 (0x00007fe57dc04000)
#		libXt.so.6 => /usr/lib64/libXt.so.6 (0x00007fe57d99e000)
#		libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007fe57d661000)
#		libc.so.6 => /lib64/libc.so.6 (0x00007fe57d2cd000)
#		libnvidia-tls.so.346.59 => /usr/lib64/tls/libnvidia-tls.so.346.59 (0x00007fe57d0c9000)
#		libnvidia-glcore.so.346.59 => /usr/lib64/libnvidia-glcore.so.346.59 (0x00007fe57a3ee000)
#		libXext.so.6 => /usr/lib64/libXext.so.6 (0x00007fe57a1dc000)
#		libdl.so.2 => /lib64/libdl.so.2 (0x00007fe579fd7000)
#		libSM.so.6 => /usr/lib64/libSM.so.6 (0x00007fe579dcf000)
#		libICE.so.6 => /usr/lib64/libICE.so.6 (0x00007fe579bb3000)
#		libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007fe579994000)
#		/lib64/ld-linux-x86-64.so.2 (0x000000332f200000)
#		libm.so.6 => /lib64/libm.so.6 (0x00007fe579710000)
#		libuuid.so.1 => /lib64/libuuid.so.1 (0x00007fe57950c000)
#		libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007fe579308000)
#	undefined symbol: xmPrimitiveClassRec	(/usr/lib64/libGLw.so)
#	undefined symbol: _XmStrings	(/usr/lib64/libGLw.so) 

host

host is a simple utility for performing DNS lookups. It is normally used to convert names to IP addresses and vice versa.


# Reference
perror 13 # Check the error code by OS. Useful in mysql error.
# Result: OS error code  13:  Permission denied

make -j jobNumber # run in parallel.
nohup command & # Run command in background, ignoring hangup signals.

tmux # A VNC for text terminals.

lsb_release -a # Show the OS version. LSB: Linux Standard Base.

# Query installed packages in CentOS.
rpm -qa *httpd*
# List package files
rpm -ql iptables
# Show more package infor.
rpm -qi iptables

wall

wall is short for "write all", i.e., sending a message to all users at every terminal logged into the network. It is primarily a system administrator's tool.
If write access to a particular terminal has been disabled with mesg, then wall cannot send a message to that terminal.


wall System going down for maintenance in 5 minutes!

chvt

Ref.

chmod

Sticky bit

Reference.


chmod o+t /home/temp
chmod 1755 /home/temp

# After Sticky Bit set, the x bit in others field becomes 't'.
ls -l
# -rwxr-xrwt 1 xyz xyzgroup 148 Dec 22 03:46 /home/temp
#		   ^

# Find all files with the sticky bit set (older versions of find used "+1000").
find / -perm /1000

Note: if 't' shows as 'T', it means others have no execute permission. Run chmod o+x dirpath and the lowercase 't' comes back.

SGID

Yale provides a good reference (here).


# I am in a primary group "staff".
id	# uid=me gid=staff groups=staff,work,...

mkdir myWorks
ls -ld myWorks	# The dir "myWorks" will have uid=me, gid=staff.

# Option 1.
# If I want to share it to "work" group, I need
chgrp work myWorks
chmod 775 myWorks
ls -ld myWorks	# "myWorks" now has gid=work.
touch myWorks/wcf1	# "wcf1" still has gid=staff.

# Option 2.
# A better solution is to set SGID.
chmod g+s myWorks
ls -ld myWorks	# "myWorks": drwxrwsr-x
touch myWorks/wcf1	# "wcf1" has gid=work, same as its parent dir.
ftp a file into an SGID directory? -- It inherits the GID of the directory, as above.
mv a file into an SGID directory? -- It keeps its current GID.
cp a file into an SGID directory? -- It inherits the GID of the directory.
mkdir inside an SGID directory? -- It inherits the GID of the enclosing directory and is also marked SGID.

Reference.


chmod g+s file1.txt
chmod 2750 file1.txt

SUID

Ref. SUID: Set User ID. SGID: Set Group ID.

chmod u+s filename
chmod u-s filename
chmod g+s filename
chmod g-s filename

Besides the ordinary user ID and group ID, a process also has two "effective" IDs, giving four in total: uid, gid, euid, egid. The kernel mainly uses euid and egid to decide what resources a process may access.
If a program has neither the SUID nor the SGID bit set, then euid=uid and egid=gid, i.e. the IDs of the user who runs it. For example, if user kevin has uid 204 and gid 202, and user foo has uid 200 and gid 201, then the process kevin gets by running the program myfile has euid=uid=204 and egid=gid=202. The kernel judges resource access by these values, which are simply kevin's own permissions and have nothing to do with foo.
If the program has SUID set, euid and egid become the uid and gid of the program's owner. For example, when kevin runs myfile (owned by foo), euid=200, egid=201, uid=204, gid=202, so the process has the access rights of its owner foo.
That is exactly what SUID is for: it lets a user who otherwise lacks the permission access, while running this program, resources he normally could not. passwd is a classic example.
SUID has higher priority than SGID: when an executable has SUID set, the SGID automatically becomes the corresponding egid.

An example:
UNIX systems have a device file /dev/kmem, a character device holding data the kernel needs, including users' passwords. This file must therefore not be readable or writable by ordinary users, so its permissions are: cr--r----- 1 root system 2, 1 May 25 1998 kmem
But programs such as ps need to read this file, and ps has the following permissions:
-r-xr-sr-x 1 bin system 59346 Apr 05 1998 ps
ps is an SGID program whose owner is bin, not root, so SUID cannot be used here to reach kmem. Note, however, that both bin and root belong to the group system, and ps has SGID set, so an ordinary user running ps gains the privileges of the group system; since kmem is readable by its group, running ps just works. Why not instead make ps owned by root and set the SUID bit? That would also solve the problem, but in practice SGID is far less risky than SUID, so for security reasons SGID should be preferred over SUID whenever possible. As for directories: SUID has no effect on a directory, but if a directory has the SGID bit set, any file created in it by a user with write permission automatically gets the directory's group, while the file's owner remains the creating user.

If a program with the SUID bit set is attacked (e.g. via a buffer overflow), the attacker can obtain root privileges, so SUID programs deserve special attention from a security standpoint.
The following command finds all SUID files on the system:

find / -perm -04000 -type f -ls
As for why the mode here is 4000, look back at the meaning of each bit of st_mode and it becomes clear.

The permission of the setuid helper is not correct.


find / -perm -04000 -type f 2>/dev/null
# /lib64/dbus-1/dbus-daemon-launch-helper

File manipulation

touch

StackExchange.



-a     change only the access time
-m     change only the modification time
-c, --no-create
      do not create any files

#---------- -t, -d
# -t STAMP
#   use [[CC]YY]MMDDhhmm[.ss] instead of current time
# 
# -d, --date=STRING
#   parse STRING and use it instead of current time.

touch -d 20120101 goldenfile
ls -l goldenfile 
# -rw-rw-r--. 1 saml saml 0 Jan  1  2012 goldenfile

#---------- -r
# -r, --reference=FILE
#   use this file's times instead of current time.

touch -d 20120101 goldenfile

touch newfile
ls -l newfile 
# -rw-rw-r--. 1 saml saml 0 Mar  7 09:06 newfile

touch -r goldenfile newfile 
ls -l goldenfile newfile
# -rw-rw-r--. 1 saml saml 0 Jan  1  2012 newfile
# -rw-rw-r--. 1 saml saml 0 Jan  1  2012 goldenfile

Text manipulation

awk


ps axu | grep '[j]boss' | awk '{print $5}'
# or you can ditch the grep altogether since awk knows about regular expressions:
ps axu | awk '/[j]boss/ {print $5}'

# Quotes need to be escaped.
awk "BEGIN { print \"Don't Panic.\" }"

# Print all lines containing 'a'.
# $0 matches the whole line.
# /regexp/.
awk '/a/ {print $0}' inFile
# The same.
awk '/a/' marks.txt

# Print columns 3 and 4.
awk '{print $3 "\t" $4}' inFile

function col {
	awk -v col=$1 '{print $col}' inFile
}
# Extract 2nd column.
col 2

awk '{ print $0 }' /etc/passwd 
# root:x:0:0:root:/root:/bin/bash
# daemon:x:1:1:daemon:/usr/sbin:/bin/sh
awk -F":" '{ print $1 " " $3 }' /etc/passwd 
# root 0
# daemon 1

Count elements in xml:


awk -v elem=$2 'BEGIN{
    totalElem=0
}
elem {
    m = split($0,a,elem) # or m=gsub(elem,"")
    totalElem+=m-1
}
END{
    print "Total " elem ": " totalElem
}
' file

awk regexp


echo aaaabcd | awk '{ sub(/a+/, "<A>"); print }'
# <A>bcd

# Prints the second field of each record where the string ‘li’ appears anywhere in the record:
awk '/li/ { print $2 }' mail-list

# The following is true if the expression exp matches regexp:
exp ~ /regexp/
# This example matches/selects all input records with the uppercase letter ‘J’ somewhere in the first field:
awk '$1 ~ /J/' inventory-shipped

# True if the expression exp does not match regexp:
exp !~ /regexp/

# An example file: marks.txt.
# 1)    Amit     Physics    80
# 2)    Rahul    Maths      90
# 3)    Shyam    Biology    87
# 4)    Kedar    English    85
# 5)    Hari     History    89

# Count.
awk '/a/{++cnt} END {print "Count = ", cnt}' marks.txt
# Count = 4

# Print lines longer than 18.
awk 'length($0) > 18' marks.txt

Built-in variables

theGeekStuff: 8 powerful awk built in variables.

RF, NF

NR: number of record (line). NF: number of fields.



cat student-marks
# Jones 2143 78 84 77
# Gondrol 2321 56 58 45
# RinRao 2122 38 37
# Edwin 2537 78 67 45
# Dayan 2415 30 47

awk '{print NR,"->",NF}' student-marks
# 1 -> 5
# 2 -> 5
# 3 -> 4
# 4 -> 5
# 5 -> 4

awk '{print $NF}' student-marks
# $NF will print the last field's content.
# 77
# 45
# 37
# 45
# 47
FS, OFS

FS: Input field separator. OFS: Output field separator.



########## FS, or -F

cat etc_passwd.awk
# BEGIN{
# FS=":";
# print "Name\tUserID\tGroupID\tHomeDirectory";
# }
# {
# 	print $1"\t"$3"\t"$4"\t"$6;
# }
# END {
# 	print NR,"Records Processed";
# }

awk -f etc_passwd.awk /etc/passwd
# Name    UserID  GroupID        HomeDirectory
# gnats	41	41	/var/lib/gnats
# libuuid	100	101	/var/lib/libuuid
# syslog	101	102	/home/syslog
# hplip	103	7	/var/run/hplip
# avahi	105	111	/var/run/avahi-daemon
# saned	110	116	/home/saned
# pulse	111	117	/var/run/pulse
# gdm	112	119	/var/lib/gdm
# 8 Records Processed

########## OFS

awk -F':' 'BEGIN{OFS="=";} {print $3,$4;}' /etc/passwd
# 41=41
# 100=101
# 101=102
# 103=7
# 105=111
# 110=116
# 111=117
# 112=119
RS, ORS

RS: Record Separator. ORS: Output Record Separator.
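A tiny example: read space-separated words as records and print one per line.

printf 'one two three' | awk 'BEGIN{RS=" "; ORS="\n"} {print}'
# one
# two
# three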

FILENAME

Name of the current input file.

FNR

FNR: Number of Records relative to the current input file.
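For example, with two input files (the names are placeholders), NR keeps counting across files while FNR restarts at 1 for each file:

awk '{print FILENAME, NR, FNR}' file1 file2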

cut

cut: cut out selected portions of each line of a file.

-s	Suppress lines with no field delimiter characters. Very useful.
-f list		Means the list specifies field position.
-d delim	Field delimiter.

-b list		The list specifies byte positions.
-c list		The list specifies char positions.
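For example, -s drops lines that contain no delimiter at all:

printf 'root:x:0\nno delimiter here\n' | cut -d: -f1 -s
# root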

How to use space as a delimiter with cut?

http://stackoverflow.com/questions/816820/use-space-as-a-delimiter-with-cut-command.


cut -d' ' -f2 
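
A small sketch combining -d, -f and -s: print the username and shell fields from /etc/passwd (lines without the delimiter would be skipped by -s).

cut -d: -f1,7 -s /etc/passwd
# e.g. root:/bin/bash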

grep

Exit status:

0     One or more lines were selected.
1     No lines were selected.
more than 1    An error occurred.
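
A minimal sketch that uses the exit status in a script (grep -q suppresses the output):

if grep -q "root" /etc/passwd; then echo "found"; else echo "not found"; fi
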
-v, --invert-match
	Selected lines are those not matching any of the specified patterns. E.g. cat /etc/passwd | grep -v nologin (Find all users with shells).

-c, --count
	Only a count of selected lines is written to standard output.
-B num, --before-context=num
	Print num lines of leading context before each match.  See also the -A and -C options.

--colour=[when], --color=[when]
	Mark up the matching text with the expression stored in GREP_COLOR environment variable.  The possible values of when can be `never', `always' or 'auto'.
-n, --line-number
	Each output line is preceded by its relative line number in the file, starting at line 1.  The line number counter is reset for each file processed.  This option is ignored if -c, -L, -l, or -q is specified.

-E, --extended-regexp
	Interpret pattern as an extended regular expression (i.e. force grep to behave as egrep).

-H, --with-filename
  Always print filename headers with output lines.
-h, --no-filename
  Never print filename headers (i.e. filenames).

-I	Ignore binary files.  This option is equivalent to --binary-file=without-match option.

-i, --ignore-case
	Perform case insensitive matching.  By default, grep is case sensitive.

-R, -r, --recursive
	Recursively search subdirectories listed.

-w, --word-regexp
  The expression is searched for as a word (as if surrounded by `[[:<:]]' and `[[:>:]]'; see re_format(7)).
-x, --line-regexp
  Only input lines selected against an entire fixed string or regular expression are considered to be matching lines.

http://unix.stackexchange.com/questions/21764/grep-or-regex-problem.


# Get the PID of the rake process (to actually kill it, see the sketch below).
ps aux | grep rake | grep -v grep | awk '{print $2}'
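
A sketch of doing the whole kill in one pipeline; the [r]ake trick stops grep from matching itself, and -r (GNU xargs) skips the kill when nothing matched:

ps aux | grep '[r]ake' | awk '{print $2}' | xargs -r kill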

printf


# Print "aωb" in UTF-8:
printf 'a\xCF\x89b' > binary
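
A couple more format-string examples, as a quick sketch (printf uses C-style format specifiers):

printf '%s=%d\n' count 42
# count=42
printf '%05.2f\n' 3.14159
# 03.14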

sed

sed: Stream Editor.

sed [-Ealn] command [file ...]
sed [-Ealn] [-e command] [-f command_file] [-i extension] [file ...]

-E: Interpret patterns as extended/modern regular expressions rather than Basic Regular Expressions (BREs).
-l: Make output line buffered.

-e command: Append the editing commands to the list of commands.
-f command_file: Append the editing commands from the file, one command per line.

-i extension: Edit file in-place, saving backups with extension.

-n: Suppress echoing.

[2addr]s/regular expression/replacement/flags
       |_ This 's' means substitute.

flags:
  N(number)  Only substitute the N'th occurrence.
  g  Global, not just the first one.
  p  Print. Even with -n.
  w file  Append to file.

Sed Addresses:
  Command with no addr: selects every pattern.
  With 1 addr: select the pattern match the addr.
  With 2 addrs: select the pattern match the range [addr1, addr2].
It could be:
  A number. This is the line number. Starts from 1.
  $. The last line of input.
  Context addr. Format: /regexp/. See examples below.

BrunoLinux.



# command s: substitute.
# NOTE: sed executes this line by line.
echo xyne.archlinux.ca | sed 's/\./@/'
# xyne@archlinux.ca

# flag g: Apply the replacement to all matches to the REGEXP, not just the first.
echo xyne.archlinux.ca | sed 's/\./@/g'
# xyne@archlinux@ca

# Only print those lines with "John"
# -n, --quiet: suppress automatic printing of pattern space.
# When the "-n" option is used, the "p" flag will cause the modified line to be printed.
sed -n '/John/p' songs.txt > johns.txt

for fl in *.php; do
	mv $fl $fl.old
	sed 's/FINDSTRING/REPLACESTRING/g' $fl.old > $fl
	rm -f $fl.old
done

sed s/day/night/ old > new
sed s/day/night/ <old > new
# Use _ as a delimiter.
sed 's_/usr/local/bin_/common/bin_' <old >new

# & to match string.
echo "123 abc" | sed 's/[0-9]*/& &/'
# 123 123 abc

# \1 to keep part of the pattern
echo abcd123 | sed 's/\([a-z]*\).*/\1/'
# abcd

# Insert a line to the start of a file.
sed -i '1s/^/line to insert\n/' path/to/file/you/want/to/change.txt

# 1addr. Delete first number in line 3.
sed '3 s/[0-9][0-9]*//' <file >new
# 2addrs. 1st-100th lines.
sed '1,100 s/A/a/'

# Delete the first number on all lines starting with a '#'.
sed '/^#/ s/[0-9][0-9]*//'
# Same as before, but with ',' as separator.
sed '\,^#, s/[0-9][0-9]*//'

Note that you have to use "escape-signs" ( \ ) if there are slashes in the text you want to replace, so as an example: 's/www.search.yahoo.com\/images/www.google.com\/linux/g' to change www.search.yahoo.com/images to www.google.com/linux.
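
A sketch of -i with a backup extension (the file name and pattern are placeholders; GNU sed takes the suffix glued to -i, BSD sed wants it as a separate argument):

# Replace in place, keeping file.bak as a backup.
sed -i.bak 's/foo/bar/g' file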

tail


head -1 project.info-20170830 |cut -f1 -d" "
# 2017-08-24T19:54:54+00:00
tail -1 project.info-20170830 |cut -f1 -d" "
# 2017-08-30T10:34:55+00:00
-c, --bytes=K
	output the last K bytes; alternatively, use -c +K to output bytes starting with the Kth of each file.

-f, --follow[={name|descriptor}]
	output appended data as the file grows; -f, --follow, and --follow=descriptor are equivalent.
-F:	same as --follow=name --retry

-n, --lines=K
	output the last K lines, instead of the last 10; or use -n +K to output lines starting with the Kth.

--max-unchanged-stats=N
	with --follow=name, reopen a FILE which has not changed size after N (default 5) iterations to see if it has been unlinked or renamed (this is the usual case of rotated log files).  With inotify, this option is rarely useful.

--pid=PID
	with -f, terminate after process ID, PID dies

-q, --quiet, --silent: never output headers giving file names

--retry
	keep trying to open a file even when it is or becomes inaccessible; useful when following by name, i.e., with --follow=name
-s, --sleep-interval=N
	with -f, sleep for approximately N seconds (default 1.0) between iterations. With inotify and --pid=P, check process P at least once every N seconds.

wc

wc
word, line, character, and byte count.
-c	Number of bytes.
-l	Number of lines.
-m	Number of chars in input file.
-w	Number of words in input file.
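
A quick sketch (the counts shown are illustrative):

wc -l /etc/passwd
# 42 /etc/passwd
ps aux | wc -l
# number of processes, plus one header line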

tr

tr: translate or delete char.

tr OPTION set1 [set2]
-c, -C, --complement. Use the complement of set1.
-d, --delete.
-s, --squeeze-repeats. Replace each input sequence of a repeated character that is listed in SET1 with a single occurrence of that character.
-t, --truncate-set1. First truncate SET1 to length of SET2.

SETs:
\NNN	Octal value.
For more, see man tr.

# Remove all non-alphabet from openssl-generated string.
openssl rand -base64 10 | tr -dc 'a-zA-Z'

Identifying and removing null characters in UNIX

Ref.

tr -d '\000'
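
A sketch of -s and -c/-d combined (squeeze repeated spaces; keep only digits):

echo "a   b    c" | tr -s ' '
# a b c
echo "abc123def456" | tr -cd '0-9'; echo
# 123456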

xargs



#---------- Exit status
# Suppose README.md has "IAmIn" in it.

echo "README.md" | xargs grep IAmIn
# IAmIn
echo $?
# 0

echo "README.md" | xargs grep hehe
echo $?
# 1

# But this one:
echo "" | xargs egrep -w --color --with-filename -n "hehe"
echo $?
# 0

# -x: Force xargs to terminate immediately if a command line containing number arguments will not fit in the specified (or default) command line length.
# -I replstr: Execute utility for each input line, replacing one or more occurrences of replstr in up to replacements (or 5 if no -R flag is specified) arguments to utility with the entire line of input.
seq 20 | xargs -Iz echo "Hi there"
seq 20 | xargs -Iz echo "Hi there z" # Here 'z' will be 1..20.

Others

bc

bc - An arbitrary precision calculator language.



-i, --interactive

-l, --mathlib. Define the standard math library.
# Math library:
# s(x) - sine. c(x) - cosine. a(x) - arctangent. l(x) - natural log. e(x) - exponential. j(n,x) - Bessel function.

pi=$(echo "scale=10; 4*a(1)" | bc -l)
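
scale controls the number of digits after the decimal point, as in this quick sketch:

echo "scale=4; 7/3" | bc -l
# 2.3333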

diff



# -y, side by side
diff a b -y

-l, --paginate: pass output through `pr' to paginate it
-d, --minimal: try hard to find a smaller set of changes
--suppress-common-lines

-i, --ignore-case
-w, --ignore-all-space: ignore all white space
-B, --ignore-blank-lines
-I, --ignore-matching-lines=RE: ignore changes whose lines all match RE

# comparing files with comment-only changes
diff -u -I '#.*' test{1,2}
# wcf's solution: \s for whitespace.
diff -u -I "^\s*#.*" test{1,2}

-W: width. Default is 130. Used together with -y.

diff -dwBy -W 200 --suppress-common-lines syslog-ng.conf.*

# Compare only the first line or the last 1000 lines:
diff <(head -n 1 file1) <(head -n 1 file2)
diff <(tail -n 1000 messages) <(tail -n 1000 syslog)

echo


# -n  do not output the trailing newline
# -e  enable interpretation of backslash escapes
# -E  disable interpretation of backslash escapes (default)

a="m
n
"

# What will be the difference?
echo $a      # unquoted: word splitting collapses the newlines, printing "m n" on one line
echo "$a"    # quoted: the embedded newlines are preserved

mail

On startup, mail prints a one-line header for each message found. The current message is initially the first message (numbered 1).

? : help.

p, print: see email.
t, type: same as print.

# https://unix.stackexchange.com/questions/26790/what-is-mail-and-how-is-it-navigated
h: reprints the current screenful
z: show the next screenful
z-: show the previous screenful.

+/-, NUMBERs: move among msg lists, like in ed.

d, delete. "delete 1 2" deletes messages 1 and 2. "delete 10-50" deletes a range of mails.
dp or dt. Deletes the current message and prints the next message.
u, undelete.

r, reply.
x, exit.

Status

new, read, old

timeout

"timeout [OPTION] DURATION COMMAND": Start COMMAND, and kill it if still running after DURATION.

DURATION is a floating point number with an optional suffix: 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days.


-k, --kill-after=DURATION
  also send a KILL signal if COMMAND is still running this long after the initial signal was sent.
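
A sketch (long_job.sh is a placeholder; 124 is the exit status GNU timeout uses when it had to kill the command):

# Give the command 30s, send SIGTERM, then SIGKILL 10s later if it is still alive.
timeout -k 10 30 ./long_job.sh
echo $?
# 124 if it timed out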

watch

Execute a program periodically (by default 2s), showing output fullscreen.


watch [-dhvt] [-n seconds] [--differences[=cumulative]] [--help] [--interval=seconds] [--no-title] [--version] command

# -n seconds: repeat every N seconds. Defaults to 2s.
# -d, --difference: highlight the differences between successive updates.
# -c, --color: Interpret ANSI color sequences.

# By default, watch will run until interrupted.
# -e, --errexit: Freeze updates on command error, and exit after a key press.
# -g, --chgexit: Exit when the output of command changes.

watch -dcg ls -lrt
watch -dcg -n 1 ss -t

md5sum


md5sum file
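
Checksums can also be recorded and verified later (file names are placeholders):

md5sum file1 file2 > checksums.md5
md5sum -c checksums.md5
# file1: OK
# file2: OK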

mktemp


# -u, --dry-run
mktemp -u /tmp/wcf.XXX
# /tmp/wcf.nwD
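
A common scratch-directory pattern, as a sketch:

tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT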

bash


# -s. Read commands from the standard input. It allows the positional parameters to be set when invoking an interactive shell.
# --. Signals the end of options and disables further option processing.
curl -L https://chef.io/chef/install.sh | sudo bash -s -- -P chefdk

# The following -k, -j, -l options are all for "bootstrap.sh", which is a chef script.
curl -ksSL https://github.com/me/proj/raw/master/bootstrap.sh | devEnv=false bash -s -- -k -j '{}' -l "recipe[directory::development]"
# Install chefdk

# -l, --login

# All of the  single-character shell options documented in the description of the set builtin command can be used as options when the shell is invoked. So "-e" is same as in "set -e":
bash -le "script_to_run"

ps

binaryTides: linux ps command.


# "-u" is used to show process of that user. But "u" means show detailed information.
# BSD style
ps ax
# UNIX style
ps -ef

# These two show the same processes in different formats (BSD vs UNIX syntax).
ps aux
ps -ef

# Display process by user
ps -f -u www-data

# Search the processes by their name or command use the "-C" option followed by the search term.
ps -C apache2
# Display processes by process id
ps -f  -p 3150,7298,6544

# Sort process by cpu or memory usage
# "-" or "+" symbol indicating descending or ascending
ps aux --sort=-pcpu,+pmem

# Display process hierarchy in a tree style
ps -f --forest -C apache2

# Display child processes of a parent process
ps -o pid,uname,comm -C apache2

# Search by parent-pid
ps --ppid 2359

# Display threads of a process
ps -p 3150 -L

# Display elapsed time of processes
ps -e -o pid,comm,etime

# Turn ps into a realtime process viewer, like top?
watch -n 1 'ps -e -o pid,uname,cmd,pmem,pcpu --sort=-pmem,-pcpu | head -15'

ps -Af | grep ...

screen

What is screen for? See superuser.com: tmux vs screen.


screen emacs prog.c

-ls [match]
-list [match]
  does not start screen, but prints a list of pid.tty.host strings identifying your screen sessions.  Sessions marked `detached' can be resumed with "screen -r".  Those  marked  `attached' are  running and have a controlling terminal. If the session runs in multiuser mode, it is marked `multi'. Sessions marked as `unreachable' either live on a different host or are `dead'. An unreachable session is considered dead, when its name matches either the name of the local host, or the specified parameter, if any.  See the -r flag for a description how to construct matches.  Sessions marked as `dead' should be thoroughly checked and removed.  Ask your system administrator if you are not sure. Remove sessions with the -wipe option.

-S sessionname
  When creating a new session, this option can be used to specify a meaningful name for the session. This name identifies the session for "screen -list" and "screen -r" actions. It substitutes the default [tty.host] suffix.

-t name
  sets the title.
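
For example (the session and window names are just illustrations):

# Start a named session with a titled window, detach with C-a d, then resume it later.
screen -S build -t editor
screen -ls
screen -r build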

Commands

gnu.org: Session management.

rackaid.com: linux screen.

A good blog on linux screen. Recommended by wcf.

help
  Ctrl-a ?

##### Windows

Create a window
  Ctrl-a c
Switching Between Windows
  Ctrl-a n/p
List windows
  Ctrl-a w
Navigate through windows
  Ctrl-a #
Navigate back and forth
  Ctrl-a Ctrl-a
Kill windows
  C-a k

detach
  C-a d or C-a C-d
reattach
  screen -r
lockscreen
  C-a x or C-a C-x
  Call a screenlock program (/local/bin/lck or /usr/bin/lock or a builtin, if no other is available). Screen does not accept any command keys until this program terminates.

Logging Your Screen Output
  Ctrl-a H

Getting Alerts if there is activity
  Ctrl-a M
Getting Alerts if there is no more output
  Ctrl-A _

pow_detach
(C-a D D)
  Mainly the same as detach, but also sends a HANGUP signal to the parent process of screen.

##### Rename session
# https://blog.onetwentyseven001.com/linux-screen/#.WbIGq9Og9E4

Ctrl-a :sessionname mySessionName
  Rename the current session. ':' is used to specify a command, and "sessionname" is a command.
screen -S oldSessionName -X sessionname newSessionName
  Rename the session without attaching to it.

suspend
  C-a z or C-a C-z

quit
  C-a C-\

Screen with emacs

Emacs uses ‘C-a’ for ‘beginning-of-line’. It is also the command key for GNU Screen, which causes a problem of “muscle memory impedance matching.”

Suggestions for Command key Redefinition (.screenrc):


#
## Control-^ (usually Control-Shift-6) is traditional and the only key not used by emacs
escape ^^^^
#
## do not trash BackSpace, usually DEL
bindkey -k kb
bindkey -d -k kb
#
## do not trash Delete, usually ESC [ 3 ~
bindkey -k kD
bindkey -d -k kD

tmux

See also "screen".

wiki.archlinux.org: tmux.

By default, command key bindings are prefixed by Ctrl-b.

robots.thoughtbot.com provides a good reference for window/pane.


##### window

Ctrl-b l (Move to the previously selected window)
Ctrl-b w (List all windows)
Ctrl-b window_number (the default bindings are from 0 – 9)
Ctrl-b f window_name (Search for window name)
Ctrl-b w (Select from interactive list of windows)

tmux new-window (prefix + c)
tmux select-window -t :0-9 (prefix + 0-9)
tmux rename-window (prefix + ,)

##### pane

Ctrl-b q  (Show pane numbers, when the numbers show up type the key to goto that pane)

tmux split-window (prefix + ") 
  splits the window into two vertical panes
tmux split-window -h (prefix + %)
  splits the window into two horizontal panes

tmux swap-pane -[UDLR] (prefix + { or })
  swaps pane with another in the specified direction
tmux select-pane -[UDLR]
  selects the next pane in the specified direction
tmux select-pane -t :.+
  selects the next pane in numerical order

##### Copy mode

Enter copy mode: Ctrl-b [
To quit: vi mode - q. emacs mode - Esc.

##### Helpful

tmux list-keys
tmux list-commands

tmux info
  lists out every session, window, pane, its pid, etc.

tmux source-file ~/.tmux.conf
  reloads the current tmux configuration (based on a default tmux config)

Change prefix

In ~/.tmux.conf

unbind C-b
set -g prefix C-^
# set -g prefix C-a
# set -g prefix m-'\' # Meta-\
bind C-^ send-prefix

How to scroll in tmux

SuperUser.com.

"Ctrl-b [", then you can use your normal navigation keys to scroll around (eg. Up Arrow or PgDn). Press q to quit scroll mode.

Alternatively you can press "Ctrl-b PgUp" to go directly into copy mode and scroll one page up (which is what it sounds like you will want most of the time).

In emacs copy mode, you can use "M-Up/M-Down", i.e. the usual emacs bindings.

tee

Read from standard input and write to standard output and files.

SO: sudo tee.

SO: multiple lines.



# -a, --append

# Make sure to avoid quotes inside quotes.
echo 'deb blah ... blah' | sudo tee -a /etc/apt/sources.list

# wcfNote: tee writes to stdout AND a file, so here we add ">/dev/null" to tee to suppress stdout.
# To avoid printing data back to the console:
echo 'deb blah ... blah' | sudo tee -a /etc/apt/sources.list > /dev/null

sudo tee -a /etc/profile.d/maven.sh > /dev/null << EOL
export M2_HOME=/opt/apache-maven-3.1.1
export M2=\$M2_HOME/bin
PATH=\$M2:\$PATH
EOL

xclip



# Put your uptime in the X selection. Then middle click in an X application to paste.
uptime | xclip

# Put the contents of the selection into a file.
xclip -o > helloworld.c

# Middle click in an X application supporting HTML to paste the contents of the given file as HTML.
xclip -t text/html index.html

pkg-config


gcc my-program.c $(pkg-config --cflags --libs x11) -o my-program

alternatives

It is intended to be a drop-in replacement for Debian's update-alternatives script. It is possible for several programs fulfilling the same or similar functions to be installed on a single system at the same time.

top

Interactive commands (Linux). For macOS, they may differ.

1:	Toggle_Single/Separate_Cpu_States. Useful in multi-core machine.
t:	Toggle_Task/Cpu_States.
m:	Toggle_Memory/Swap_Usage.

u/U:	Select a user. Only shows the user's processes.
k:	Kill a task. Give a PID and a signal. The default signal is SIGTERM. Typing 0 as the signal aborts the kill.
r:	Renice_a_task. A positive value makes the PID lose priority; use a negative value to raise the priority of an urgent PID.
W:	Write_the_configuration_file. Save your options, toggles and modes so they are restored the next time you open top.

x:	Column_Highlight_toggle.
y:	Row_Highlight_toggle.
z:	Color/Monochrome_toggle.

f:	Fields_select.
o:	Order_fields.
If the field letter is upper case the field will then be shown as part of the task display (screen width permitting). This will also be indicated by a leading asterisk (’*’).

F/O:	Select_Sort_Field. These keys display a separate screen where you can change which field is used as the sort column.
R:	Reverse/Normal_Sort_Field_toggle.
H:	Threads_toggle.

When running the top command, press f then j to display the P column (last CPU used by process).


# Reports USED (sum of process rss and swap total count) instead of VIRT
top -m

Use top -H to show top with threads info.

CPUs

us	Percentage of CPU time spent in user space.
sy	Percentage of CPU time spent in kernel space.
ni	Percentage of CPU time spent on low priority processes.
id	Percentage of CPU time spent idle.
wa	Percentage of CPU time spent in wait (on disk).
hi	Percentage of CPU time spent handling hardware interrupts.
si	Percentage of CPU time spent handling software interrupts.

dd

SO.


dd if=/dev/zero of=testFile bs=1M count="size in megabytes"
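
A concrete sketch creating a 100 MB test file (status=progress needs a reasonably recent GNU dd):

dd if=/dev/zero of=testFile bs=1M count=100 status=progress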

df, du

Why df shows more usage than du

Ref. df can report much more used disk space than du. The main reason is that the two compute usage differently; a typical case is files that have been deleted but are still held open by running processes. Use lsof | grep deleted to see the processes holding that space.
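
To compare the two views on one filesystem (a sketch; /var is just an example mount point):

df -h /var
du -sh /var
# List open files whose link count is below 1, i.e. deleted but still held open:
lsof +L1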

last

ServerFault: who shutdown my CentOS?.

# To see the last reboot
last reboot | head -1

# For shutdown info
last -x | grep shutdown

lsof

List open files and see which process has a given file open.


# -n: This  option  inhibits the conversion of network numbers to host names for network files.  Inhibiting conversion may make lsof run faster. It is also useful when host name lookup is not working properly.
lsof -n
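
Common uses, as a sketch (the port and path are examples):

# Who is listening on, or connected to, TCP port 8080?
lsof -n -i :8080
# Which processes hold files open under /var/log?
lsof +D /var/log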

run-parts

man.cx.

# If --lsbsysinit option is not given, then the names must consist entirely of upper and lower case letters, digits, underscores, and hyphens.
# --test: print the names of the scripts which would be run, but don't actually run them.
# --list: print the names of all matching files (not limited to executables), but don't actually run them. This option cannot be used with --test.
# --reverse: reverse the scripts' execution order.
# By default, files are run in the lexical sort order of their names.

run-parts /etc/cron.daily

make

-f makefile

modprobe

Intro

lokkit

Reference.

-n, --nostart
	Configure firewall but don't activate the new conf.
--default=type
	Set firewall default type: server, desktop.
--list-services
--list-icmp-types
	List supported icmp types.
--enabled
--disabled
-s service
	Open the firewall for a service, e.g. ssh.
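
For example, a sketch (service names come from --list-services):

# Open ssh and http, but only write the configuration without activating it.
lokkit -s ssh -s http --nostart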

scons

scons is short for Software CONStruction tool.

scons platform=linux-gcc # Compile with gcc.

Hardware related

blkid, exfatlabel

# Give the drive a label.
exfatlabel /dev/sdc1 EXTERNAL
# Use the label in fstab.
nano /etc/fstab
# LABEL=EXTERNAL   /mnt/external    exfat-fuse *options*   0 0

blkid /dev/sdb3
# /dev/sdb3: LABEL="LRS_ESP" UUID="D8B5-D4E8" TYPE="vfat"

Time

Time zones

UTC (Coordinated Universal Time)

Ref: 鸟哥私房菜 (Vbird's Linux guide). Because the two are measured differently, UTC and GMT can differ by roughly 16 minutes.

Hardware clock

There are plenty of such clocks around us, quartz watches for example, and the BIOS on a computer motherboard contains one that records and keeps time. These clocks keep time by counting the oscillation period of a crystal chip, and each chip has its own characteristic period. Because that period varies slightly from chip to chip (even within the same batch, and even with temperature), the BIOS clock tends to drift a few seconds fast or slow from time to time.

Stratum

Network Time Protocol (NTP)
The NTP hierarchy works much like DNS. When you set up an NTP server, if the upstream server it synchronizes against is stratum-1, then your server is stratum-2. For example, if our NTP server synchronizes with Taiwan's tock.stdtime.gov.tw, a stratum-2 server, then our host is stratum-3; if other NTP hosts synchronize against ours, they become stratum-4, and so on. There can be at most 15 strata.

Configure files

See example below.

[root@www ~]# date
Thu Jul 28 15:08:39 CST 2011
# Note the CST time zone here.

[root@www ~]# cat /etc/sysconfig/clock
ZONE="America/New_York"

[root@www ~]# cp /usr/share/zoneinfo/America/New_York /etc/localtime
[root@www ~]# date
Thu Jul 28 03:09:21 EDT 2011
# Both the time zone and the time have changed.

date

# Set the time.
date MMDDhhmmYYYY
#	MM: month, DD: day, hh: hour, mm: minute, YYYY: four-digit year

date
# Tue Jun 30 17:11:31 CST 2015

# %H     hour (00..23), thus preferred over %k.
# %I     hour (01..12)
# %k     hour ( 0..23)
# %l     hour ( 1..12)
# %m     month (01..12)
# %M     minute (00..59)

date +%Y%m%d%H%M%S
# 20150630171121

hwclock

/sbin/hwclock [-rw]
-r : read. Show the time currently stored in the BIOS.
-w : write. Write the current Linux system time into the BIOS.

[root@www ~]# date; hwclock -r
Thu Jul 28 16:34:00 CST 2011
Thu 28 Jul 2011 03:34:57 PM CST  -0.317679 seconds
# Notice that they differ by roughly an hour: that is the BIOS time!

[root@www ~]# hwclock -w; hwclock -r; date
Thu 28 Jul 2011 04:35:12 PM CST  -0.265656 seconds
Thu Jul 28 16:35:11 CST 2011
# Written: the software clock and the hardware clock are now in sync.

ntpdate

/usr/sbin/ntpdate. Ref: time servers.

ntpdate ntp.api.bz

# US government.
time.nist.gov
time.windows.com

# Fudan
ntp.fudan.edu.cn
# Shanghai.
ntp.api.bz

# Crontab to update system and BIOS time.
10 5 * * * root (/usr/sbin/ntpdate tock.stdtime.gov.tw && /sbin/hwclock -w) &> /dev/null

ntpd

Command packages

aspell

GNU Aspell is a Free and Open Source spell checker designed to eventually replace Ispell. aspell.net.

binutils

The GNU Binary Utilities, or binutils, are a set of programming tools for creating and managing binary programs, object files, libraries, profile data, and assembly source code.

It includes: ld, gprof, ar, size, etc. wikipedia: GNU binutils.

icu

ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications. site.icu-project.org.


apk add icu-dev

iproute2

Included commands: "ip" controls IPv4 and IPv6 configuration. "tc" stands for traffic control.

wiki.linuxFoundation.org: iproute2.

libffi

libffi is a foreign function interface library. libffi is most often used as a bridging technology between compiled and interpreted language implementations. Notable users include Python. wikipedia: libffi.

net-tools

Included commands: arp(8), hostname(1), ifconfig(8), ipmaddr, iptunnel, mii-tool(8), nameif(8), netstat(8), plipconfig(8), rarp(8), route(8) and slattach(8).

NOTE: most net-tools programs are obsolete now, replaced e.g. by iproute2's ip command.

program      obsoleted by
arp          ip neigh
ifconfig     ip addr
ipmaddr      ip maddr
iptunnel     ip tunnel
route        ip route
nameif       ifrename
mii-tool     ethtool

wiki.linuxFoundation.org: net-tools.

Plymouth

ArchWiki. Plymouth is a project from Fedora providing a flicker-free graphical boot process. It relies on kernel mode setting (KMS) to set the native resolution of the display as early as possible, then provides an eye-candy splash screen leading all the way up to the login manager.

Plymouth primarily uses KMS (Kernel Mode Setting) to display graphics. If you can't use KMS (e.g. because you are using a proprietary driver) you will need to use framebuffer instead.

Graphics card

Nvidia


lspci -k | grep -A 2 -E "(VGA|3D)"
nvidia-xconfig

# The GUI configure panel:
nvidia-settings

wcfNote: after installing nvidia driver and reboot, the DISPLAY becomes :1 instead of :0.


echo $DISPLAY
# :1

GPU acceleration

XvMC

X-Video Motion Compensation (XvMC) is an extension for the X.Org Server. The XvMC API allows video programs to offload portions of the video decoding process to the GPU video-hardware.

XvMC is obsoleted by VA-API and VDPAU nowadays, which have better support for recent GPUs.

VDPAU

Video Decode and Presentation API for Unix.

VA-API

Video Acceleration API.

CUPS

Short for Common Unix Printing System. Wikipedia.

Concepts, Jargons

Askubuntu. ACPI (Advanced Configuration and Power Interface). APIC (Advanced Programmable Interrupt Controller).

DKMS

Dynamic Kernel Module Support (DKMS) is a program/framework that enables generating Linux kernel modules whose sources generally reside outside the kernel source tree. The concept is to have DKMS modules automatically rebuilt when a new kernel is installed.

The negative effect of using DKMS is that DKMS breaks the Pacman database. The problem is that the resulting modules do not belong to the package anymore, so Pacman cannot track them.


dkms status

# Rebuild all modules for the currently running kernel:
dkms autoinstall
# or for a specific kernel:
dkms autoinstall -k 3.16.4-1-ARCH

# To build a specific module for the currently running kernel:
dkms install -m nvidia -v 334.21
# or simply:
dkms install nvidia/334.21

dkms remove -m nvidia -v 331.49 --all

AWS

Edit /etc/yum.repos.d/epel.repo. In the section marked [epel], change enabled=0 to enabled=1.
To enable the EPEL 6 repository temporarily, use the yum command-line option --enablerepo=epel.

Amazon AWS allows only one key pair per user for login and disables password authentication by default. The default username is ec2-user.

Distribution

Tools for packaging self-contained apps

CDE

Youtube talk by Philip Guo. CDE website. Wikipedia: dependency hell.

ABSTRACT

It can be painfully difficult to take software that runs on one person's machine and get it to run on another machine. Online forums and mailing lists are filled with discussions of users' troubles with compiling, installing, and configuring software and their myriad of dependencies. To eliminate this dependency problem, we created a tool called CDE that uses system call interposition to monitor the execution of x86-Linux programs and package up the Code, Data, and Environment required to run them on other x86-Linux machines, without any installation or configuration.

CDE is easy to use: Simply prepend any Linux command (or series of commands) with 'cde', and CDE will execute that command, monitor its actions using ptrace, and copy all files it accesses (e.g., executables, libraries, plug-ins, scripts, configuration/data files) into a self-contained package. Now you can transfer that package to another Linux machine and run that exact same command without installing anything. In short, if you can run a set of Linux commands on your x86 machine, then CDE enables others to run it on theirs.

People in both academia and industry have used CDE to distribute portable software, demo research prototypes, make their scientific experiments reproducible, run software natively on older Linux distros, and quickly deploy experiments to compute clusters.

CDE is free and open-source, available here:
http://www.stanford.edu/~pgbovine/cde...

Speaker Info:

Philip Guo. Philip is a 5th-year Ph.D. student in the Computer Science Department at Stanford University. His research interests are in software reliability, programming languages, and operating systems.

Manual packaging

Stackoverflow: how to make unix binary self contained. The solution most commercial products use, as far as I can tell, is to make their "application" a shell script that sets LD_LIBRARY_PATH and then runs the actual executable. Something along these lines:

#!/bin/sh
here=`dirname "$0"`
export LD_LIBRARY_PATH="$here"/lib
exec "$here"/bin/my_app "$@"
Then you just dump a copy of all the relevant .so files under lib/, put your executable under bin/, put the script in ., and ship the whole tree.
(To be production-worthy, properly prepend "$here"/lib to LD_LIBRARY_PATH if it is non-empty, etc.)

Other

http://programmers.stackexchange.com/questions/83948/how-to-distribute-our-software-on-linux-without-shipping-source-code discusses options for distributing a commercial application on Linux without shipping source code.

About init

Reference.

SysV

Short for System V.

xinetd

rc.local

The info could be read at /etc/inittab.

# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)

To check and switch runlevel:

# See current runlevel.
who -r
# Switch to single user mode.
init 1

chkconfig

[--list] [--type type][name]
--add name
--del name
--override name
[--level levels] [--type type] name on|off|reset|resetpriorities

Auto start services in CentOS.

# Find out the name of service's script from /etc/init.d/ directory e.g. mysqld or httpd, and add it to chkconfig
sudo /sbin/chkconfig --add mysqld
# Make sure it is in the chkconfig.
sudo /sbin/chkconfig --list mysqld
# Set it to autostart
sudo /sbin/chkconfig mysqld on
# Stop it from auto starting on boot
sudo /sbin/chkconfig mysqld off

chkconfig file

See mysqld in /etc/init.d.

action is a function defined in /etc/rc.d/init.d/functions. Reference.

Run service as non-root on CentOS

Ref. /etc/init.d/functions provides daemon, which is suitable for this task:

if [[ "$USER" == "my_user" ]]
then
	daemon my_cmd &>/dev/null &
else
	daemon --user=my_user my_cmd &>/dev/null &
fi

/var/lock/subsys/

Reference.

It is not always mandatory to create the lock file; the services can be started and stopped without it. But it can create problems during shutdown and runlevel switches. Ref.

chroot

mkfifo

Reference. mkfifo is used to create a named pipe (FIFO) file.
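
A minimal sketch: one process writes into the FIFO while another reads from it ("bigfile" is a placeholder).

mkfifo /tmp/mypipe
# The writer blocks until a reader opens the pipe.
gzip -c bigfile > /tmp/mypipe &
cat /tmp/mypipe > bigfile.gz
rm /tmp/mypipe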

Synaptics, touchpad

LinuxQuestions.

Linux FAQ

Storing large number of small files

Remove symbolic directory

StackOverflow.

# This works: tell rm to delete the link itself (a file), not a directory.
rm foo
# WARNING: with a trailing slash and -r, rm follows the link and deletes the files inside the target directory.
rm -rf foo/

Too many open files

CNBlogs.


# list the limit for open files.
ulimit -a | grep open
# Or, directly
ulimit -n
# Change it. NOTE: It will be restored at next login.
ulimit -n 4096

# /etc/security/limits.conf
# Setting limit for every user.
# - means both "hard" and "soft".
* - nofile 1006154

# hard limit is the upper bound for soft limit.
# soft limit is the effective limit.
# hard limit could be decreased by normal user.
# only root could increase hard limit.

# Add in /etc/pam.d/login: session required /lib/security/pam_limits.so

# Check for the setting.
cat /proc/sys/fs/file-max
# Change it.
echo 2048 > /proc/sys/fs/file-max

# Also in /etc/sysctl.conf, add: fs.file-max = 8192
# execute this to make the change take effect
sysctl -p

Find parent process ID

SuperUser.


pstree -p pid/username

# option for long output.
# Output format is: F	UID	PID	PPID PRI	NI ...
ps l

How to create symbolic links to all files in a directory

Superuser. From man ln; the example after the forms below uses the 3rd form:

ln [OPTION]... [-T] TARGET LINK_NAME   (1st form)
ln [OPTION]... TARGET                  (2nd form)
ln [OPTION]... TARGET... DIRECTORY     (3rd form)
ln [OPTION]... -t DIRECTORY TARGET...  (4th form)

In the 1st form, create a link to TARGET with the name LINK_NAME.
In the 2nd form, create a link to TARGET in the current directory.
In the 3rd and 4th forms, create links to each TARGET in DIRECTORY.
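
For instance, to link every file from one directory into another using the 3rd form (both paths are placeholders):

ln -s /path/to/source/* /path/to/linkdir/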

Autoconf

error: possibly undefined macro: AM_INIT_AUTOMAKE. Stackexchange.

Poweroff issue

koolSolution.

On some machines Linux freezes at the very end of a restart or shutdown, forcing you to do a hard reset (press the reset/power button or pull the cord), which is not good.
There are many reasons why this happens: sometimes it is a BIOS issue, and sometimes the system simply has an unusual hardware setup (for example no keyboard controller) that the kernel does not know how to handle. If it is a BIOS issue, getting an immediate BIOS fix from your vendor is rarely practical, so you usually have to rely on kernel parameters to work around the hang/freeze.

reboot=?, try in the following order:

For example,


reboot=a,b,k,f      # for reboot=acpi,bios,kbd,force

# Check for boot info
cat /proc/cmdline

Make interactive command in background

Ref.


# sudo -i [CMD]: simulate initial login.
# If a command is specified, it is passed to the shell for execution via the shell's -c option.
# If no command is specified, an interactive shell is executed.
sudo -i apt-get update -y &

Kill zombie process

Ref. "A zombie is already dead, so you cannot kill it. To clean up a zombie, it must be waited on by its parent, so killing the parent should work to eliminate the zombie. (After the parent dies, the zombie will be inherited by init, which will wait on it and clear its entry in the process table.) If your daemon is spawning children that become zombies, you have a bug. Your daemon should notice when its children die and wait on them to determine their exit status."


ps aux | grep -w Z   # returns the zombies pid
ps -o ppid= -p {returned pid from previous command}   # returns the parent
kill -15 {the parent id from previous command}

Why pid files in Linux Today

SuperUser: why does apache write its own process id in httpd.pid.

Yes, a process can be found using ps -ef, or by examining /proc directly.
However, this is a somewhat unreliable method – there can be several Apache processes running at the same time: for example, mpm-prefork, or multiple independent Apache configurations.

stackExchange.com: why do daemons store their pid process id in a file.

/run is generally a tmpfs, which means that it's not actually stored on disks but in memory.
Thus, it automatically gets discarded at shutdown and recreated at start up.

Find owner of a file

# Show everything at once
find . -printf "%U %f\n"

# Per file specs. FreeBSD version:
# Show owner name of cwd.
stat -f "%Su" .
# Show owner uid of cwd.
stat -f "%u" .

# GNU version
# uid
stat -c "%u" .
# User name.
stat -c "%U" .

ls -l | awk '{print $3}'
ls -l | grep filename | awk '{print $3}'

Get encoding of file

StackExchange: how can I test the encoding of a text file.


file -i file*
# file1: text/plain; charset=utf-8
# file2: text/plain; charset=iso-8859-1

sudo can't find command cd?

Ref. "cd" is a shell builtin, not a binary, and sudo runs programs (it is not bash, nor even a shell), so sudo cannot find any "cd" command. The solution is:


sudo su          # or "sudo -i": start a root shell, then run "cd /root" inside it
sudo sh -c 'cd /root && ls'    # or do it all in one root shell

Change hostname on CentOS

Ref. There are several ways:

ifconfig on CentOS

/etc/sysconfig/network-scripts/ifcfg-ethX

GATEWAY=192.168.1.1
DNS1=192.168.1.1
DOMAIN=192.168.1.1
sudo service network restart

Jargons

PXE

Ref. The Preboot eXecution Environment (PXE) specification describes a standardized client-server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and uses a small set of industry-standard network protocols such as DHCP and TFTP.

Fonts

Fontconfig

ArchWiki: fontconfig.

Fontconfig is a library designed to provide a list of available fonts to applications, and also for configuration for how fonts get rendered. It uses the FreeType library freetype2 to render the fonts, based on the configuration.

X uses its own methods of font selection and display.


fc-cache -vf

fc-list
fc-list | grep "Courier New"

# List all chinese fonts
fc-list :lang=zh

# Lists the filename and spacing value for each font face.
fc-list : (family | style | file | spacing)

# See what are in effect, e.g. hinting, hintstyle, autohint, etc.
fc-match --verbose

Configuration in Fontconfig

Or simply use gnome-tweak-tool to tweak on anti-aliasing and hinting.


# Activate good presets to user's profile:
mkdir -p ~/.config/fontconfig/conf.d/
cd /etc/fonts/conf.avail/
ln -s ${PWD}/10-sub-pixel-rgb.conf ${PWD}/11-lcdfilter-default.conf ~/.config/fontconfig/conf.d/

It is recommended in ArchWiki: Infinality as follows:


Xft.antialias: 1
Xft.autohint: 0
Xft.dpi: 96
Xft.hinting: 1
Xft.hintstyle: hintfull
Xft.lcdfilter: lcddefault
Xft.rgba: rgb

In Arch, hinting and anti-aliasing are on by default.

KaslNetwork.com provides a fontconfig conf file example. However, the previous presets are preferred.


<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
   <match target="font" >
     <edit mode="assign" name="rgba" >
       <const>rgb</const>
     </edit>
   </match>
   <match target="font" >
     <edit mode="assign" name="hinting" >
       <bool>true</bool>
     </edit>
   </match>
   <match target="font" >
     <edit mode="assign" name="hintstyle" >
       <const>hintslight</const>
     </edit>
   </match>
   <match target="font" >
     <edit mode="assign" name="antialias" >
       <bool>true</bool>
     </edit>
   </match>
   <match target="font">
     <edit mode="assign" name="lcdfilter">
       <const>lcddefault</const>
     </edit>
   </match>
</fontconfig>

FreeType

Wikipedia: FreeType.

X

The following is related to X only:


# Check the list of Xorg's known font paths
xset q

# Add font paths in ~/.xinitrc
xset +fp /usr/share/fonts/local/           # Prepend a custom font path to Xorg's list of known font paths
xset -fp /usr/share/fonts/sucky_fonts/     # Remove the specified font path from Xorg's list of known font paths

# See current DPI
xdpyinfo | grep dots

Concepts

Hinting

Wikipedia: Font Hinting.

Font hinting (also known as instructing) is the use of mathematical instructions to adjust the display of an outline font so that it lines up with a rasterized grid.

At low screen resolutions, hinting is critical for producing clear, legible text. It can be accompanied by antialiasing and (on liquid crystal displays) subpixel rendering for further clarity.

Fonts will line up correctly without hinting when displays have around 300 DPI.

Byte-Code Interpreter (BCI)

Using BCI hinting, instructions in TrueType fonts are rendered according to FreeType's interpreter. BCI hinting works well with fonts with good hinting instructions.

Autohinter

The Autohinter attempts to do automatic hinting and disregards any existing hinting information.

Originally it was the default because the TrueType bytecode interpreter was patent-protected.

It will be strongly sub-optimal for fonts with good hinting information. Most common fonts ship with good hinting instructions, so the autohinter is usually not useful.

The auto-hinter uses sophisticated methods for font rendering, but often makes bold fonts too wide.

Hintstyle

Hintstyle is the amount of font reshaping done to line up to the grid. Hinting values are: hintnone, hintslight, hintmedium, and hintfull.

hintslight will make the font more fuzzy to line up to the grid but will be better in retaining font shape.

hintfull will be a crisp font that aligns well to the pixel grid but will lose a greater amount of font shape.

Subpixel rendering

Monitors are either: RGB (most common), BGR, V-RGB (vertical), or V-BGR. A monitor test can be found here.

Most monitors manufactured today use the Red, Green, Blue (RGB) specification.

Subpixel rendering effectively triples the horizontal (or vertical) resolution for fonts by making use of subpixels. The default autohinter and subpixel rendering are not designed to work together, hence you may want to enable the subpixel autohinter provided by Infinality.

See the following "LCD filter" together.

LCD filter

When using subpixel rendering, you should enable the LCD filter, which is designed to reduce colour fringing.

FAQ

Show IP


curl https://4.ifcfg.me

Show memory cost of a process

ps can report the memory used by a process (e.g. the RSS and VSZ columns).
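
For a specific process, a sketch (the PID is a placeholder; rss and vsz are reported in KiB, pmem as a percentage of physical memory):

ps -o pid,rss,vsz,pmem,comm -p 1234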

Or use a profiler:


valgrind --tool=massif app appArgs