WiFi

find wifi device

$ /sbin/iw dev
phy#0
    Interface wls3
        ifindex 3
        wdev 0x1
        addr xx:xx:xx:xx:xx:xx
        ssid xxxxxxx
        type managed
        channel 36 (5180 MHz), width: 20 MHz (no HT), center1: 5180 MHz
        txpower 15.00 dBm

scan for available networks

$ sudo /sbin/iwlist scan

for WPA2

install related packages

$ sudo apt install wpasupplicant

generate a psk for an ssid

$ wpa_passphrase myssid
# reading passphrase from stdin
mypassphrase                          <- passphrase typed on stdin
network={
    ssid="myssid"
    #psk="mypassphrase"
    psk=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
}

add these lines to /etc/network/interfaces

auto wls3
iface wls3 inet dhcp
  wpa-ap-scan 1
  wpa-scan-ssid 1
  wpa-ssid myssid
  wpa-psk xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

change permissions on /etc/network/interfaces

$ sudo chmod 600 /etc/network/interfaces

mpd

mpd

Music Player Daemon. It has a server-client architecture.

mpd.conf

You can specify the audio output type and device in /etc/mpd.conf. Various types are available, including http streaming. This is an example for the second ALSA device.

audio_output {
    type            "alsa"
    name            "My ALSA Device"
    device          "hw:1,0"
}
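Besides alsa, the http streaming type mentioned above can be configured with an httpd output block. A minimal sketch (name, port, and encoder are examples; the chosen encoder must be compiled into your mpd build):

```
audio_output {
    type            "httpd"
    name            "My HTTP Stream"
    encoder         "lame"
    port            "8000"
    format          "44100:16:2"
}
```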

mpc

the command line client for mpd

option

-h <host> # specify server

subcommands

play
pause
next
prev

consume [on|off]

ls <directory> # lists all files in directory
listall <file> # lists all songs in the database
add <file> # add file to queue
clear # empty the queue

search <type> <query> # for example: $ mpc search filename ENYA
find <type> <query> # similar to search, but matches exactly
list <type> # shows a list of all tags of type
update <path> # scans for updated files in the path

lsplaylists # lists available playlists
playlist <playlist name> # lists all songs in playlist
rm <playlist> # remove playlist
save <playlist> # save playlist

load <playlist> # load <playlist> in the queue

We can load m3u or cue files in the music directory as playlists. File paths inside a playlist should be relative to the playlist's own directory. The m3u file can be an extended M3U.

mpc load <subdir in music dir>/<any subdir>/playlistfile ; mpc playlist
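A quick sketch of the relative-path rule, with hypothetical names: here `music` stands in for mpd's music_directory, and a playlist in `music/sub/album` refers to a track one level up.

```shell
# hypothetical layout; "music" stands in for mpd's music_directory
mkdir -p music/sub/album
touch music/sub/track01.mp3
# the playlist lives in music/sub/album, so its entry points one level up
printf '%s\n' '#EXTM3U' '../track01.mp3' > music/sub/album/list.m3u
cat music/sub/album/list.m3u
```

`mpc load sub/album/list.m3u` would then resolve `../track01.mp3` to `music/sub/track01.mp3`.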

MPDroid

client for android

MaximumMPD

client for iOS

KODI Add-on:MPD Client

An add-on for KODI, formerly known as XBMC. If HDMI-CEC is enabled on your display, you can control MPD with its remote control.

vlc

VLC is an open-source, cross-platform multimedia player. cvlc is the command line version of vlc.

VLC (or cvlc) can stream via http. See Streaming HowTo/Command Line Examples or Streaming HowTo/Advanced Streaming Using the Command Line for details.

cvlc
--sout '#standard{access=http,mux=mp3,dst=192.168.x.x:xxxx}' # http stream; mux=dummy or raw also works
vlc://quit

http interface

VLC (or cvlc) can be controlled with http requests. See VLC HTTP requests for details.

start cvlc with http interface

cvlc
-I HTTP # start http interface
--http-password <password> # specify http password
--aout alsa --alsa-audio-device plughw:2,0 # specify audio output device
vlc://quit

You can control vlc not only from a web browser but also with command line tools

curl
-I # fetch the headers only
--user ":<mypassword>" # specify the http password (user name is empty)
http://127.0.0.1:8080/requests/status.xml 
?command=pl_play
?command=pl_pause
?command=volume&val=+<int>
?command=volume&val=-<int>

postfix

alias

$ postconf -n | grep alias
$ ls -l /etc/aliases*
$ postalias hash:/etc/aliases
$ newaliases
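For reference, a minimal /etc/aliases fragment (the user names are examples); re-run newaliases after editing:

```
# /etc/aliases: route system mail to a real account
postmaster: root
root: pi
```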

send mail, or read mail from a Maildir

$ sudo apt install mailutils
$ echo "This is body message" | mail -s "My subject" pi
$ MAILDIR=$HOME/Maildir mail

apt-cacher-ng

make a Dockerfile

$ cat Dockerfile
FROM debian:buster

RUN apt-get update \
&& apt-get install -y --no-install-recommends apt-cacher-ng \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

VOLUME ["/var/cache/apt-cacher-ng"]
EXPOSE 3142

CMD chmod 777 /var/cache/apt-cacher-ng \
&& /etc/init.d/apt-cacher-ng start \
&& tail -f /var/log/apt-cacher-ng/*

build an image

$ sudo docker build -t apt-cacher-ng:buster . | tee build.log
$ sudo docker tag apt-cacher-ng:buster apt-cacher-ng:latest

run a container

$ sudo docker run --rm -d -p 3142:3142 -v /mnt/apt-cacher-ng:/var/cache/apt-cacher-ng apt-cacher-ng:latest

test the address and port

$ curl 192.168.xxx.xxx:3142

how to use the cache server

specify it in a config file

$ cat << END | sudo tee /etc/apt/apt.conf.d/01proxy
> Acquire::http::Proxy "http://192.168.xxx.xxx:3142/";
> END

specify it in command line

$ http_proxy=http://192.168.xxx.xxx:3142/ sudo -E apt-get install xxxx

or

$ sudo su -
# http_proxy=http://192.168.xxx.xxx:3142/ apt-get install xxxx

for docker build

$ sudo docker build --build-arg http_proxy=http://192.168.xxx.xxx:3142/ -t imagename:tagname . | tee build.log

tcpdump

tcpdump

    tcpdump 
     -w <output filename>
     -r <input filename>
     -i <interface>
     -c <packet counts>

     -n   # don't convert addresses and ports to names
     -e   # show link level header 
     -v   # verbose output
     -xx  # print the data of each packet, including the link level header, in hex
     -XX  # print the data of each packet, including the link level header, in hex and ascii
     -ttt # print a delta between current and previous line

     arp
     icmp
     port <port number>
     host <ip address>
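Combining the options and filters above, a typical workflow (the interface name is an example; capturing requires root):

```
$ sudo tcpdump -i eth0 -n -c 20 -w dump.pcap port 80
$ tcpdump -r dump.pcap -n -ttt
```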

wireshark

    wireshark 
     -r <input filename>
     -R "read filter"

For details of read filters, see the wireshark-filter man page.

ecs

cluster

$ aws ecs list-clusters 
$ aws ecs describe-clusters --clusters <clusterArn>

$ aws ecs create-cluster --cluster-name <cluster-name> --tags '[{"key": "Name","value": "test"}]'
$ aws ecs delete-cluster --cluster <clusterArn>

task definition

$ aws ecs list-task-definitions
$ aws ecs describe-task-definition --task-definition <taskDefinitionArn>

$ jq . task-definition.json
{
  "family": "sample-fargate",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "fargate-app",
      "image": "busybox",
      "essential": true,
      "command": [
        "sleep",
        "360"
      ]
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "512"
}
$ aws ecs register-task-definition --cli-input-json file://task-definition.json

$ aws ecs deregister-task-definition --task-definition <taskDefinitionArn>

task

When you use Fargate and pull the container image from Docker Hub, the VPC needs an internet gateway or a NAT gateway.

$ aws ecs list-tasks --cluster <clusterArn>
$ aws ecs describe-tasks --cluster <clusterArn> --tasks <taskArn>

$ jq . network-configuration.json
{
  "awsvpcConfiguration": {
    "subnets": [
      "<subnet>"
    ],
    "securityGroups": [
      "<securitygroup>"
    ],
    "assignPublicIp": "ENABLED"
  }
}
$ aws ecs run-task --task-definition <taskDefinitionArn> --cluster <clusterArn> --count 1 --launch-type FARGATE --network-configuration file://network-configuration.json

$ aws ecs stop-task --cluster <clusterArn> --task <taskArn>

tags

$ aws ecs list-tags-for-resource --resource-arn <resource-arn>
$ aws ecs tag-resource --resource-arn <resource-arn> --tags '[{"key": "Name","value": "test"}]'

service

$ aws ecs list-services --cluster <clusterArn>
$ aws ecs describe-services --cluster <clusterArn> --services <serviceArn>
$ aws ecs create-service --cluster <clusterArn> --service-name <serviceName> --task-definition <task-definition> --desired-count 1 --launch-type FARGATE --network-configuration file://network-configuration.json

$ aws ecs list-tasks --cluster <clusterArn>
$ aws ecs describe-tasks --cluster <clusterArn> --tasks <taskArn>

$ aws ecs update-service --cluster <clusterArn> --service <serviceArn> --desired-count 0
$ aws ecs delete-service --cluster <clusterArn> --service <serviceArn>

dynamodb

create and delete table

$ aws dynamodb list-tables
$ aws dynamodb describe-table --table-name testtable
$ aws dynamodb create-table --table-name testtable  \
 --attribute-definitions '[{"AttributeName":"Artist","AttributeType":"S"},{"AttributeName":"AlbumTitle","AttributeType":"S"}]' \
 --key-schema '[{"AttributeName":"Artist","KeyType":"HASH"},{"AttributeName":"AlbumTitle","KeyType":"RANGE"}]' \
 --provisioned-throughput '{"ReadCapacityUnits": 1,"WriteCapacityUnits": 1}' \
 --tags '[{"Key": "Name","Value": "test"}]'

$ aws dynamodb delete-table --table-name testtable

put item

$ jq '.' put-item.json
{
  "Artist": {
    "S": "The Beatles"
  },
  "AlbumTitle": {
    "S": "Please Please Me"
  },
  "songs": {
    "L": [
      {
        "S": "I Saw Her Standing There"
      },
      {
        "S": "Misery"
      }
    ]
  }
}
$ aws dynamodb put-item --table-name testtable --item file://put-item.json

get and delete item

$ aws dynamodb get-item --table-name testtable --key '{ "Artist": { "S": "The Beatles" },"AlbumTitle": { "S": "Please Please Me" } }'
$ aws dynamodb delete-item --table-name testtable --key '{ "Artist": { "S": "The Beatles" },"AlbumTitle": { "S": "Please Please Me" } }'

backup and restore database

create backup

$ aws dynamodb list-backups --table-name testtable
$ aws dynamodb create-backup --table-name testtable --backup-name testtablebackup

describe backup

$ aws dynamodb describe-backup --backup-arn $(aws dynamodb list-backups --table-name "testtable" --query 'max_by(BackupSummaries[?BackupName == `testtablebackup`], &BackupCreationDateTime).BackupArn' | jq -r .)

restore from newest backup

$ aws dynamodb delete-table --table-name testtable
$ aws dynamodb restore-table-from-backup --target-table-name testtable --backup-arn $(aws dynamodb list-backups --table-name "testtable" --query 'max_by(BackupSummaries[?BackupName == `testtablebackup`], &BackupCreationDateTime).BackupArn' | jq -r .)
$ aws dynamodb describe-table --table-name testtable --query 'Table.TableStatus'

remove oldest backup

$ aws dynamodb delete-backup --backup-arn $(aws dynamodb list-backups --table-name "testtable" --query 'min_by(BackupSummaries[?BackupName == `testtablebackup`], &BackupCreationDateTime).BackupArn' | jq -r .)

sample python script

put-item.py

#! /usr/bin/python3
import boto3
import json

tablename = 'testtable'
item = {
  "Artist": {
    "S": "The Beatles"
  },
  "AlbumTitle": {
    "S": "Please Please Me"
  },
  "songs": {
    "L": [
      {
        "S": "I Saw Her Standing There"
      },
      {
        "S": "Misery"
      }
    ]
  }
}

dynamo = boto3.client('dynamodb')
res = dynamo.put_item(TableName=tablename, Item=item)
print (json.dumps(res))

get-item.py

#! /usr/bin/python3
import boto3
import json

tablename = 'testtable'
key = {
  "Artist": { "S": "The Beatles" },
  "AlbumTitle": { "S": "Please Please Me" }
}

dynamo = boto3.client('dynamodb')
res = dynamo.get_item(TableName=tablename, Key=key)
print (json.dumps(res))

delete-item.py

#! /usr/bin/python3
import boto3
import json

tablename = 'testtable'
key = {
  "Artist": { "S": "The Beatles" },
  "AlbumTitle": { "S": "Please Please Me" }
}

dynamo = boto3.client('dynamodb')
res = dynamo.delete_item(TableName=tablename, Key=key)
print (json.dumps(res))
# print (json.dumps(res['ResponseMetadata']['HTTPStatusCode']))

cloudformation template

    TestDynamoDBTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: "TestDynamoDBTable"
        Tags:
          - Key: "Name"
            Value: "test"
        AttributeDefinitions:
          - AttributeName: "subject"
            AttributeType: "S"
          - AttributeName: "year"
            AttributeType: "N"
        KeySchema:
          - AttributeName: "subject"
            KeyType: "HASH"
          - AttributeName: "year"
            KeyType: "RANGE"
        BillingMode: "PROVISIONED"
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
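To try the fragment above, wrap it in a template file under a top-level Resources: key, then create the stack (the file and stack names are examples):

```
$ aws cloudformation deploy --template-file template.yaml --stack-name test-dynamodb
$ aws cloudformation describe-stacks --stack-name test-dynamodb --query 'Stacks[0].StackStatus'
```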

cryptsetup

luks

install a package

$ sudo apt install cryptsetup

format

$ sudo cryptsetup luksFormat /dev/md0 
$ sudo cryptsetup luksDump /dev/md0

open

$ sudo cryptsetup open /dev/md0 cryptfs
$ sudo cryptsetup status cryptfs

open tcrypt device

$ sudo cryptsetup open --type tcrypt /dev/md0

format and mount

$ sudo mkfs -t ext4 /dev/mapper/cryptfs 
$ sudo mount /dev/mapper/cryptfs /mnt
$ df -h /mnt

umount and close

$ sudo umount /mnt 
$ sudo cryptsetup close cryptfs
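To open and mount the device automatically at boot, entries like these can be used (the device and mapping name are the ones from above; referencing the device by UUID is more robust):

```
# /etc/crypttab: <name> <source device> <key file> <options>
cryptfs  /dev/md0  none  luks

# /etc/fstab
/dev/mapper/cryptfs  /mnt  ext4  defaults  0  2
```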

mdadm

software raid

install mdadm package

$ sudo apt-get install mdadm

make dummy files for test

$ dd if=/dev/zero of=file.img bs=2M count=0 seek=512
$ cp -p file.img file0.img
$ cp -p file.img file1.img
$ cp -p file.img file2.img
$ cp -p file.img file3.img
$ cp -p file.img file4.img
$ ls -lhs file*img

losetup

$ sudo losetup /dev/loop0 file0.img
$ sudo losetup /dev/loop1 file1.img
$ sudo losetup /dev/loop2 file2.img
$ sudo losetup /dev/loop3 file3.img
$ sudo losetup /dev/loop4 file4.img

raid0

$ sudo mdadm --create /dev/md0 -l raid0 -n 2 /dev/loop0 /dev/loop1
$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md0

$ sudo mdadm --detail --scan
$ sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf

mkfs and mount

$ sudo mkfs -t ext4 /dev/md0
$ sudo mount /dev/md0 /mnt
$ df -h /mnt

stop and remove settings

$ sudo mdadm --stop /dev/md0
$ sudo mdadm --zero-superblock /dev/loop0
$ sudo mdadm --zero-superblock /dev/loop1
( $ sudo rm -i /etc/mdadm.conf )

raid1

$ sudo mdadm --create /dev/md0 -l raid1 -n 2 /dev/loop0 /dev/loop1

make fail

$ sudo mdadm --stop /dev/md0
$ sudo losetup -d /dev/loop1

recover

$ sudo mdadm --assemble --scan -v
$ sudo mdadm --examine /dev/loop0

$ sudo mdadm --add /dev/md0 /dev/loop2
$ sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf

add extra disk

$ sudo losetup /dev/loop1 file1.img
$ sudo mdadm --add /dev/md0 /dev/loop1

make fail

$ sudo mdadm --stop /dev/md0
$ sudo losetup -d /dev/loop1

recover

$ sudo mdadm --assemble --scan -v
$ sudo mdadm --examine /dev/loop0
$ sudo mdadm --grow /dev/md0 --raid-devices=2

when a disk alert comes

$ sudo mdadm --fail /dev/md0 /dev/loop1
$ sudo mdadm --remove /dev/md0 /dev/loop1
$ sudo mdadm --add /dev/md0 /dev/loop3

if md0 has a spare disk, it rebuilds automatically when a disk alert comes

$ sudo losetup /dev/loop0 file0.img
$ sudo mdadm --add /dev/md0 /dev/loop0
$ sudo mdadm --fail /dev/md0 /dev/loop2
$ sudo mdadm --remove /dev/md0 /dev/loop2

raid5

/dev/loop3 is a spare disk

$ sudo mdadm --create /dev/md0 -l raid5 -n 3 /dev/loop0 /dev/loop1 /dev/loop2 -x 1 /dev/loop3

raid10

/dev/loop4 is a spare disk

$ sudo mdadm --create /dev/md0 -l raid10 -n 4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 -x 1 /dev/loop4