Posts for year 2020

CloudWatch Events

create and delete rule

$ aws events list-rules
$ aws events put-rule --name testrule --schedule-expression "rate(60 minutes)"  --state DISABLED
$ aws events enable-rule --name testrule
$ aws events disable-rule --name testrule
$ aws events delete-rule --name testrule
$ aws events describe-rule --name testrule

create and remove targets

$ aws events put-targets --rule testrule --targets '{"Input":"{\"interval\":60,\"rss\":\"https://status.aws.amazon.com/rss/ec2-ap-northeast-1.rss\",\"topicarn\":\"arn:aws:sns:ap-northeast-1:xxxxxxxxxxxx:mysnstopic\"}","Id":"1","Arn":"arn:aws:lambda:ap-northeast-1:xxxxxxxxxxxx:function:mylambdafunction"}'
$ aws events remove-targets --rule testrule --ids 1
$ aws events list-targets-by-rule --rule testrule
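
The escaped Input string in put-targets above is easy to get wrong by hand. One way to sidestep the manual escaping is to build the nested JSON first and let a tool do the quoting; a sketch using python3 for the embedding (the ARNs are the same placeholders as above):

```shell
# Build the nested Input document as its own JSON string first
input_json='{"interval":60,"rss":"https://status.aws.amazon.com/rss/ec2-ap-northeast-1.rss","topicarn":"arn:aws:sns:ap-northeast-1:xxxxxxxxxxxx:mysnstopic"}'
# json.dumps embeds it as a string value, adding the backslash escaping for us
targets_json=$(python3 -c 'import json,sys; print(json.dumps([{"Input": sys.argv[1], "Id": "1", "Arn": "arn:aws:lambda:ap-northeast-1:xxxxxxxxxxxx:function:mylambdafunction"}]))' "$input_json")
echo "$targets_json"
# then pass it on unmodified:
# aws events put-targets --rule testrule --targets "$targets_json"
```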

CloudFormation stack template sample

EventsRuleRssNotify:
  Type: 'AWS::Events::Rule'
  Properties:
    Description: 'rss notification'
    Name: rssnotifycf
    ScheduleExpression: 'rate(15 minutes)'
    State: ENABLED
    Targets:
      - Arn: !GetAtt LambdaRssNotification.Arn
        Id: "1"
        Input: !Ref INPUTJSON

kvm on raspberrypi

KVM on the Raspberry Pi OS (64-bit) beta test version

related packages

$ sudo apt install --no-install-recommends qemu-system-arm qemu-utils kpartx

boot alpine linux

get the kernel and initrd from the iso file

$ sudo mount -o loop alpine-standard-3.12.0-aarch64.iso /media/tmp
$ sudo cp /media/tmp/boot/initramfs-lts .
$ sudo cp /media/tmp/boot/vmlinuz-lts .
$ sudo umount /media/tmp

boot

$ sudo qemu-system-aarch64 -cpu host -enable-kvm -machine virt -nographic -m 512 \
> -kernel vmlinuz-lts -initrd initramfs-lts \
> -drive if=none,id=image,file=alpine-standard-3.12.0-aarch64.iso -device virtio-blk-device,drive=image \
> -netdev user,id=user0 -device virtio-net-device,netdev=user0 \
> -monitor telnet:localhost:10025,server,nowait

install debian10

download the kernel and initrd files and create an empty virtual image file.

$ wget -O linux http://ftp.jp.debian.org/debian/dists/buster/main/installer-arm64/current/images/netboot/debian-installer/arm64/linux
$ wget -O initrd.gz http://ftp.jp.debian.org/debian/dists/buster/main/installer-arm64/current/images/netboot/debian-installer/arm64/initrd.gz
$ qemu-img create -f qcow2 debian10.qcow2 16G

boot the installer with the downloaded kernel and initrd files, and install to the virtual image file.

$ sudo qemu-system-aarch64 -cpu host -enable-kvm -machine virt -m 512 \
> -kernel linux -initrd initrd.gz \
> -drive if=none,id=image,file=debian10.qcow2 -device virtio-blk-device,drive=image \
> -netdev user,id=user0 -device virtio-net-device,netdev=user0 \
> -monitor telnet:localhost:10025,server,nowait \
> -no-reboot -vnc :1

$ vncviewer xxx.xxx.xxx.xxx:5901

get the kernel and initrd files from the qcow2 file

$ lsmod | grep nbd
$ sudo modprobe nbd
$ sudo qemu-nbd -c /dev/nbd0 debian10.qcow2
$ sudo kpartx -av /dev/nbd0
$ sudo mount /dev/mapper/nbd0p1 /media/tmp

$ cp /media/tmp/initrd.img-4.19.0-11-arm64 .
$ cp /media/tmp/vmlinuz-4.19.0-11-arm64 .

$ sudo umount /media/tmp
$ sudo kpartx -dv /dev/nbd0
$ sudo qemu-nbd -d /dev/nbd0
$ sudo rmmod nbd

boot

$ sudo qemu-system-aarch64 -cpu host -enable-kvm -machine virt -m 512 \
> -kernel vmlinuz-4.19.0-11-arm64 -initrd initrd.img-4.19.0-11-arm64 \
> -append 'root=/dev/vda2' \
> -drive if=none,id=image,file=debian10.qcow2 -device virtio-blk-device,drive=image \
> -netdev user,id=user0 -device virtio-net-device,netdev=user0 \
> -monitor telnet:localhost:10025,server,nowait \
> -nographic

docker on raspberrypi

Install docker on Raspberry Pi OS (64bit) beta test version. We can download the OS images from https://downloads.raspberrypi.org/raspios_arm64/images/

We can also use torrent to download them.

make a script that stops transmission when the download finishes (in this case, the script name is stop_torrent.sh)

sleep 10
kill $(ps -ef | grep transmission | grep -v grep | awk '{print $2}')

then start download

transmission-cli --download-dir (fullpath) --uplimit (kbps) --downlimit (kbps) --encryption-required --finish (fullpath)/stop_torrent.sh https://(fqdn)/(path)/(filename).torrent

How to install

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh --dry-run
$ sudo sh get-docker.sh
$ systemctl is-active docker
$ systemctl is-enabled docker
$ sudo docker version

change the docker root directory where the images and containers are stored.

$ sudo docker info | grep Root
$ sudo systemctl stop docker
$ sudo mv /var/lib/docker /path/to/
$ sudo ln -s /path/to/docker /var/lib/docker
$ sudo systemctl start docker
$ sudo docker run --rm hello-world
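
As an alternative to the symlink, the storage location can be set directly with the daemon's data-root option in /etc/docker/daemon.json (restart docker after editing). A minimal fragment, with the same example path as above:

```json
{
  "data-root": "/path/to/docker"
}
```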

confirm the result

$ sudo docker run --rm -it alpine
/ #
/ # uptime
 03:17:44 up 1 day,  2:47,  load average: 0.02, 0.05, 0.01
/ #
/ # uname -a
Linux 4db177a44a14 5.4.42-v8+ #1319 SMP PREEMPT Wed May 20 14:18:56 BST 2020 aarch64 Linux
/ #
/ # free -m
              total        used        free      shared  buff/cache   available
Mem:           7816         315        6339           2        1161        7460
Swap:          8191           0        8191

install docker-compose

$ http_proxy=http://192.168.xxx.xxx:3142/ sudo -E apt install libffi-dev libssl-dev
$ sudo pip3 install docker-compose
$ pip3 show docker-compose
$ docker-compose version

commands of docker-compose

config # Validate and view the Compose file
ps # List containers
images # List images
build # (Re)Build services
create # Create services
up # Create and start containers
start # Start services
stop # Stop services
rm -f # Remove stopped containers
down # Stop and remove containers
version # Show version information
help # Show help messages

When only docker-compose.yml is updated, there is no need to rebuild the image.

docker-compose up -d

When the Dockerfile or any source code is updated, the image needs to be rebuilt.

docker-compose build
docker-compose up -d
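
For reference, a minimal docker-compose.yml that the commands above would act on; the service name, port mapping, and build context are made-up examples:

```yaml
version: "3"
services:
  web:                  # example service name
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "8080:80"       # host:container
    restart: unless-stopped
```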

vhd

Windows can mount a vhd file as a virtual drive. We can use it as application install space or document storage area.

When we create a vhd file on a flash drive, the drive should be formatted with ntfs.

parted /dev/sdb print
parted /dev/sdb rm 1
parted /dev/sdb mkpart primary ntfs 0% 100%
mkfs.ntfs -Q /dev/sdb1
mount /dev/sdb1 /mnt

At first create a raw image file, make a label, and make a partition.

dd if=/dev/zero of=file.img bs=1M count=0 seek=59000
losetup -f
losetup /dev/loop1 file.img
losetup -a
parted /dev/loop1 print
parted /dev/loop1 mklabel msdos
parted /dev/loop1 mkpart primary ntfs 0% 100%
losetup -d /dev/loop1

Then format the partition in ntfs format. kpartx -a -v prints the device-mapper name it created (assumed here to be /dev/mapper/loop0p1).

kpartx -a -v file.img
mkfs.ntfs -Q /dev/mapper/loop0p1
kpartx -d -v file.img

The vhd file must not be a sparse file. So after converting the image to vhd format, we copy it with cp --sparse=never to expand it into a normal, fully allocated file.

qemu-img convert -O vpc -o subformat=fixed,force_size=on file.img output.vhd.sparse
cp -pi --sparse=never output.vhd.sparse output.vhd
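
Whether a file is actually sparse can be checked by comparing its allocated blocks against its apparent size; a quick sketch with throwaway filenames:

```shell
# A file created with truncate is sparse: big apparent size, few or no allocated blocks.
truncate -s 1M sparse.img
# cp --sparse=never writes the zeros out, so the copy is fully allocated.
cp --sparse=never sparse.img full.img
sparse_blocks=$(stat -c %b sparse.img)   # allocated 512-byte blocks
full_blocks=$(stat -c %b full.img)
echo "sparse=$sparse_blocks full=$full_blocks"
rm -f sparse.img full.img
```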

diskpart

In Windows, we can handle partitions and filesystems with the sub commands of the diskpart command.

format usb drive

list volume
select volume=2
format fs=ntfs quick

create a vhd file and attach it

list volume
create vdisk file="d:\image.vhd" maximum=1024
select vdisk file="d:\image.vhd"
attach vdisk
list vdisk

create partition in the vhd file

select disk 2
list disk
create partition primary

format in ntfs and assign drive letter

list volume
select volume=3
format fs=ntfs quick
assign letter=e:

detach the vhd file

select vdisk file="d:\image.vhd"
detach vdisk
list vdisk

next time, just attach it

select vdisk file="d:\image.vhd"
attach vdisk
list vdisk
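
The same sub commands can be saved to a text file and run non-interactively with diskpart /s; for example, the reattach step above as a script (the file path is the same example as before):

```
rem attach_vhd.txt -- run with: diskpart /s attach_vhd.txt
select vdisk file="d:\image.vhd"
attach vdisk
list vdisk
```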

nikola

Nikola is a static site generator written in Python 3.

  • It supports several input formats, including Markdown.
  • Themes are written in Mako or Jinja2. You can use any existing theme or create your own theme that inherits from an existing one.
  • You can specify a deployment procedure and run it.
  • Especially for Github pages, you can build the site, commit the changes, and push the output to github with one command.

As described in the footer of this page, I use it now.

This is a sample Dockerfile. The resulting docker image will be about 353MB.

FROM alpine:latest

ARG version=8.1.3
ARG PIP_INDEX_URL
ARG PIP_TRUSTED_HOST

RUN apk --update --no-cache add py3-pip git bash openssh \
gcc musl-dev python3-dev libxml2-dev libxslt-dev libjpeg-turbo-dev \
&& rm -rf /var/cache/apk/* \
&& pip3 install nikola==${version} jinja2 ghp-import2 \
&& mkdir /tmp/nikola \
&& adduser -H -D docker

VOLUME ["/tmp/nikola"]
EXPOSE 80
USER "docker"
WORKDIR "/tmp/nikola"
CMD ["/bin/bash"]

The build command will look like this:

docker build --build-arg version=8.1.3 --build-arg PIP_TRUSTED_HOST=192.168.xxx.xxx --build-arg PIP_INDEX_URL=http://192.168.xxx.xxx:3141/root/pypi -t nikola:alpine -f Dockerfile.alpine .

terraform backend and lock

create S3 bucket and DynamoDB table

At first make a tf file to build the S3 bucket for the backend (to store the state file) and the DynamoDB table for lock control

resource "aws_s3_bucket" "terraform_state" {
  bucket = "mybucketname"
  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  # lifecycle {
  #   prevent_destroy = true
  # }

  tags = {
    Name = "terraform_backend"
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.bucket

  block_public_acls = true
  block_public_policy = true
  ignore_public_acls = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name = "terraform_state_lock"
  read_capacity = 1
  write_capacity = 1
  hash_key = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
variable "region" {
  default = "ap-northeast-1"
}

provider "aws" {
  region = var.region
  version = "~> 2.61.0"
}

terraform {
  required_version = ">= 0.12.26"
#  backend "s3" {
#    bucket = "mybucketname"
#    key    = "network/terraform.tfstate"
#    region = "ap-northeast-1"
#    dynamodb_table = "terraform_state_lock"
#  }
}

then initialize

terraform init 
terraform show

at last create bucket and table

terraform plan -out terraform.plan -no-color 
terraform apply "terraform.plan" -no-color 
terraform show

change backend to S3

at first edit tf file to enable S3 backend

terraform {
  required_version = ">= 0.12.26"
  backend "s3" {
    bucket = "mybucketname"
    key    = "network/terraform.tfstate"
    region = "ap-northeast-1"
    dynamodb_table = "terraform_state_lock"
  }
}

then initialize

terraform init -no-color 
aws s3api list-object-versions  --bucket mybucketname --prefix network/terraform.tfstate --query 'Versions[].{VersionId:VersionId, LastModified:LastModified}'

then you can use S3 backend

add tf file content to build aws resource

variable "cidr_block" {
  default = "10.0.0.0/16"
}

resource "aws_vpc" "terraform_test_vpc" {
  cidr_block           = var.cidr_block
  instance_tenancy     = "default"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "terraform_test"
  }
}

plan and apply as usual

terraform plan -no-color -out terraform.plan 
terraform apply "terraform.plan" 
terraform show

aws s3api list-object-versions  --bucket mybucketname --prefix network/terraform.tfstate --query 'Versions[].{VersionId:VersionId, LastModified:LastModified}'

remove all resources other than the S3 bucket and DynamoDB table

before changing the backend from S3 back to local, remove all other resources.

at first remove from the tf file all aws resources other than the S3 bucket for the backend and the DynamoDB table for lock control. then plan and apply

terraform plan -no-color -out terraform.plan 
terraform apply "terraform.plan" 
terraform show

change backend from S3 to local

comment out or remove backend from tf file

    terraform {
      required_version = ">= 0.12.26"
    #  backend "s3" {
    #    bucket = "mybucketname"
    #    key    = "network/terraform.tfstate"
    #    region = "ap-northeast-1"
    #    dynamodb_table = "terraform_state_lock"
    #  }
    }

then initialize.

terraform init -no-color 
ls -l terraform.tfstate

remove the S3 bucket and DynamoDB table

before removing them, make sure the S3 bucket is empty

aws s3api list-object-versions  --bucket mybucketname --prefix network/terraform.tfstate --query 'Versions[].{VersionId:VersionId, LastModified:LastModified}'
delete_objects=$(aws s3api list-object-versions --bucket mybucketname --prefix network/terraform.tfstate \
--query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')
aws s3api delete-objects --bucket mybucketname --delete "${delete_objects}"

terraform destroy 
terraform show

Sample yaml file for a CloudFormation stack to build the backend S3 bucket and DynamoDB table

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  BucketName:
    Type: String
  TableName:
    Type: String
Resources:
  BackendBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Ref BucketName
      PublicAccessBlockConfiguration:
        BlockPublicAcls: True
        BlockPublicPolicy: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      BucketEncryption:
        ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: AES256
      VersioningConfiguration:
        Status: "Enabled"
      Tags:
        - "Key": "Name"
          "Value": "test"
  LockctrlTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Ref TableName
      AttributeDefinitions:
        - AttributeName: "LockID"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "LockID"
          KeyType: "HASH"
      BillingMode: "PROVISIONED"
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
      Tags:
        - Key: "Name"
          Value: "test"

github

adding ssh public key

You can use github without an ssh key. But when you add an ssh key and use an ssh private key without a passphrase, it is convenient for scripting.

Login and follow the menu: Settings > SSH and GPG keys > New SSH key. Then register the name and content of your ssh public key.

edit ssh config file

cat << END | tee -a .ssh/config
Host github.com
    IdentityFile    ~/.ssh/id_rsa.mykey
    User            git
Host *.github.com
    IdentityFile    ~/.ssh/id_rsa.mykey
    User            git
END

test the ssh connection. add -v to see debug messages

ssh -T github.com

If you already work with a github repository and want to use the git protocol instead of https, you can rewrite the URL:

git config --local url.git@github.com:.insteadOf https://github.com/
git config --local url.git@gist.github.com:.insteadOf https://gist.github.com/
git config --local --list
git remote -v
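
The effect of the insteadOf rewrite can be checked without touching the network: git ls-remote --get-url expands the configured URL through any url.<base>.insteadOf settings. A sketch in a scratch repository (the repository name is a placeholder):

```shell
# Set up a throwaway repo with an https remote and the insteadOf rewrite
tmp=$(mktemp -d)
git init -q "$tmp"
cd "$tmp"
git config --local url.git@github.com:.insteadOf https://github.com/
git remote add origin https://github.com/username/myrepositoryname.git
# --get-url applies the url.<base>.insteadOf rewrite when expanding the URL
url=$(git ls-remote --get-url origin)
echo "$url"
```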

After creating a new repository

when you have not created a git repository yet

git init
git add README.md
git commit -m "first commit"

after that, add the remote to the repository and push it.

git remote add origin git@github.com:username/myrepositoryname.git
git remote -v
git push -u origin master

Add the files which you don't want in the git repository to the .gitignore file

cat << END | tee -a .gitignore
id_rsa
END

initialize history of repository

rm -rf .git
git init
git add .
git commit -a -m "<commit message>"
git remote add origin <url>
git push -u origin master -f

screen and tmux

command       screen        tmux
list          -ls           ls
with name     -S name       new -s name
attach        -r [title]    a [-t title]
prefix        Ctrl+a        Ctrl+b
new           prefix+c      prefix+c
switch        prefix+num    prefix+num
list screen   prefix+"      prefix+w
copy mode     prefix+esc    prefix+[

screen

it can connect to a serial port. the default baud rate is 9600

screen /dev/ttyS0 [baud rate]

it can create a new window which executes a specific program

screen watch -n 5 ntpq -pn

chrony

chrony is an implementation of the Network Time Protocol

install

apt install chrony

sample config. specify an ntp server with server, or an ntp server pool with pool

$ grep -E -v "^#|^$" /etc/chrony/chrony.conf
server 192.168.xxx.xxx iburst minpoll 6 maxpoll 10
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
logdir /var/log/chrony
maxupdateskew 100.0
rtcsync
makestep 1 3

reload configuration

systemctl status chronyd
journalctl -u chrony -f
systemctl force-reload chrony

show system track performance

chronyc tracking

show current time sources

chronyc sources

show information about drift rate and offset estimation process

chronyc sourcestats

show the last valid measurement and other information

chronyc ntpdata

server

For server settings, at least add an allow line. The cmdallow and bindcmdaddress lines are optional; they are for monitoring access.

$ grep -E -v "^#|^$" /etc/chrony/chrony.conf
server 192.168.xxx.xxx iburst minpoll 6 maxpoll 10
keyfile /etc/chrony/chrony.keys
driftfile /var/lib/chrony/chrony.drift
logdir /var/log/chrony
maxupdateskew 100.0
rtcsync
makestep 1 3
allow 192.168.xxx.0/24
cmdallow 192.168.xxx.0/24
bindcmdaddress 127.0.0.1
bindcmdaddress 192.168.xxx.xxx

show list of clients

chronyc clients

specify a remote host running chronyd to connect to (using udp/323). the default is localhost

chronyc -h 192.168.xxx.xxx

ntpd

install

apt install ntp

sample config. specify the ntp server with server

$ grep -E -v "^#|^$" /etc/ntp.conf
driftfile /var/lib/ntp/ntp.drift
leapfile /usr/share/zoneinfo/leap-seconds.list
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
server 192.168.xxx.xxx iburst
restrict -4 default ignore
restrict -6 default ignore
restrict 127.0.0.1
restrict ::1

reload configuration

systemctl status ntp
journalctl -u ntp -f
systemctl force-reload ntp

confirm commands

ntpq -pn
ntpq -c readlist

server

For server settings, at least add a restrict <client address> line to allow ntp client access. If you don't add noquery, you allow the client to query your ntpd status.

$ grep -E -v "^#|^$" /etc/ntp.conf
driftfile /var/lib/ntp/ntp.drift
leapfile /usr/share/zoneinfo/leap-seconds.list
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
server 192.168.xxx.xxx iburst
restrict -4 default ignore
restrict -6 default ignore
restrict 127.0.0.1
restrict ::1
restrict 192.168.xxx.xxx mask 255.255.255.0 nomodify notrap nopeer noquery

confirm commands

ntpq -pn <address>
ntpq -c readlist <address>