2021.08.11

MacBook Pro 2016, macOS Catalina, version 10.15.7

Install

  • Install packages
      brew install basictex
      sudo tlmgr update --self --all
        
      # Not sure if these 3 are needed
      sudo tlmgr paper a4
      sudo tlmgr install collection-langjapanese
      sudo tlmgr install xecjk
        
      # Fix `! LaTeX Error: File `ctexhook.sty' not found.`
      sudo tlmgr install ctex
    
  • To fix `! Package fontspec Error: The font "IPAexGothic" cannot be found.`, download the fonts from here, unzip, double-click the .ttf file to open a preview window, and click Install Font
  • There must be a simpler way to install… :confused: It’s never easy to set up LaTeX, especially for writing Japanese/Chinese/Korean
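
A quick sanity check after the steps above (a sketch, not from the original notes; assumes basictex put `xelatex` and `kpsewhich` on the PATH, and that `fc-list` is available, e.g. via `brew install fontconfig`):

```shell
# Verify the pieces the pandoc build below will need
kpsewhich ctexhook.sty           # prints a path if the ctex package is installed
xelatex --version | head -n 1    # confirms the XeLaTeX engine is available
fc-list | grep -i "IPAexGothic"  # non-empty output means fontspec can find the font
```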

Compile

I use a makefile so that the files can be ordered as I like. Note that Makefile recipe indentation requires a TAB, not spaces.

files = \
	index.md\
	$(wildcard files*.md)

output/manual.pdf: $(files)
	pandoc --pdf-engine=xelatex -V CJKmainfont=IPAexGothic \
	  -V colorlinks=true \
	  --output=$@ $(files)

In the terminal, type `make`.
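
With the file list above, the rule expands to roughly this single pandoc invocation (the filenames after index.md are whatever `files*.md` matches; `files1.md` here is just an example):

```shell
pandoc --pdf-engine=xelatex -V CJKmainfont=IPAexGothic \
  -V colorlinks=true \
  --output=output/manual.pdf index.md files1.md
```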

2021.08.09

  1. Take the video with the iPhone.
  2. Copy the video to the MacBook using AirDrop.
  3. Edit the video sequence, speed, and transitions. Subtitling in iMovie sucks, so I only do the video transitions here, which is what it does best; cropping and rearranging snippets of video is very intuitive.
  4. Export video using File -> Share -> File
  5. Use Aegisub to create a subtitle file. I’ve known this program for a long time and it’s amazing that it still works on macOS Catalina. The audio spectrum display is amazing as well. It’s really stable. I’m very impressed.
  6. The audio has noise, so I need to edit it. We can use ffmpeg to extract just the audio from the video; use the .m4a extension, since saving it with an .aac extension causes the audio to have an incorrect length. I specified the bitrate and sampling frequency, which means re-encoding with the AAC codec (a plain `-acodec copy` would silently ignore those flags).
    ffmpeg -i video.mp4 -vn -c:a aac -b:a 102k -ar 48000 video.m4a
    
  7. Open it in Audacity. (To open .m4a files, make sure to download and install the ffmpeg library from the Audacity guide; it requires a specific version, and the guide says it won’t conflict with an ffmpeg installed via brew.) Check the high-pitched frequency using Analyze -> Plot Spectrum.
  8. Select All (Cmd + A), then apply Effect -> Low Pass Filter with the desired frequency and 48 dB roll-off to remove the high-pitched noise.
  9. Replace the old video’s audio with the edited audio without re-encoding the video (really fast!)
    ffmpeg -i video.mp4 -i audio.m4a -c:v copy -map 0:v:0 -map 1:a:0 new_video.mp4
    
  10. Hard-code the subtitles into the video. Open Handbrake and load the video. In the Subtitles tab, in the Tracks dropdown, click Add External Subtitles Track... and select the previously created subtitle file (works with Aegisub’s .ass format)
  11. Start encoding!
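
As an alternative to the Handbrake steps, ffmpeg can also burn an .ass subtitle file into the video (a sketch, not what I actually did; assumes an ffmpeg build with libass, and that subs.ass and new_video.mp4 are the files from the steps above):

```shell
# Re-encodes the video with the subtitles rendered into the frames
ffmpeg -i new_video.mp4 -vf "ass=subs.ass" -c:a copy final_video.mp4
```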

2021.07.28

Just recently (duh!) I came to realize that Gitlab Pages is actually a CI/CD process. The automation of everything is so mind-blowing!

I am looking for an easy way to use Gitlab Pages for internal documentation. Not sure what to use: Sphinx, MkDocs, or even a custom one. It’d be nice if we could customize the documentation build (internal / customer A / customer B, etc.). MkDocs sounds simple enough to use, and even people who are not proficient with programming can write in Markdown format. The Gitlab view itself is actually good enough for internal documentation, since it’s rendered automatically and internal hyperlinking works.
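
For the record, a minimal `.gitlab-ci.yml` sketch for publishing MkDocs to Gitlab Pages would look something like this (the job name `pages` and the `public` artifact directory are what Pages expects; the image tag and branch name are assumptions):

```yaml
pages:
  image: python:3.9
  script:
    - pip install mkdocs
    - mkdocs build --site-dir public
  artifacts:
    paths:
      - public
  only:
    - master
```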

And while looking at the documentation for Gitlab Pages, I came across the fact that it’s actually using a Docker image. I’ve been wanting to learn more about Docker. It’s cool that there is DockerHub, where lots of images are available to be used from Gitlab CI/CD! And it seems like we can create public images for free! This small tutorial also helped me understand more about CI/CD and Docker. Now I feel like I know more about it. But it seems so cumbersome to set up a custom one… It feels like setting up a dot-emacs file: it gets interesting, frustrating, a rabbit hole.

Related to that, I’ve been looking at the quota being used. I saw my quota is 400 minutes, but I only see minutes being consumed for one repo, a private one. Apparently Gitlab is limiting public repos’ CI minutes to 50,000 on the free plan (multiplied by a cost factor of 0.008, giving the 400-minute quota), not unlimited anymore (not sure why our group’s quota is still 2,000 minutes, though). I was reading the issue where they discuss how to implement the limitation. The limitation is quite recent, just two weeks ago.

Working remotely for Gitlab sounds so nice, but I think every company has the problem of business policies vs. engineering solutions. :expressionless: Regardless, working remotely on an open source project is my dream.

I was wondering: if I were to build ROS packages, that would take a lot of CI time, so I wondered whether there are other services, like Travis maybe. Then I came across an issue at curl saying that Travis is also limiting minutes for public projects, and that they’re replacing it with another service. Wow. I think I should learn how to set up external runners in AWS.

Not a productive morning, spending most of it reading through these issues, but it was an interesting insight.

2021.06.17

Get the list of video formats, sizes, and framerates that your camera supports:

v4l2-ctl --list-formats-ext

Record the stream using the camera’s native h264 encoding at 1920x1080 resolution, save the video format as-is, record for 1 minute, and name the file with the current date and time:

ffmpeg -f v4l2 -input_format h264 -s 1920x1080 -i /dev/video0 -c:v copy -t 00:01:00 $(date +"%Y%m%d%H%M%S").mp4

It should report that the input stream is

Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, start: 9005.983108, bitrate: N/A
    Stream #0:0: Video: h264 (Constrained Baseline), yuvj420p(pc, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 30 fps, 30 tbr, 1000k tbn, 60 tbc

And the stream mapping is

Stream mapping:
  Stream #0:0 -> #0:0 (copy)

Camera controls

(Reset after reboot, I think)

v4l2-ctl -d /dev/video3 -l
v4l2-ctl -d /dev/video3 --set-ctrl zoom_absolute=125

To turn off the camera’s LED, where video3 matches the index in /dev/video3:

uvcdynctrl -d video3 -s 'LED1 Mode' 0
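
Since the controls seem to reset after reboot, the two settings above could be bundled into a small script and hooked into, say, cron’s @reboot (a sketch; the video3 device index is an assumption that may change across reboots):

```shell
#!/bin/sh
# Reapply camera settings that are lost on reboot
v4l2-ctl -d /dev/video3 --set-ctrl zoom_absolute=125
uvcdynctrl -d video3 -s 'LED1 Mode' 0
```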

Streaming to VLC on the same network

Where 192.168.1.2 is the IP address of the viewer:

ffmpeg -f v4l2 -input_format mjpeg -s 800x600 -i /dev/video3 -tune zerolatency  -f mjpeg udp://192.168.1.2:23000?pkt_size=1316

In the viewer’s VLC: File -> Open Network, udp://@:23000?pkt_size=1316

2021.06.13

NFS provides unencrypted (hence fast) file sharing on a local network.

Server

sudo nano /etc/exports

Add line

/full/path 192.168.1.0/24(rw,insecure,no_subtree_check,all_squash,anongid=0,anonuid=0)
  • all_squash,anongid=0,anonuid=0 is required to enable creating files regardless of the user (I think); without it, files (with root owner?) cannot be copied

Then

sudo exportfs -ra  # Apply change
showmount -e 192.168.1.2  # Check mounts

Client

sudo apt install nfs-common
sudo mkdir /mnt/mounting_folder
sudo mount -o vers=3 192.168.1.2:/full/path /mnt/mounting_folder

To retain the mount after reboot, add it to fstab:

sudo nano /etc/fstab

Add line

192.168.1.2:/full/path /mnt/mounting_folder nfs defaults 0 0
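
To check the fstab entry without rebooting (a sketch):

```shell
sudo mount -a                 # mount everything listed in /etc/fstab
df -hT /mnt/mounting_folder   # should report Type nfs and the server path
```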