Systemd silence

Experiment used on an ASUS P9D WS.

An admin's own units go in /etc/systemd/system/. If maintaining someone else's system, use /run/systemd/system/ for programmatically generated units; otherwise ship them in /lib/systemd/system/ through the system's package manager.
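Since these are hand-maintained admin units, installing and trying one looks roughly like this, using the silencedisks.service name introduced below:

  # copy the unit into the admin-maintained directory and make systemd re-read it
  cp silencedisks.service /etc/systemd/system/
  systemctl daemon-reload
  # start it by hand whenever quiet is wanted
  systemctl start silencedisks.service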

First, let's stop the disks.

Having put the operating system on an SSD, there are some spinning disks left for backups and less important applications.

Make some units for starting and stopping an IMSM-backed RAID array; let's call the silencing unit silencedisks.service.

The umount loop catches "most" situations where the mount is held open in other mount namespaces.
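As a hand-run check of the same idea, a loop over /proc can show which processes' mount namespaces still see one of the logical volumes; /dev/mapper/n-data is only a placeholder name here:

  # walk every process's mount namespace and report where the LV is still mounted
  for P in /proc/[0-9]*; do
      findmnt --task "${P#/proc/}" --source /dev/mapper/n-data > /dev/null \
          && echo "still mounted in the mount namespace of pid ${P#/proc/}"
  done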

  [Unit]
  Description=disks silent mode
  Conflicts=various.automount automated-backups.service automated-backups.timer

  [Service]
  # $$ is systemd's escape for a literal $ in the command line
  ExecStart=-/bin/sh -c "\
  for N in /proc/[0-9]*/ns/mnt; do umount -q --recursive -A diskoverlay overlay -N $$N; done;\
  for N in /proc/[0-9]*/ns/mnt; do umount -q --recursive -A /dev/mapper/n-* -N $$N; done;\
  modprobe -r drivetemp;\
  until /sbin/vgchange -a n n; test -z \"$$(find /dev/mapper/ -name 'n-*')\"; do sleep 1; done;\
  /sbin/mdadm --stop /dev/md/[nug]*;\
  /sbin/mdadm --stop /dev/md/imsm*;\
  echo 1 > /sys/devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/delete;\
  echo 1 > /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/delete;\
  "

First, the silence unit declares Conflicts= on any users of the spinning-disk logical volumes; systemd stops them when this unit is started.
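To double-check what that conflict relationship will shut down, systemctl can report it directly; the unit names are the ones from the example above:

  # show what this unit conflicts with, and what conflicts with it
  systemctl show -p Conflicts -p ConflictedBy silencedisks.service
  # confirm the noisy users have actually stopped
  systemctl is-active various.automount automated-backups.service automated-backups.timer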

Next, vgchange deactivates the volume group, called n in this example; the loop keeps polling until the group has removed its device-mapper mappings before proceeding.
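The same condition can be checked by hand; this assumes the volume group really is called n:

  # report whether any logical volumes in the VG are still active
  lvs -o lv_name,lv_active n
  # the unit's actual test: no leftover device-mapper nodes for the VG
  find /dev/mapper/ -name 'n-*'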

Once the volume group is confirmed stopped, deactivate the backing mdadm arrays. Here they are Intel Matrix RAID (IMSM), so stop the member arrays first, then the containers.

The mdadm commands fail if an array is still in use; all my drives sit in IMSM containers, even the solitary devices.
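Before stopping anything, the assembled member arrays and their containers can be listed to see which /dev/md names the globs will hit:

  # member arrays and IMSM containers currently assembled
  cat /proc/mdstat
  mdadm --detail --scan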

Once the spinning arrays are stopped, detach the SCSI targets; Linux issues the spin-down commands when the devices are deleted. This has been found to work better than issuing spin-down commands via hdparm.
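If hard-coding the PCI path is unappealing, the delete node can also be reached by resolving a stable /dev/disk name first; ata-EXAMPLEDISK is a placeholder:

  # resolve the block device behind a by-id link, then detach its SCSI target
  DEV=$(readlink -f /dev/disk/by-id/ata-EXAMPLEDISK)
  echo 1 > /sys/block/${DEV#/dev/}/device/delete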

Silence

To reverse the situation, bring the arrays on the disks back to running:

This begins with scanning the SATA ports where the drives sit, to re-attach the targets. The scan commands run in parallel, to enjoy the sound of the drives spinning up together; then wait until both are ready, assemble the RAID, and activate LVM on top of that.

  [Unit]
  Description=disks running mode
  # hdparm-based alternative, not used:
  # /dev/disk/by-path/pci-0000:00:1f.2-ata-?
  # /sbin/hdparm -B 128 -M 128 -S 254 /dev/disk/by-id/ata-*;
  # for N in /dev/disk/by-path/pci-*-ata-[23456]; do C=\"${C} /sbin/hdparm --idle-immediate ${N} &\"; done;
  # eval \"${C}\"; wait;

  [Service]
  ExecStart=/bin/sh -c "\
  echo - - - > /sys/devices/pci0000:00/0000:00:1f.2/ata3/host2/scsi_host/host2/scan &\
  echo - - - > /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/scsi_host/host3/scan &\
  wait;\
  /sbin/mdadm --assemble --scan;\
  /sbin/mdadm --run --scan /dev/md/imsm?;\
  until vgs n; do sleep 1; done;\
  vgchange -aay;\
  "