AppData user installations


I prefer, and recommend, installing software that does not come from the operating system vendor in userspace rather than into the OS via the administrator account, except in the few cases where the machine is specifically set up to run the software at boot as a server.

This holds even more strongly for a single-user or family system, where that user can easily reach the root account.

This is more difficult on Microsoft Windows, as much software presumes admin rights and makes you work harder to force it into userspace. It is worthwhile though: most software, once installed, simply stays there, and introducing a new package is fairly rare.

In userspace, the software never gets admin access, so it cannot introduce bugs into the host operating system and break updates.

Even when it is only bugs, the system owner cannot rule out interference with the operating system by foreign software if that software had access to it.

The user is not pestered for admin access to let the software update; it can just do it. The user can therefore be far, far more suspicious when the "admin rights" dialog does appear, with the correct answer defaulting to "no!"

On Windows, where introducing new packages is the exception and the userspace software location is the hidden AppData\Local\Programs directory, use Software Restriction Policies to lock out running anything from other places, especially Downloads, and consider Documents and Desktop too. An .exe must then be manually moved before it can run.

There is perhaps one disadvantage: a multi-user system may end up with several copies of the same software. On the other hand, no user can then break another user's install.

On Security

The impact of security vulnerabilities also shrinks from the scope of the whole host to that of the executing user. Damage is limited to the user account rather than catastrophically taking over the whole system.

This gives the host operating system one last chance to take down software gone rogue.

This part is an improvement, but because user accounts hold sensitive personal data, it should be improved further by pushing the software into a third layer: essentially a container under the user account that cannot access files outside the container, though it may have network access that the main user account does not.

On Windows this is still very early; Application Guard exists for the purpose. On Debian there are control groups, for example via systemd --user, and nspawn for system-wide containers.


There is some literature from leading operating system vendors justifying this approach.

What about Application Guard?

It is worth noting that Application Guard could be considered the Microsoft Windows equivalent of systemd containers. The next issue is telling applications inside Application Guard apart from those outside, so that internet access can be denied to an unguarded browser or application.

It is also necessary to unravel the references to "Enterprise".

Userland install for Debian

When packages "for" Debian have no really good reason, such as interacting with hardware, to be installed via the root account, create a fake Debian environment in a systemd --user unit and install the foreign packages into that. This keeps the host clean(ish) and still allows the proprietary package to keep itself up to date.

We set up one environment for Google and another for Microsoft. The Google one contains Chrome and extras such as Earth; the Microsoft one contains Edge and extras such as Skype and Teams.

I managed to do away with debootstrap, as the foreign packages generally do not need a full fake environment; the aim is only to get them to install and to apply vendor updates whenever they are released.

So, in the real home directory we have ~/.local/internet/${VENDOR}/, where VENDOR could be google, microsoft, or others.

Each contains an inner .local subdirectory which is the base of the fake root.
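As a sketch, the skeleton of one vendor's fake root can be laid out up front; the exact set of directories here is an assumption (apt and dpkg will complain about anything further they miss), with names mirroring the paths referenced by the apt.conf and dpkg.cfg fragments below.

```shell
# Hypothetical skeleton for one vendor's fake root.
VENDOR=google
BASE=~/.local/internet/${VENDOR}/.local
mkdir -p "${BASE}/etc/apt/sources.list.d" \
         "${BASE}/var/lib/dpkg" \
         "${BASE}/var/lib/apt/lists/partial" \
         "${BASE}/var/cache/apt/archives/partial" \
         "${BASE}/var/cache/debconf" \
         "${BASE}/var/log" \
         "${BASE}/bin" "${BASE}/usr/bin"
: > "${BASE}/var/lib/dpkg/status"   # placeholder; cloned from the host later
```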

~/.local/internet/${VENDOR}/.local/etc/apt/apt.conf makes apt and dpkg work within the fake root; as can be seen, it sets all sorts of nasty options with the aim of forcing success.

  Dir "/home/user/.local";
  APT::Get::Fix-Missing false;
  APT::Get::Fix-Broken false;
  DPkg {
    Options {"--force-all";"--root=/home/user/.local";};
    Path "";
  };
  pkgProblemResolver::Scores::Depends "0";
  Dir::State::status "/home/user/.local/var/lib/dpkg/status";

Each environment may need a .debconfrc for further forcing:

  < /etc/debconf.conf sed -s s_/var/cache/debconf_$HOME/.local/var/cache/debconf_g > ~/.local/internet/${VENDOR}/.debconfrc

And a .dpkg.cfg

  echo root=${HOME}/.local > ${HOME}/.local/internet/${VENDOR}/.dpkg.cfg
  echo admindir=${HOME}/.local/var/lib/dpkg >> ${HOME}/.local/internet/${VENDOR}/.dpkg.cfg
  echo log=${HOME}/.local/var/log/dpkg.log >> ${HOME}/.local/internet/${VENDOR}/.dpkg.cfg
  echo force-all >> ${HOME}/.local/internet/${VENDOR}/.dpkg.cfg

Various binaries that deb packages try to run in the postinst phase or similar are symlinked to /bin/true, since they would only cause an automated install/upgrade/refresh to break, when the main goal is to unpack and place the updated content.

  ln --symbolic /bin/true ~/.local/internet/${VENDOR}/.local/bin/ldconfig
  ln --symbolic /bin/true ~/.local/internet/${VENDOR}/.local/bin/start-stop-daemon
  ln --symbolic /bin/true ~/.local/internet/${VENDOR}/.local/usr/bin/update-alternatives
  ln --symbolic /bin/true ~/.local/internet/${VENDOR}/.local/usr/bin/update-menus

Create ~/.local/internet/${VENDOR}/.local/usr/bin/apt-config containing:

  #!/bin/sh
  # drop the first two PATH entries so the real apt-config is found
  export PATH=$(echo $PATH | cut -d: -f3-)
  if test "$1" = "shell"
  then
  echo DEFAULTS_FILE=/dev/null
  fi
  $(command -v apt-config) "${@}"

The most difficult part was identifying the hack needed for an executable to force its own caller to exit successfully. It goes in .local/usr/bin/xdg-icon-resource and targets the postinst dpkg scripts of Google Chrome and Microsoft Edge, to convince dpkg that they and others are installed correctly.

  #!/bin/sh
  # force caller to exit with status 0
  # intended to force the dpkg invoked scripts to exit successfully
  # dpkg hook may be more correct
  echo exiting pid $PPID
  readlink -f /proc/$PPID/exe
  env
  echo 'call (void)exit(0)' | gdb -q -p $PPID

Finally I cloned the host's dpkg status file so that the fake environment believes that many host packages are installed; this is only to satisfy the dependencies of the foreign packages.

  cat /var/lib/dpkg/status > ~/.local/internet/${VENDOR}/.local/var/lib/dpkg/status

Then enter the fake environment, ideally with a segregated network namespace; systemd-networkd on my host is set to recognise new bridge members.

For networkd we have /etc/systemd/network/ on the host, using a pattern of rt20_* as an example

  [Match]
  Name=rt20_*
  [Link]
  RequiredForOnline=no
  MTUBytes=65535
  [Network]
  Bridge=br
  DHCP=no
  LinkLocalAddressing=no
  [BridgeVLAN]
  VLAN=20
  EgressUntagged=20
  PVID=20

Now define a systemd unit with the basics in a new ~/.config/systemd/user/vendor.service.

Later, when everything is working, the sleep 86400 could be replaced with the actual app launch sequence

  [Service]
  Type=simple
  #SecureBits=+keep-caps
  AmbientCapabilities=~
  PrivateTmp=true
  PrivateUsers=true
  PrivateNetwork=true
  PrivateMounts=true
  MountAPIVFS=true
  SystemCallFilter=~lchown:0 ~fchown:0
  TemporaryFileSystem=/run/systemd/resolve/ /var/lib/dhcp/ /etc/bird/ /run/bird/ /var/lib/samba/
  BindPaths=%h/.local/internet/${VENDOR}:%h
  BindPaths=/tmp/.X11-unix
  BindPaths=/tmp/krb5cc_%U
  ExecStart=-/bin/sh -c '\
  whoami;\
  VLAN=20;\
  NS=$(printf \'%%08x\' $(( $(ps -p $$$$ -o netns=) ^ 0xf0000000 )));\
  MAC=$(echo -n $NS | fold -w 2 | tr "\n" ":");\
  ip link add br$${VLAN} address 02:4e:$${MAC} mtu 65535 up type veth peer name rt$${VLAN}_$${NS} netns 1;\
  /sbin/dhclient br20;\
  until ip route get; do sleep 1; done;\
  ip route replace $(ip route | grep default) mtu 1500;\
  : "if the app is served from a local repo, prime it here";\
  export APT_CONFIG=${HOME}/.local/etc/apt/apt.conf PATH=${HOME}/.local/usr/bin:${PATH};\
  apt-get update;\
  for N in passwd group; do cp /etc/$N /tmp/$N; done;\
  sed -si \'s_:0:0:_:%U:%U:_g;\' /tmp/passwd;\
  sed -si \'s_:0:_:%U:_g;\' /tmp/group;\
  for N in passwd group; do mount --bind /tmp/$N /etc/$N; done;\
  apt-get -y upgrade;\
  umount /etc/passwd /etc/group;\
  apt-get -y autoclean;\
  apt-get -y clean;\
  sleep 86400;\
  '

Initially, starting the unit should leave it holding in the sleep for a while, allowing the final bits to be checked.

As a fake APT is really quite complicated, I use a function to enter the namespace for setup and troubleshooting; it behaves much like firejail --join.

  enterit ()
  {
  UPID=$(systemctl --user status "${1}" | grep '^ Main PID' | cut -c14- | cut -d" " -f1);
  nsenter --preserve-credentials -U -t "${UPID}" -n -m
  }

Once inside (e.g. enterit vendor), it is first necessary to neutralise the chown syscalls blocked by SystemCallFilter=~lchown:0 ~fchown:0, so I paste this lot:

  preload () {
  gcc -fPIC -Wall -Wextra \
  -g -O2 -fstack-protector --param=ssp-buffer-size=4 \
  -Wformat -Werror=format-security -Wall -ldl \
  -shared -x c -o /dev/stdout /dev/stdin
  }
  echo "\
  int chown(const char *pathname, ...) { pathname; return 0; }\
  int fchown(int fd, ...) { ~fd; return 0; }\
  int lchown(const char *pathname, ...) { pathname; return 0; }\
  " | preload > /tmp/
  export LD_PRELOAD=/tmp/

This LD_PRELOAD thus makes the chown calls issued by dpkg do nothing, successfully, so it does not crash.

Then, setup APT, update the application via apt-get or aptitude, then test launching it.

  export APT_CONFIG=${HOME}/.local/etc/apt/apt.conf PATH=${HOME}/.local/usr/bin:${PATH};

Foreign .deb sources without APT

Many such sources supply a URL that serves an HTTP redirect (called WHAT= here) to another URL on a CDN that serves the actual content (called WHERE= here).
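As a sketch of the WHAT=/WHERE= relationship, the redirect target is pulled out of the Location header like this; the header text and CDN URL below are made up for illustration, standing in for a live server response.

```shell
# Illustrative only: in practice the headers come back from curl/wget,
# as in the script further down; here a canned 302 response is used.
headers='HTTP/1.1 302 Found
Location: https://cdn.example.com/pool/stable/app_1.0_amd64.deb
Content-Length: 0'
WHERE=$(printf '%s\n' "$headers" | grep -i '^location:' | cut -d' ' -f2- | tr -d '[:cntrl:]')
echo "$WHERE"
```

The tr -d '[:cntrl:]' matters because HTTP headers end in CRLF, and a stray carriage return would corrupt the mirror path built from WHERE.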

We need an RSA keypair for this; I reuse the host SSH one, as key rotation is done on it from time to time.

A line in the container's sources.list tells it where to pull from:

  echo deb copy://${HOME}/.local/mirror/https/${WHERE} ./ >> ${HOME}/.local/internet/${VENDOR}/.local/etc/apt/sources.list.d/${VENDOR}.list

Then a script that the container unit can call just before running apt:

  R=`/usr/bin/curl --create-dirs -D /dev/stdout --output \
  ~/.local/redirect/https/${WHAT} https://${WHAT}`
  attr -s http -V "${R}" ${HOME}/.local/redirect/https/${WHAT}
  L=`attr -g http ${HOME}/.local/redirect/https/${WHAT} | \
  grep -i ^location | cut -d" " -f2- | tr -d [:cntrl:]`
  W=`2>&1 wget --directory-prefix=${HOME}/.local/mirror --timestamping --force-directories \
  --protocol-directories --server-response --xattr "${L}"`
  N="$(echo "${W}" | sed -s 's_^Saving to_\x0_g' | head -z -n 1)"
  R="$(echo "${N}" | grep "^ " | cut -c3-)"
  P="$(echo ${L} | sed s_https://_.local/mirror/https/_g)"
  attr -s http -V "${R}" "${HOME}/${P}"
  OPWD=$PWD
  cd ~/.local/mirror/https/${WHERE}
  dpkg-scanpackages -m . > Packages
  < Packages gzip > Packages.gz
  < Packages xz > Packages.xz
  < Packages bzip2 > Packages.bz2
  echo Origin: Vendor >>Release
  echo Label: Vendor >>Release
  echo Suite: stable >>Release
  echo Architectures: amd64 i386 >>Release
  echo Components: non-free >>Release
  apt-ftparchive release . >>Release
  # until pem2openpgp returns
  ar p mirror/http/ data.tar.xz \
  | tar -xJO ./usr/share/monkeysphere/keytrans > /tmp/pem2openpgp
  PEM2OPENPGP_USAGE_FLAGS=certify,sign perl -T /tmp/pem2openpgp www < ~/.local/etc/ssh/ssh_host_rsa_key.pem \
  | gpg --import
  gpg --local-user www -v -abs --yes -o Release.gpg Release
  gpg -v --no-armor --output ${HOME}/.local/etc/apt/trusted.gpg.d/${VENDOR}.gpg --yes --export www
  cd $OPWD

To install, apt gets a sources.list pointing at the directory, while the fake root's /etc/apt/trusted.gpg.d/ key gets a symlink named trusted.gpg.
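A minimal sketch of that trust symlink, assuming the fake-root layout above; "vendor" here is a placeholder name.

```shell
# Hypothetical: point the fake root's trusted.gpg at the exported vendor key
# so apt inside the container accepts the locally signed Release file.
VENDOR=vendor
mkdir -p ~/.local/etc/apt/trusted.gpg.d
touch ~/.local/etc/apt/trusted.gpg.d/${VENDOR}.gpg   # stands in for the gpg --export output
ln -sf ~/.local/etc/apt/trusted.gpg.d/${VENDOR}.gpg ~/.local/etc/apt/trusted.gpg
```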

Misc root netlink detection

Here are the two units of the helper. First, /etc/systemd/system/netlink1.socket:

The "route 1" refers to /usr/include/linux/rtnetlink.h, where 1 = RTMGRP_LINK.

  [Unit]
  Description=configure ipv6 routes on interface changes.
  [Socket]
  ListenNetlink=route 1
  [Install]

The /etc/systemd/system/netlink1.service unit hunts down all active network namespaces and does any custom configuration that has to happen as root.

In this example, the container's own MTU is set to 65535: when traffic does not need to cross hardware we can have a huge MTU, so just max it out.

Reachability of the internet is limited to an MTU of 1500, and reachability of a site router to some value in between, depending on hardware support.

For now, the details from the netlink socket are discarded, since link adds and removes are not frequent enough to make processing them essential; future enhancements could process them for better efficiency.

  [Unit]
  Description=Add ipv6 route to netns
  [Service]
  ExecStart=/bin/sh -c '\
  while test $(socat -T 1 FD:3 - | wc -c) -gt 0;\
  do \
  for N in $(pidof /sbin/dhclient);\
  do \
  if test $(readlink /proc/$N/ns/net) != $(readlink /proc/1/ns/net);\
  then \
  nsenter -n -t $N ip -6 link set dev eth0 mtu 65535; \
  nsenter -n -t $N ip -6 route replace 2000::/3 via fe80::aede:48ff:fe23:4567 dev eth0 mtu 1500; \
  nsenter -n -t $N ip -6 route replace fec0::20:aede:48ff:fe23:4567/128 dev eth0 via fe80::aede:48ff:fe23:4567 mtu 6128; \
  >&3 nsenter -n -t $N ip -6 route show; \
  nsenter -n -t $N dhclient eth0; \
  fi; \
  done; \
  done; \
  '

Microsoft Edge and IPv6

Both Chromium and Google Chrome support IPv6 well when run like this, despite some quirks such as probing for Google public DNS; msedge, however, refused to use IPv6 with a typical 2000::/3 route. I discovered that Edge may check for a default IPv6 route, so a workaround can be applied inside the Edge container or jail to add one:

  ip -6 route add default dev lo

Google Earth AppData install - Windows

It is very desirable to install as a non-admin to %LOCALAPPDATA%\Programs\Google, as this allows the program to patch itself without requesting the Administrator password, especially on a single-user system. Unfortunately, by default the installer demands the administrator password and will not install without it, so a workaround is needed.

Doing the install completely in userspace is better for the system's maintainability, and any system issues can then not be attributed to Google Earth.

Normally the Earth download is expected to work.

I downloaded the latest version (v7.3.1-x64) from the direct links. Google may have fixed this by the time you read this.

The installer writes the MSI file as a GE*.tmp file to %temp%; we quickly snatch it before it is deleted, e.g. rename %temp%\GE*.tmp GE.msi

Now invoke msiexec with this MSI file to force a per-user install:

If there is a system-wide install of Google Earth, remove it first so that msiexec does not refuse to install per-user instances.

  1. Obtain the latest direct-links installer
  2. If %USERPROFILE%\Downloads is protected from accidental malware execution, move the .exe elsewhere within %USERPROFILE%
  3. set __COMPAT_LAYER=RUNASINVOKER then execute the installer from the unprivileged user account
  4. The installer will likely offer to install to C:\Program Files\Google\Google Earth Pro\
  5. Proceed to pretend to install; this deliberately fails because it runs from an unprivileged account, and the real idea is to take a copy of the MSI:
  6. copy %temp%\GoogleEarth-EC-x64.msi GoogleEarth-EC-x64.msi
  7. Once the MSI is copied, the product can be installed or upgraded for real:
  8. msiexec /log test.txt /i GoogleEarth-EC-x64.msi ALLUSERS=2 MSIINSTALLPERUSER=1
  9. Then enjoy the newly installed or upgraded Google Earth without it having had access to the Administrator account:
  10. %USERPROFILE%\AppData\Local\Programs\Google\Google Earth Pro\client\googleearth.exe

It still wants to write to \windows

Unfortunately, a lot of MSIs, such as Wireshark's, depend on an optional Windows module called the Visual C++ Redistributable, which may not be present on the system at the wanted version and is not deployed automatically via Windows Update.

The workaround is to get the redistributables requested by the application directly from Microsoft and install those via the Administrator account, even though this really should be a Windows Update matter.

Alternatively, for Wireshark, use pktmon instead for packet capture.

A variation of this trick seems to work with the Sonos desktop controller: extract the MSI and run it as above. Instead of C:\Program Files (x86)\Sonos\, the Sonos desktop installer then correctly offers to install to C:\Users\user\AppData\Local\Programs\Sonos\

The installer tries to place C:\windows\syswow64\msvcr100d.dll, which may be the 32-bit Visual C++ runtime, so I checked the system for the latest runtime and grabbed the 32-bit version of the 2013 runtime on Windows 10.

A similar method is possible for userspace Debian, for using the Google Earth or Chrome deb packages in userspace:

  WHERE=`wget -S -o /dev/stdout -O /dev/null --max-redirect=0 | grep "^ Location:" | cut -d" " -f4-`
  fakeroot fakechroot /usr/sbin/debootstrap --variant=fakechroot stable /tmp/test ${WHERE}
  PATH=$PATH:/sbin:/usr/sbin fakeroot fakechroot dpkg --root=/tmp/test -i
  PATH=$PATH:/sbin:/usr/sbin fakeroot fakechroot apt-get -o Dir=/tmp/test --fix-broken install
  PATH=$PATH:/sbin:/usr/sbin fakeroot fakechroot chroot /tmp/test

The Adobe Flash DLL would be installed to: C:\Users\user\AppData\Local\Mozilla Firefox\browser\plugins\

msiexec /log test.txt /i flashplayer_win.msi ALLUSERS=2 MSIINSTALLPERUSER=1

Regarding unshare -ncm --keep-caps

I am upgrading to using a systemd user unit: it makes a private network namespace, then a veth pair creates a link to the outside world.

The "netns 1" being systemd attaches it to the root namespace where systemd-networkd attaches it to the host bridge.

Then the user enjoys the ability to configure networking within the namespace. I run dhclient, which can call bird from its scripts, pulling in routes by connecting to a passive BGP server. The container is then ready to use, although it could be hardened further by dropping more privileges before running the target program, such as using a new pid namespace to prevent patching extra veths through to the host.

In this example I also use the hexadecimal form of the network namespace number as part of the container MAC address, which appears to guarantee no conflicts.
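Extracted from the unit's ExecStart, the derivation can be sketched standalone; the namespace number below is a made-up example of what `ps -p $$ -o netns=` reports.

```shell
# Derive a locally administered MAC (02:4e: prefix) from the network
# namespace inode number, XORed with 0xf0000000 as in the unit.
NETNS_ID=4026531993   # hypothetical value from `ps -p $$ -o netns=`
NS=$(printf '%08x' $(( NETNS_ID ^ 0xf0000000 )))
MAC="02:4e:$(printf %s "$NS" | fold -w 2 | paste -sd: -)"
echo "$MAC"           # distinct namespaces yield distinct addresses
```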


Thus this goes in ~/.config/systemd/user/example.service

Units can still be inspected from outside via nsenter

The BindPaths= lines allow a virtual home directory for each app; it may be useful to group apps by vendor, so google for Chrome and microsoft for Edge.

Replace the xlogo with the follow-on commands. For Chromium derivatives, for example, this part is common to both Chrome and Edge; start by forcing an update:

Hugepages are also activated for performance, and because they are already in use for the virtual machines.

  export APT_CONFIG=${HOME}/.local/etc/apt/apt.conf PATH=${HOME}/.local/usr/bin:${PATH};\
  apt-get update;\
  apt-get -y upgrade;\
  apt-get -y autoclean;\
  apt-get -y clean;\
  export CHROME_FLAGS="" SSLKEYLOGFILE=%h/sslkeyfile;\
  export HUGETLB_MORECORE=yes HUGETLB_SHM=yes;\
  export HUGETLB_RESTRICT_EXE=google-chrome:chrome:chrome_crashpad:microsoft-edge:msedge:msedge_crashpad:nacl_helper;\

Start Chrome:

  if grep -q gtk-application-prefer-dark-theme=1 ~/.config/gtk-3.0/settings.ini;\
  then %h/.local/opt/google/chrome/google-%N --enable-features=WebUIDarkMode --force-dark-mode;\
  else %h/.local/opt/google/chrome/google-%N;\
  fi;\

Or start Edge:

  if grep -q gtk-application-prefer-dark-theme=1 ~/.config/gtk-3.0/settings.ini;\
  then %h/.local/opt/microsoft/msedge/microsoft-edge --enable-features=WebUIDarkMode --force-dark-mode;\
  else %h/.local/opt/microsoft/msedge/microsoft-edge;\
  fi;\