On this system, I like to have general system stability, but still have the latest software packages available to be selected when I want to use the new software. This is under the principle that newer versions are generally supposed to be better, even though there is some risk with untested software.
To provide both the known stable and the new software, I list every distribution level in my /etc/apt/sources.list file. However, just doing this would cause the system to upgrade everything to unstable or experimental packages. I want to control which packages go that far, and leave everything else on stable.
This control is done by version pinning. I make a file called /etc/apt/preferences that prevents anything newer than stable from being automatically upgraded whenever I do any package installation. This not only reduces the risk of the system breaking unexpectedly, it also reduces the number and size of the updates, which should encourage you to run them more often and hence apply security updates more quickly after release. The downside is that you'll have to manually select the latest version for every package in an install if that is what you want.
Explanation: see http://www.argon.org/~roderick/apt-pinning.html
Package: *
Pin: release a=testing
Pin-Priority: -1

Package: *
Pin: release a=unstable
Pin-Priority: -1

Package: *
Pin: release a=sarge
Pin-Priority: -1

Package: *
Pin: release a=sid
Pin-Priority: -1

Package: *
Pin: release a=experimental
Pin-Priority: -1
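You can check what these pins are doing with apt-cache policy, which shows the priority APT assigns to each available version of a package; the package name below is only an example.

apt-cache policy coreutils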
I now have quite a few computers in my house, many of them running Debian. They all share the single ADSL line we have coming in, so when I update their package lists, the downloads come in quite slowly at the ADSL wire speed of 512k. As I've already downloaded the packages once, can't they be shared faster between the respective machines?
I've installed Squid and Frox on my internet gateway to provide proxy caching. It is necessary to create a /etc/apt/apt.conf.d/proxy file or similar, to tell APT to use them instead of trying to download the packages directly.
Acquire::http
{
  Proxy "http://server.example:3128";
  Timeout "120";
  Pipeline-Depth "5";

  // Cache Control. Note these do not work with Squid 2.0.2
  No-Cache "false";
  Max-Age "86400";     // 1 Day age on index files
  No-Store "false";    // Prevent the cache from storing archives
}

Acquire
{
  ftp
  {
    Proxy "ftp://$(PROXY_USER):$(PROXY_PASS)@server.example:2121/";
    ProxyLogin
    {
      // "USER $(PROXY_USER)";
      // "PASS $(PROXY_PASS)";
      "USER $(SITE_USER)@$(SITE):$(SITE_PORT)";
      "PASS $(SITE_PASS)";
    }
    Passive "true";
    Proxy::Passive "true";
  };
};
You can replace server.example with the full hostname of your proxy. Squid and Frox need their own setup on your proxy.
As I like using IPv6 whenever possible, I have configured Squid with IPv6 support, which you can install via aptitude. You'll need to edit /etc/squid/squid.conf to let your clients use the proxy.
Look for the line that says INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS and, immediately under it, add acl lines for all your public and private IPv4 subnets and any public IPv6 subnets. Each public IPv4 subnet also gets a corresponding public IPv6 subnet. So if you wrote:
acl net_ipv4 src ::ffff:192.0.2.0/120
http_access allow net_ipv4
This means: let the addresses in 192.0.2.0/24 use your proxy. The 120 comes from 96 + 24: the 96-bit IPv4-mapped prefix (::ffff:0:0/96) plus the 24-bit IPv4 prefix length. You would also add:
acl net_ipv6 src 2002:C000:0200::/40
http_access allow net_ipv6
This lets those same addresses (and any hosts they act as routers for) use your proxy via 6to4. Provided your proxy's public address is included in this range, any machines it acts as a 6to4 router for will also be able to use the proxy. This is especially useful if you have to use NAT and many of your other machines get private IPv4 addresses. The 40 comes from 16 + 24: the 16-bit 2002::/16 6to4 prefix plus the 24-bit IPv4 prefix length (192.0.2.0 is C000:0200 in hex).
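As a quick check that a client is allowed through, you can fetch something via the proxy by hand; this is only a sketch, reusing the server.example address from the APT configuration above.

http_proxy=http://server.example:3128 wget -O /dev/null http://ftp.debian.org/debian/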
You may also want to increase the size of squid's cache to a reasonable fraction of your hard disk. I suggest changing the maximum_object_size to over 100 MB or even 1000 MB to ensure the largest .debs are cached. If you have lots of RAM you may like to increase the memory cache size too.
Also, look for the cache_dir line; the first number after the cache directory path is the maximum size of the cache in megabytes. Make this number as large as you like, but not larger than the free capacity of the filesystem the cache is stored on. 10000 would be around 10 gigabytes.
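Putting the sizing options together, the relevant squid.conf lines might look something like the sketch below; the sizes and the /var/spool/squid path are only examples, not recommendations.

maximum_object_size 1000 MB                  # big enough to cache the largest .debs
cache_mem 256 MB                             # in-memory cache, if you have plenty of RAM
cache_dir ufs /var/spool/squid 10000 16 256  # 10000 MB (roughly 10 GB) of disk cache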
At this point you may like to arrange for web browsers etc. to find and use squid automatically. You can use Web Proxy Auto Discovery (WPAD) to have them download a configuration file. Add the following to /etc/dhcp3/dhcpd.conf:
option wpad code 252 = text;
option wpad "http://wpad.example/wpad.dat\n";
option domain-name "example";
Where example is replaced by your domain name. I recommend setting domain-name as well, because some clients will actually fall back to looking for http://wpad/wpad.dat in the supplied domain-name if the option 252 style of discovery doesn't work. A bit like the reason why many sites call their favourites icon favicon.ico.
You now need to make wpad point to your web server, e.g. by a DNS CNAME, and set up a virtual host there. If using apache2, it could serve a single directory containing wpad.dat.pac with MultiViews enabled; the .pac extension ensures that the correct MIME type is returned, avoiding the need to edit mime.types!
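The wpad.dat itself is a small proxy auto-config script. A minimal sketch, assuming the proxy details used earlier, would be:

// wpad.dat.pac: send everything via squid, falling back to a direct connection
function FindProxyForURL(url, host)
{
    return "PROXY server.example:3128; DIRECT";
}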
The ftp proxy Frox also needs to be told who may use it, and how much to cache. To do this, edit /etc/frox.conf and, under the line that says # ACL Allow * - *, add a line that lets your machines use it. E.g.
ACL Allow 192.0.2.0/24 - * *
This lets the subnet 192.0.2.0/24 use frox. You also get a choice between local and http (e.g. via squid) based proxying. Unless you really need to send frox via squid, the local option is more efficient. Just up the CacheSize to the maximum amount of ftp data in megabytes you want frox to store. I also ensure the DoNTP on line is uncommented.
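For reference, those last two settings would look something like the lines below in /etc/frox.conf; the value is only an example, and the exact spelling of the options should be checked against the comments in your own frox.conf.

CacheSize 1000   # maximum amount of ftp data, in MB, for frox to keep
DoNTP on         # as noted above, make sure this line is uncommented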
It is also possible to fetch a few installer images to boot machines via PXE. The following script downloads the Debian and Ubuntu netboot installers and a Clonezilla Live image into /tftpboot:
#!/bin/bash
cd /tftpboot/

# Debian stable netboot installers (graphical), one per architecture
for N in i386 amd64
do
    rm netboot.tar.gz
    wget -c ftp://ftp.uk.debian.org/debian/dists/stable/main/installer-$N/current/images/netboot/gtk/netboot.tar.gz
    rm -r debian-installer/$N
    tar -xvzf netboot.tar.gz ./debian-installer/$N
    rm netboot.tar.gz
done

# Ubuntu lucid netboot installers
for N in i386 amd64
do
    rm netboot.tar.gz
    wget -c http://gb.archive.ubuntu.com/ubuntu/dists/lucid/main/installer-$N/current/images/netboot/netboot.tar.gz
    rm -r ubuntu-installer/$N
    tar -xvzf netboot.tar.gz ./ubuntu-installer/$N
    rm netboot.tar.gz
done

# Clonezilla Live: mirror the stable zip and unpack the kernel, initrd and squashfs
rsync -v -r --exclude=$'*.iso' --exclude=$'/OldFiles' --exclude=$'*amd64*' --exclude=$'*i486*' --exclude=$'/*/*' \
    rsync://rsync.mirrorservice.org/download.sourceforge.net/pub/sourceforge/c/project/cl/clonezilla/clonezilla_live_stable/ .
unzip -j clonezilla-live-*.zip live/vmlinuz live/initrd.img live/filesystem.squashfs -d /tftpboot/clonezilla/i686
See the Mirroring Guide for instructions.
Now we can do mirroring. ftpsync is not packaged, so we mirror that as well:
# Brace expansion turns each git line into "git clone <source> <destination>"
git clone --mirror {https://,/mirror/https/}ftp-master.debian.org/git/archvsync.git
git clone {/mirror,/union}/https/ftp-master.debian.org/git/archvsync.git
/union/https/ftp-master.debian.org/git/archvsync.git/bin/ftpsync sync:archive:debian-security
/union/https/ftp-master.debian.org/git/archvsync.git/bin/ftpsync sync:archive:debian
Or use debmirror for a smaller mirror to just carry stable:
debmirror -v --method=rsync --host=rsync.example --arch=i386,amd64 --check-gpg -d stable/updates --root=debian-security /mirror/http/security.debian.org
debmirror -v --method=rsync --host=rsync.example --arch=i386,amd64 --check-gpg -d stable,stable-updates /mirror/ftp/ftp.debian.org/debian
Once we have the package lists, we may already have downloaded some of the debs locally into APT's cache, so prepopulate the mirror with them to cut the download a little:
# Read "Filename: pool/..." lines from the stable Packages files and copy any
# matching .deb already present in the local APT cache into the mirror tree.
while read
do
    FILE="${REPLY:10}"                                   # strip the "Filename: " prefix
    SRC=/var/cache/apt/archives/"${FILE#pool/*/*/*/*}"    # just the .deb's basename, as APT caches it
    DEST=/mirror/ftp/ftp.debian.org/debian/"${FILE}"
    if test -e "${SRC}" -a ! -e "${DEST}"
    then
        mkdir -p `dirname "${DEST}"`
        cp -v "${SRC}" "${DEST}"
    fi
done <<<"`cat /mirror/ftp/ftp.debian.org/debian/dists/stable/main/binary-{i386,amd64}/Packages.gz | gunzip | grep \"^Filename: \"`"