For the record

I've done all of this before, but that was a long time ago, and when it came to doing it again, I realised I had forgotten all of it and had to start over. Therefore, for the benefit of my future self and anyone who is trying to do the same thing, here is how to set up a FreeBSD box to act as a firewall and caching name server for its local network.

First things first, you need the hardware. I went with an Intel Atom CPU, because this box runs 24x7 and I wanted something that wouldn't eat too many watts. That sits on a D525 micro-ATX board, which in turn went into an Antec Mini-Skeleton case. If you haven't seen one, here's what it looks like:

[Photo: the Antec Mini-Skeleton case]

The fan on the top is pretty quiet, and lights up blue if you want.

I added a second network interface, because one of this machine's jobs is to act as a firewall, and off we go.

I'm running FreeBSD because it is as close to a hassle-free OS as I know. It also lets me keep in practice at running a real, non-quiche-eating OS, and has the added benefit of freaking out anyone who asks to use my computer. Between FreeBSD and the keyboard with blank key-caps, most people bail out without even trying.

First, I need to get the server to talk to my ISP and to provide IP addresses to the local LAN. On FreeBSD, this is super easy. Just add the following to rc.conf:

 ifconfig_rl0="DHCP"
 ifconfig_re0="inet 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255"
 dhcpd_enable="YES"
 dhcpd_ifaces="re0"
 pf_enable="YES"
 pflog_enable="YES"
 gateway_enable="YES"
 named_enable="YES"
 named_auto_forward="yes"
 named_auto_forward_only="yes"

The first line instructs the rl0 network interface to request its configuration via DHCP, and the second gives a fixed address to interface re0. The rest enables the DHCP server, pf and its logging, IP forwarding between the two interfaces (gateway_enable), and named, with queries automatically forwarded to whichever resolvers the ISP hands out.
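These settings take effect at boot, but you can pick them up without rebooting. A minimal sketch, assuming FreeBSD's service(8) wrapper (on older releases, run the /etc/rc.d/ and /usr/local/etc/rc.d/ scripts directly; "isc-dhcpd" is the script name installed by the ISC DHCP port):

 # re-read rc.conf and bring the interfaces up with their new settings
 service netif restart rl0
 service netif restart re0
 # start the firewall, DHCP server and name server
 service pf start
 service isc-dhcpd start
 service named start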

I wanted a firewall that would let me talk to the outside world, but would not allow any inbound traffic. Since my ISP NATs traffic unless you pay them lots, there is no downside to a complete lock-down. I went with pf, purely because it's hard to replicate in iptables the artistic intent of a pf rule that says pass out quick on $cheap_gin. The pf firewall is enabled by the pf_enable="YES" line in rc.conf, and configured with pf.conf. Here's my firewall setup:

 ext_if = "rl0"
 haus_if = "re0"
 haus_ips = "192.168.1.0/24"
 wifi_ips = "192.168.3.0/24"
 priv_nets = "{ 127.0.0.1/8, 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8 }"

 table <firewall> const { self }

 set loginterface $ext_if
 set skip on lo0
 set skip on plip0
 #antispoof log for $ext_if inet

 scrub in all
 nat on $ext_if from $haus_if:network to any -> ($ext_if)

 block all
 block drop in quick from urpf-failed
 block drop in quick on $ext_if from $priv_nets to any
 block drop out quick on $ext_if from any to $priv_nets
 pass out on $ext_if proto tcp all modulate state flags S/SA
 pass out on $ext_if proto { udp icmp } all modulate state
 pass in on $haus_if from $haus_if:network to any keep state
 pass out on $haus_if from any to $haus_if:network keep state

Simples. I have my two interfaces, rl0 and re0, respectively the one facing teh internets and the one facing the house LAN. Everything from the outside gets dropped, including anything spoofing an address which should be internal, and everything from the inside gets passed, whether to the outside or to another internal network.
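Worth knowing: pfctl can parse a ruleset without loading it, which saves you from locking yourself out of a box you administer over SSH. A quick check looks like this:

 # parse the ruleset without loading it
 pfctl -nf /etc/pf.conf
 # load it for real, then inspect the active rules and state counters
 pfctl -f /etc/pf.conf
 pfctl -s rules
 pfctl -s info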

Now everything in the house can talk to the internet. Next, DHCP and dynamic DNS. The DHCP server, dhcpd, is enabled by the dhcpd_enable="YES" line in rc.conf, while dhcpd_ifaces="re0" forces it to listen only on the internal interface. Having dealt with rogue DHCP servers before, I don't want to be guilty of unleashing one. The DHCP server is then configured with dhcpd.conf:

 option domain-name "dashaus.lan";
 option domain-name-servers 192.168.1.1;
 option subnet-mask 255.255.255.0;

 default-lease-time 600;
 max-lease-time 7200;
 authoritative;

 ddns-update-style interim;
 ddns-domainname "dashaus.lan";
 ddns-rev-domainname "1.168.192.in-addr.arpa";
 log-facility local7;
 update-static-leases on;
 do-forward-updates true;

 subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.2 192.168.1.200;
      option routers 192.168.1.1;
 }

The house domain is dashaus.lan, and this is the authoritative DHCP server for the domain. In addition, any device that gets an IP address from this server also gets its hostname registered in DNS under dashaus.lan. This is great for not having to remember which access point has 192.168.1.15, or where the NAS is now. Sure, I could do it with hosts files, but then I'd have to update those, and iOS doesn't do hosts files anyway, so this is better.
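Before (re)starting the daemon, it will check its own config if you ask. A minimal sketch (the lease file path is an assumption; it varies with the port version and whether the server is chrooted):

 # parse the config without starting the server
 dhcpd -t -cf /usr/local/etc/dhcpd.conf
 # watch leases (and hostnames) being handed out
 tail -f /var/db/dhcpd.leases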

Of course this doesn't work alone - you also need a DNS server. I enabled it simply by adding named_enable="YES" to rc.conf.

And here is my named.conf:

 options {
      directory       "/etc/namedb/working";
      pid-file        "/var/run/named/pid";
      dump-file       "/var/dump/named_dump.db";
      statistics-file "/var/stats/named.stats";
      listen-on       { 127.0.0.1; 192.168.1.1; };
      disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
      disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
      disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";

      include "/etc/namedb/auto_forward.conf";
 };

 acl dashaus {
      192.168.1.0/24;
      127.0.0.1;
 };

 zone "." { type hint; file "/etc/namedb/named.root"; };

 zone "dashaus.lan" {
      type master;
      file "dashaus";
      allow-update {
           dashaus;
      };
 };

 zone "1.168.192.in-addr.arpa" {
      type master;
      file "dashaus.rev";
      allow-update {
           dashaus;
      };
 };

There's nothing particularly funky going on here. The acl directive specifies that only clients with an IP address in that subnet can update their DNS records. Here are the zone files:

 $ORIGIN .
 $TTL 86400      ; 1 day
 dashaus.lan             IN SOA  skeletor.dashaus.lan. root.skeletor.dashaus.lan. (
                            20011955   ; serial
                            3600       ; refresh (1 hour)
                            900        ; retry (15 minutes)
                            3600000    ; expire (5 weeks 6 days 16 hours)
                            3600       ; minimum (1 hour)
                            )
                    NS      skeletor.dashaus.lan.

 $ORIGIN dashaus.lan.
 $TTL 300        ; 5 minutes
 Apple-TV                A       192.168.1.11
                    TXT     "31da5805e31cba162785449fe301a035f2"
 beast                   A       192.168.1.5
                    TXT     "31140c046d012654084168c75af137a956"
 Claras-iPad             A       192.168.1.31
                    TXT     "31d7c4fd01cef4e2f76a201fdaa8a6e56c"
 dashaus-nas             A       192.168.1.4
                    TXT     "31d02895fbe37aebe514fc5f5bd685b703"
 demonic-iPad            A       192.168.1.14
                    TXT     "31e907692c02809efc782ef4fd60568712"
 Demonic-iPhone          A       192.168.1.7
                    TXT     "312d530aa18c5c3f8da67f54b9a35a938d"
 HPB1251A                A       192.168.1.9
                    TXT     "31a9b7ff798848034e2cf14e05aa6f7648"
 $TTL 86400      ; 1 day
 skeletor                A       192.168.1.1

Skeletor is the server's name, for obvious case-related reasons. Here's the reverse file:

 $ORIGIN .
 $TTL 86400      ; 1 day
 1.168.192.in-addr.arpa  IN SOA  skeletor.dashaus.lan. root.skeletor.dashaus.lan. (
                            20011704   ; serial
                            3600       ; refresh (1 hour)
                            900        ; retry (15 minutes)
                            3600000    ; expire (5 weeks 6 days 16 hours)
                            3600       ; minimum (1 hour)
                            )
                    NS      skeletor.dashaus.lan.

 $ORIGIN 1.168.192.in-addr.arpa.
 $TTL 300        ; 5 minutes
 11                      PTR     Apple-TV.dashaus.lan.
 14                      PTR     demonic-iPad.dashaus.lan.
 31                      PTR     Claras-iPad.dashaus.lan.
 4                       PTR     dashaus-nas.dashaus.lan.
 5                       PTR     beast.dashaus.lan.
 7                       PTR     Demonic-iPhone.dashaus.lan.
 9                       PTR     HPB1251A.dashaus.lan.

This is from a running instance, so you can see the AppleTV, a couple of iPads, an iPhone, the NAS, Beast (my Windows box), and the printer, each with its own IP address. I assume the wifi APs aren't showing up because they haven't refreshed recently, but they're working so I am not going to mess with them!
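Once the zones are loaded, a couple of spot-checks confirm that forward and reverse lookups agree. A sketch, assuming the zone files live under the directory named.conf points at:

 # verify the zone files parse cleanly
 named-checkzone dashaus.lan /etc/namedb/working/dashaus
 named-checkzone 1.168.192.in-addr.arpa /etc/namedb/working/dashaus.rev
 # forward and reverse lookups against the local server
 dig @192.168.1.1 dashaus-nas.dashaus.lan +short
 dig @192.168.1.1 -x 192.168.1.4 +short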

Last step: as this stands, clients can resolve each other, but Skeletor itself can't resolve other local clients. This is inconvenient if you want to export an X session to yourself and can't remember your IP address. The problem is that the ISP-facing interface is configured via DHCP, so resolv.conf gets overwritten every time dhclient refreshes its lease - every 1800 seconds, or half an hour.

The way to fix that is by writing dhclient.conf:

 interface "rl0" {
      prepend domain-name-servers 127.0.0.1;
      supersede domain-name "dashaus.lan";
 }

This adds the local DNS server before the ones supplied by my ISP, and forces unqualified hostname searches to use the house domain instead of going to the internet.
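After the next lease renewal, resolv.conf should come out looking something like this (the ISP nameserver below is a placeholder from the documentation address range):

 # /etc/resolv.conf, as rewritten by dhclient
 search dashaus.lan
 nameserver 127.0.0.1
 nameserver 203.0.113.53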

Now if I could just get an X server running... Everything looks good, but actually starting X puts my monitor to sleep. This looks like a sync-out-of-range issue, but I cannot figure out how to fix it. The really frustrating thing is that I cannot get back to a text console to try again; I actually have to reboot. Fortunately I can get in via SSH to pull logs and do a safe reboot, but it's still far from ideal. The X client side is fine - if I fire up an X server somewhere else, I can export apps just fine, which is how I was able to fail at configuring Totem.

Any X-on-FreeBSD gurus, hit me up!

Adventures in AirPlay

I have been trying, off and on, to get one of my computers to act as an AirPlay receiver, that is, so that I could stream content from iPhones and iPads to its screen. The reason is that upstairs I have an AppleTV, but the downstairs TV isn’t able to talk to anything. It’s an older TV – it doesn’t even speak HDMI – which is why it was demoted to a backup. However, since it’s just on the other side of a wall from my desk, it’s tethered (via DVI or VGA) to the Windows box.

I used to run Boxee, and all was well. Boxee has a nice iOS remote, which gives my first-gen iPod Touch something to do with itself, and also has an extremely nice feature in a bookmarklet which lets users save videos straight from YouTube or whatever to their Boxee queue. The problem is that Boxee have, in their wisdom, decided to discontinue development of the downloadable version of Boxee in favour of their BoxeeBox hardware. This is a nice enough device, but it’s not worth three AppleTVs in my estimation, especially for a couple of hours’ use a month, which is what I would give it.

Watching local content is as easy as sending iTunes over to the secondary monitor and driving it with the Remote app when I want to watch something, but this doesn’t help with YouTube. There is Leanback mode, but that requires more solutions, like the Remote Mouse app, to drive it.

I tried playing with Clik, which is commendably simple: visit the website and it flashes up a QR code; scan the QR code with the iPhone app, and you can browse videos on the iPhone and play them in the browser window. It doesn’t deal with subscriptions, though, and a big goal of the exercise is to be able to watch videos from the /Drive channel, so it’s not ideal.

Next I tried getting one of the computers to act as an AirPlay host. First I tried Windows, simply because the cable already reaches that box, so it requires the least amount of effort. AirMediaPlayer is nice and free, but only lets me view photos, not video – it doesn’t even show up as a host in video or audio mode. That seems to be the only free solution, so that’s Windows out.

Next we try the Mac. This is less than ideal because my Mac is a MacBook Air, so it would require connecting two cables (no HDMI, remember?) each time. However, I assumed that in the Apple world someone must have hacked AirPlay. Sure enough, Erica Sadun had – but it doesn’t work for me.

Finally I got desperate and tried FreeBSD. The Totem player has a plugin for AirPlay, so, full of hope, I spent quite a lot of time downloading Totem and sorting out its dependencies, then getting Git and its dependencies, and finally found that… it doesn’t work: Totem-WARNING **: Error, impossible to activate plugin ‘AirPlay Support 1.0.2’. Joy.

So it looks like that’s it. Unless something changes, I’m going to wait for the Raspberry Pi and try that. Any suggestions, drop me a line.

Multi-hypervisor

Since I can't manage to comment on the original site, I'm reproducing an interesting post here, with my comments.

One of the trends seen over the last year (though it actually started a few years ago) is the growth of the ecosystem around virtualisation (that is, all the products and vendors that complement the virtualisation products proper) beyond the boundaries within which it was historically born: many of VMware's long-standing partners have now extended their solutions to other hypervisors as well, and new products have appeared specifically to manage complex, or at least heterogeneous, virtual environments. VKernel gave this phenomenon a nice name in its post: "Hypervisor Agnosticism".

It should be made clear, though, that we are not talking about achieving interoperability between the various virtualisation tools, but simply about using common tools for certain specific tasks, typically management, monitoring and data protection.

Exactly right: multi-hypervisor support does not necessarily mean full interoperability, but simply an abstraction layer that lets you complete a task without having to descend into the details of each technology.

Interoperability, moreover, would require agreements between competitors in a market that is still evolving rapidly, and so would risk slowing or limiting healthy competition.

One might ask whether this makes sense and whether it can bring any benefit. For the individual customer, probably not… what reason could they have to take on the new costs of a heterogeneous environment (even if some tools can be shared, the VM format, VM mobility, the skills required, and a good part of the administration work will all differ for each virtualisation product)? For small and mid-sized shops, the cost would not be easy to justify.

On this point, however, I agree less: while for small companies it is true that it makes sense to concentrate their energies on a single platform, in mid-sized ones it can easily happen that, say, two virtualisation projects start in parallel, one in the Windows team and one in the Linux or Unix team. Instead of forcing everyone to converge on a single platform, it can make sense to keep using each platform for its strengths and implement an abstraction layer that provides unified management and visibility across all the different platforms.

Then there is the dimension of change over time to consider. If I decide today to concentrate all my energies on platform A, but tomorrow I merge with, buy, or am bought by a company that chose platform B instead, I could find myself in trouble. Merging IT systems and processes in such cases is complicated enough without also taking on a platform migration, so it would be useful to have an abstraction layer that lets me handle my immediate business needs and take the decision about whether or not to migrate virtualisation platforms at leisure.

Finally, there is the economic dimension: if I have focused on a single platform and the vendor suddenly doubles its prices, I don't have many alternatives. If, on the other hand, I already have the kit and the skills for another platform in house, migration is, if not painless, at least much simpler.