Using your Raspberry Pi Zero’s USB Wifi adapter as both Wifi client and access point

The Raspberry Pi Zero captivates with its small dimensions. This comes at a cost, however: only one micro USB port is available for peripherals of any kind. In this scenario you’ll probably think twice about what you connect to that port. “A USB hub” may sound like a natural choice, but if you’re like me, you’ll want to carry the gadget around a bit and minimize the number of accessories.

Now there are solutions to stack a USB hub onto the Pi Zero, eg. Circuitbeard’s or Richard Hawthorn’s, but I actually don’t want to carry around a USB keyboard, especially since I won’t have an HDMI-capable display around at all times. Instead, I want to log in to the Pi via Wifi while still having Internet connectivity even when not at home. Thus I want the Pi to be an access point AND maintain a Wifi client connection at the same time. This is rather easy to do with two USB Wifi adapters, but with the Pi Zero we’ll have to make do with a single one!
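
For the impatient, here is the core trick in a minimal sketch: modern Linux drivers can add a second, virtual interface on the same radio, provided the chipset supports simultaneous AP and station operation. The interface name uap0 and the addresses below are assumptions for illustration, and both roles have to share the channel of the client connection:

    # Check the driver's "valid interface combinations" for AP + managed
    iw list | grep -A4 'valid interface combinations'

    # Add a virtual access point interface on top of the existing radio
    iw dev wlan0 interface add uap0 type __ap

    # Give the AP interface its own address and bring it up; hostapd then
    # serves the AP on uap0 while wpa_supplicant keeps wlan0 connected
    ip addr add 192.168.42.1/24 dev uap0
    ip link set uap0 up
    hostapd /etc/hostapd/uap0.conf &    # SSID, channel etc. live in here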

My SOHO network layer model

In my eyes, it makes sense to divide the elements that are part of a SOHO (small office/home office) network into one of two layers:

  • Basic network
  • Productive services

In this model, if I were to speak about “the network”, I’d mean what I call the basic network: all components that together constitute an independent, foundational layer centered around connectivity and, by comparison, low complexity (ie. no full-blown operating system on each device). This includes the physical LAN cabling (if present), network switches, print servers (usually integrated into the printers), WLAN access points and routers.

Because nowadays it is often essential for system administrators to have Internet access, be it for googling problems that pop up or for bootstrapping installations that download software directly from the ‘Net (eg. in disaster recovery scenarios when no local mirror is present any more), I consider DNS and DHCP services essential enough to be part of the basic network as well.

With the advent of flash-based embedded devices such as WLAN access points and routers, the availability of OpenWrt as a standardized Linux distribution for these and the low resource consumption of DNS/DHCP, migration of these services from hard disk-based servers onto access points/routers became feasible. After all, an access point running on flash memory is much less likely to fail than a full-blown server with hard disks as storage. The only part I’ve seen failing over years with these devices is the $0.05 power supply.
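
On OpenWrt, both services are in fact handled by one lightweight daemon, dnsmasq, which illustrates how little is needed. Here is a minimal sketch of a hand-written configuration (OpenWrt normally generates this file from its own UCI settings; the interface name and address ranges are assumptions to adapt to your own network):

    # Minimal dnsmasq setup serving both DNS and DHCP for a small network
    cat > /etc/dnsmasq.conf <<'EOF'
    domain-needed                    # never forward unqualified names
    bogus-priv                       # never forward private reverse lookups
    local=/lan/                      # answer the local "lan" domain ourselves
    interface=br-lan                 # listen on the LAN bridge only (assumed)
    dhcp-range=192.168.1.100,192.168.1.199,12h   # DHCP pool and lease time
    dhcp-option=option:router,192.168.1.1        # default gateway to hand out
    EOF
    /etc/init.d/dnsmasq restart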

The basic network is foundational in two ways: for one thing, it is independent, ie. it can stand on its own. For another, the productive services layer, which encompasses the services that create more value for the end user, such as file, print and e-mail services, is stacked upon it. No basic network, no productive services. And at the same time: no productive services, no real value in the basic network.

Formulating such a model helps in making up your own mind and in communicating with others, eg. about the question of where a service such as NTP should be placed. What do you think?

Why Puppet should ship with official modules

Much has certainly already been said about Puppetforge. A year ago, at Puppetcamp Nuremberg, we were promised that Puppetforge was likely to improve to a more usable level. But as of now, Puppetforge is much like Github: unless you already know where to look and what to take, you’re pretty much left on your own, to trial and error. Puppetforge does not really help in making decisions on which modules to try. Yes, it is a central location to shop for Puppet modules. But it’s more like a big Dropbox.

Of course, other software artefacts have the same or worse problems. For example, there is no central location for C/C++ libraries, so you wouldn’t even know where to begin looking, save for hoping to use the right Google keywords. Still, certain projects such as Boost enjoy a certain popularity due to adoption by well-known software projects or word of mouth. But the difference is: such libraries enjoy a much different level of attention than any particular Puppet module. I’ll have a hard time promoting my particular implementation of, say, an ntp module, when there are twenty others.

In “appstores”, such as Google’s Play store, there are facilities that give at least some advice as to which apps are worth trying. There is a download count, but considered alone it does not indicate quality; after all, nothing keeps larger groups of people from making poor decisions lemming-style. That’s why customer reviews (hopefully) provide additional insights, although these can be manipulated more or less easily.

Puppetforge has download counters, but it doesn’t have a comment system. The only signals for assessing whether this or that module could be the better candidate are the download count and, as a special case, possibly the author: Puppet modules published by Puppet Labs themselves might be considered official, popular and well-tested.
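
In practice, the author prefix is about the only filter you can apply from the command line. A minimal sketch, assuming a Puppet version that ships the module subcommand, with ntp continuing the example from above:

    # Search Puppetforge -- expect a flood of competing ntp modules
    puppet module search ntp

    # The author prefix is the only quality signal available, so pick
    # the module published by Puppet Labs themselves
    puppet module install puppetlabs-ntp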

That, however, remains a mere assumption until proven. And it leads me to a question that may sound stupid but inevitably comes to my mind:

Why is there no official Puppet modules distribution?

Basically, the Puppet download alone is pretty much useless until you augment it with Puppet modules. To do that, I can go to Puppetforge and have the experience described above. I fail to see the sense behind that.

Yes, I can see that just because Puppet Labs puts their label on a module, that does not automatically make it better. But that’s not the point; the same argument holds true for modules published on Puppetforge as well.

Shipping Puppet with a set of included, “default” Puppet modules would instead have a signaling effect. It would make clear what code developers should focus on and what code patches for improvement should be made against. It is not so much about having the best solution. It is about stepping forward and filling a void that can seriously hinder Puppet adoption.

HomeOps: A call for the application of DevOps principles at home, too

We have all gained some kind of experience in the IT world, in the way IT works and, especially, the way it doesn’t. As an IT professional following recent trends and developments (you do continuously keep an eye on them, right?), you will certainly have learned about (or at least heard of) DevOps principles, an ever-growing call for a different mindset on development, operations and silos in companies in general.

Fostered by events such as the series of DevOpsDays conferences spreading around the world, the term “DevOps” has reached a state where it is not only widely misunderstood but also abused to a point where you have to be extra sceptical (a mere buzzword for HP and IBM products, an obscure job definition that distracts from the real challenge).

Many of us already live DevOps principles, or at least parts of them, if we’re lucky enough to work in a sufficiently agile and aware organization, be it for rather common or specific reasons. One reason that especially justifies automation, as but one DevOps ingredient, will certainly be that, put casually, “there is no way we would have the time to rebuild this setup manually”. The persistent Cloud trend practically dictates a more rational approach. So far, so good.

But recently my home server died. That is, the machine at home running mail, file and print services. And guess what? There was no way I had the time to rebuild this setup manually!

This is a call for what I call “HomeOps” (in the absence of a more useful name; HomeDevOps? SOHOOps?). With “HomeOps”, I call for the extension of DevOps concepts to our home IT as far as possible.

Think about it: it has been ages since IT people had to deal with managing IT at work only. The times when “sysadmin” referred to operating that mainframe at the company and IT activities at home were limited to comparatively simple home computers are long, long gone.

Nowadays, professional IT staff effectively always deals with two, if not three, networks at the same time:

  1. The company’s IT infrastructure.
  2. The (hopefully) always-available Internet and our smartphones and tablets.
  3. One or more computers, tablets, NAS boxes, “smart” devices such as flatscreens with Internet access, Wifi access points and a router supplying the Internet connectivity at home.

For #1, we’ve learned to apply a DevOps mindset, automation tools, continuous X concepts etc., as discussed above.

For #2, I’m not talking about a need for maintenance of mobile networks; I’m talking about the implications of using a smartphone and the apps on it. Ever tried to back up your Android phone’s data? Luckily, Google can take care of the most important aspects such as phone numbers, calendar data etc. by promoting Cloud storage. If you trust Google, that is.

For #3, we do what? Face it:

  • Your laptop may have come pre-installed and you may use that installation. As an IT professional, you most probably don’t. How often do you reinstall and how much time does it cost?
  • You may be a Mac user and use Apple’s Time Capsule, or you may use a Synology/QNAP/Thecus/whatever NAS device that gives you a fancy GUI and makes the setup real easy. But what’s a backup worth that does not get controlled? Do you actually monitor its hard disks? See the sketch after this list.
  • Your Internet router may come preconfigured by your ISP. Even if it does, how much fun is configuring port forwarding?
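
To make the disk monitoring point concrete, here is a minimal sketch using smartmontools; the device name and mail address are placeholders, and a real setup would leave periodic checks to the smartd daemon instead of manual invocations:

    # One-shot SMART health verdict for a single disk (device name assumed)
    smartctl -H /dev/sda

    # Better: let smartd watch all disks and mail you when one degrades.
    # In /etc/smartd.conf (the mail address is a placeholder):
    #   DEVICESCAN -a -m admin@example.com
    /etc/init.d/smartmontools restart    # init script name varies by distro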

These are, of course, just examples, and some of them may apply to your home scenario, some not. In my case, for example, a NAS alone would not be enough; I run a NAS-like device, but with an ordinary x86 Linux distribution. This means it is just as much an instance in need of management as your cloud VM no. 83424, except that you’d manage it manually. But why?

The key question is: why for heaven’s sake should we do things differently at home?

Of course we know the answer: because the necessary efforts do not seem worth the advantages. And this is where I have come to disagree:

  • Yes, it is “just those few devices”. But the number of devices says nothing about their personal significance to your daily life. If you need to access urgent data, eg. to reply to the tax office, and the filesystem holding it is not available, have fun!
  • Yes, you may have backed up your data somewhere in the Cloud. This does not mean that setting up things at home in a recovery scenario just became a piece of cake.
  • Don’t be fooled by looking at disaster recovery scenarios (eg. failing hard disks) only. There is one thing that you’re guaranteed to do more often, and that is software updates. Unless you’re using a long-term support Linux distribution, you’ll be just as much a victim of its software lifecycle as in a company. Compare installing Ubuntu 29 and configuring everything by hand to installing Ubuntu 29 and running your favorite config management tool, which has just received Ubuntu 29 support from someone who needed it as well.
  • And, to some probably the strongest argument of all: how does “some initial work now, much less work later on” sound to your spouse and your kids? Do they understand why home IT is perceived as less stable than eg. the telephone service?
  • Last but not least, playing at home with the same tools you use at work certainly won’t hurt in gaining additional experience.

Yes, I probably won’t go so far as to set up a continuous deployment toolchain at home (although one could even think about that). But I’m currently automating my home server with Puppet, and I’ll certainly blog more on my experiences in doing that, as well as on the overall “HomeOps” concept that is slowly emerging before my eyes. Clearly with the goal of a home IT that can rise like a phoenix from its ashes.
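
To give a taste of what that looks like, here is a minimal sketch of a masterless Puppet run on a home server. The module, package and service names are assumptions for illustration, not my actual manifests:

    # Fetch a module from Puppetforge (continuing the ntp example)
    puppet module install puppetlabs-ntp

    # A tiny stand-alone manifest -- no puppetmaster needed at home
    cat > /root/homeserver.pp <<'EOF'
    include ntp                                 # keep the clock in sync
    package { 'samba': ensure => installed }    # file services
    service { 'smbd':
      ensure  => running,
      enable  => true,
      require => Package['samba'],
    }
    EOF

    # Apply it locally; rerunning is safe because Puppet is idempotent
    puppet apply /root/homeserver.pp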

I’m not saying this whole HomeOps idea is some shiny brand-new concept. Or “something big”. Or “something different”. I just find it useful to give things a name so we can talk about and discuss them.