Using your Raspberry Pi Zero’s USB wifi adapter as both Wifi client and access point

The Raspberry Pi Zero captivates with its small dimensions. This comes at a cost, however, with only one micro USB port available for peripherals of any kind. In this scenario you’ll probably think twice about what you connect to that port. “A USB hub” may sound like a natural choice, but if you’re like me, you’ll want to carry the gadget around a bit and minimize the number of accessories.

Now there are solutions to stack a USB hub onto the Pi Zero, eg. Circuitbeard’s or Richard Hawthorn’s, but I don’t actually want to carry around a USB keyboard, especially since I won’t have an HDMI-capable display around at all times. Instead I want to log in to the Pi via Wifi while still having Internet connectivity even when not at home. Thus I want the Pi to be an access point AND maintain a Wifi client connection at the same time. This is rather easy to do with two USB wifi adapters — but with the Pi Zero we’ll have to make do with a single one!
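
Whether this works at all depends on the adapter’s driver supporting multiple concurrent interfaces on one radio. Here’s a minimal sketch of the approach (the interface name uap0 is made up for illustration):

# Check the "valid interface combinations" section for concurrent AP/STA support
iw list

# Add a virtual AP interface on top of the existing client interface wlan0
iw dev wlan0 interface add uap0 type __ap

wpa_supplicant then keeps the client connection on wlan0 while hostapd serves the access point on uap0; both have to operate on the same radio channel.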

python-netsnmpagent 0.5.1 released

python-netsnmpagent version 0.5.1 has just been released.

This release has no substantial new features but a number of fixes of which the following three are important enough to warrant an update from 0.5.0:

  • netsnmpagent: Make Table’s value() method regard string lengths
  • netsnmpagent: Drop special string handling in Table’s init()/setRowCell()
  • netsnmpagent: Fix Table’s value() cutting off ASN_COUNTER64 table values

Other changes include:

  • Usage of MIB files is now completely optional
  • threading_agent got a small fix so it works on Python 2.6, too
  • __version__ got removed; use pkg_resources in your agent yourself to express version dependencies, as outlined in 5715e77f’s commit message and sketched below.
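
If your agent previously checked __version__, here’s a minimal sketch of the pkg_resources approach (the version string below is just an example):

import pkg_resources

# Raises pkg_resources.VersionConflict (or DistributionNotFound) if unmet
pkg_resources.require("netsnmpagent >= 0.5.1")

import netsnmpagent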

See the included ChangeLog for a detailed list of all changes.

Ways to get the software:

  • As usual, the source is available at the GitHub repo.
  • The source distribution .tar.gz for this release can be downloaded from the PyPI page.
  • You can either build binary RPMs for your local distribution yourself (download and make rpms) or pick them up from my Open Build Service project — just click on the Repositories tab and one of the Go to download repository links.

python-netsnmpagent bugfix for trailing dots in table strings

I have just pushed two fixes that should be of interest to all python-netsnmpagent users.

These changes fix the issue with trailing dots in table strings. I’ve so far only pushed them to master. I’d like to get some feedback before pushing these to the 0.5 stable branch, so please check them out :)

Integrating Samba’s DNS server with existing dnsmasq installations

As an Active Directory encompasses not only LDAP and Kerberos but also DNS, and as Microsoft does funny things with DNS (dynamic updates, special SRV records to locate hosts etc.), running Samba as an Active Directory domain controller means running either the built-in DNS server or bind9 with a special DLZ plugin.

dnsmasq integration has been discussed but seems to have been abandoned, not so much for technical reasons as for lack of real interest on both sides. There is at least this HOWTO that works around the technical issues by teaching dnsmasq the necessary SRV records manually, but even then you won’t have dynamic DNS updates the way Samba needs them, and it is more of a hack definitely unsupported by the Samba team than a viable solution.

dnsmasq is not so much an alternative for running on the Samba host itself; rather, at least in my idea of SOHO networking, it is predestined for embedded devices such as access points and routers, and it is accordingly the default DNS forwarder in OpenWrt. Having DNS resolution depend on a “higher-level” DNS service provided by Samba would contradict that concept. Apart from that, Samba’s DNS server would need to support every single feature existing DNS servers (such as dnsmasq) already have, or bind would have to be used, a piece of software I do not miss particularly much (think zone files).

Obviously I can’t achieve the desired isolation of a basic network service such as DNS and a productive service such as Samba with a single DNS zone, as there is no such thing as zone sharing. So I’ll need two DNS zones: mysite.foo.bar and either ad.mysite.foo.bar or mysite.ad.foo.bar. The latter choice would be preferable if we were to seriously use Active Directory features such as forests and sites, but it would also mean that there would be a “parallel forest” of “conventional” DNS zones and the need to have a foo.bar DNS server that supports delegations. As Samba 4 currently supports running a single Active Directory domain controller only anyway, I’ll go with the former:

DNS zone          | Managed by | Running on
mysite.foo.bar    | dnsmasq    | OpenWrt-based access point/router
ad.mysite.foo.bar | Samba      | “Real” server

Now I do, of course, have only one DHCP service at my “site”. Technically it could supply multiple DNS servers, but you wouldn’t want that since you can’t control your clients’ resolvers’ behavior via DHCP (ie. when which DNS server is tried). And there’s no need to, because here comes the elegant part: all clients continue to receive the IP address of an OpenWrt device as their DNS server, which is authoritative for mysite.foo.bar. Requests for *.ad.mysite.foo.bar simply get delegated to the Samba host with a dnsmasq configuration such as the following:

# Local dnsmasq instance is responsible for
# mysite.foo.bar
domain=mysite.foo.bar
server=/mysite.foo.bar/

# DNS delegation for ad.mysite.foo.bar
server=/ad.mysite.foo.bar/192.168.0.1

# If rebind protection is on, this is
# required to avoid warnings on DNS
# rebinding attacks
rebind-domain-ok=ad.mysite.foo.bar

# Upstream DNS server, handles everything
# outside ad.mysite.foo.bar and mysite.foo.bar
server=192.168.0.254
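
To verify the delegation from any client, assuming a hypothetical host baz registered in both zones:

dig baz.mysite.foo.bar        # answered by dnsmasq itself
dig baz.ad.mysite.foo.bar     # forwarded to the Samba host at 192.168.0.1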

Note that having two DNS zones does not imply that you need to have two IP subnets. It’s perfectly fine to have both baz.mysite.foo.bar and baz.ad.mysite.foo.bar point at 192.168.0.1 and have reverse lookup of the IP address resolve to baz.mysite.foo.bar, as long as you adjust your Kerberos client configuration accordingly (the rdns = false option described at the end of my sssd-ad configuration post).

This way, if the Samba server goes down, only the ad.mysite.foo.bar zone will be affected, not mysite.foo.bar as a whole. Neat :)

My SOHO network layer model

In my eyes, it makes sense to sort the elements that are part of a SOHO (small office/home office) network into one of two layers:

Basic network and productive services

In this model, if I were to speak about “the network” I’d mean what I call the basic network: all components that together constitute an independent, foundational layer centered around connectivity and, by comparison, low complexity (ie. no full-blown operating system on each device). This includes the physical LAN cabling (if present), network switches, print servers (usually integrated into the printers), WLAN access points and routers.

Because nowadays it is often essential for system administrators to have Internet access, be it for googling problems that pop up or for bootstrapping installations that download software directly from the ‘Net (eg. in disaster recovery scenarios when no local mirror is present any more), I consider DNS and DHCP services essential enough to be part of the basic network as well.

With the advent of flash-based embedded devices such as WLAN access points and routers, the availability of OpenWrt as a standardized Linux distribution for them, and the low resource consumption of DNS/DHCP, migrating these services from hard disk-based servers onto access points/routers became feasible. After all, an access point running on flash memory is much less likely to fail than a full-blown server with hard disks as storage. The only part I’ve seen failing over the years with these devices is the $0.05 power supply.

The basic network is foundational in two ways: for one thing, it is independent, ie. can stand on its own. And the productive services layer, which encompasses more value-creating (to the end user) services such as File, Print and E-Mail services, is stacked upon it. No basic network, no productive services. And at the same time: no productive services, no real value in the basic network.

Formulating such a model helps in making up your own mind and communicating with others, eg. about the question of where a service such as NTP should be placed. What do you think?

Configuring sssd’s Active Directory provider

Following up on the previous post, here’s how we get sssd to actually provide access to our Samba-driven Active Directory.

I started with the instructions in the Samba wiki, but these actually go beyond the necessary minimum. Let me also add some context on the individual components and settings involved.

How sssd’s components work together

sssd is quite modular: if you read the sssd.conf man page, you’ll learn about services and domains. You will also learn about different providers, such as the already mentioned Active Directory provider that we are going to use. Do not be fooled, however: providers are not mutually exclusive. For example, our Active Directory provider works together with the LDAP and the Kerberos providers, as shown here:

Individual sssd components working together

As a consequence, we’ll have to consider not only sssd-ad configuration directives but also some of those of sssd-ldap and sssd-krb5. And because sssd-krb5 uses the Kerberos library, we’ll also have to consider /etc/krb5.conf.

Configuration explained

Without further ado, here’s an example of a minimal /etc/sssd/sssd.conf that takes advantage of autodiscovery:

[sssd]
config_file_version=2
services=nss, pam
domains=ad.mydomain.foo.bar

[domain/ad.mydomain.foo.bar]
id_provider=ad
access_provider=ad
dyndns_update=false
enumerate=true
ldap_id_mapping=true
krb5_realm=AD.MYDOMAIN.FOO.BAR
krb5_keytab=/etc/sssd/ad.mydomain.foo.bar.keytab

Setting id_provider and access_provider activates sssd-ad as identity provider (ie. the source for user and group information) and access provider (ie. it checks whether a user is allowed access). However it also activates it as authentication provider (ie. checks passwords) and chpass provider (ie. changes passwords), because id_provider‘s value is the default for auth_provider, which in turn is the default for chpass_provider.

We do not specify ad_domain because the default is to use the configuration section’s name (minus the “domain/” part, of course). We do not specify ad_server either because Samba’s DNS server has automagically set up SRV records for us that sssd-ad can use for service discovery.
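
If in doubt, you can check those SRV records yourself, eg. with dig (domain name taken from the example above):

dig SRV _ldap._tcp.ad.mydomain.foo.bar
dig SRV _kerberos._udp.ad.mydomain.foo.bar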

I disabled dyndns_update for now because it gave me problems. Setting enumerate to true is debatable and recommended for small setups only, but you might want it for playing around with nested groups and seeing how they work. The default is false.

There’s no need to specify any of ldap_uri, ldap_search_base, ldap_sasl_mech, ldap_sasl_authid, ldap_user_* and ldap_group_*: sssd-ad will have taken care of these parameters for you.

ldap_id_mapping is set to true so that sssd itself takes care of mapping Windows SIDs to Unix UIDs. Otherwise the Active Directory must be able to provide POSIX extensions. If yours does, you can omit this option, of course.

It is obligatory to specify krb5_realm, which by convention is always upper-case and in most cases will be the DNS name of the Active Directory. krb5_keytab specifies the keytab file sssd will use to connect to Samba’s KDC (see also: What is a keytab file). The keytab file can be exported on the Samba server as per the Samba Wiki instructions.
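
On the Samba server, the export boils down to something like this (a sketch only; the principal name MYHOST$ is an assumption on my part and depends on what sssd should authenticate as):

samba-tool domain exportkeytab /etc/sssd/ad.mydomain.foo.bar.keytab --principal=MYHOST$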

Again, krb5_server and krb5_kpasswd will already have been provided by sssd-ad for you.

As you can see, the net result is a much simpler configuration than with sssd-ldap and sssd-krb5 alone. Provided you’ve followed the other necessary steps, eg. PAM and NSS configuration (again, see eg. the Samba Wiki instructions), you can now try eg. getent passwd and should be able to see your Active Directory users.
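
For example (the user name is hypothetical; with enumerate=true the first command will list all Active Directory users):

getent passwd
getent passwd administrator
id administrator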

Debugging hints

I had a hard time getting everything working, receiving “GSSAPI Error: Miscellaneous failure (Server not found in Kerberos database)” error messages all the time. Some hints:

  1. Start sssd manually and in debug mode with sssd -i -d 5, choosing the debug level as appropriate.
  2. One cannot stress this enough: get your DNS working properly. In my case, the DHCP service advertising the DNS servers to use ran on a separate machine, and of course I forgot to specify the IP address of the Samba server there…
  3. The Kerberos client library will do one thing you might not immediately be aware of: reverse DNS lookups. If, for example, you decided to use a separate DNS subdomain for Active Directory (which you definitely do want to do) but let your hostname sambahost.ad.foo.bar point at 1.2.3.4, and 1.2.3.4 in turn resolves to sambahost.foo.bar, things won’t work unless you specify rdns = false in your /etc/krb5.conf (see the snippet after this list).

    By the way, the only way to really debug such issues is either to run Wireshark or to look at Samba’s logfiles — the client itself won’t tell you it does such things (reverse lookups and their results) even if you strangle it.
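
For reference, the relevant /etc/krb5.conf bit could look like this (realm name taken from the example above):

[libdefaults]
    default_realm = AD.MYDOMAIN.FOO.BAR
    rdns = false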

Update 08.02.2014: It seems that krb5_realm is not strictly necessary. The sssd-ad provider will take care of that for you:

(Sat Feb  8 23:55:13 2014) [sssd[be[ad.mydomain.foo.bar]]] [ad_set_ad_id_options] (0x0100): Option krb5_realm set to AD.MYDOMAIN.FOO.BAR

Making Samba users available locally to Linux systems

In the past, we used to integrate Samba and “native” Linux users by using a single password backend, often LDAP:

User authentication of Linux system and Samba users against LDAP

This led to several moments of pain: from the deficiencies surrounding group membership and the NIS and rfc2307bis schemas (see eg. here and here), over the need to define scripts for administrative actions such as adding users, to the not particularly intuitive LDAP server setup in newer OpenLDAP versions.

Samba has for quite some time offered an alternative solution by allowing for two separate user databases, the Linux passwd/shadow one and its own, and providing the Linux system with access to its user database through the combination of the winbind daemon and suitable PAM and NSS modules:

Linux system access to Samba’s user database via winbind and PAM/NSS modules

However, as I pointed out before, the Samba4 version of winbind currently lacks some important functionality.

Luckily, this Samba wiki page points out two alternatives (don’t be fooled by this wiki page, which currently lists only one):

  • sssd with pam_sss / nss_sss
  • nslcd with pam_ldap / nss_ldap

Note that nslcd/pam_ldap/nss_ldap is not PADL’s now-considered dead pam_ldap / nss_ldap but a fork/rewrite. Even this one, however, draws controversy.

Yet, I found it hard to make a decision in the light of rather opinionated information on the Web and rather sparse information on the Samba Wiki. So I tried to come up with a comparison table myself, focussing on integration with Samba 4 and Active Directory features. Note that the table comes with a level of uncertainty, so feel free to send me corrections.

Feature | sssd / pam_sss / nss_sss (1.11.3) | nslcd / pam_ldap / nss_ldap (0.9.2)
Supports unencrypted connections via plain LDAP? | Yes | Yes
Supports encrypted connections via Kerberos? | Yes | Yes
SASL required? | No | Yes
Requires explicit Kerberos ticket renewal, eg. through a background k5start process? | No | Yes
Retrieval of POSIX data (UID, GID, home directory, login shell) from an Active Directory provisioned with the --rfc-2307 option | Optional (if the AD provider is used), required (if the pure LDAP provider is used) | Required
Separate backends for user/group information and authentication | Yes | No (but different LDAP sources)
Host must be joined to the domain? | No (but advantageous in certain scenarios) | No
Supports Site-based discovery? | Yes | No
Supports Active Directory’s Global Catalog? | Yes | No
Can resolve Active Directory’s global and universal groups? | Yes | No
Can resolve group members from trusted domains? | Yes | No
Leverages Active Directory’s tokenGroups attribute? | Yes (also in addition to POSIX attributes) | No
Offline authentication possible? | Yes | Yes?
Latest release | 2013 | 2013

The conclusion: nslcd is a choice for pure LDAP authentication; for Active Directory scenarios one should go with sssd.

Unfortunately, my $distro still shipped with sssd 1.9.5, and you really do want 1.10 or newer because of added features in its Active Directory identity provider. While it is possible to access AD via pure LDAP, with or without Kerberos, you really should use the Active Directory provider for the reasons given above. See also Jakub Hrozek’s blog and his FreeIPA.org presentation for additional information on the new features.

So I built and packaged sssd (and Samba) myself — and it took me just a month…

Configuration of sssd for Active Directory will be continued in the next post.

Why Puppet should ship with official modules

Much has certainly already been said about Puppetforge. A year ago, we were promised at Puppetcamp Nuremberg that Puppetforge was likely to improve to a more usable level. But as of now, Puppetforge is much like GitHub: unless you already know where to look and what to take, you’re pretty much left on your own, to trial and error. Puppetforge does not really help in making decisions about which modules to try. Yes, it is a central location to shop for Puppet modules. But it’s more like a big Dropbox.

Of course, other software artefacts have the same or worse problems. For example, there is no central location for C/C++ libraries, so you wouldn’t even know where to begin looking, save for hoping to use the right Google keywords. Still, certain projects such as boost enjoy a certain popularity due to adoption by known software projects or word of mouth. But the difference is: such libraries enjoy a much different level of attention than any particular Puppet module. I’ll have a hard time promoting my particular implementation of, say, an ntp module, when there are twenty others.

In “appstores”, such as Google’s Play store, there are facilities that can give at least some advice as to which apps are worth trying. There is a download count, but considered alone it does not indicate quality. After all, nothing keeps larger groups of people from making poor decisions, lemming-style. That’s why customer reviews (hopefully) provide additional insights, although these can be subject to manipulation more or less easily.

Puppetforge has download counters, but it doesn’t have a comment system. The only way to assess whether this or that module could be a better candidate is the download count and, as a special case, possibly the author: Puppet modules published by Puppet Labs themselves might be considered official, popular and well-tested.

That, however, remains a mere assumption until proven. And it leads me to a question that may sound stupid but inevitably comes to my mind:

Why is there no official Puppet modules distribution?

Basically, the Puppet download alone is pretty much useless until you augment it with Puppet modules. To do that, I can go to Puppetforge and have the experience described above. I fail to see the sense behind that.

Yes, I can see that just because Puppet Labs puts their label on a module, that does not automatically make it better. But that’s not the point; the same argument would hold true for modules published on Puppetforge as well.

Shipping Puppet with a set of included, “default” Puppet modules would instead have a signaling effect. It would make clear what code developers should focus on and what patches for improvement should be made against. It is not so much about having the best solution. It is about stepping forward and filling a void that can seriously hinder Puppet adoption.

HomeOps: A call for the application of Devops principles at home, too

We have all gained some kind of experience in the IT world, in the way IT works and especially the way it doesn’t work. As an IT professional following recent trends and developments (you do continuously keep an eye on them, right?), you will certainly have learned about (or at least heard of) DevOps principles, an ever-growing call for a different mindset on development, operations and silos in companies in general.

Fostered by events such as the series of DevOpsDays conferences spreading around the world, the term “DevOps” has reached a state where it is not only widely misunderstood but also abused to a point where you have to be extra sceptical (a mere buzzword for HP, IBM products, an obscure job definition that distracts from the real challenge).

Many of us already live DevOps principles, or at least parts of them, if we’re lucky enough to work in a sufficiently agile and aware organization, be it for rather common or specific reasons. Now one reason that especially justifies the use of automation, as but one DevOps ingredient, will certainly be that, put casually, “there is no possible way we would have the time to rebuild this setup manually”. The persistent Cloud trend practically dictates a more rational approach. So far, so good.

But recently my home server died. That is, the machine at home running mail, file and print services. And guess what? There was no possible way that I had time to rebuild this setup manually!

This is a call for what I call “HomeOps” (in the absence of a more useful name — HomeDevOps? SOHOOps?). With “HomeOps”, I call for the extension of DevOps concepts to our home IT as far as possible.

Think about it: it feels like centuries ago that IT people had to deal with the management of IT entities at work only. The times when “Sysadmins” referred to “operating that mainframe at the company” and IT activities at home were limited to comparatively simple home computers are long, long gone.

Nowadays, professional IT staff effectively always deals with two, if not three networks at the same time:

  1. The company’s IT infrastructure.
  2. The (hopefully) always available Internet and our smartphones and tablets.
  3. One or more computers, tablets, NAS boxes, “smart” devices such as flatscreens with Internet access, Wifi access points and a router supplying the Internet connectivity at home.

For #1, we’ve learned to apply a DevOps mindset, automation tools, continuous X concepts etc., as discussed above.

For #2, I’m not talking about a need to maintain mobile networks; I’m talking about the implications of using a smartphone and the apps on it. Ever tried to back up your Android phone’s data? Luckily, Google can take care of the most important aspects such as phone numbers, calendar data etc. by promoting Cloud storage. If you trust Google, that is.

For #3, we do what? Face it:

  • Your laptop may have come pre-installed and you may use that installation. As an IT professional, you most probably don’t. How often do you reinstall and how much time does it cost?
  • You may be a Mac user and use Apple’s Time Capsule or you may use a Synology/QNAP/Thecus/Whatever NAS device that gives you a fancy GUI and makes the setup real easy. But what’s a backup worth if it never gets checked? Do you actually monitor its hard disks?
  • Your Internet router may come preconfigured by your ISP. Even if it does, how much fun is configuring port forwarding?

These are, of course, just examples and some of them may apply to your home scenario, some not. In my case, for example, a NAS alone would not be enough; I run a NAS-like device, but with an ordinary x86 Linux distribution. Which means it is just as much an instance that needs management as your cloud VM no. 83424, except that you’d manage it manually. But why?

The key question is: why for heaven’s sake should we do things differently at home?

Of course we know the answer: because the necessary efforts do not seem worth the advantages. And this is where I have come to disagree:

  • Yes, it is “just those few devices”. But the number of devices says nothing about their personal significance to your daily life. If you need to access urgent data, eg. to reply to the tax office, and the filesystem holding it is not available, have fun!
  • Yes, you may have backed up your data somewhere in the Cloud. This does not mean that setting up things at home in a recovery scenario just became a piece of cake.
  • Don’t be fooled by looking at disaster recovery scenarios (eg. failing hard disks) only. There is one thing that you’re guaranteed to do more often, and that is software updates. Unless you’re using a long-term support Linux distribution, you’ll be just as much a victim of its software lifecycle as in a company. Compare installing Ubuntu 29 and configuring everything by hand to installing Ubuntu 29 and running your favorite config management tool, which has just received Ubuntu 29 support by someone who needed it as well.
  • And, to some probably the strongest argument of all: how does “Some initial work now, much less work later on” sound to your spouse and your kids? Does your family understand why home IT is perceived as being less stable than eg. the telephone service?
  • Last but not least, playing at home with the same tools you use at work certainly won’t hurt in gaining additional experience.

Yes, I probably won’t go as far as to set up a continuous deployment toolchain at home (although one could even think about that). But I’m currently automating my home server with Puppet, and I’ll certainly blog more on my experiences in doing that, as well as on the total “HomeOps” concept that slowly begins emerging before my eyes. Clearly with the goal of a home IT that can rise like a phoenix from its ashes.

I’m not saying this whole HomeOps idea is a “wooza brand new concept”. Or “something big”. Or “something different”. I just find it useful to give things a name in order to talk about and discuss them.