Integrating Samba’s DNS server with existing dnsmasq installations

Active Directory encompasses not only LDAP and Kerberos but also DNS, and Microsoft does funny things with DNS (dynamic updates, special SRV records to locate hosts etc.). Running Samba as an Active Directory domain controller therefore means running either the built-in DNS server or bind9 with a special DLZ plugin.

dnsmasq integration has been discussed but seems to have been abandoned, not so much for technical reasons as for lack of real interest on both sides. There is at least this HOWTO that works around the technical issues by teaching dnsmasq the necessary SRV records manually, but even then you won’t have dynamic DNS updates the way Samba needs them, and it is more of a hack (definitely unsupported by the Samba team) than a viable solution.

Running dnsmasq as an alternative on the Samba host itself is not really the point, though. At least in my idea of SOHO networking, dnsmasq is predestined for embedded devices such as access points and routers, and it is accordingly the default DNS forwarder in OpenWrt. Having basic DNS resolution depend on a “higher-level” DNS service provided by Samba would contradict that concept. Besides, Samba’s DNS server would have to support every single feature that existing DNS servers (such as dnsmasq) already have, or bind would have to be used, a piece of software I do not particularly miss (think zone files).

Obviously I can’t achieve the desired isolation of a basic network service such as DNS from a productive service such as Samba with a single DNS zone, as there is no such thing as zone sharing. So I’ll need two DNS zones: mysite.foo.bar and either ad.mysite.foo.bar or mysite.ad.foo.bar. The latter choice would be preferable if we were to seriously use Active Directory features such as forests and sites, but it would also mean a “parallel forest” of “conventional” DNS zones and the need for a foo.bar DNS server that supports delegations. As Samba 4 currently supports running only a single Active Directory domain controller anyway, I’ll go with the former:

DNS zone            Managed by   Running on
mysite.foo.bar      dnsmasq      OpenWrt-based access point/router
ad.mysite.foo.bar   Samba        “Real” server

Now I do, of course, have only one DHCP service at my “site”. Technically it could supply multiple DNS servers, but you wouldn’t want that since you can’t control your clients’ resolvers’ behavior via DHCP (ie. when which DNS server is tried). And there’s no need to, because here comes the elegant part: all clients continue to receive the IP address of an OpenWrt device as their DNS server, which is authoritative for mysite.foo.bar. Requests for *.ad.mysite.foo.bar simply get delegated to the Samba host with a dnsmasq configuration such as the following:


# Local dnsmasq instance is responsible for
# mysite.foo.bar
domain=mysite.foo.bar
server=/mysite.foo.bar/

# DNS delegation for ad.mysite.foo.bar
server=/ad.mysite.foo.bar/192.168.0.1

# If rebind protection is on, this is
# required to avoid warnings on DNS
# rebinding attacks
rebind-domain-ok=ad.mysite.foo.bar

# Upstream DNS server, handles everything
# outside ad.mysite.foo.bar and mysite.foo.bar
server=192.168.0.254
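
A quick way to verify the delegation is to ask the dnsmasq instance for one of the SRV records Samba registers in the AD zone. The record name below is a standard Active Directory entry; 192.168.0.253 is a made-up address standing in for the OpenWrt device, and dig is assumed to be available:


dig @192.168.0.253 -t SRV _ldap._tcp.ad.mysite.foo.bar

The answer should point at the Samba host (192.168.0.1 in the configuration above).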

Note that having two DNS zones does not imply that you need two IP subnets. It’s perfectly fine to have both baz.mysite.foo.bar and baz.ad.mysite.foo.bar point at 192.168.0.1 and have a reverse lookup of the IP address resolve to baz.mysite.foo.bar, as long as you adjust the Kerberos client configuration accordingly (the rdns = false option described at the end of my sssd-ad configuration post).

This way, if the Samba server goes down, only the ad.mysite.foo.bar zone will be affected, not mysite.foo.bar as a whole. Neat :)

My SOHO network layer model

In my eyes, it makes sense to divide the elements that are part of a SOHO (small office/home office) network into one of two layers:

Basic network and productive services

In this model, if I were to speak about “the network” I’d mean what I call the basic network: all components that together constitute an independent, foundational layer centered around connectivity and, by comparison, low complexity (ie. no full-blown operating system on each device). This includes the physical LAN cabling (if present), network switches, print servers (usually integrated into the printers), WLAN access points and routers.

Because nowadays it is often essential for system administrators to have Internet access, be it for googling problems that pop up or for bootstrapping installations that download software directly from the ‘Net (eg. in disaster recovery scenarios when no local mirror is available any more), I consider DNS and DHCP services essential enough to be part of the basic network as well.

With the advent of flash-based embedded devices such as WLAN access points and routers, the availability of OpenWrt as a standardized Linux distribution for them and the low resource consumption of DNS/DHCP, migrating these services from hard disk-based servers onto access points/routers became feasible. After all, an access point running on flash memory is much less likely to fail than a full-blown server with hard disks as storage. The only part I’ve seen failing over the years with these devices is the $0.05 power supply.

The basic network is foundational in two ways: for one thing, it is independent, ie. it can stand on its own. For another, the productive services layer, which encompasses the more value-creating (to the end user) services such as File, Print and E-Mail, is stacked upon it. No basic network, no productive services. And at the same time: no productive services, no real value in the basic network.

Formulating such a model helps in making up your own mind and communicating with others, eg. about the question where a service such as NTP should be placed. What do you think?

Configuring sssd’s Active Directory provider

Following up on the previous post, here’s how we get sssd to actually provide access to our Samba-driven Active Directory.

I started with the instructions in the Samba wiki but these actually go beyond the minimum that is necessary. Let me also add some context to the individual components and settings involved.

How sssd’s components work together

sssd is quite modular: if you read the sssd.conf man page, you’ll learn about services and domains. You will also learn about different providers, such as the already mentioned Active Directory provider that we are going to use. Do not be fooled, however: providers are not mutually exclusive. For example, our Active Directory provider works together with the LDAP and Kerberos providers as shown here:

Individual sssd components working together

As a consequence, we’ll have to consider not only sssd-ad configuration directives but also some of those of sssd-ldap and sssd-krb5. And, because sssd-krb5 uses the Kerberos library, we’ll also have to consider /etc/krb5.conf.

Configuration explained

Without further ado, here’s an example of a minimal /etc/sssd/sssd.conf that takes advantage of autodiscovery:


[sssd]
config_file_version=2
services=nss, pam
domains=ad.mydomain.foo.bar

[domain/ad.mydomain.foo.bar]
id_provider=ad
access_provider=ad
dyndns_update=false
enumerate=true
ldap_id_mapping=true
krb5_realm=AD.MYDOMAIN.FOO.BAR
krb5_keytab=/etc/sssd/ad.mydomain.foo.bar.keytab

Setting id_provider and access_provider activates sssd-ad as identity provider (ie. the source for user and group information) and access provider (eg. checking whether a user is allowed access). However, it also activates it as authentication provider (ie. checking passwords) and chpass provider (ie. changing passwords), because id_provider’s value is the default for auth_provider, which in turn is the default for chpass_provider.

We do not specify ad_domain because the default is to use the configuration section’s name (minus the “domain/” part, of course). We do not specify ad_server either, because Samba’s DNS server has automagically set up SRV records for us that sssd-ad can use for service discovery.

I disabled dyndns_update for now because it gave me problems. Setting enumerate to true is debatable and recommended for small setups only, but you might want it for playing around with nested groups and seeing how they work. The default is false.

There’s no need to specify any of ldap_uri, ldap_search_base, ldap_sasl_mech, ldap_sasl_authid, ldap_user_* or ldap_group_*: sssd-ad will have taken care of these parameters for you.

ldap_id_mapping is set to true so that sssd itself takes care of mapping Windows SIDs to Unix UIDs. Otherwise the Active Directory must be able to provide POSIX extensions. If yours does, you can omit this option, of course.

It is obligatory to specify krb5_realm, which, by convention, is always upper-case and in most cases will be the DNS name of the Active Directory. krb5_keytab specifies the keytab file sssd will use to connect to Samba’s KDC (see also: What is a keytab file). The keytab file can be exported on the Samba server as per the Samba Wiki instructions.
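
For reference, exporting such a keytab on the domain controller boils down to a samba-tool call along these lines (a sketch: the principal name is just an example, use the principal(s) of the host that will run sssd):


samba-tool domain exportkeytab /etc/sssd/ad.mydomain.foo.bar.keytab --principal='SSSDHOST$'

If sssd runs on a different machine than the domain controller, copy the file over and make sure it is readable by root only.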

Again, krb5_server and krb5_kpasswd will already have been provided by sssd-ad for you.

As you can see, the net result is a much simpler configuration than with sssd-ldap and sssd-krb5 alone. Provided you’ve followed the other necessary steps, eg. PAM and NSS configuration (again, see eg. the Samba Wiki instructions), you can now try eg. getent passwd and should be able to see your Active Directory users.
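
For the NSS part, that typically amounts to adding the sss module to /etc/nsswitch.conf, roughly like this (your distribution may use “files” instead of “compat”):


passwd: compat sss
group: compat sss

For the PAM part, most distributions ship a helper such as pam-auth-update or authconfig that enables pam_sss for you; the exact steps vary too much between distributions to give a universal example here.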

Debugging hints

I had a hard time getting everything working, receiving “GSSAPI Error: Miscellaneous failure (Server not found in Kerberos database)” messages all the time. Some hints:

  1. Start sssd manually and in debug mode with sssd -i -d 5, choosing the debug level as appropriate.
  2. One cannot stress this enough: get your DNS working properly. In my case, the DHCP service advertising the DNS servers to use ran on a separate machine, and of course I forgot to specify the IP address of the Samba server there…
  3. The Kerberos client library will do one thing you might not immediately be aware of: reverse DNS lookups. If, for example, you decided to use a separate DNS subdomain for Active Directory (which you definitely do want to do) but let your hostname sambahost.ad.foo.bar point at 1.2.3.4, and 1.2.3.4 in turn resolves to sambahost.foo.bar, things won’t work unless you specify rdns = false in your /etc/krb5.conf (see the snippet after this list).

    By the way, the only way to really debug such issues is either to run Wireshark or to look at Samba’s logfiles — the client itself won’t tell you it does such things (reverse lookups and their results) even if you strangle it.
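
A minimal sketch of the relevant /etc/krb5.conf section, using the example realm from the sssd.conf above:


[libdefaults]
default_realm = AD.MYDOMAIN.FOO.BAR
rdns = false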

Update 08.02.2014: It seems that krb5_realm is not strictly necessary. The sssd-ad provider will take care of that for you:

(Sat Feb  8 23:55:13 2014) [sssd[be[ad.mydomain.foo.bar]]] [ad_set_ad_id_options] (0x0100): Option krb5_realm set to AD.MYDOMAIN.FOO.BAR

Making Samba users available locally to Linux systems

In the past, we used to integrate Samba and “native” Linux users by using a single password backend, often LDAP:

User authentication of Linux system and Samba users against LDAP

This led to several moments of pain: the deficiencies surrounding group membership in the NIS and rfc2307bis schemas (see eg. here and here), the need to define scripts for administrative actions such as adding users, and the not particularly intuitive LDAP server setup in newer OpenLDAP versions.

Samba has for quite some time offered an alternative solution, by allowing for two separate user databases, the Linux passwd/shadow one and its own, and providing the Linux system with access to its user database through the combination of the winbind daemon and suitable PAM and NSS modules:

User authentication of Linux system and Samba users through winbind and its PAM/NSS modules

However, as I pointed out before, the Samba4 version of winbind currently lacks some important functionality.

Luckily, this Samba wiki page points out two alternatives (don’t be fooled by this wiki page, which currently lists only one):

  • sssd / pam_sss / nss_sss
  • nslcd / pam_ldap / nss_ldap

Note that nslcd/pam_ldap/nss_ldap is not PADL’s pam_ldap / nss_ldap (now considered dead) but a fork/rewrite. Even this one, however, draws controversy.

Yet, I found it hard to make a decision in the light of rather opinionated information on the Web and rather sparse information on the Samba Wiki. So I tried to come up with a comparison table myself, focusing on integration with Samba 4 and Active Directory features. Note that the table comes with a level of uncertainty; feel free to send me corrections.

Feature | sssd / pam_sss / nss_sss (1.11.3) | nslcd / pam_ldap / nss_ldap (0.9.2)
Supports unencrypted connections via plain LDAP? | Yes | Yes
Supports encrypted connections via Kerberos? | Yes | Yes
SASL required? | No | Yes
Requires explicit Kerberos ticket renewal (eg. through a background k5start process)? | No | Yes
Retrieval of POSIX data (UID, GID, home directory, login shell) from an Active Directory provisioned with the --rfc-2307 option | Optional (if the AD provider is used), required (if the pure LDAP provider is used) | Required
Separate backends for user/group information and authentication? | Yes | No (but different LDAP sources)
Host must be joined to the domain? | No (but advantageous in certain scenarios) | No
Supports site-based discovery? | Yes | No
Supports Active Directory’s Global Catalog? | Yes | No
Can resolve Active Directory’s global and universal groups? | Yes | No
Can resolve group members from trusted domains? | Yes | No
Leverages Active Directory’s tokenGroups attribute? | Yes (also in addition to POSIX attributes) | No
Offline authentication possible? | Yes | Yes?
Latest release | 2013 | 2013

The conclusion: nslcd is a choice for pure LDAP authentication; for Active Directory scenarios one should go with sssd.

Unfortunately, my $distro still shipped with sssd 1.9.5, and you really do want 1.10 or newer because of the added features in its Active Directory identity provider. While it is possible to access AD via pure LDAP, with or without Kerberos, you really should use the Active Directory provider for the reasons given above. See also Jakub Hrozek’s blog and his FreeIPA.org presentation for additional information on the new features.

So I built and packaged sssd (and Samba) myself — and it took me just a month…

Configuration of sssd for Active Directory will be continued in the next post.

Why Puppet should ship with official modules

Much has certainly already been said about Puppetforge. A year ago, we were promised at Puppetcamp Nuremberg that Puppetforge was likely to improve to a more usable level. But as of now, Puppetforge is much like GitHub: unless you already know where to look and what to take, you’re pretty much left on your own, to trial and error. Puppetforge does not really help in making decisions on which modules to try. Yes, it is a central location to shop for Puppet modules. But it’s more like a big Dropbox.

Of course, other software artefacts have the same or worse problems. For example, there is no central location for C/C++ libraries, so you wouldn’t even know where to begin looking, save for hoping to use the right Google keywords. Still, certain projects such as boost enjoy a certain popularity due to adoption by well-known software projects or word of mouth. But the difference is: such libraries enjoy a much different level of attention than any particular Puppet module. I’ll have a hard time promoting my particular implementation of, say, an ntp module when there are twenty others.

In “appstores”, such as Google’s Play store, there are facilities that can give at least some advice as to which apps are worth trying. There is a download count, but considered alone it does not indicate quality. After all, nothing keeps larger groups of people from making poor decisions lemming-style. That’s why customer reviews (hopefully) provide additional insights, although these can be manipulated more or less easily.

Puppetforge has download counters, but it doesn’t have a comment system. The only ways to assess whether this or that module could be a better candidate are the download count and, as a special case, possibly the author: Puppet modules published by Puppet Labs themselves might be considered official, popular and well-tested.

That, however, remains a mere assumption until proven. And it leads me to a question that may sound stupid but inevitably comes to my mind:

Why is there no official Puppet modules distribution?

Basically, the Puppet download alone is pretty much useless until you augment it with Puppet modules. To do that, I can go to Puppetforge and have the experience described above. I fail to see the sense behind that.

Yes, I can see that just because Puppet Labs puts their label on a module, that does not automatically make it better. But that’s not the point; the same argument would hold true for modules published on Puppetforge as well.

Shipping Puppet with a set of included, “default” Puppet modules would instead have a signaling effect. It would make clear what code developers should focus on and what patches for improvement should be made against. It is not so much about having the best solution. It is about stepping forward and filling a void that can seriously hinder Puppet adoption.

You’ve always been a DevOps at home – sort of

In a previous post, I made a call for HomeOps, the application of DevOps principles to SOHO (small office/home office) scenarios as well. I listed a number of arguments. Here’s another one.

You have already been practising a sort of DevOps approach at home since the very beginning.

Or to be more precise: while you may not have been exercising the professional tools and methods we have come to associate with a DevOps mindset, you will nevertheless have applied the gap-bridging mindset.

This claim includes three aspects:

  1. that it takes a Dev
  2. that it takes an Op
  3. that you’ve fulfilled both of these roles simultaneously

Let’s begin with the second point. Trivially, what you’re doing at home is fulfilling the role of an Op. You might not have a dedicated monitoring system in place that pages you, but you’ll actually have a much more advanced monitoring instance that will ring on your mobile the very moment your file service goes down: your family. (Or, if we consider the SO in SOHO, your colleagues.) If you’re reading this article, you’re likely to be the one in charge of getting that file storage back up and running. If not for them, then out of your own interest. If that’s not Ops, then what? Yes, you do not have the professional tools known from your employer, but as I said, I’m talking about the mindset.

And you’ve been a Dev. Still enjoying the traps of a distribution upgrade? I mentioned the example of Berkeley DB updates that required converting on-disk databases, of course _after_ installing the new OS release. Ancient times, you say? Think about updating from Apache 2.2 to Apache 2.4.

These are all examples that require an intervention. Of course, with n=1 servers, the ops guy in you might be tempted to do the necessary steps manually instead of taking a “fire and forget” perspective. But then again, why would you have to do the same steps as thousands of other IT professionals? After all, the update pain is roughly the same. So why not let ONE of us, or more exactly the dev guy in one of us, do the work in a manner that can be reused by us all for collective advantage?

The last point is rather trivial, too. I don’t know if you’ve delegated IT responsibility to members of your family or your office, but in all SOHO scenarios I’ve met there is exactly one guy proficient enough to handle all IT business. And that guy does not exactly divide himself between the dev half and the op half. The dev in him does not meet his goals if the op in him does not meet his, for the very reason that their goals are, in the end, pretty much identical.

Actually, when I matured from a young boy’s perspective influenced by home computing to understanding the way business IT is structured, I never really understood the divide between devs and ops that seems to dominate the corporate world. But then again, that must be due to someone who thought it would be beneficial to divide responsibility.

HomeOps: A call for the application of DevOps principles at home, too

We have all gained some kind of experience in the IT world, in the way IT works and especially the way it doesn’t work. As an IT professional following recent trends and developments (you do continuously keep an eye on them, right?), you will certainly have learned about (or at least heard of) DevOps principles, an ever-growing call for a different mindset on development, operations and silos in companies in general.

Fostered by events such as the series of DevOpsDays conferences spreading around the world, the term “DevOps” has reached a state where it is not only widely misunderstood but also abused to a point where you have to be extra sceptical (a mere buzzword for HP, IBM products, an obscure job definition that distracts from the real challenge).

Many of us already live DevOps principles, or at least parts of them, if we’re lucky enough to work in a sufficiently agile and aware organization, be it for rather common or specific reasons. Now, one reason that especially justifies the use of automation, as but one DevOps ingredient, will certainly be that, put casually, “there is no possible way we would have the time to rebuild this setup manually”. The persistent Cloud trend practically dictates a more rational approach. So far, so good.

But recently my home server died. That is, the machine at home running mail, file and print services. And guess what? There was no possible way that I had time to rebuild this setup manually!

This is a call for what I call “HomeOps” (in the absence of a more useful name — HomeDevOps? SOHOOps?). With “HomeOps”, I call for the extension of DevOps concepts to our home IT as far as possible.

Think about it: it has been ages since IT people had to deal with the management of IT entities at work only. The times when “Sysadmins” referred to “operating that mainframe at the company” and IT activities at home were limited to comparatively simple home computers are long, long gone.

Nowadays, professional IT staff effectively always deals with two, if not three, networks at the same time:

  1. The company’s IT infrastructure.
  2. The (hopefully) always available Internet and our smartphones and tablets.
  3. One or more computers, tablets, NAS boxes, “smart” devices such as flatscreens with Internet access, Wifi access points and a router supplying the Internet connectivity at home.

For #1, we’ve learned to apply a DevOps mindset, automation tools, continuous X concepts etc., as discussed above.

For #2, I’m not talking about a need for maintenance of mobile networks; I’m talking about the implications of using a smartphone and the apps on it. Ever tried to back up your Android phone’s data? Luckily, Google can take care of the most important aspects such as phone numbers, calendar data etc. by promoting Cloud storage. If you trust Google, that is.

For #3, we do what? Face it:

  • Your laptop may have come pre-installed and you may use that installation. As an IT professional, you most probably don’t. How often do you reinstall and how much time does it cost?
  • You may be a Mac user and use Apple’s Time Capsule, or you may use a Synology/QNAP/Thecus/Whatever NAS device that gives you a fancy GUI and makes the setup real easy. But what is a backup worth if it is never verified? Do you actually monitor its hard disks?
  • Your Internet router may come preconfigured by your ISP. Even if it does, how much fun is configuring port forwarding?

These are, of course, just examples, and some of them may apply to your home scenario, some not. In my case, for example, a NAS alone would not be enough; I run a NAS-like device, but with an ordinary x86 Linux distribution. Which means it is just as much an instance that needs management as your cloud VM no. 83424, except that you’d manage it manually. But why?

The key question is: why for heaven’s sake should we do things differently at home?

Of course we know the answer: because the efforts necessary do not seem worth the advantages. And this is where I have come to disagree:

  • Yes, it is “just those few devices”. But the number of devices says nothing about their personal significance to your daily life. If you need to access urgent data, eg. to reply to the tax office, and the filesystem holding it is not available, have fun!
  • Yes, you may have backed up your data somewhere in the Cloud. This does not mean that setting up things at home in a recovery scenario just became a piece of cake.
  • Don’t be fooled by looking at disaster recovery scenarios (eg. failing hard disks) only. There is one thing that you’re guaranteed to do more often, and that is software updates. Unless you’re using a long-term support Linux distribution, you’ll be just as much a victim of its software lifecycle as in a company. Compare installing Ubuntu 29 and configuring everything by hand to installing Ubuntu 29 and running your favorite config management tool, which has just received Ubuntu 29 support from someone who needed it as well.
  • And, to some probably the strongest argument of all: how does “some initial work now, much less work later on” sound to your spouse and your kids? Does your spouse understand why home IT is perceived as less stable than eg. the telephone service?
  • Last but not least, playing at home with the same tools you use at work certainly won’t hurt in gaining additional experience.

Yes, I probably won’t go as far as setting up a continuous deployment toolchain at home (although one could even think about that). But I’m currently automating my home server with Puppet, and I’ll certainly blog more on my experiences in doing that, as well as on the overall “HomeOps” concept that is slowly emerging before my eyes. Clearly with the goal of a home IT that can rise like a phoenix from its ashes.

I’m not saying this whole HomeOps idea is a “wooza brand new concept”. Or “something big”. Or “something different”. I just find it useful to give things a name in order to talk about and discuss them.

Limited winbind usability with Samba 4

Almost exactly a year ago, the first official Samba 4 release saw the light of day, bringing with it Active Directory Domain Controller support as one of its biggest merits. All relevant Windows APIs had been implemented, thus allowing all user management to be done through Windows tools such as the “Active Directory Users and Computers” MMC console.

This does of course whet the appetite for moving all users into the AD and letting the Linux system authenticate against it as well, a scenario that has been supported through the use of Samba’s winbind for some time now.

As the new “samba” master binary coordinates the other daemons itself, there is no need to start winbindd manually any more. Editing /etc/nsswitch.conf as follows:


passwd: compat winbind
group: compat winbind

makes AD user accounts become visible to the system:


# getent passwd
[...]
vscan:x:65:487:Vscan account:/var/spool/amavis:/bin/false
fetchmail:x:486:2:mail retrieval daemon:/var/lib/fetchmail:/bin/false
BS3\Administrator:*:0:100::/home/%U:/bin/bash
BS3\Guest:*:3000011:3000012::/home/%U:/bin/bash

Note how this output shows two things:

  • “winbind use default domain = yes” does not work: user names are returned including the Samba domain name.
  • Setting “template homedir” does not work: in the example above, it was set to /home/%U, of course, but the “%U” placeholder does not get replaced. Strangely, even if you explicitly configure the default value, /home/%D/%U, this won’t work; comment out the option completely and that very default will work. (The smb.conf excerpt after this list shows the settings as tested.)
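
For reference, here is roughly what the relevant smb.conf additions looked like in my tests (a sketch of the [global] settings on top of the provisioned configuration; template shell is included only for completeness):


[global]
winbind use default domain = yes
template homedir = /home/%U
template shell = /bin/bash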

Unfortunately, this effectively makes Samba 4 (tested with version 4.1.2, to be precise) currently quite unusable for the intended purpose.

The first issue has already been reported as Bugzilla #9780. For the second issue there are at least two tickets, Bugzilla #9839 and Bugzilla #9898. According to a comment in the former, the winbindd used in Samba 4 lacks support for these placeholders and would require replacement by a combined (Samba 3/Samba 4) winbindd implementation. I do not know of any roadmap for that.

Hello Intel, thanks for shutting down your mainboard business

Three weeks ago I dared to flash the BIOS of my home server’s Intel DQ77KB mainboard to address a number of smaller issues related to BIOS settings not being applied. Apart from the fact that anno 2013 Intel still requires you to create a bootable USB stick, a process complicated enough that it makes you wonder how companies seriously expect users to master it, while other companies such as Asus have been including the flash utility as part of the BIOS itself for a long time: the flash utility indicated success, but afterwards the board was dead as a brick. POST beep codes indicated memory trouble, but neither changing memory modules nor clearing the CMOS and trying the BIOS recovery facilities helped; the board was a case for technical service. Tough luck to have no mail system any more, especially if you have dental surgery scheduled for the next day.

But here comes the really amusing part: despite Intel’s announcement that it would exit desktop mainboard production only slowly, the DQ77KB was no longer available anywhere in Germany. Intel silently shut down everything so quickly that a product of the current lineup faces supply problems, as confirmed in Intel’s forums and by commercial distributors. In my eyes, a shame for a leading technology company.

In my case I was forced to switch to the current Haswell generation because the market offers no other Thin Mini-ITX mainboards with a 19V power supply and the Q77 chipset. I went for the Asus Q87T mainboard, a decision I have not regretted. I can only recommend this board — finally we begin to see some real advantages of a UEFI BIOS.
