Configuring sssd’s Active Directory provider

Following up on the previous post, here’s how we get sssd to actually provide access to our Samba-driven Active Directory.

I started with the instructions in the Samba wiki but these actually go beyond the minimum that is necessary. Let me also add some context to the individual components and settings involved.

How sssd’s components work together

sssd is quite modular: if you read the sssd.conf man page, you’ll learn about services and domains. You will also learn about different providers, such as the already mentioned Active Directory provider that we are going to use. Do not be fooled, however: providers are not mutually exclusive. For example, our Active Directory provider works together with the LDAP and Kerberos providers as shown here:

Individual sssd components working together

As a consequence, we’ll have to consider not only sssd-ad configuration directives but also some of those of sssd-ldap and sssd-krb5. And, because sssd-krb5 uses the Kerberos library, we’ll also have to consider /etc/krb5.conf.

Configuration explained

Without further ado, here’s an example for a minimal /etc/sssd.conf that takes advantage of autodiscovery:

services=nss, pam


Setting id_provider and access_provider activates sssd-ad as identity provider (ie. the source for user and group information) and access provider (ie. it checks whether a user is allowed access). However, it also activates it as authentication provider (ie. it checks passwords) and chpass provider (ie. it changes passwords), because id_provider's value is the default for auth_provider, which in turn is the default for chpass_provider.

We do not specify ad_domain because the default is to use the configuration section’s name (minus the “domain/” part, of course). We do not specify ad_server either because Samba’s DNS server has automagically set up SRV records for us that sssd-ad can use for service discovery.

I disabled dyndns_update for now because it gave me problems. Setting enumerate to true is debatable and recommended for small setups only, but you might want it for playing around with nested groups and see how they work. The default is false.

There’s no need to specify any of ldap_uri, ldap_search_base, ldap_sasl_mech or ldap_sasl_authid, nor the ldap_user_* and ldap_group_* options: sssd-ad will have taken care of these parameters for you.

ldap_id_mapping is set to true so that sssd itself takes care of mapping Windows SIDs to Unix UIDs. Otherwise the Active Directory must be able to provide POSIX extensions. If yours does, you can omit this option, of course.
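To illustrate the idea behind ID mapping, here is a toy Python sketch. This is explicitly not sssd’s actual algorithm; sssd derives deterministic slices of a configurable ID range from a hash of the domain portion of the SID, which this merely imitates with made-up constants:

```python
# Toy illustration of SID-to-UID mapping (NOT sssd's real algorithm):
# hash the domain part of the SID into a slice of a configured ID
# range, then use the RID as the offset within that slice.
import hashlib

RANGE_MIN = 200000   # hypothetical start of the mapped ID space
SLICE_SIZE = 200000  # hypothetical number of IDs reserved per domain
NUM_SLICES = 16      # hypothetical number of slices

def map_sid_to_uid(sid):
    """Map a Windows SID like S-1-5-21-X-Y-Z-1104 to a Unix UID."""
    domain_sid, rid = sid.rsplit("-", 1)
    # Derive a stable slice number from the domain part of the SID
    digest = hashlib.sha1(domain_sid.encode("ascii")).hexdigest()
    slice_no = int(digest, 16) % NUM_SLICES
    return RANGE_MIN + slice_no * SLICE_SIZE + int(rid)
```

The point is determinism: every host running the same algorithm with the same configuration maps the same SID to the same UID, without any POSIX attributes stored in the directory.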

It is obligatory to specify krb5_realm, which, by convention, is always upper-case and in most cases will be the DNS name of the Active Directory. krb5_keytab specifies the keytab file sssd will use to connect to Samba’s KDC (see also: What is a keytab file). The keytab file can be exported on the Samba server as per the Samba Wiki instructions.

Again, krb5_server and krb5_kpasswd will have already been provided by sssd-ad for you.
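Putting the directives discussed above together, a complete minimal /etc/sssd.conf might look like this (the AD.EXAMPLE.COM / ad.example.com names are hypothetical placeholders):

```ini
[sssd]
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
; auth_provider and chpass_provider default to id_provider's value

dyndns_update = false
enumerate = true          ; debatable; recommended for small setups only

ldap_id_mapping = true    ; let sssd map SIDs to UIDs itself

krb5_realm = AD.EXAMPLE.COM
krb5_keytab = /etc/sssd/sssd.keytab
```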

As you can see, the net result is a much simpler configuration than with sssd-ldap and sssd-krb5 alone. Provided you’ve followed the other necessary steps, eg. PAM and NSS configuration (again, see the Samba Wiki instructions), you can now run getent passwd and should see your Active Directory users.

Debugging hints

I had a hard time getting everything working, receiving “GSSAPI Error: Miscellaneous failure (Server not found in Kerberos database)” error messages all the time. Some hints:

  1. Start sssd manually and in debug mode with sssd -i -d 5, choosing the debug level as appropriate.
  2. It cannot be stressed enough: get your DNS working properly. In my case, the DHCP service advertising the DNS servers to use ran on a separate machine, and of course I forgot to specify the IP address of the Samba server there…
  3. The Kerberos client library will do one thing you might not immediately be aware of: reverse DNS lookups. If, for example, you decided to use a separate DNS subdomain for Active Directory (which you definitely do want to do) but your hostname points at an IP address that reverse-resolves to a name outside that subdomain, things won’t work unless you specify rdns = false in your /etc/krb5.conf.

    By the way, the only way to really debug such issues is either to run Wireshark or to look at Samba’s logfiles — the client itself won’t tell you it does such things (reverse lookups and their results) even if you strangle it.
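For reference, disabling reverse lookups is a one-liner in /etc/krb5.conf (the realm name is a hypothetical placeholder):

```ini
[libdefaults]
    default_realm = AD.EXAMPLE.COM
    # Do not use reverse DNS when canonicalizing KDC/server hostnames
    rdns = false
```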

Update 08.02.2014: It seems that krb5_realm is not strictly necessary. The sssd-ad provider will take care of that for you:

(Sat Feb  8 23:55:13 2014) [sssd[be[]]] [ad_set_ad_id_options] (0x0100): Option krb5_realm set to AD.MYDOMAIN.FOO.BAR

Making Samba users available locally to Linux systems

In the past, we used to integrate Samba and “native” Linux users by using a single password backend, often LDAP:

User authentication of Linux system and Samba users against LDAP

This led to several moments of pain: from the deficiencies surrounding group membership and the NIS and rfc2307bis schemas (see eg. here and here), over the need to define scripts for administrative actions such as adding users, to the not particularly intuitive LDAP server setup in newer OpenLDAP versions.

Samba has for quite some time offered an alternative solution: it allows for two separate user databases (the Linux passwd/shadow one and its own) and provides the Linux system with access to its own user database through the combination of the winbind daemon and suitable PAM and NSS modules:

User authentication of Linux system and Samba users via winbind

However, as I pointed out before, the Samba4 version of winbind currently lacks some important functionality.

Luckily, this Samba wiki page points out two alternatives (don’t be fooled by this wiki page, which currently lists only one):

Note that nslcd/pam_ldap/nss_ldap is not PADL’s now-considered-dead pam_ldap / nss_ldap but a fork/rewrite. Even this one, however, draws controversy.

Yet I found it hard to make a decision in the light of rather opinionated information on the Web and rather sparse information on the Samba Wiki. So I tried to come up with a comparison table myself, focussing on integration with Samba 4 and Active Directory features. Note that the table comes with a level of uncertainty; feel free to send me corrections.

| | sssd / pam_sss / nss_sss (1.11.3) | nslcd / pam_ldap / nss_ldap (0.9.2) |
|---|---|---|
| Supports unencrypted connections via plain LDAP? | Yes | Yes |
| Supports encrypted connections via Kerberos? | Yes | Yes |
| SASL required? | No | Yes |
| Requires explicit Kerberos ticket renewal, eg. through a background k5start process? | No | Yes |
| Retrieval of POSIX data (UID, GID, home directory, login shell) from an Active Directory provisioned with the --rfc-2307 option | Optional (if AD provider is used), required (if pure LDAP provider is used) | Required |
| Separate backends for user/group information and authentication | Yes | No (but different LDAP sources) |
| Host must be joined to the domain? | No (but advantageous in certain scenarios) | No |
| Supports site-based discovery? | Yes | No |
| Supports Active Directory's Global Catalog? | Yes | No |
| Can resolve Active Directory's global and universal groups? | Yes | No |
| Can resolve group members from trusted domains? | Yes | No |
| Leverages Active Directory's tokenGroups attribute? | Yes (also in addition to POSIX attributes) | No |
| Offline authentication possible? | Yes | Yes? |
| Latest release | 2013 | 2013 |

The conclusion: nslcd is a choice for pure LDAP authentication; for Active Directory scenarios one should go with sssd.

Unfortunately, my $distro still shipped with sssd 1.9.5, and you really do want 1.10 or newer because of the added features in its Active Directory identity provider. While it is possible to access AD via pure LDAP, with or without Kerberos, you really should use the Active Directory provider for the reasons given above. See also Jakub Hrozek’s blog and his presentation for additional information on the new features.

So I built and packaged sssd (and Samba) myself — and it took me just a month…

Configuration of sssd for Active Directory will be continued in the next post.

Why Puppet should ship with official modules

Much has certainly already been said about Puppetforge. A year ago at Puppetcamp Nuremberg, we were promised that Puppetforge was likely to improve to a more usable level. But as of now, Puppetforge is much like GitHub: unless you already know where to look and what to take, you’re pretty much left on your own, to trial and error. Puppetforge does not really help in making decisions on which modules to try. Yes, it is a central location to shop for Puppet modules. But it’s more like a big Dropbox.

Of course, other software artefacts have the same or worse problems. For example, there is no central location for C/C++ libraries, so you wouldn’t even know where to begin looking, save for hoping to use the right Google keywords. Still, certain projects such as Boost enjoy a certain popularity due to adoption by well-known software projects or word of mouth. But the difference is: such libraries enjoy a much different level of attention than any particular Puppet module. I’ll have a hard time promoting my particular implementation of, say, an ntp module when there are twenty others.

In “app stores” such as Google’s Play store, there are facilities that give at least some advice as to which apps are worth trying. There is a download count, but considered alone it does not indicate quality: after all, nothing keeps larger groups of people from making poor decisions lemming-style. That’s why customer reviews (hopefully) provide additional insights, although these can be subject to manipulation more or less easily.

Puppetforge has download counters, but it doesn’t have a comment system. The only ways to assess whether this or that module could be a better candidate are the download count and, as a special case, possibly the author: Puppet modules published by Puppet Labs themselves might be considered official, popular and well-tested.

That, however, remains a mere assumption until proven. And it leads me to a question that may sound stupid but inevitably comes to my mind:

Why is there no official Puppet modules distribution?

Basically, the Puppet download alone is pretty much useless until you augment it with Puppet modules. To do that, I can go to Puppetforge and have the experience described above. I fail to see the sense behind that.

Yes, I can see that just because Puppet Labs puts their label on a module, that does not automatically make it better. But that’s not the point; the same argument holds true for modules published on Puppetforge as well.

Shipping Puppet with a set of included “default” Puppet modules would instead have a signaling effect. It would make clear what code developers should focus on and what patches for improvement should be made against. It is not so much about having the best solution. It is about stepping forward and filling a void that can seriously hinder Puppet adoption.

You’ve always been a DevOps at home – sort of

In a previous post, I was making a call for HomeOps, the application of DevOps principles to SOHO (small and home office) scenarios as well. I’ve listed a number of arguments. Here’s another one.

You have been practising sort of a DevOps approach at home already since the very beginning.

Or to be more precise: while you may not have been exercising the professional tools and methods we have come to associate with a DevOps mindset, you will nevertheless have applied the gap-bridging mindset.

This claim includes three aspects:

  1. that it takes a Dev
  2. that it takes an Op
  3. that you’ve fulfilled both of these roles simultaneously

Let’s begin with the second point. Trivially, what you’re doing at home is fulfilling the role of an Op. You might not have a dedicated monitoring system in place that alerts you on your pager, but you’ll actually have a much more advanced monitoring instance that will ring on your mobile the very moment your file service goes down: your family. (Or, if we consider the SO in SOHO, your colleagues.) If you’re reading this article, you’re likely to be the one in charge of getting that file storage back up and running. If not for them, then out of your own interest. If that’s not Ops, then what? Yes, you do not have the professional tools known from your employer, but as I said, I’m talking about the mindset.

And you’ve been a Dev. Still enjoying the traps of a distribution upgrade? I mentioned the example of Berkeley DB updates that required converting on-disk databases, of course _after_ installing the new OS release. Ancient times, you say? Think about updating from Apache 2.2 to Apache 2.4.

These are examples that all require an intervention. Of course, with n=1 servers, the ops guy in you might be tempted to do the necessary steps manually instead of taking a “fire and forget” perspective. But then again, why would you have to do the same steps as thousands of other IT professionals? After all, the update pain is roughly the same. So why not let ONE of us, or more exactly the dev guy in one of us, do the work in a manner that can be reused by us all for collective advantage?

The last point is rather trivial, too. I don’t know if you’ve delegated IT responsibility to members of your family or your office, but in all SOHO scenarios I’ve seen there is exactly one guy proficient enough to handle all IT business. And that guy does not exactly divide himself into a dev half and an op half. The dev in him does not meet his goals if the op in him does not meet his. For the very reason that their goals are, in the end, pretty much identical.

Actually, when I matured from a young boy’s perspective influenced by home computing to understanding the way business IT is structured, I never really understood the divide between devs and ops that seems to dominate the corporate world. But then again, that must be due to someone who thought it would be beneficial to divide responsibility.

HomeOps: A call for the application of Devops principles at home, too

We have all gained some kind of experience in the IT world, in the way IT works and especially the way it doesn’t. As an IT professional following recent trends and developments (you do continuously keep an eye on them, right?), you will certainly have learned about (or at least heard of) DevOps principles, an ever-growing call for a different mindset on development, operations and silos in companies in general.

Fostered by events such as the series of DevOpsDays conferences spreading around the world, the term “DevOps” has reached a state where it is not only widely misunderstood but also abused to a point where you have to be extra sceptical (a mere buzzword for HP, IBM products, an obscure job definition that distracts from the real challenge).

Many of us already live DevOps principles, or at least parts of them, if we’re lucky enough to work in a sufficiently agile and aware organization, be it for rather common or specific reasons. One reason that especially justifies automation as but one DevOps ingredient will certainly be that, put casually, “there is no possible way we would have the time to rebuild this setup manually”. The pervasive Cloud trend practically dictates a more rational approach. So far, so good.

But recently my home server died. That is, the machine at home running mail, file and print services. And guess what? There was no possible way that I had time to rebuild this setup manually!

This is a call for what I call “HomeOps” (in the absence of a more useful name — HomeDevOps? SOHOOps?). With “HomeOps”, I call for the extension of DevOps concepts to our home IT as far as possible.

Think about it: the times when IT people had to deal with the management of IT entities at work only are long, long gone. “Sysadmin” no longer means “operating that mainframe at the company” while IT activities at home stay limited to comparatively simple home computers.

Nowadays, professional IT staff effectively always deal with two, if not three, networks at the same time:

  1. The company’s IT infrastructure.
  2. The (hopefully) always available Internet and our smartphones and tablets
  3. One or more computers, tablets, NAS boxes, “smart” devices such as flatscreens with Internet access, Wifi access points and a router supplying the Internet connectivity at home.

For #1, we’ve learned to apply a DevOps mindset, automation tools, continuous X concepts etc., as discussed above.

For #2, I’m not talking about a need to maintain mobile networks; I’m talking about the implications of using a smartphone and the apps on it. Ever tried to back up your Android phone’s data? Luckily, Google can take care of the most important aspects such as phone numbers, calendar data etc. by promoting Cloud storage. If you trust Google, that is.

For #3, we do what? Face it:

  • Your laptop may have come pre-installed and you may use that installation. As an IT professional, you most probably don’t. How often do you reinstall and how much time does it cost?
  • You may be a Mac user with Apple’s Time Capsule, or you may use a Synology/QNAP/Thecus/whatever NAS device that gives you a fancy GUI and makes the setup real easy. But what’s a backup worth that does not get checked? Do you actually monitor its hard disks?
  • Your Internet router may come preconfigured by your ISP. Even if it does, how much fun is configuring port forwarding?

These are, of course, just examples; some of them may apply to your home scenario, some not. In my case, for example, a NAS alone would not be enough: I run a NAS-like device, but with an ordinary x86 Linux distribution. Which means it is just as much an instance that needs management as your cloud VM no. 83424, except that you’d manage it manually. But why?

The key question is: why for heaven’s sake should we do things differently at home?

Of course we know the answer: because the efforts necessary do not seem worth the advantages. And this is where I’ve come to disagree:

  • Yes, it is “just those few devices”. But the number of devices says nothing about their personal significance to your daily life. If you need to access urgent data, eg. to reply to the tax office, and the filesystem holding it is not available, have fun!
  • Yes, you may have backed up your data somewhere in the Cloud. This does not mean that setting things up at home in a recovery scenario just became a piece of cake.
  • Don’t be fooled by looking at disaster recovery scenarios (eg. failing hard disks) only. There is one thing that you’re guaranteed to do more often, and that is software updates. Unless you’re using a long-term support Linux distribution, you’ll be just as much a victim of its software lifecycle as in a company. Compare installing Ubuntu 29 and configuring everything by hand to installing Ubuntu 29 and running your favorite config management tool, which has just received Ubuntu 29 support from someone who needed it as well.
  • And, to some probably the strongest argument of all: how does “some initial work now, much less work later on” sound to your spouse and your kids? Does your spouse understand why home IT is perceived as being less stable than eg. the telephone service?
  • Last but not least, playing at home with the same tools you use at work certainly won’t hurt in gaining additional experience.

Yes, I probably won’t go as far as setting up a continuous deployment toolchain at home (although one could even think about that). But I’m currently automating my home server with Puppet, and I’ll certainly blog more on my experiences in doing that, as well as on the total “HomeOps” concept that is slowly beginning to emerge before my eyes. Clearly with the goal of a home IT that can rise like a phoenix from its ashes.

I’m not saying this whole HomeOps idea is a “wooza brand new concept”. Or “something big”. Or “something different”. I just find it useful to give things a name so we can talk about and discuss them.

Limited winbind usability with Samba 4

Almost exactly a year ago, the first official Samba 4 release saw the light of day, bringing with it Active Directory Domain Controller support as one of its biggest merits. All relevant Windows APIs had been implemented, thus allowing all user management to be done through Windows tools such as the “Active Directory Users and Computers” MMC console.

This does of course whet the appetite for moving all users into the AD and letting the Linux system authenticate against it as well, a scenario that has been supported through Samba’s winbind for some time now.

As the new “samba” master binary coordinates the other daemons itself, there is no need to start winbindd manually any more. Editing /etc/nsswitch.conf as follows:

passwd: compat winbind
group: compat winbind

makes AD user accounts become visible to the system:

# getent passwd
vscan:x:65:487:Vscan account:/var/spool/amavis:/bin/false
fetchmail:x:486:2:mail retrieval daemon:/var/lib/fetchmail:/bin/false

Note how this output shows two things:

  • “winbind use default domain = yes” does not work: user names are returned including the Samba domain name.
  • Setting “template homedir” does not work: in the example above, it was set to /home/%U, of course, but the “%U” placeholder does not get replaced. Strangely, even explicitly configuring the default value, /home/%D/%U, won’t work. Comment the option out completely and that very default will work.
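For reference, the two settings in question live in smb.conf’s [global] section; this is what a configuration attempting to use them would look like:

```ini
[global]
    # Strip the domain prefix from returned user names
    # (broken in Samba 4.1.2, see above)
    winbind use default domain = yes
    # "%U" is not expanded either; commenting the option out entirely
    # makes the built-in default /home/%D/%U work
    template homedir = /home/%U
```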

Unfortunately, this effectively makes Samba 4 (tested with version 4.1.2, to be precise) currently quite unusable for the intended purpose.

The first issue has already been reported as Bugzilla #9780. For the second issue there are at least two tickets, Bugzilla #9839 and Bugzilla #9898. According to a comment in the former, the winbindd used in Samba 4 lacks support for these placeholders and requires replacement by a combined (Samba 3/Samba 4) winbindd implementation. I do not know of any roadmap for that.

Hello Intel, thanks for shutting down your mainboard business

Three weeks ago I dared to flash the BIOS of my home server’s Intel DQ77KB mainboard to address a number of smaller issues related to BIOS settings not being applied. Apart from the fact that anno 2013 Intel still requires you to create a bootable USB stick (a process complicated enough to make you wonder how companies seriously expect users to master it, while other companies such as Asus have long included the flash utility in the BIOS itself), the flash utility indicated success, but afterwards the board was bricked. POST beep codes indicated memory trouble, but neither changing memory modules nor clearing the CMOS and trying the BIOS recovery facilities helped: the board was a case for technical service. Tough luck to have no mail system any more, especially if you have dental surgery scheduled the next day.

But here comes the really amusing part: despite Intel’s announcement to only slowly exit desktop mainboard production, the DQ77KB was no longer available anywhere in Germany. Intel silently shut down everything so quickly that a product of the current lineup faces supply problems, as confirmed in Intel’s forums and by commercial suppliers. In my eyes, a shame for a leading technology company.

In my case, I was forced to switch to the current Haswell generation because the market offers no other Thin Mini-ITX mainboards with 19V power supply and the Q77 chipset. I went for the Asus Q87T mainboard, a decision I have not regretted. I can only recommend this board; finally we begin to see some real advantages of a UEFI BIOS.

python-netsnmpagent 0.5.0 released

python-netsnmpagent version 0.5.0 has just been released.

This release mainly brings new features:

  • Support for detecting connection establishment/failure in spite of net-snmp API limitations.
  • Support for custom net-snmp log handlers.
  • Export module’s version to enable version checks.

Ways to get the software:

  • As usual, the source is available at the GitHub repo.
  • The source distribution .tar.gz for this release can be downloaded from the PyPI page.
  • You can either build binary RPMs for your local (SUSE) distribution yourself (download and make rpms) or pick them up from my Open Build service project — just click on the Repositories tab and one of the Go to download repository links.

net-snmp API and connection error handling

net-snmp has a strange API that does not seem to allow us to detect errors while trying to connect to the master snmpd instance.

When playing around with python-netsnmpagent, create a copy of named and modify as follows:

agentXsocket tcp:localhost:9000

or similar. Deliberately leave the python line unchanged.

Running it will give you output such as:

* Starting the simple example agent...
Warning: Failed to connect to the agentx master agent (/tmp/simple_agent.deBvcPS2kl/snmpd-agentx.sock): Registered SNMP objects in Context "": 
[...] Serving SNMP requests, press ^C to terminate

So the only indication that our agent could in fact not connect to the master snmpd‘s AgentX socket is the Warning: line, which we only see because we currently enable logging to stderr (the comment is wrong) in python-netsnmpagent:

# FIXME: log errors to stdout for now

Actual connection establishment is triggered within python-netsnmpagent’s start() method. You might be fooled into believing that I simply forgot some error handling here:

def start(self):
    """ Starts the agent. Among other things, this means connecting
        to the master agent, if configured that way. """
    self._started = True

But look at net-snmp itself. From include/net-snmp/library/snmp_api.h:

void            init_snmp(const char *);

So no error conditions are returned. Why did they design their API like this?

Analyzing further: if you look at the implementation in snmplib/snmp_api.c (line 808 for current net-snmp 5.7.2), you will see various function calls for which no error handling whatsoever can be found. And mind you, this is C, so we have no exception system.

In our case, all we got was the error message logged to stderr. Grepping the net-snmp sources will lead you to agent/mibgroup/subagent.c, line 856 (for net-snmp 5.7.2). This is from the subagent_open_master_session function:

agentx_socket = netsnmp_ds_get_string(NETSNMP_DS_APPLICATION_ID,
                                      NETSNMP_DS_AGENT_X_SOCKET);
t = netsnmp_transport_open_client("agentx", agentx_socket);
if (t == NULL) {
    /*
     * Diagnose snmp_open errors with the input
     * netsnmp_session pointer.
     */
    if (!netsnmp_ds_get_boolean(NETSNMP_DS_APPLICATION_ID,
                                NETSNMP_DS_AGENT_NO_CONNECTION_WARNINGS)) {
        char buf[1024];
        snprintf(buf, sizeof(buf), "Warning: "
                 "Failed to connect to the agentx master agent (%s)",
                 agentx_socket ? agentx_socket : "[NIL]");
        if (!netsnmp_ds_get_boolean(NETSNMP_DS_APPLICATION_ID,
                                    NETSNMP_DS_AGENT_NO_ROOT_ACCESS)) {
            netsnmp_sess_log_error(LOG_WARNING, buf, &sess);
        } else {
            snmp_sess_perror(buf, &sess);
        }
    }
    return -1;
}
So whatever we originally passed in as mastersocket ends up here as agentx_socket. If t == NULL, the connect failed (ie. invalid mastersocket or snmpd not running). Then unless the NETSNMP_DS_AGENT_NO_CONNECTION_WARNINGS flag was set, we generate the error message and either use netsnmp_sess_log_error or snmp_sess_perror to make it visible. And: we return -1. So from this perspective connection failure is detected.

However, looking further at who calls subagent_open_master_session, we’ll eventually end up here (line 96):

static int
subagent_startup(int majorID, int minorID,
                 void *serverarg, void *clientarg)
{
    DEBUGMSGTL(("agentx/subagent", "connecting to master...\n"));
    /*
     * if a valid ping interval has been defined, call agentx_reopen_session
     * to try to connect to master or setup a ping alarm if it couldn't
     * succeed. if no ping interval was set up, just try to connect once.
     */
    if (netsnmp_ds_get_int(NETSNMP_DS_APPLICATION_ID,
                           NETSNMP_DS_AGENT_AGENTX_PING_INTERVAL) > 0)
        agentx_reopen_session(0, NULL);
    else {
        subagent_open_master_session();
    }
    return 0;
}
Depending on whether an AgentX ping interval was configured or not, it will either let agentx_reopen_session retry forever or just call subagent_open_master_session itself once. But as you can see: no checking of return codes, no further error handling.

What’s the context of subagent_startup itself? subagent_init, which itself does return code checking, registers it as a callback function in line 158 so that it is executed after the SNMP configs have been read:

    snmp_register_callback(SNMP_CALLBACK_LIBRARY,
                           SNMP_CALLBACK_POST_READ_CONFIG,
                           subagent_startup, NULL);

Of course, if subagent_startup returned an error code, who would be the one to take action on it, seeing that its direct caller is merely generic callback code? Yet the question remains why the authors had to defer calling subagent_startup through the callback system at all, ie. why not trigger config reading and then call it directly?
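The problem generalizes: once a function is only ever invoked through a generic callback dispatcher, its return value tends to be dropped on the floor. A minimal Python sketch of the pattern (hypothetical names, not net-snmp’s actual API):

```python
# Minimal sketch of the callback-dispatch pattern used by net-snmp:
# callbacks are registered for an event and invoked generically, so
# their return codes never reach anyone who could act on them.
_callbacks = {}

def register_callback(event, func):
    _callbacks.setdefault(event, []).append(func)

def fire_event(event):
    for func in _callbacks.get(event, []):
        func()  # return value silently discarded, as in net-snmp

def subagent_startup():
    # pretend connecting to the master agent failed
    return -1

register_callback("post_read_config", subagent_startup)
fire_event("post_read_config")  # the -1 vanishes; no caller sees the failure
```

This is exactly why the -1 from subagent_open_master_session never surfaces: by the time it bubbles up to the dispatcher, nobody is left to inspect it.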

In any case, the way it has been implemented so far, it seems to be impossible for subagents to detect connection failures :(

python-netsnmpagent 0.4.6 released

python-netsnmpagent version 0.4.6 has just been released.

This is mainly a bugfix release with no new functional features in netsnmpagent itself:

  • With net-snmp 5.4.x, strings used to be limited in length to their initial value; an update with a longer string would truncate it. Credits for tracking down the issues behind the bug go to Max “mk23” Kalika.
  • A new subdir examples was created, example_agent renamed to simple_agent and a new second example named threading_agent was added, that demonstrates how one can use Python’s threading module to allow for asynchronous data updates.
  • All examples now use the clone’s local netsnmpagent copy instead of a possibly already installed system-wide one.
  • More explicit advertising of alternative AgentX transports, eg. TCP. (We did already support these before, though.)

Choices to get the software:

  • As usual, the source is available at the GitHub repo.
  • The source distribution .tar.gz for this release can be downloaded from the PyPI page.
  • You can either build binary RPMs for SuSE distributions yourself (download and make rpms) or pick them up from my Open Build service project — just click on the Repositories tab and one of the Go to download repository links.