Making Samba users available locally to Linux systems

In the past, we used to integrate Samba and “native” Linux users by using a single password backend, often LDAP:

[Figure: User authentication of Linux system and Samba users against LDAP]

This led to several pain points, from the deficiencies surrounding group membership and the NIS vs. rfc2307bis schema (see eg. here and here), through the need to define scripts for administrative actions such as adding users, to the not particularly intuitive LDAP server setup in newer OpenLDAP versions.

Samba has for quite some time offered an alternative solution, allowing for two separate user databases (the Linux passwd/shadow one and its own) and providing the Linux system with access to its user database through the combination of the winbind daemon and suitable PAM and NSS modules:

[Figure: User authentication of Linux system users against Samba’s own user database via winbind]

However, as I pointed out before, the Samba4 version of winbind currently lacks some important functionality.

Luckily, this Samba wiki page points out two alternatives (don’t be fooled by this other wiki page, which currently lists only one):

  • sssd / pam_sss / nss_sss
  • nslcd / pam_ldap / nss_ldap

Note that nslcd/pam_ldap/nss_ldap is not PADL’s pam_ldap / nss_ldap, which is now considered dead, but a fork/rewrite. Even this one, however, draws controversy.

Yet I found it hard to make a decision in the light of rather opinionated information on the Web and rather sparse information on the Samba Wiki. So I tried to come up with a comparison table myself, focussing on integration with Samba 4 and Active Directory features. Note that the table comes with a level of uncertainty; feel free to send me corrections.

| | sssd / pam_sss / nss_sss (1.11.3) | nslcd / pam_ldap / nss_ldap (0.9.2) |
|---|---|---|
| Supports unencrypted connections via plain LDAP? | Yes | Yes |
| Supports encrypted connections via Kerberos? | Yes | Yes |
| SASL required? | No | Yes |
| Requires explicit Kerberos ticket renewal, eg. through a background k5start process? | No | Yes |
| Retrieval of POSIX data (UID, GID, home directory, login shell) from an Active Directory provisioned with the --rfc-2307 option | Optional (if the AD provider is used), required (if the pure LDAP provider is used) | Required |
| Separate backends for user/group information and authentication | Yes | No (but different LDAP sources) |
| Host must be joined to the domain? | No (but advantageous in certain scenarios) | No |
| Supports site-based discovery? | Yes | No |
| Supports Active Directory’s Global Catalog? | Yes | No |
| Can resolve Active Directory’s global and universal groups? | Yes | No |
| Can resolve group members from trusted domains? | Yes | No |
| Leverages Active Directory’s tokenGroups attribute? | Yes (also in addition to POSIX attributes) | No |
| Offline authentication possible? | Yes | Yes? |
| Latest release | 2013 | 2013 |

The conclusion: nslcd is a choice for pure LDAP authentication; for Active Directory scenarios, one should go with sssd.

Unfortunately, my $distro still shipped with sssd 1.9.5, and you really do want 1.10 or newer because of the added features in its Active Directory identity provider. While it is possible to access AD via pure LDAP, with or without Kerberos, you really should use the Active Directory provider for the reasons given above. See also Jakub Hrozek’s blog and his FreeIPA.org presentation for additional information on the new features.
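
To give an idea of where this is heading, here is a minimal sssd.conf sketch using the Active Directory identity provider. This is a sketch only: it assumes sssd >= 1.10 and a placeholder domain named ad.example.com, and details will follow in the next post:

[sssd]
config_file_version = 2
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
# Use the POSIX attributes provisioned with --rfc-2307 instead of
# sssd's automatic ID mapping:
ldap_id_mapping = false
# Used whenever AD provides no unixHomeDirectory/loginShell for a user:
fallback_homedir = /home/%d/%u
default_shell = /bin/bash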

So I built and packaged sssd (and Samba) myself — and it took me just a month…

Configuration of sssd for Active Directory will be continued in the next post.

Why Puppet should ship with official modules

Much has certainly already been said about Puppetforge. A year ago, at Puppetcamp Nuremberg, we were promised that Puppetforge was likely to improve to a more usable level. But as of now, Puppetforge is much like GitHub: unless you already know where to look and what to take, you’re pretty much left on your own, to trial and error. Puppetforge does not really help in making decisions about which modules to try. Yes, it is a central location to shop for Puppet modules. But it’s more like a big Dropbox.

Of course, other software artefacts have the same or worse problems. For example, there is no central location for C/C++ libraries, so you wouldn’t even know where to begin looking, save for hoping to use the right Google keywords. Still, certain projects such as boost enjoy a certain popularity due to adoption by well-known software projects or word of mouth. But the difference is: such libraries enjoy a much different level of attention than any particular Puppet module. I’ll have a hard time promoting my particular implementation of, say, an ntp module, when there are twenty others.

In “appstores”, such as Google’s Play store, there are facilities that can give at least some advice as to which apps are worth trying. There is a download count, but considered alone it does not indicate quality. After all, nothing keeps larger groups of people from making poor decisions lemming-style. That’s why customer reviews (hopefully) provide additional insights, although these can be manipulated more or less easily.

Puppetforge has download counters, but it doesn’t have a comment system. The only ways to assess whether this or that module could be the better candidate are the download count and, as a special case, possibly the author: Puppet modules published by Puppet Labs themselves might be considered official, popular, well-tested.

That, however, remains a mere assumption until proven. And it leads me to a question that may sound stupid but inevitably comes to my mind:

Why is there no official Puppet modules distribution?

Basically, the Puppet download alone is pretty much useless until you augment it with Puppet modules. To do that, I can go to Puppetforge and have the experience described above. I fail to see the sense in that.

Yes, I can see that just because Puppet Labs puts their label on a module, that does not automatically make it better. But that’s not the point; the same argument would hold true for modules published on Puppetforge as well.

Shipping Puppet with a set of included, “default” Puppet modules would instead have a signaling effect. It would make clear what code developers should focus on and what patches for improvement should be made against. It is not so much about having the best solution. It is about stepping forward and filling a void that can seriously hinder Puppet adoption.

You’ve always been a DevOps at home – sort of

In a previous post, I made a call for HomeOps, the application of DevOps principles to SOHO (small and home office) scenarios as well. I listed a number of arguments there. Here’s another one.

You have been practising sort of a DevOps approach at home already since the very beginning.

Or, to be more precise: while you may not have been exercising the professional tools and methods we have come to associate with a DevOps mindset, you will nevertheless have applied the gap-bridging mindset.

This claim includes three aspects:

  1. that it takes a Dev
  2. that it takes an Op
  3. that you’ve fulfilled both of these roles simultaneously

Let’s begin with the second point. Trivially, what you’re doing at home is fulfilling the role of an Op. You might not have a dedicated monitoring system in place that sends you messages on your pager, but you’ll actually have a much more advanced monitoring instance that will ring on your mobile the very moment your file service goes down: your family. (Or, if we consider the SO in SOHO, your colleagues.) If you’re reading this article, you’re likely to be the one in charge of getting that file storage back up and running. If not for them, then out of your own interest. If that’s not Ops, then what? Yes, you do not have the professional tools known from your employer, but as I said, I’m talking about the mindset.

And you’ve been a Dev. Still enjoying the traps of a distribution upgrade? I mentioned the example of Berkeley DB updates that required converting on-disk databases, of course _after_ installing the new OS release. Ancient times, you say? Think about updating from Apache 2.2 to Apache 2.4.

These are examples that all require an intervention. Of course, with n=1 servers, the ops guy in you might be tempted to do the necessary steps manually rather than take a “fire and forget” perspective. But then again, why would you have to do the same steps as thousands of other IT professionals? After all, the update pain is roughly the same. So why not let ONE of us, or more exactly the dev guy in one of us, do the work in a manner that can be reused by us all for collective advantage?

The last point is rather trivial, too. I don’t know if you’ve delegated IT responsibility to members of your family or your office, but in all SOHO scenarios I’ve met there is exactly one guy proficient enough to handle all IT business. And that guy does not exactly divide himself between a dev half and an op half. The dev in him does not meet his goals if the op in him does not meet his. For the very reason that their goals are, in the end, pretty much identical.

Actually, when I matured from a young boy’s perspective influenced by home computing to understanding the way business IT is structured, I never really understood the divide between devs and ops that seems to dominate the corporate world. But then again, that must be due to someone who thought it would be beneficial to divide responsibility.

HomeOps: A call for the application of DevOps principles at home, too

We have all gained some kind of experience in the IT world, in the way IT works and especially the way it doesn’t work. As an IT professional following recent trends and developments (you do continuously keep an eye on them, right?), you will certainly have learned about (or at least heard of) DevOps principles, an ever-growing call for a different mindset on development, operations and silos in companies in general.

Fostered by events such as the series of DevOpsDays conferences spreading around the world, the term “DevOps” has reached a state where it is not only widely misunderstood but also abused to a point where you have to be extra sceptical (a mere buzzword for HP, IBM products, an obscure job definition that distracts from the real challenge).

Many of us already live DevOps principles, or at least parts of them, if we’re lucky enough to work in a sufficiently agile and aware organization, be it for rather common or specific reasons. Now, one reason to especially justify the use of automation as but one DevOps ingredient will certainly be that, put casually, “there is no possible way we would have the time to rebuild this setup manually”. The pervasive Cloud trend practically dictates a more rational approach. So far, so good.

But recently my home server died. That is, the machine at home running mail, file and print services. And guess what? There was no possible way that I had time to rebuild this setup manually!

This is a call for what I call “HomeOps” (in the absence of a more useful name — HomeDevOps? SOHOOps?). With “HomeOps”, I call for the extension of DevOps concepts to our home IT as far as possible.

Think about it: it has been ages since IT people had to deal with the management of IT entities at work only. The times when “Sysadmin” referred to “operating that mainframe at the company” and IT activities at home were limited to comparatively simple home computers are long, long gone.

Nowadays, professional IT staff effectively always deals with two, if not three, networks at the same time:

  1. The company’s IT infrastructure.
  2. The (hopefully) always available Internet and our smartphones and tablets.
  3. One or more computers, tablets, NAS boxes, “smart” devices such as flatscreens with Internet access, Wifi access points and a router supplying the Internet connectivity at home.

For #1, we’ve learned to apply a DevOps mindset, automation tools, continuous-X concepts etc., as discussed above.

For #2, I’m not talking about a need for maintenance of mobile networks; I’m talking about the implications of using a smartphone and the apps on it. Ever tried to back up your Android phone’s data? Luckily, Google can take care of the most important aspects, such as phone numbers and calendar data, by promoting Cloud storage. If you trust Google, that is.

For #3, we do what? Face it:

  • Your laptop may have come pre-installed and you may use that installation. As an IT professional, you most probably don’t. How often do you reinstall and how much time does it cost?
  • You may be a Mac user and use Apple’s Time Capsule, or you may use a Synology/QNAP/Thecus/whatever NAS device that gives you a fancy GUI and makes the setup real easy. But what’s a backup worth that never gets checked? Do you actually monitor its hard disks?
  • Your Internet router may come preconfigured by your ISP. Even if it does, how much fun is configuring port forwarding?

These are, of course, just examples, and some of them may apply to your home scenario, some not. In my case, for example, a NAS alone would not be enough; I run a NAS-like device, but with an ordinary x86 Linux distribution. Which means it is just as much an instance that needs management as your cloud VM no. 83424, except that you’d manage it manually. But why?

The key question is: why for heaven’s sake should we do things differently at home?

Of course we know the answer: because the efforts necessary do not seem worth the advantages. And this is where I have come to disagree:

  • Yes, it is “just those few devices”. But the number of devices says nothing about their personal significance to your daily life. If you need to access urgent data, eg. to reply to the tax office, and the filesystem holding it is not available, have fun!
  • Yes, you may have backed up your data somewhere in the Cloud. This does not mean that setting up things at home in a recovery scenario just became a piece of cake.
  • Don’t be fooled by looking at disaster recovery scenarios (eg. failing hard disks) only. There is one thing that you’re guaranteed to do more often, and that is software updates. Unless you’re using a long-term support Linux distribution, you’ll be just as much a victim of its software lifecycle as in a company. Compare installing Ubuntu 29 and configuring everything by hand to installing Ubuntu 29 and running your favorite config management tool, which has just received Ubuntu 29 support from someone who needed it as well.
  • And, to some probably the strongest argument of all: how does “some initial work now, much less work later on” sound to your spouse, your kids? Does your spouse understand why home IT is perceived as less stable than eg. the telephone service?
  • Last but not least, playing at home with the same tools you use at work certainly won’t hurt in gaining additional experience.

Yes, I probably won’t go as far as to set up a continuous deployment toolchain at home (although one could even think about that). But I’m currently automating my home server with Puppet, and I’ll certainly blog more on my experiences in doing that, as well as on the overall “HomeOps” concept that slowly begins emerging before my eyes. Clearly with the goal of a home IT that can rise like a phoenix from its ashes.

I’m not saying this whole HomeOps idea is a “wooza brand new concept”. Or “something big”. Or “something different”. I just find it useful to give things a name in order to talk about and discuss them.

Limited winbind usability with Samba 4

Almost exactly a year ago, the first official Samba 4 release saw the light of day, bringing with it Active Directory Domain Controller support as one of its biggest merits. All relevant Windows APIs had been implemented, thus allowing all user management to be done through Windows tools such as the “Active Directory Users and Computers” MMC console.

This does of course whet the appetite for moving all users into the AD and letting the Linux system authenticate against it as well, a scenario that has been supported through the use of Samba’s winbind for some time now.

As the new “samba” master binary coordinates the other daemons itself, there is no need to start winbindd manually any more. Editing /etc/nsswitch.conf as follows:


passwd: compat winbind
group: compat winbind

makes AD user accounts become visible to the system:


# getent passwd
[...]
vscan:x:65:487:Vscan account:/var/spool/amavis:/bin/false
fetchmail:x:486:2:mail retrieval daemon:/var/lib/fetchmail:/bin/false
BS3\Administrator:*:0:100::/home/%U:/bin/bash
BS3\Guest:*:3000011:3000012::/home/%U:/bin/bash

Note how this output shows two things:

  • “winbind use default domain = yes” does not work: user names are returned including the Samba domain name.
  • Setting “template homedir” does not work: in the example above, it was set to /home/%U, of course, but the “%U” placeholder does not get replaced. Strangely, even if you explicitly configure the default value, /home/%D/%U, this won’t work either; only commenting the option out completely makes that very default work. (See the smb.conf sketch below.)
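
For reference, here is a minimal sketch of the two smb.conf settings in question (just these two lines; the rest of the AD DC configuration is omitted):

[global]
    # Should strip the domain prefix from reported user names, but has
    # no effect with Samba 4's winbindd:
    winbind use default domain = yes
    # The %D/%U placeholders do not get replaced either:
    template homedir = /home/%D/%U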

Unfortunately, this effectively makes Samba 4 (tested with version 4.1.2, to be precise) currently quite unusable for the intended purpose.

The first issue has already been reported as Bugzilla #9780. For the second issue there are at least two tickets, Bugzilla #9839 and Bugzilla #9898. According to a comment in the former, the winbindd used in Samba 4 lacks support for these placeholders and requires replacement by a combined (Samba 3/Samba 4) winbindd implementation. I do not know of any roadmap for that.

Hello Intel, thanks for shutting down your mainboard business

Three weeks ago I dared to flash the BIOS of my home server’s Intel DQ77KB mainboard to address a number of smaller issues related to BIOS settings not being applied. Anno 2013, Intel still requires you to create a bootable USB stick, a process still complicated enough to make you wonder how companies seriously expect users to master it, while other companies such as Asus have long been including the flash utility as part of the BIOS itself. But that aside: the flash utility indicated success, yet afterwards the board was bricked. POST beep codes indicated memory trouble, but neither changing memory modules nor clearing the CMOS and trying the BIOS recovery facilities helped — the board was a case for technical service. Tough luck to have no mail system any more, especially if you have dental surgery scheduled for the next day.

But here comes the really amusing part: despite Intel’s announcement that it would exit desktop mainboard production only slowly, the DQ77KB was no longer available anywhere in Germany. Intel silently shut down everything so quickly that a product of the current lineup faces supply problems, as confirmed in Intel’s forums and by commercial distributors. In my eyes a shame for a leading technology company.

In my case I was forced to switch to the current Haswell generation, because the market offers no other Thin Mini-ITX mainboards with 19V power supply and the Q77 chipset. I went for the Asus Q87T mainboard, a decision I have not regretted. I can only recommend this board — finally we begin to see some real advantages of a UEFI BIOS.

python-netsnmpagent 0.5.0 released

python-netsnmpagent version 0.5.0 has just been released.

This release mainly brings new features:

  • Support for detecting connection establishment/failure in spite of net-snmp API limitations.
  • Support for custom net-snmp log handlers.
  • Export module’s version to enable version checks.

Ways to get the software:

  • As usual, the source is available at the GitHub repo.
  • The source distribution .tar.gz for this release can be downloaded from the PyPI page.
  • You can either build binary RPMs for your local (SUSE) distribution yourself (download and “make rpms”) or pick them up from my Open Build Service project — just click on the “Repositories” tab and one of the “Go to download repository” links.

net-snmp API and connection error handling

net-snmp has a strange API that does not seem to allow us to detect errors while trying to connect to the master snmpd instance.

When playing around with python-netsnmpagent, create a copy of run_simple_agent.sh named test.sh and modify it as follows:

agentXsocket tcp:localhost:9000

or similar. Intentionally leave the python simple_agent.py line unchanged.

Running test.sh will give you an output such as:

* Starting the simple example agent...
Warning: Failed to connect to the agentx master agent (/tmp/simple_agent.deBvcPS2kl/snmpd-agentx.sock): 
simple_agent.py: Registered SNMP objects in Context "": 
[...]
simple_agent.py: Serving SNMP requests, press ^C to terminate

So the only indication that our simple_agent.py could in fact not connect to the master snmpd‘s AgentX socket is the Warning: line, which we only see because we currently enable logging to stderr (the comment is wrong) in python-netsnmpagent:

# FIXME: log errors to stdout for now
libnsa.snmp_enable_stderrlog()

Actual connection establishment is triggered within python-netsnmpagent’s start() method. You might be fooled into believing that I simply forgot some error handling here:

def start(self):
    """ Starts the agent. Among other things, this means connecting
        to the master agent, if configured that way. """
    self._started = True
    libnsa.init_snmp(self.AgentName)

But look at net-snmp itself. From include/net-snmp/library/snmp_api.h:

NETSNMP_IMPORT
void            init_snmp(const char *);

So there is no returning of error conditions. Why did they design their API like this?

Analyzing further, if you look at the implementation in snmplib/snmp_api.c (line 808 for current net-snmp 5.7.2), you will see various function calls for which no error handling whatsoever can be found. And mind you, this is C, so we have no exception system.

In our case, all we got was the error message logged to stderr. Grepping the net-snmp sources will lead you to agent/mibgroup/agentx/subagent.c line 856 (for net-snmp 5.7.2). This is from the subagent_open_master_session function:

agentx_socket = netsnmp_ds_get_string(NETSNMP_DS_APPLICATION_ID,
                                      NETSNMP_DS_AGENT_X_SOCKET);
t = netsnmp_transport_open_client("agentx", agentx_socket);
if (t == NULL) {
    /*
     * Diagnose snmp_open errors with the input
     * netsnmp_session pointer.  
     */
    if (!netsnmp_ds_get_boolean(NETSNMP_DS_APPLICATION_ID,
                                NETSNMP_DS_AGENT_NO_CONNECTION_WARNINGS)) {
        char buf[1024];
        snprintf(buf, sizeof(buf), "Warning: "
                 "Failed to connect to the agentx master agent (%s)",
                 agentx_socket ? agentx_socket : "[NIL]");
        if (!netsnmp_ds_get_boolean(NETSNMP_DS_APPLICATION_ID,
                                    NETSNMP_DS_AGENT_NO_ROOT_ACCESS)) {
            netsnmp_sess_log_error(LOG_WARNING, buf, &sess);
        } else {
            snmp_sess_perror(buf, &sess);
        }
    }
    return -1;
}

So whatever we originally passed in as mastersocket ends up here as agentx_socket. If t == NULL, the connect failed (ie. invalid mastersocket or snmpd not running). Then unless the NETSNMP_DS_AGENT_NO_CONNECTION_WARNINGS flag was set, we generate the error message and either use netsnmp_sess_log_error or snmp_sess_perror to make it visible. And: we return -1. So from this perspective connection failure is detected.

However, looking further at who calls subagent_open_master_session, we’ll eventually end up here (line 96):

int
subagent_startup(int majorID, int minorID,
                             void *serverarg, void *clientarg)
{
    DEBUGMSGTL(("agentx/subagent", "connecting to master...\n"));
    /*
     * if a valid ping interval has been defined, call agentx_reopen_session
     * to try to connect to master or setup a ping alarm if it couldn't
     * succeed. if no ping interval was set up, just try to connect once.
     */
    if (netsnmp_ds_get_int(NETSNMP_DS_APPLICATION_ID,
                           NETSNMP_DS_AGENT_AGENTX_PING_INTERVAL) > 0)
        agentx_reopen_session(0, NULL);
    else {
        subagent_open_master_session();
    }
    return 0;
}

Depending on whether an AgentX ping interval was configured or not, it will either let agentx_reopen_session retry forever or just call subagent_open_master_session itself once. But as you can see: no checking of return codes, no further error handling.

What’s the context of subagent_startup itself? subagent_init, which itself does return code checking, registers it as a callback function in line 158 so that it is executed after the SNMP configs have been read:

snmp_register_callback(SNMP_CALLBACK_LIBRARY,
                       SNMP_CALLBACK_POST_READ_CONFIG,
                       subagent_startup, NULL);

Of course, if subagent_startup returned an error code, who would be the one to take action on it, seeing that its direct caller is merely generic callback code? Yet the question remains why the authors had to defer calling subagent_startup through the callback system at all, ie. why not trigger config reading and then call it directly?

In either case, the way it has been implemented so far, it seems to be impossible for subagents to detect connection failures :(
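
One conceivable workaround is to hook into net-snmp’s logging and watch for the warning message shown above. Here is a rough C sketch of the idea — a sketch only, not a definitive implementation; it relies on the NETSNMP_LOGHANDLER_CALLBACK handler dispatching log messages to SNMP_CALLBACK_LOGGING callbacks, as net-snmp 5.7.2 does:

#include <string.h>
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

static int connection_failed = 0;

/*
 * SNMP_CALLBACK_LOGGING callbacks receive a struct snmp_log_message as
 * serverarg; scan it for subagent_open_master_session()'s warning text.
 */
static int
watch_log(int majorID, int minorID, void *serverarg, void *clientarg)
{
    struct snmp_log_message *slm = (struct snmp_log_message *) serverarg;

    if (slm && slm->msg
        && strstr(slm->msg, "Failed to connect to the agentx master agent"))
        connection_failed = 1;
    return 0;
}

int
main(void)
{
    /* Run as an AgentX subagent, not as a master agent */
    netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID,
                           NETSNMP_DS_AGENT_ROLE, 1);

    /* Route all log output through the callback system and hook into it */
    netsnmp_register_loghandler(NETSNMP_LOGHANDLER_CALLBACK, LOG_DEBUG);
    snmp_register_callback(SNMP_CALLBACK_LIBRARY, SNMP_CALLBACK_LOGGING,
                           watch_log, NULL);

    init_agent("sketch");
    init_snmp("sketch");        /* void return, as discussed above */

    if (connection_failed) {
        snmp_log(LOG_ERR, "could not connect to the master agent\n");
        return 1;
    }

    /* ... register MIB objects, agent_check_and_process() loop etc. ... */
    return 0;
}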

python-netsnmpagent 0.4.6 released

python-netsnmpagent version 0.4.6 has just been released.

This is mainly a bugfix release with no new functional features in netsnmpagent itself:

  • With net-snmp 5.4.x, strings used to be limited in length to that of their initial value; updates with longer strings would get truncated. Credits for tracking down the issues behind the bug go to Max “mk23” Kalika.
  • A new subdir examples was created, example_agent was renamed to simple_agent, and a new second example named threading_agent was added that demonstrates how one can use Python’s threading module to allow for asynchronous data updates.
  • All examples now use the clone’s local netsnmpagent copy instead of a possibly already installed system-wide one.
  • More explicit advertising for alternative AgentX transports, eg. TCP. We already supported these before, though.

Choices to get the software:

  • As usual, the source is available at the GitHub repo.
  • The source distribution .tar.gz for this release can be downloaded from the PyPI page.
  • You can either build binary RPMs for SUSE distributions yourself (download and “make rpms”) or pick them up from my Open Build Service project — just click on the “Repositories” tab and one of the “Go to download repository” links.

Fixing VirtualBox Guest Additions’ vboxvideo_drm.c for SUSE Linux Enterprise Server (SLES) 11 SP3

Trying to install VirtualBox‘s Linux Guest Additions under SUSE Linux Enterprise Server (SLES) 11 SP3 currently fails even with the newest VirtualBox version (4.2.16):

sles11sp3:/tmp/vbox.0.orig # make
make KBUILD_VERBOSE=1 CONFIG_MODULE_SIG= -C /lib/modules/3.0.76-0.11-default/build SUBDIRS=/tmp/vbox.0.orig SRCROOT=/tmp/vbox.0.orig modules
make[1]: Entering directory `/usr/src/linux-3.0.76-0.11-obj/x86_64/default'
make -C ../../../linux-3.0.76-0.11 O=/usr/src/linux-3.0.76-0.11-obj/x86_64/default/. modules
make -C /usr/src/linux-3.0.76-0.11-obj/x86_64/default \
	KBUILD_SRC=/usr/src/linux-3.0.76-0.11 \
	KBUILD_EXTMOD="/tmp/vbox.0.orig" -f /usr/src/linux-3.0.76-0.11/Makefile \
	modules
test -e include/generated/autoconf.h -a -e include/config/auto.conf || (		\
	echo;								\
	echo "  ERROR: Kernel configuration is invalid.";		\
	echo "         include/generated/autoconf.h or include/config/auto.conf are missing.";\
	echo "         Run 'make oldconfig && make prepare' on kernel src to fix it.";	\
	echo;								\
	/bin/false)
mkdir -p /tmp/vbox.0.orig/.tmp_versions ; rm -f /tmp/vbox.0.orig/.tmp_versions/*
make -f /usr/src/linux-3.0.76-0.11/scripts/Makefile.build obj=/tmp/vbox.0.orig
  gcc -Wp,-MD,/tmp/vbox.0.orig/.vboxvideo_drm.o.d  -nostdinc -isystem /usr/lib64/gcc/x86_64-suse-linux/4.3/include -I/usr/src/linux-3.0.76-0.11/arch/x86/include -Iarch/x86/include/generated -Iinclude  -I/usr/src/linux-3.0.76-0.11/include -include include/generated/autoconf.h   -I/tmp/vbox.0.orig -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -pipe -Wno-sign-compare -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -fno-stack-protector -fomit-frame-pointer -fasynchronous-unwind-tables -g -fno-inline-functions-called-once -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fshort-wchar -include /tmp/vbox.0.orig/include/VBox/VBoxGuestMangling.h   -I/lib/modules/3.0.76-0.11-default/build/include   -I/tmp/vbox.0.orig/   -I/tmp/vbox.0.orig/include   -I/tmp/vbox.0.orig/r0drv/linux   -I/tmp/vbox.0.orig/vboxvideo/   -I/tmp/vbox.0.orig/vboxvideo/include   -I/tmp/vbox.0.orig/vboxvideo/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DVBOX_WITH_HGCM -DLOG_TO_BACKDOOR -DIN_MODULE -DIN_GUEST_R0 -DRT_NO_EXPORT_SYMBOL -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS  -DMODULE  -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(vboxvideo_drm)"  -D"KBUILD_MODNAME=KBUILD_STR(vboxvideo)" -c -o /tmp/vbox.0.orig/.tmp_vboxvideo_drm.o /tmp/vbox.0.orig/vboxvideo_drm.c
/tmp/vbox.0.orig/vboxvideo_drm.c:121: error: unknown field ‘reclaim_buffers’ specified in initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:121: warning: initialization from incompatible pointer type
/tmp/vbox.0.orig/vboxvideo_drm.c:130: warning: braces around scalar initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:130: warning: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:131: error: field name not in record or union initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:131: error: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:131: warning: initialization from incompatible pointer type
/tmp/vbox.0.orig/vboxvideo_drm.c:132: error: field name not in record or union initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:132: error: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:132: warning: excess elements in scalar initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:132: warning: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:133: error: field name not in record or union initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:133: error: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:133: warning: excess elements in scalar initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:133: warning: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:137: error: field name not in record or union initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:137: error: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:137: warning: excess elements in scalar initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:137: warning: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:141: error: field name not in record or union initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:141: error: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:141: warning: excess elements in scalar initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:141: warning: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:142: error: field name not in record or union initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:142: error: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:142: warning: excess elements in scalar initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:142: warning: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:143: error: field name not in record or union initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:143: error: (near initialization for ‘driver.fops’)
/tmp/vbox.0.orig/vboxvideo_drm.c:143: warning: excess elements in scalar initializer
/tmp/vbox.0.orig/vboxvideo_drm.c:143: warning: (near initialization for ‘driver.fops’)
make[4]: *** [/tmp/vbox.0.orig/vboxvideo_drm.o] Error 1
make[3]: *** [_module_/tmp/vbox.0.orig] Error 2
make[2]: *** [sub-make] Error 2
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/linux-3.0.76-0.11-obj/x86_64/default'
make: *** [vboxvideo] Error 2

This error happens for the same reason that was reported in virtualbox.org bug #11586: Red Hat is obviously not the only company to back-port DRM code from newer Linux versions to their enterprise kernels; SUSE did so, too.

Thus, vboxvideo_drm.c needs additional logic as implemented in this patch:

Because SUSE, unlike Red Hat, for some reason no longer provides SLE_VERSION and SLE_VERSION_CODE macros in SLES11, we need to retrofit them ourselves. I stole the appropriate code from the “igb” network driver.
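
The retrofit itself boils down to a few preprocessor lines. The following is only a sketch of the idea, adapted from igb’s kcompat.h; the exact checks in the actual patch may differ:

#include <linux/version.h>

/* SLES11 no longer defines these, so retrofit them ourselves
 * (idea taken from the igb driver's kcompat.h): */
#ifndef SLE_VERSION
#define SLE_VERSION(a,b,c)  KERNEL_VERSION(a,b,c)
#endif
#ifndef SLE_VERSION_CODE
#if defined(CONFIG_SUSE_KERNEL) && \
    LINUX_VERSION_CODE >= KERNEL_VERSION(3,0,76)
/* SLES11 SP3 ships a 3.0.76-based kernel, cf. the build log above */
#define SLE_VERSION_CODE  SLE_VERSION(11,3,0)
#else
#define SLE_VERSION_CODE  0
#endif
#endif

/* vboxvideo_drm.c can then branch on the back-ported DRM code: */
#if SLE_VERSION_CODE >= SLE_VERSION(11,3,0)
/* new-style DRM interfaces: no .reclaim_buffers, driver.fops is a pointer */
#endif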

I filed a bug report as virtualbox.org bug #11984. Until the VirtualBox folks get around to looking at the issue and incorporating a fix (hopefully mine :), you may use a patched version of VBoxLinuxAdditions.run that I created for your convenience using the makeself utility:

Run this instead of the original and installation should work smoothly even under SLES11 SP3.