Apr 17 2007
A couple of days ago, Microsoft released a security bulletin regarding a vulnerability in the DNS server component of Windows 2000 Server and Windows Server 2003. Due to a bug in the RPC (Remote Procedure Call) management interface, a remote attacker can cause the DNS Server service to spawn a shell, then connect to it and execute commands. Ordinarily, Windows is administered through the GUI we all know and love, but if you open a command shell there's an excellent suite of command-line utilities that can perform the same operations, usually much faster. RPC is a third way of controlling system services: a mechanism for remotely calling the administrative functions of a daemon or service with a management client of some kind.

At any rate, a security researcher read the advisory and put together exploit code for the vulnerability, which isn't actually out of the ordinary for the information security community. An exploit of some kind has to be written to prove that a vulnerability is both real and exploitable, and such tools are legitimately used to test one's own machines for the presence or absence of a bug. On the other hand, those same tools can be used to remotely compromise systems, which is what people usually think of when they hear the word 'exploit'. It's the 'on the other hand' case that's important here. A couple of groups of crackers have developed their own exploits for the bug, and now there is a worm making its rounds, compromising machines and infecting them with copies of itself. The worm is designated 'Rinbot', and is considered an active threat at this time.

The problem is this: if you're going to have a domain on the Net, you need at least two DNS servers exposed and answering requests. Some DNS admins either don't turn off the RPC interface of the DNS service, or don't use a firewall to filter connections to the port the RPC interface is listening on, which leaves them open to attack.
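For admins who can't patch yet, Microsoft's advisory describes a workaround along these lines: restrict the DNS service's RPC management interface to local calls only via the `RpcProtocol` registry value, then restart the service. This is a sketch of that workaround as I understand the advisory; check the bulletin itself for the authoritative key and value before running it, and note it disables remote RPC management of the DNS server entirely.

```shell
rem Restrict DNS Server RPC management to local procedure calls only
rem (per the workaround described in Microsoft's advisory for this bug)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters" /v RpcProtocol /t REG_DWORD /d 4 /f

rem Restart the DNS Server service so the change takes effect
net stop DNS
net start DNS
```

Until the value is removed (or the patch applied and the value reverted), tools like the DNS MMC snap-in won't be able to manage the server from another machine; you'll have to administer it at the console.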
What it boils down to is this: they used a security bulletin that is pretty thin on hard information, plus a lot of skull-sweat, to write not only exploits but self-replicating malware, and some people are questioning whether Microsoft released too much information. Some wonder if Microsoft should have told anyone at all, and instead silently pushed out a patch that would fix the bug. There's a problem with such a strategy, however - it's never worked in the past. In the '80s, information security was something that very few people paid attention to, even if they knew what it was. In the '90s, bugfixes and patches were slow in coming, and often never came at all. Official security advisories were also very rare during that time, and groups of crackers in the underground sat on the bugs they found as long as they could, to get the most usage possible out of their 0-day exploits.
More to the point, they didn't need the patches, or even security alerts, to find bugs; they found them by reverse engineering binary code, sniffing network protocols, and plain old screwing around to see what would happen. Just like white hats do, in fact. The major difference is that one group's works are called 'research' and the other's are called 'trouble', by dint of what they do with their discoveries.
This would have happened regardless of whether or not Microsoft had posted an advisory about the bug; this is not a unique event. As a certain song would have it, "This has all happened before, and it'll all happen again."