This must be "The Daily WTF of the Year":
From "the global leader in providing supply chain execution and optimization solutions": One View to Rule Them All. See the underlying SQL-Code.
One of the most absurd things I have ever seen: Subselects, which contain subselects, which contain subselects, which contain aggregation functions. Also, the DISTINCT clause is certainly beneficial in regard to performance - especially on a view that might return every potential row, with close to 400 columns!
The unfortunate maintenance programmer explains:
"I was further astounded to learn that timeouts on certain critical operations were *routine*! [...] Paging through the trace, I found a stored procedure which took 505306 milliseconds - that's 8.5 minutes (!!!) - to execute, at 45% server utilisation."
Wednesday, December 29, 2004
Tuesday, December 28, 2004
Digital Fortress
I read Dan Brown's Digital Fortress during the Christmas holidays. The book has an interesting plot, and the story takes some surprising twists, so it was by no means boring. The problem is that it is quite flawed from a technical point of view - and this really diminishes the reading experience for the average techie with some basic knowledge of cryptography.
I don't even want to start ranting about the unrealistic showdown, in which a software worm takes down the NSA's security tiers one by one while the agency's director decides to take the risk instead of simply shutting down the system. Or the fact that the leading character, IQ-170 wonder-mathematician Susan Fletcher, does not even grasp the most obvious connections. Or that their massively parallel miracle system goes up in flames due to overheating (no ventilation? no emergency shutdown? built with NMOS, or what? and no backups, no redundant datacenter?). Let's just examine one of the book's main areas of interest, namely cryptography:
The so-called Digital Fortress algorithm (which supposedly resists all brute-force code-breaking attempts) is published on the internet, encrypted BY ITSELF (the main storyline is the chase for the cipher key). The highest bidder will receive the key and hence own the algorithm. Now wait a second - in order to decrypt it, he already needs to know the algorithm, right? Makes you wonder how he is supposed to get his hands on the algorithm when it is only available in its encrypted form. Or, to put it the other way around: suppose this were somehow possible; then the bidder would already have the algorithm in cleartext at that point in time - before he actually decrypts it. So there is no reason for purchasing the key in the first place.
Another gem: "To TRANSLTR [the decryption machine] all codes looked identical, regardless of which algorithm wrote them". I am amazed at how they decrypt something without any knowledge of the underlying algorithm. In any case, no modern encryption standard depends on keeping its algorithm secret; keeping the key secret is the only thing that counts.
"Public Key Encryption" is being described as "software that scrambles personal e-mail message in such a way that they were totally unreadable. [...] The only way to unscramble the message was to enter the sender's pass-key". OK, this explanation is confusing, but the author also misses the main point here. Public/private keypairs resp. asymmetric encryption solves the problem of key distribution (e.g. for exchanging a short-lived, temporary symmetric key - symmetric encryption outperforms asymmetric encryption by far, hence is much more suitable for higher data volume and/or server applications), and enables sender authentication and message integrity. The receiver's public key is used for encryption by the sender, so that the counterpart private key (used for decryption) remains with the receiver exclusively. Accordingly the sender applies his private key for signing the message, so the receiver can verify the signature and the message's integrity using the sender's public key.
The author also mixes up key length and key space. He states: "... the computer's processors auditioned thirty million keys per second - one hundred billion per hour. If TRANSLTR was still counting [for 15 hours], that meant the key had to be enormous - over ten billion digits long." That is roughly 1.5 * 10^12 keys in 15 hours; even if we are talking about binary digits, 41-bit keys are enough to cover that range - not "over ten billion digits". It also contradicts the statement that TRANSLTR breaks 64-bit keys in about ten minutes.
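The arithmetic is easy to check - a few lines of Java, using the figures from the quote above, make both points obvious:

```java
public class KeyspaceCheck {
    public static void main(String[] args) {
        double keysPerSecond = 30e6;                     // "thirty million keys per second"
        double keysTried = keysPerSecond * 15 * 3600;    // 15 hours of counting: ~1.6e12 keys
        double bitsCovered = Math.log(keysTried) / Math.log(2);
        System.out.printf("keys tried: %.2e -> covered by a %.1f-bit key%n",
                keysTried, bitsCovered);                 // ~40.6, i.e. a 41-bit key space suffices

        // And the alleged "64-bit key in about ten minutes" at the very same rate:
        double seconds = Math.pow(2, 64) / keysPerSecond;
        System.out.printf("exhausting a 64-bit key space: ~%.0f years%n",
                seconds / (3600.0 * 24 * 365));          // roughly 19,500 years
    }
}
```

Roughly 41 bits after 15 hours, and exhausting a 64-bit key space at that rate would take on the order of twenty thousand years - not ten minutes.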
In the book's foreword, the author thanks "the two faceless ex-NSA cryptographers who made invaluable contributions via anonymous remailers". Makes me wish they had proof-read it at least once.
Saturday, December 25, 2004
EPIC 2014
What will have become of the news media by the year 2014? An interesting look into the future - entertaining, although not very likely if you ask me.
Friday, December 17, 2004
Microsoft
Yesterday I attended a Microsoft marketing presentation. Yes, it was a cursory demo - that much is inevitable when you get a first glance at Whidbey, Yukon, Avalon, Indigo and Longhorn, all within one afternoon.
What impressed me most (probably because I had not seen it before) was the forthcoming Visual Studio Team System. It is scheduled for about six months after Visual Studio 2005 (Whidbey), which means it should ship in about a year. I think Team System will substantially change the way we develop on the Microsoft platform. Yes, we have done component modeling, automated builds, static code analysis, profiling, unit and load testing and so on before, but that always required N different tools from N different vendors. This is the first time all of it gets integrated into one big suite. Visual Studio Team System licenses are costly, but after all, it is mainly aimed at enterprise application projects.
Talking about Microsoft marketing: admittedly, Microsoft has a notorious history of creating "Fear, Uncertainty and Doubt". Promising impossible shipping dates for products and product features was one common strategy, next to plain ruthless business tactics. IBM experienced this with OS/2, as did 3Com with LAN Manager. Fifteen years have gone by since those days, and Microsoft is still aggressive, but it has also grown up. They have stayed at the top of the software industry - many say thanks to the fact that their former CEO was the only real nerd among industry leaders, while his competitors were led by MBAs who just didn't understand the software business.
Microsoft has also pioneered the concept of "good enough" software - which is just another term for finding the optimal economic balance between code purity and practicality (there is nothing wrong with "good enough" software; it actually turned out to be the most successful approach for many shrink-wrapped products). In my experience, Microsoft has improved a lot on quality issues (see also their Trustworthy Computing campaign). Lately I have met several Microsoft consultants, and all of them were top-notch engineers. When you think about it, this does not come as a surprise: Microsoft hires the smartest of the smart. Today they are in a position to do so (50,000 job applications a month means they can invite the top 5% for interviews and employ the top 1%). But even in the old days, Bill Gates always engaged triple-A developers. B-people are scared of A-people, so when put in charge, they tend to hire more B- or even C-people, dragging down the workforce's qualification level.
Microsoft bashing is a common hobby. Some criticize their business behaviour (even the Department of Justice does from time to time) - one may agree or disagree with that. But I am referring to unreflective criticism, the kind that has the nature of a religious war (e.g. "Windows sucks, Linux rules"). It's funny that this often comes from the people least qualified to judge. They are very rarely triple-A people, and they just don't play in the same league as those who develop the next version of Windows or the next Microsoft development platform up in Redmond (well, OK, there is always an exception to the rule: programming gods like Linus Torvalds, Bill Joy or James Gosling are allowed to complain about Microsoft ;-)). Some hacked away their college years on university systems, hardly ever worked on a major real-life software project, and instead preferred to build up their little fiefdoms on archaic systems that no one else really cared about.
Now, badmouthing Microsoft may make the averagely talented developer look cool, at least in front of those who don't know any better. Here is my advice for all unsolicited Microsoft bashers: please grow up. Welcome to the real world - in a professional corporate environment, no one wants to hear your religious rants. Microsoft is here, and it is here to stay; you'd better get used to it.
It is interesting that our Microsoft consultants knew the strengths and weaknesses of their products very well. For example, they never had a problem expressing their respect for cool J2EE features or the like, and they pointed out the areas where Microsoft still has to improve.
I have worked on Unix, Java and Microsoft platforms, and I appreciate all of them. I have the highest respect for their creators. But I am tired of B- and C-people who seriously think they are in any position to fire unsolicited flames at the efforts of really talented folks at the world's most successful software company, just for the sake of boosting their own crippled egos.
Sunday, December 12, 2004
Plan Your Architecture Before Choosing Your Technology
It should be obvious to every software project manager, but unfortunately it doesn't always seem to be: system architecture design PRECEDES technology decisions. Premature, ill-fated technology choices can bring whole projects down. Or, as Hank Rainwater puts it in his book "Herding Cats: A Primer for Programmers Who Lead Programmers":
"The magic bullet or golden hammer (whatever you want to call it) technology doesn't solve business problems, people do. Sure, you employ technology to implement a solution, but you are wasting time if you think buying the latest addon to your development environment is going to increase productivity."
[...]
"I encourage you to determine your architectural needs and plan a system before you choose a technology of implementation. You'll just have to do it all over again if the new whiz-bang tool doesn't pan out. You've heard it said many times: If you don't have time to do the job right, when will you have time to do it over again?"
"The magic bullet or golden hammer (whatever you want to call it) technology doesn't solve business problems, people do. Sure, you employ technology to implement a solution, but you are wasting time if you think buying the latest addon to your development environment is going to increase productivity."
[...]
"I encourage you to determine your architectural needs and plan a system before you choose a technology of implementation. You'll just have to do it all over again if the new whiz-bang tool doesn't pan out. You've heard it said many times: If you don't have time to do the job right, when will you have time to do it over again?"
Wednesday, December 08, 2004
Apache AXIS, Proxies And SSL
Some time ago I ported a subsystem from a proprietary XML-over-HTTP request/response format to webservices. The webservice client was written in Java. Since we were using secure sockets (including our own local keystore holding a client certificate), we ran into an issue with the Java Secure Socket Extension's default behaviour when tunnelling HTTP through a proxy: JSSE's default SSLSocketFactory expects an "HTTP/1.0 200 OK" response to its CONNECT request (this is hardwired!), but many proxies reply with "HTTP/1.1 200 OK" or "HTTP/1.1 200 Connection established". More on this issue on JavaWorld.
So we simply implemented our own SSLTunnelSocketFactory, which is not as restrictive. One can attach it either globally by invoking HttpsURLConnection.setDefaultSSLSocketFactory(), or on a per-connection basis via HttpsURLConnection.setSSLSocketFactory().
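For illustration, here is a minimal sketch of what such a lenient factory can look like (this is not our production code; the essential parts are the manual CONNECT handshake and the status-line check that accepts both HTTP/1.0 and HTTP/1.1 replies - everything else just delegates to the default JSSE factory):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.ssl.SSLSocketFactory;

public class SSLTunnelSocketFactory extends SSLSocketFactory {

    private final SSLSocketFactory delegate =
            (SSLSocketFactory) SSLSocketFactory.getDefault();
    private final String proxyHost;
    private final int proxyPort;

    public SSLTunnelSocketFactory(String proxyHost, int proxyPort) {
        this.proxyHost = proxyHost;
        this.proxyPort = proxyPort;
    }

    public Socket createSocket(String host, int port) throws IOException {
        // Open a plain socket to the proxy and issue the CONNECT request ourselves.
        Socket tunnel = new Socket(proxyHost, proxyPort);
        doTunnelHandshake(tunnel, host, port);
        // Layer SSL on top of the established tunnel.
        return delegate.createSocket(tunnel, host, port, true);
    }

    private void doTunnelHandshake(Socket tunnel, String host, int port)
            throws IOException {
        OutputStream out = tunnel.getOutputStream();
        out.write(("CONNECT " + host + ":" + port + " HTTP/1.0\r\n\r\n")
                .getBytes("US-ASCII"));
        out.flush();

        // The proxy sends nothing beyond its header block until we start the TLS
        // handshake, so reading the reply line-wise is fine for this sketch.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(tunnel.getInputStream(), "US-ASCII"));
        String status = in.readLine();
        // Accept HTTP/1.0 as well as HTTP/1.1 replies with any 200 reason phrase.
        if (status == null || !status.matches("HTTP/1\\.[01] 200.*")) {
            throw new IOException("Unable to tunnel through proxy, response: " + status);
        }
        // Skip the remaining response headers up to the empty line.
        String line;
        while ((line = in.readLine()) != null && line.length() > 0) {
            // ignore
        }
    }

    // Boilerplate: the remaining factory methods delegate to the default implementation.
    public String[] getDefaultCipherSuites() { return delegate.getDefaultCipherSuites(); }
    public String[] getSupportedCipherSuites() { return delegate.getSupportedCipherSuites(); }
    public Socket createSocket(Socket s, String host, int port, boolean autoClose)
            throws IOException { return delegate.createSocket(s, host, port, autoClose); }
    public Socket createSocket(String host, int port, InetAddress localAddr, int localPort)
            throws IOException { return createSocket(host, port); }
    public Socket createSocket(InetAddress host, int port)
            throws IOException { return createSocket(host.getHostName(), port); }
    public Socket createSocket(InetAddress host, int port, InetAddress localAddr, int localPort)
            throws IOException { return createSocket(host.getHostName(), port); }
}
```

It can then be installed globally or per connection as mentioned above, e.g. HttpsURLConnection.setDefaultSSLSocketFactory(new SSLTunnelSocketFactory(proxyHost, proxyPort)).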
This worked just fine for HttpsURLConnections. But Apache Axis (1.1) is different: it comes with its own JSSESocketFactory implementation, which of course again does not accept the proxy's HTTP/1.1 response - and it ignores the fact that we had already installed our own SSLTunnelSocketFactory. I was about to patch Axis and roll out our own build when I came across a forum post mentioning that Axis accepts other socket factories as long as they provide a public SocketFactory(Hashtable attributes) constructor. Curiously, this constructor is never actually invoked - it just needs to be there. It works like a charm now.
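A minimal illustration of that trick, as I understand it (the class name is made up, and how the class gets registered with Axis - typically via an Axis property pointing at it - is deliberately not shown here, since those details should be checked against the Axis 1.1 documentation):

```java
import java.util.Hashtable;

// Hypothetical subclass of the SSLTunnelSocketFactory sketched above.
public class ProxyTolerantAxisSocketFactory extends SSLTunnelSocketFactory {

    // Apparently never invoked by Axis 1.1 - its mere presence is what makes Axis
    // accept the class as a pluggable socket factory (see the forum post mentioned above).
    public ProxyTolerantAxisSocketFactory(Hashtable attributes) {
        // Proxy coordinates taken from the standard system properties for this sketch;
        // how the attributes table would be used is left open, since it is never called.
        super(System.getProperty("https.proxyHost"),
              Integer.getInteger("https.proxyPort", 8080).intValue());
    }
}
```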
Do you remember the last time you saw one of those "Three Mouseclicks To Create Your Webservice Client On (VS.NET | IBM WSAD)" presentations? Real life just ain't that easy.
Tuesday, December 07, 2004
On Software Development Methodologies
Tamir Nitzan on Joel on Software:
Lastly there's MSF. The author's [annotation: Joel Spolsky's] complaint about methodologies is that they essentially transform people into compliance monkeys. "our system isn't working" -- "but we signed all the phase exits!". Intuitively, there is SOME truth in that. Any methodology that aims to promote consistency essentially has to cater to a lowest common denominator. The concept of a "repeatable process" implies that while all people are not the same, they can all produce the same way, and should all be monitored similarly.
For instance, in software development, we like to have people unit-test their code. However, a good, experienced developer is about 100 times less likely to write bugs that will be uncovered during unit tests than a beginner. It is therefore practically useless for the former to write these... but most methodologies would enforce that he has to, or else you don't pass some phase. At that point, he's spending say 30% of his time on something essentially useless, which demotivates him. Since he isn't motivated to develop aggressively, he'll start giving large estimates, then not doing much, and perform his 9-5 duties to the letter. Project in crisis? Well, I did my unit tests. The rough translation of his sentence is: "methodologies encourage rock stars to become compliance monkeys, and I need everyone on my team to be a rock star".
Sunday, December 05, 2004
The Triumph Of Belief Systems Over Engineering
From www.zeitgeist.com (and nothing has changed ever since):
| | In Computer Science there's... | While in Computer Scientology it's... |
|---|---|---|
| 00000 | John von Neumann | L. Ron Hubbard |
| 00001 | Communications of the ACM | InformationWeek |
| 00010 | SMTP/MIME | Notes "Mail" |
| 00011 | SNMP | "E-meters" |
| 00100 | "Two Phase Commit" | "Automatic Data Replication" |
| 00101 | TCP/IP | IPX |
| 00110 | The Internet | Compu$erve (or, AOL) |
| 00111 | Usenix/LISA Conference | Novell World |
| 01000 | SecurID/SKey/SecureNetKey | RLA/ARA |
| 01001 | Distributed Systems | Windows95 |
| 01010 | The World Wide Web | IBM/Lotus Notes |
| 01011 | Object Oriented Programming | Visual Basic |
| 01100 | Java | ActiveX |
| 01101 | Linux/NetBSD/FreeBSD | Windows/NT Server |
| 01110 | ACM TOPLAS | "Secrets of the Visual Basic Masters" |
| 01111 | GNU Public License | Patent Lawyers |
| 10000 | Lead Developers | "Empowered Managers" |