It is hard to believe, but Sun Microsystems released Java 1.0 almost 20 years ago, on January 23rd, 1996. I was an early adopter because I was intrigued by the "write once, run anywhere" promise. At the time, developing for the Mac did not look like a viable career option, and yet I did not like the idea of having to switch platforms. As a result, Java seemed like a great option.
Java 1.0 couldn’t do much beyond producing animated content for the browser, but it was easy to learn, mostly because it had so few libraries available at launch. In fact, this was part of the excitement: so many basic building blocks had to be created before other developers could build more powerful applications. As an example, during Java’s early days I wrote a GUI toolkit for applets (Swing didn’t exist and AWT sucked), inspired by the classic Mac OS toolkit. I called it MAE for Java. It was a lot of fun.
Over the next few years, Java grew up. In 1997, version 1.1 added JDBC and the servlet standard, which paved the way for the application server era. I discovered WebLogic at the second ever JavaOne event in San Francisco, and even though the product had limited capabilities at the time, it was clear to me that the concept had a lot of potential. Java was quickly becoming a solid platform and a serious contender in the Enterprise world. Over the following years, the J2EE spec (now simply known as JEE) continued to mature in order to address an increasingly large array of IT requirements (SOA, Web development, encryption, MQ integration, O/R mapping, etc.). For those of us who got on the bandwagon early, adopting these technologies was easy. We just had to learn a couple of new APIs each year, a pace which, with hindsight, now seems quite reasonable. Everything was great. So great, in fact, that I credit Java for developing a whole generation of IT Architects. I am of course talking about senior professionals who usually have somewhere between 15 and 20 years of experience, not the kids just hired out of school by consulting firms and labeled “Architects” to justify higher hourly rates.
So, how can someone become an architect today? It all starts by learning the right programming language. Some languages, like Visual Basic (let’s use a dead language as an example to avoid offending anyone), are great for quickly building specialised solutions, but won’t help you with your career. I for one have never met a CTO or CIO who got their job after a successful and gratifying career as a Visual Basic programmer. On the other hand, Java was conceived from the beginning as a general-purpose language, designed to build any kind of application. Sun Microsystems, which was on a mission to conquer the world, wanted their language to be used for everything, from embedded systems to large distributed enterprise applications. To achieve that goal, they enlisted most of the IT industry leaders (IBM, SAP, Oracle, etc.) to help them provide Java developers with a large selection of rich, stable and supported APIs, as well as solid developer tools like Eclipse. The results achieved by this broad industry alliance have simply been amazing. Twenty years later, no other computer language comes even close to the level of versatility Java offers today. Engineers who grew up with Java were progressively exposed to a large number of technologies, which allowed them in turn to grow their own careers and eventually become Architects or CTOs.
Despite Java’s undeniable success, something weird started to happen somewhere between the releases of Java EE 5 (2006) and Java EE 6 (2009). People started to label Java as “heavy”, “complex” and “hard to learn”. Sure, the fact that Oracle bought Sun in 2010, adding fear and uncertainty to the future of the platform, did not help, but this trend started well before the acquisition. Learning Java was becoming increasingly hard for beginners. In my opinion, this doesn’t speak ill of Java, but it does raise some serious questions about how we should teach complex platforms to beginners, an issue we definitely haven’t solved yet. That said, perception is reality, and interest in Java started to dwindle, despite the success of Android.
Over the last few years, countless new programming languages have appeared and are now fighting for our attention. Some are great, others not so much. The problem is that, in order to become viable alternatives to Java, especially in the enterprise, these languages will need to mature. Even if the language itself may not have to evolve significantly, the platform will need to grow. In order to solve complex problems, new APIs will have to be built, and over time these platforms will inevitably become as complex as Java is now.
I am not saying that we shouldn’t try to replace Java with a better alternative just because complexity will inevitably creep into any successful development platform; on the contrary. A better programming language with a strong API library could be a significant boon for developers, and I absolutely believe that a better programming language can make us more productive. However, adopting a better language does not make complex projects significantly simpler. You may be able to use fewer lines of code to achieve your goal, avoid potential errors or even simplify the development of multithreaded code, but in the end, hard problems remain hard to solve and require experienced professionals who can design complex systems. A better language is great, but it is pretty much useless if it does not have the APIs enterprise developers require.
On the desktop application front, XML seemed to be the king of the hill, with all major office suites quickly adopting XML, led by MS Office. Interoperability and readability were thought to easily trump other considerations like size and performance, given that desktop PCs lacked neither storage space nor processing power. That started to change with iWork 13, the newest release of Apple’s productivity suite. The reason Apple adopted a new proprietary binary format for its flagship product was that it was better suited for use on mobile devices such as the iPad and the iPhone, where space and processing power are indeed a constraint. Since portability among different devices was a key requirement, XML was sacrificed on the altar of mobile adequacy. It is hard to blame Apple for their choice when you look at the numbers they presented. Files now open much faster on the iPad, and that is a key factor for customer satisfaction. Microsoft may not shift away from XML to a binary format immediately, because many custom-built applications rely on it, but they must have taken notice.
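To make the size argument concrete, here is a minimal sketch (in Java, and in no way Apple's actual format) that encodes the same row of numeric cells twice: once as self-describing XML markup, and once as a simple binary record of a count followed by raw doubles. The tag and attribute names are invented for illustration.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class FormatSizeDemo {

    // XML encoding: self-describing and human-readable, but verbose,
    // since every value carries its own markup.
    static byte[] encodeXml(double[] cells) {
        StringBuilder xml = new StringBuilder("<row>");
        for (double c : cells) {
            xml.append("<cell type=\"number\">").append(c).append("</cell>");
        }
        return xml.append("</row>").toString().getBytes(StandardCharsets.UTF_8);
    }

    // Binary encoding: a cell count followed by raw IEEE-754 doubles.
    // Compact and cheap to parse, but opaque without the spec.
    static byte[] encodeBinary(double[] cells) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(cells.length);
        for (double c : cells) out.writeDouble(c);
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        double[] cells = {3.14, 2.71, 1.41};
        System.out.println("XML:    " + encodeXml(cells).length + " bytes");
        System.out.println("Binary: " + encodeBinary(cells).length + " bytes");
    }
}
```

Even in this toy case the XML version is several times larger, and the gap only widens as documents grow, which is exactly the kind of overhead that matters on a constrained mobile device.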
That leaves a small space for XML in which to grow, mainly in the integration space. It is hard to envision an Enterprise Service Bus (ESB) that is not based on processing and dispatching XML documents. In this context, transaction management is still important, which makes SOAP-based services much better suited for the job than alternatives such as REST services. That said, there is pressure in this space, too, to avoid XML. That means that XML’s place in IT will be reduced to the large enterprise sector, the place where it was born. While XML may have seemed ubiquitous in the beginning, its disadvantages ultimately condemned it over the long run.
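The kind of XML processing and dispatching an ESB performs can be sketched in a few lines of standard Java SE: parse the incoming document, inspect it with XPath, and route it accordingly (content-based routing). The message shape and queue names here are hypothetical, chosen only for illustration.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class ContentRouter {

    // Route an XML message to a destination based on its content,
    // the way an ESB content-based router would.
    public static String route(String message) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        message.getBytes(StandardCharsets.UTF_8)));
        // Inspect a document attribute with XPath to pick a destination.
        String type = XPathFactory.newInstance().newXPath()
                .evaluate("/order/@type", doc);
        return "rush".equals(type) ? "priorityQueue" : "standardQueue";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(route("<order type=\"rush\"><item>42</item></order>"));
    }
}
```

Because the routing decision is driven by the document itself, the bus can dispatch messages it has never seen before, which is one reason self-describing XML took hold in the integration space in the first place.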
I must say that I am quite disappointed to see XML fail in so many spaces and applications. I still believe in its many virtues and feel that, in most cases, its disadvantages are overblown when the architecture of the application is properly designed, with NFRs such as performance and security planned for from the beginning. That said, trends and public perception are hard to fight, especially when some of the concerns are absolutely based on facts. It is clear that the use of XML in any application is no longer a done deal. There will be discussions as to which alternative is better for a particular use case. Ultimately, Enterprise Architects will have to make a decision, and that is good, because XML has become what it should always have been: just another tool on their belt.