A couple of months ago I returned to my parents' home in Madrid, Spain, and was able to find some of the early code I wrote for the Apple II. I wasn't very confident my 5 1/4” floppy disks could have survived 30 years in storage, but I decided to send them to my friend Antoine Vignau (from Brutal Deluxe Software) anyway. Some of the disks were damaged, but I had enough backup copies that eventually both Teacher's Wizard and G.A.P.E. could be fully recovered. I was quite amazed to be able to load these products I created over 30 years ago on my Mac and run them in an emulator.
G.A.P.E. (Global Applesoft Program Editor) was an Applesoft editor much like Call A.P.P.L.E.'s G.P.L.E., based on an editor I used on a PR1ME minicomputer when living in Geneva in the early '80s. I submitted this program in 1985 to a contest organized by Philips in Europe called the Holland Prize (now the European Union Contest for Young Scientists). Although I didn't win, it was a great experience and G.A.P.E. became my first “professional quality” software title.
Teacher's Wizard was a tool for teachers that allowed them to easily create courseware. It was quite sophisticated for the time because it could be used with a mouse and incorporated many of the same concepts that would later be made popular by HyperCard. This program was originally developed for Edelvives, a Spanish book publisher that worked closely with many schools. I later sold the rights for the rest of the world to Britannica Software.
Both programs can now be freely downloaded.
A couple of months ago, Antoine Vignau helped me recover the contents of my old HD 20SC hard drive. The disk was in very bad shape but he was still able to image it and most of the contents could be recovered. What really surprised me was that I was able to recover two programs I wrote in the late eighties.
Among the many interesting things that I found on that disk was an old NDA (New Desk Accessory) that I wrote back in 1988. If you have been using your IIGS with a French keyboard and hoped for better support for accented characters, AZERTY may help you.
The other product I was able to rescue was Jigsaw Deluxe, an improved version of my first Apple IIGS game, Jigsaw! This new version adds several new features that make it more fun.
Download and enjoy!
A couple of weeks ago I was interviewed by Mike Maginnis and Quinn Dunki from the Open Apple podcast, a monthly show about the Apple II. I had a lot of fun sharing some of the stories behind the development of SoundSmith and some of my other Apple II titles. I realize that not many people are interested in vintage computing, but if you are my age and had the pleasure to enjoy the early days of personal computing, you may be interested in listening.
Here is the link to the episode.
You may also be interested in subscribing and listening to older episodes. I particularly enjoyed the ones with Bill Budge (of Raster Blaster fame) and Mike Westerfield (The Byte Works), but there are many others also worth listening to.
It is hard to believe, but Sun Microsystems released Java 1.0 almost 20 years ago, on January 23rd, 1996. I was an early adopter because I was intrigued by the “write once, run anywhere” promise. At the time, developing for the Mac did not look like a viable career option, and yet I did not like the idea of having to switch platforms. As a result, Java seemed a great option.
Java 1.0 couldn't do much beyond producing animated content for the browser, but it was easy to learn, mostly because it had so few libraries available at launch. In fact, this was part of the excitement, as so many basic building blocks had to be created in order to allow other developers to build more powerful applications. As an example, during Java's early days I wrote a GUI for applets (Swing didn't exist and AWT sucked), inspired by the classic Mac OS toolkit. I called it MAE for Java. It was a lot of fun.
Over the next few years, Java grew up. In 1997, version 1.1 added JDBC and the servlet standard, which paved the way for the application server era. I discovered WebLogic at the second ever JavaOne event in San Francisco, and even though the product had limited capabilities at the time, it was clear to me that the concept had a lot of potential. Java was quickly becoming a solid platform and a serious contender in the Enterprise world. Over the following years, the J2EE spec (now simply known as JEE) continued to mature in order to address an increasingly large array of IT requirements (SOA, Web development, encryption, MQ integration, O/R mapping, etc.). For those of us who got on the bandwagon early, adopting these technologies was easy. We just had to learn a couple of new APIs each year, a pace which, with hindsight, now seems quite reasonable. Everything was great. So great in fact that I credit Java for developing a whole generation of IT Architects. I am of course talking about senior professionals who usually have somewhere between 15 and 20 years of experience, not the kids just hired out of school by consulting firms and labeled “Architects” to justify higher hourly rates.
So, how can someone become an architect today? It all starts by learning the right programming language. Some languages, like Visual Basic (let's use a dead language as an example to avoid offending anyone), are great for quickly building specialized solutions, but won't help you with your career. I for one have never met a CTO or CIO who got his/her job after a successful and gratifying career as a Visual Basic programmer. Java, on the other hand, was designed from the beginning as a general-purpose language, capable of building any kind of application. Sun Microsystems, which was on a mission to conquer the world, wanted their language to be used for everything, from embedded systems to large distributed enterprise applications. To achieve that goal, they enlisted most of the IT industry leaders (IBM, SAP, Oracle, etc.) to help them provide Java developers with a large selection of rich, stable and supported APIs as well as solid developer tools like Eclipse. The results achieved by this broad industry alliance have simply been amazing. Twenty years later, no other computer language comes even close to the level of versatility Java offers today. Engineers who grew up with Java were progressively exposed to a large number of technologies, which allowed them in turn to grow their own careers and eventually become Architects or CTOs.
Despite Java's undeniable success, something weird started to happen somewhere between the releases of J2EE 5 (2006) and JEE 6 (2009). People started to label Java as “heavy”, “complex” and “hard to learn”. Sure, the fact that Oracle bought Sun in 2010, adding fear and uncertainty to the future of the platform, did not help, but this trend started well before the acquisition. Learning Java was becoming increasingly hard for beginners. In my opinion, this doesn't speak ill of Java, but it does raise some serious questions about how we should teach complex platforms to beginners, an issue we definitely haven't solved yet. That said, perception is reality, and interest in Java started to dwindle, despite the success of Android.
Over the last few years, countless new programming languages have appeared and are now fighting for our attention. Some are great, others not so much. The problem is that, in order to become viable alternatives to Java, especially in the enterprise, these languages will need to mature. Even if the language itself may not have to evolve significantly, the platform will need to grow. In order to solve complex problems, new APIs will have to be built, and over time these platforms will inevitably become as complex as Java is now.
I am not saying that we shouldn't try to replace Java with a better alternative just because complexity will inevitably creep into any successful development platform; on the contrary, a better programming language with a strong API library could be a significant boon for developers. I absolutely believe that a better programming language can make us more productive. However, adopting a better language does not make complex projects significantly simpler. You may be able to use fewer lines of code to achieve your goal, avoid potential errors or even simplify the development of multithreaded code, but in the end, hard problems remain hard to solve and require experienced professionals who can design complex systems. A better language is great, but it is pretty much useless if it does not have the APIs enterprise developers require.
In an interview with the Information Security Media Group publication, White House cybersecurity coordinator Michael Daniel admits to having no practical experience with the subject matter. Daniel claims that “being too down in the weeds at the technical level could actually be a little bit of a distraction” to his job of advising the president about ongoing and emergent information security issues.
The White House appointed Daniel to the position in May 2012; he had previously served as the intelligence branch chief in the White House Office of Management and Budget. He believes that his lack of practical experience in the field is offset by master's degrees in national resource planning and public policy. He also credits his previous government experience for his success in the position, augmented by his martial arts experience.
As the Electronista article states, Daniel isn't responsible for the technical details of a fix or solution to a country-wide issue. Rather, his job is to assess the situation, report to the president, and bring other agencies into the fold and “on the same page” about an issue. Jim Lewis, a senior fellow at the think tank Center for Strategic and International Studies, argues that the lack of experience doesn't hinder Daniel in the position, claiming that “Computer scientists were in charge and they did a terrible job, being lost in the weeds and largely clueless about policy. You need someone with a strategic point of view and policy skill to make progress.”
Every time I read something like this, I get extremely upset. This theory mostly assumes that there are only two types of people in any organization: the leaders, who can handle any kind of situation, and the specialists, whose only responsibility is to execute the master plan. This is great for many executives, because unless they are found to be personally responsible for a major screwup, it shields them from any accountability. If something goes wrong, it is never because the plan was flawed in the first place; it is due to poor execution, which can almost always be blamed on managers who are lower on the organizational chart. The problem, of course, is that this is simply not true. Most issues in a company, especially in high tech, can be directly traced to the lack of a clear vision that can be communicated to the employees for proper execution. Execs who do not understand their product or market in detail are unable to produce a winning growth strategy, it is that simple. In this context, former GE CEO Jack Welch is often mentioned as an example of a leader who didn't need to be an expert in washing machines to turn around a very complex, diversified company. However, there are few Jack Welches in the world, and it is easy to find many examples of successful leaders who were experts in their markets, especially if we only consider fast-growing markets, like cyber security. My personal opinion is that Jack Welch, who undeniably achieved great success at GE, is now used as an example by mediocre executives to try to justify why not knowing anything about their respective fields is not a problem, and this is simply wrong.
MBA programs from prestigious universities are in large part to blame for propagating this idea that lack of experience is not a problem. Business professors usually tell their students from the beginning that they are destined for greatness and that they will learn how to make decisions by studying the experience of great company leaders. However, unlike what many execs seem to believe, leadership is not about making decisions by choosing one of the options presented to you by your team. It is about setting a direction and executing on a plan that you have designed. That requires both experience and guts. I don't know about Daniel's guts, but he clearly lacks experience in cyber security, a skill that is extremely hard to acquire, and that will significantly hinder any attempts he makes to define a “strategic point of view”. Therefore, from my point of view, he is clearly a poor choice for the job. That doesn't mean that his government experience is not important, it clearly is, but he should at the very least have recognized this shortcoming and explained how he planned to address it, instead of simply dismissing his critics. President Obama is accountable for having chosen Mr. Daniel for this position, but Mr. Daniel shares part of this responsibility as well. Leaders, to be successful, need to have zero tolerance for mediocrity, and that includes their own. Those who accept a leadership position need to be convinced that they are a good fit for the job and that they will be able to deliver results. Integrity begins with an honest introspection exercise to find out if you are the right choice for the position.
This has been quite an exciting week. Apple has introduced over 4,000 new APIs for both OS X and iOS, and most developers have been raving about how the “new” Apple led by Tim Cook has changed. They claim that the company now listens more to its customers, and use the fact that iOS will support custom keyboards, allow for the use of the fingerprint reader and offer inter-application communication mechanisms as proof that things have changed.
Frankly, I am not convinced that much has fundamentally changed. When Apple launched their Rip, Mix and Burn campaign in 2001, they were clearly listening to what customers wanted at the time. In order to do it properly, they had to plan for their vision, which included buying the application (SoundJam MP) that would eventually become iTunes and adding the capability to easily burn CDs. It took some time, but they eventually released the product they wanted.
What happened this week was similar. Apple may have wanted to offer the possibility of installing alternative keyboards for a while, but it took some time to deliver the capability in a secure form. What is so dangerous about alternative keyboards? Well, imagine that the keyboard logs all your keystrokes and sends them to some server in Ukraine. All your passwords and credit card numbers would be gone. So, what is needed to make sure that supporting alternate keyboards is safe? Well, one way to do that is to avoid using the keyboard to enter credentials or credit card numbers in the first place. That is something that Apple has solved by releasing iCloud Keychain in iOS 7 and by opening up the use of the fingerprint reader in iOS 8. The other thing to do is to forbid internet access for the keyboard app if the user chooses to do so. That was also announced by Apple as part of iOS 8. It is likely that by the time Apple releases iOS 8, all new iOS devices will include a fingerprint reader and, as a result, should be well protected against malicious keyboard apps. As you can see, opening iOS to support alternative keyboards is not something totally new that came out of nowhere; it is the result of careful planning and making sure that everything is in place before launching a new feature.
Swift, the new programming language launched by Apple at WWDC, is another interesting example. This new language has been in the works for about four years now. It is a modern language with a lot of cool new features, but I would hardly call it revolutionary. What is interesting about Swift is that, as far as I know, it is the first language designed from the ground up to make the use of an existing library much easier. Normally, a language is designed to solve a particular problem that other existing languages cannot handle well (multi-tasking, security, etc.). However, Swift seems to be designed solely for the purpose of giving the Cocoa framework a new lease on life. By basing variable types on Cocoa objects (for example, strings are NSStrings) and hiding the complexity of handling structs, Swift makes it much easier to write code for Apple platforms without impacting the huge investment made by Apple and NeXT in Cocoa over the last 25 years (NeXTSTEP was launched in 1989). This makes a lot of sense, because it preserves Apple's biggest asset while giving us developers what we want. Swift is therefore, in that sense, evolutionary and not revolutionary. It is the result of a plan launched years ago with the adoption of the LLVM compiler and the launch of Objective-C 2.0, and if Apple is really planning on eventually moving their Macs away from Intel, applications written in Swift will make the transition transparent for application developers.
WWDC 2014 was a great event because it saw the fruition of many initiatives started by Apple years ago, not because Tim Cook just started to listen to their customers and developers but because Apple seems to be accelerating the delivery of features that result from a carefully crafted plan. The success of Apple depends on maintaining a clear long and medium term plan to deliver their vision, as they have done so far, and not on delivering a long list of short-sighted features.
Once again, I won’t be able to attend WWDC. I am very excited though that on Monday we will be able to see what Apple has in store for us for the next few years, because I believe that this event will be more about announcing the foundation of things to come than actual products we will be able to buy in June.
From a developer perspective, Xcode 6, iOS 8 and OS X 10.10 should include enough new functionality to keep us busy for the next few months. Support for larger iPhones will probably translate into a lot of work to prepare old apps for the official launch of the new devices. Something similar is to be expected for Mac developers, who will have to deal with a flatter overall design including updated controls. I certainly hope that the changes are more than skin deep, because while appearance is important and having a uniform look and feel across Apple devices can make the user's life much easier, when I use my Mac, it is all about what I can do with it; great looks come second.
What I would like to see announced at WWDC are improvements around iCloud, namely lower pricing and APIs for Windows, Linux and Android. Writing a cross-platform app that syncs data among devices is not very difficult; there are many scalable document-based data stores that can handle this task (Cloudant comes to mind). The problem is persuading customers to pay for the service. Apple, on the other hand, can do that much more effectively because they already have a large customer base that uses the free service or pays for iCloud once a year and gets a lot of value by using the service with not one but multiple apps. The value proposition is much better. Sure, there are competing services, like Dropbox, but I like the Apple option better because I can easily assume that all Apple customers have an account.
On the hardware front, I do not have many expectations. Apple has been unable to keep hardware leaks from happening in China in the past, and right now we haven't seen enough credible information to believe a product launch is imminent. If there are any announcements, it will be like last year's Mac Pro: a simple preview with a launch date, to generate pent-up demand.
I have no doubt that WWDC 2014 will all be about announcing the infrastructure for things to come, namely new services that will be available only to customers with modern hardware (fingerprint reader and the M7 processor as well as future devices) which will generate a need to upgrade old devices and leave the competition in the dust for a while. Apple has had several years to build the infrastructure and plan for this moment. On Monday we will finally understand what Apple has been working on. We may not understand the full reach of these announcements until Apple launches their new devices in the fall, but it will be an exciting event. I will be spending a lot of time on the treadmill next week, watching the WWDC session videos on my Apple TV.
Yesterday, after working on the project for over a month, I finally published my redesigned home page. It was long overdue: the previous design looked dated and was not built to support mobile devices. The new site sports a modern responsive design and I really hope you enjoy the enhanced navigation experience.
So, I started by looking for a tool that would allow me to keep design activities to a minimum and focus on content. After evaluating several options, I finally settled on RapidWeaver, a Mac application developed by Realmac Software. This may come as a surprise to many, as RapidWeaver by itself is quite an unremarkable piece of software. It would be quite easy to argue that at US$79.99, it is way overpriced for what it does.
For all of its limitations, though, RapidWeaver provides one great feature: extensibility through the use of plugins. One of these plugins, Stacks 2, achieves an incredible feat: it turns a mediocre product into an unmatched web design tool that is powerful, flexible and easy to use. Sure, you will probably have to invest another US$100 in plug-ins and stacks (take a look at the stacks developed by Joe Workman) to achieve whatever you want to do, but at that point you will be able to create almost any kind of complex web site with almost no effort.
I can’t really emphasize enough how pleased I am with this RapidWeaver based solution and I really recommend it to anyone who wants to develop a professional looking site without going through the hassle and cost of contracting a pro.
You cannot imagine how glad I was to see the controversy that has followed the indefinite suspension of Phil Robertson from his show, Duck Dynasty. For those who are not aware of what happened, A&E, the network that produces the show, decided to remove Phil after learning that he had given an interview to GQ magazine in which he made a number of statements against gay people and also said a couple of things that can be perceived as racist. This is standard procedure and we have seen this happen on multiple occasions. However, this time, something different happened. People started to rally behind Phil, criticizing the decision and defending his right to free speech. Let me be clear, I do not share any of his beliefs, but I am sick of living in a world where those who mean well try to stifle the freedom of speech of those who think differently.
Back when I was in high school, we had debates on a variety of topics. We discussed the role of prisons, the death penalty, abortion and many other controversial topics. I remember a girl, raised by a well-intentioned hippy family, who had a big heart and was wicked smart. We had nothing in common and clashed many times, but we had very interesting discussions and I was really interested in trying to understand her point of view and where she was coming from. It wasn't just about arguing and winning a stupid debate; it was about trying to understand each other and address the issues. The debate worked because everyone was free to speak their mind, without consequences.
In our modern world, freedom of speech is recognized in most countries, but there can be serious consequences if you say something that is not politically correct. Careers can be destroyed, companies shut down. This is the equivalent of a crime that cannot be tried in a criminal court but can still be punished in a civil one. It doesn't make any sense. Freedom of speech should mean exactly that: freedom to express what you believe, with no restrictions or consequences. Look, I am not naive, I understand why we try to muffle some opinions: we want to avoid going from hate speech to violence, and I understand this is a risk. That said, I do not believe that problems are solved by avoiding speaking about them. We need to hear what people have to say, listen actively and act decisively to solve the problems. Yes, some of those problems are very complex, but avoiding an issue will not make it go away; the pressure will keep mounting and eventually we will face an explosion.
Right now, pressure groups are effective because there are few targets they need to monitor: basically TV and radio networks, large advertisers, etc. I had hoped that the Internet would change that by increasing the number of content providers and making control much harder. However, I may have been too optimistic. Now that we know that the NSA is monitoring everything that is said on the Internet, we all feel compelled to share politically correct views if we want to avoid trouble.
Maybe it is just the troll in me speaking, but I really would like to live in a world where there is true freedom of speech.
On the desktop application front, XML seemed to be the king of the hill, with all major office suites quickly adopting XML, led by MS Office. Interoperability and readability were thought to easily trump other considerations like size and performance, given that desktop PCs lacked neither storage space nor processing power. That started to change with iWork '13, the newest release of Apple's productivity suite. The reason Apple adopted a new proprietary binary format for its flagship product was that it was better suited for use on mobile devices such as the iPad and the iPhone, where space and processing power are indeed a constraint. Since portability among different devices was a key requirement, XML was sacrificed on the altar of mobile adequacy. It is hard to blame Apple for their choice when you look at the numbers they presented. Files now open much faster on the iPad, and that is a key factor for customer satisfaction. Microsoft may not move away from XML to a binary format immediately, because many custom-built applications rely on it, but they must have taken notice.
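To make the size trade-off concrete, here is a small Python sketch. It is purely illustrative and has nothing to do with Apple's actual file format: it serializes the same hypothetical document record once as XML and once as a packed binary structure, so you can compare the two encodings directly.

```python
import struct
import xml.etree.ElementTree as ET

# A hypothetical document record, serialized two ways.
record = {"id": 42, "width": 612.0, "height": 792.0, "title": "Report"}

# XML: self-describing and human-readable, but verbose.
root = ET.Element("page", id=str(record["id"]),
                  width=str(record["width"]), height=str(record["height"]))
ET.SubElement(root, "title").text = record["title"]
xml_bytes = ET.tostring(root)

# Binary: a fixed-layout header packed with struct, then the raw title bytes.
# Layout (made up for this example): uint32 id, float64 width,
# float64 height, uint16 title length, followed by the UTF-8 title.
title = record["title"].encode("utf-8")
bin_bytes = struct.pack("<IddH", record["id"], record["width"],
                        record["height"], len(title)) + title

print(len(xml_bytes), len(bin_bytes))  # the binary record is less than half the XML size
```

Besides being smaller, the binary record can be decoded with a single `struct.unpack` call instead of a full parse, which is exactly the kind of win that matters on a memory- and CPU-constrained mobile device, at the cost of readability and schema flexibility.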
That leaves a small space for XML in which to grow, mainly in the integration space. It is hard to envision an Enterprise Service Bus (ESB) that is not based on processing and dispatching XML documents. In this context, transaction management is still important, which makes SOAP-based services much better suited for the job than alternatives such as REST services. That said, there is pressure in this space too to avoid XML. That means that XML's place in IT will be reduced to the large enterprise sector, the place where it was born. While XML may have seemed ubiquitous in the beginning, its disadvantages ultimately condemned it over the long run.
I must say that I am quite disappointed to see XML fail in so many spaces and applications. I still believe in its many virtues and feel that in most cases its disadvantages are overblown when the architecture of the application is properly designed, with NFRs such as performance and security planned for from the beginning. That said, trends and public perception are hard to fight, especially when some of the concerns are absolutely based on facts. It is clear that the use of XML in any application is no longer a given. There will be discussions as to which alternative is better for a particular use case. Ultimately, Enterprise Architects will have to make a decision, and that is good, because XML has become what it should always have been: just another tool in their belt.