Archive for the 'Technology' Category

 

Impromptu Rides

Feb 28, 2012 in Technology


Yesterday Cindy and I pushed out the new version of the Day Rides Center web site, now renamed to Impromptu Rides.

This site supports our cycling club’s program for scheduling ad-hoc bike rides. While the club (Rochester Bicycling Club) has a yearly formal schedule of rides, that schedule is put together with a workweek in mind: mostly weekend rides, plus weekday evening rides once daylight saving time starts. But many club members and other local cyclists have daytime hours free during the week. And so an initiative came about to help interested riders suggest, find and join rides outside the regular schedule. At first this was an email send-around effort. Then last winter Cindy asked if I could help make a web site to make this easier. My answer: “Sure!”

It was a great excuse to tinker with a few technologies I had wanted to play with and learn but didn’t have a “real world”-enough project to try them out on: GWT (Google Web Toolkit) and GAE (Google App Engine). GWT allows one to write the browser portion in Java: it compiles the Java source code to Javascript and takes care of the differences between the various browsers. GAE is Google’s cloud computing platform, and it hosts the backend of the site. GAE does for the server what GWT does for the browser: I can just write Java without having to worry much, or know much, about server-side infrastructure and Java EE at all.

We launched the first version last March and during the year added some features, fixed bugs and so on. We used this winter – not many people riding bikes – to redesign the layout and add a bunch of new features, and I took the opportunity to re-implement a good portion. The latter is the usual software engineering happenstance: I learned a lot about the two technologies during the year and so found better ways to do certain things, the bolting on of new features had made some parts a little bloated, and some of the new features were simply easier to build after some rewriting.

The backend also gained a new front end client this winter: the Club Rides iOS app now plugs in as well, showing both the regular ride schedule and the impromptu rides. The ease of doing this shows a benefit of both App Engine and GWT: it’s all just a REST API, so the front end and the back end are nicely decoupled.

This winter I focused on getting feature parity between Impromptu Rides and the Club Rides app. The Club Rides app grabs the RSS feed from the RBC web site to display recent club news, and the new version also uses a Google service to grab the weather forecast; that content is returned as XML too. Working with XML, and in general any content over an HTTP connection, is really easy to do in Objective-C. It has always amazed me how hard, comparatively, this is in Java: the built-in parsers are memory hungry and it takes a lot of code to get the content from the HTTP connection and then parse it. For Objective-C there’s a really nice, fast, small open source library to parse XML: TBXML. Delighted I was to discover that Julien Foltz ported it to Java!

Now, all that is needed to ingest the RSS feed is:

try {
    URL url = new URL("http://rbcbike.wordpress.com/feed/");
    TBXML doc = new TBXML(url);
    if (doc != null) {
        TBXMLElement root = doc.rootXMLElement();
        TBXMLElement channel = doc.childElement("channel", root);
        TBXMLElement element = doc.childElement("item", channel);
        ArrayList<HashMap<String, String>> result = new ArrayList<HashMap<String, String>>();
        while (element != null) {
            TBXMLElement temp = doc.childElement("title", element);
            […snip…]
            result.add(entry);
            element = doc.nextSibling("item", element);
        }
        return result;
    }
} catch (Exception e) {
    this.sendEmailReport("AdminServiceImpl:getClubNews", e.toString());
}

The browser’s security framework does not allow the opening of URL connections – GWT therefore doesn’t implement java.net.URL – so the above code runs on the server. The client makes a GWT RPC call to the server requesting the feed; the server grabs it, parses it and passes it back to the browser as an array of hashmaps.

Impromptu Rides tries to determine whether it’s being viewed on a computer, a tablet or a phone. In the case of a phone it displays a simpler version of the app: just the Find Rides portion. This involves interpreting the user.agent values that the browser reports. Messy stuff. The Android devices, or rather their manufacturers, could be a little nicer and more forthcoming about what category of device they are. In the end I chose to distinguish between “Safari” and “Mobile Safari”, which seems to draw the line between computers and tablets on one side and phones (and iPods) on the other – at least for iOS and Android devices; I don’t know how Blackberry or other non-Android devices present themselves. The Impromptu Rides site also knows about the regular RBC schedule, so together with the mobile device support this saves me from needing to do an Android version of the Club Rides app.
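As a sketch of where that line gets drawn: the user agent strings in the test below are real-world examples, but the class and method here are my own illustration, not the site’s actual code (in GWT the raw string would come from Window.Navigator.getUserAgent()).

```java
// Hypothetical sketch of the phone-vs-everything-else check.
public class DeviceSniff {

    // True when the user agent looks like a phone (or iPod touch).
    // Android phones report a literal "Mobile Safari" token; iPads,
    // Android tablets and desktop browsers report plain "Safari";
    // iPhones and iPods identify themselves by name.
    public static boolean isPhone(String userAgent) {
        if (userAgent == null) {
            return false;
        }
        return userAgent.contains("iPhone")
            || userAgent.contains("iPod")
            || userAgent.contains("Mobile Safari");
    }
}
```

Anything not classified as a phone gets the full layout; as noted above, Blackberry and other devices may well fall through the cracks of this heuristic.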

As you can see from the code snippet, the server-side code sends me an email when something is amiss. I quite like that: I can leave the application running by itself without needing to keep an eye on it, and when something unexpected happens it sends me a little email.

When I started playing with GWT last year, I had to smile. When Google first released GWT I was working at Sun. We, JavaSoft, were not amused. This was not Java. What Google did was Wrong and Bad, how dare they! Now, as a software developer I find GWT great. Google directly addressed a developer need and a niche in the available tools at that time: writing in a well-known high-level language, no need to learn Javascript, shielded from (most) browser-specific stuff, zero administration and no plug-ins required.

A quick summary of the new features:

– Elevation profiles for most (known) routes
– Weather forecast for the starting location
– Recent club news
– When scheduling a ride, include a link to MapMyRide, BikeToaster, etc. for Garmin Edge bike computers
– “Remember me” for ride leaders and admins
– Adjusted layout for iPhones and Android phones
– And an About page which explains what Impromptu Rides are actually about!

So, have a play with it and join us on some of our rides!

Boston – Cambridge

Oct 08, 2011 in Life, Technology


Photo gallery

Last weekend I was in Boston: an Azure bootcamp event at Microsoft’s R&D center in Cambridge. It took two attempts to get there: our flight Thursday afternoon got canceled and so instead we flew over early Friday morning.

I had been playing with Azure (Microsoft’s cloud computing platform) for a few weeks and this event was a good opportunity to boost that learning effort and hear from other developers what they’re doing with it. The trip proved itself worthwhile within the first half hour, with Bill Wilder simplifying the distinction between web roles and worker roles down to one sentence: “a web role is a worker role with IIS enabled”. That’s it. Many of the Azure books manage to fill many a confusing chapter trying to explain the purpose of each.

Some areas of Azure are very impressive: the behind-the-scenes replication of data, for example. Some areas need improvement: diagnostics, for one, and especially the pricing model, which is way too complex – it’s very hard to understand what your real costs are going to be. Some areas are intriguing, such as the pricing and limitations of SQL Azure versus Azure Table Services: Microsoft seems to want you to use table services where and when you can, and SQL Azure only when you really have to. I suspect their reasoning is somewhat similar to Google’s with GAE and BigTable: the ability to do optimizations behind the curtain.

Friday evening Mark and I had dinner at Villa Francesca, a very nice Italian seafood restaurant in the North End. Together with dinner with Rich the next evening at the Blue Room in Cambridge, it made me wonder whether it is time to move back to a real city. I like living in Rochester (or Webster, to be precise) but the culinary scene is not the same…

Originally Rich and I had planned to have dinner on Thursday; as I didn’t make it to Boston until Friday morning, we moved our dinner to Saturday evening. That evening I had also hoped to meet up with several of my fellow Sun alumni, but that didn’t come together. I had sent out an email invite a few days before, as I do during visits back to Amsterdam and California: proposing an evening and checking who’s available. That always works out in those two locations, but here it failed miserably: not one positive response, just two declines. I am quite disappointed about that. Still, in the end I had a great evening.

My flight Sunday wasn’t until 5pm so I had much of the day to be a tourist. From the hotel I first walked over to MIT’s campus near Cambridge Center. Lovely new architecture. From there back to the Charles River, across the Longfellow Bridge, and zigzagging through the little streets until I came to Boston Common. Then down State Street to the wharf, admiring the old stores converted into beautiful apartments. Then zigzagging through the North End and back past the Museum of Science to Cambridge and the hotel, with stops at Starbucks and Boston Beerworks as needed. The photographic proceeds of the hike are in the gallery.

At the airport I learned that the flight back to Rochester was delayed. I checked the FlightTrack app on my iPhone: this flight gets canceled 17% of the time. That’s high. Hmmm. US Airways kept pushing back the departure time at 20-minute intervals. I hate that: it locks you down at the gate. In the end we boarded two hours late. I much prefer that they tell me straightaway how long the delay will be. Then I can go do something: have a drink, have dinner, or just leisurely wander about. Back in Rochester it was much colder than in Boston, making digging out a sweater and rain jacket the first order of business.

Windows Azure – first impressions

Sep 12, 2011 in Technology

The last few weeks I’ve been exploring Windows Azure, Microsoft’s cloud computing platform.

This turned itself into a learning experience on several levels:
1. I haven’t done Windows software development in years
2. Getting to grips with MS’s development model vis-a-vis Java and iOS/Objective-C
3. Contrasting Azure against my experiences with Amazon Web Services and Google App Engine (GAE)

The second item has been intriguing in its own right. Windows developers – the way they develop software, the way they talk about it – do seem rather different from both Java and iOS developers. Some of that is inherent to the technological differences but part of it is culture. Certainly in Java, but also in iOS, a lot of software development involves pulling in open source software for various plumbing pieces (xml parsing, graphics manipulation, network abstraction, persistence abstraction, what have you). There’s a plethora of public projects to stand on the shoulders of: Ruby on Rails, Tomcat, Glassfish, Spring, Hibernate, KissXML, the list goes on. Many developers use different languages for specific pieces of the overall product: php, python or javascript on the front-end; JRuby, Groovy, Java (of course) on the other tiers. Then there are open source products such as Puppet to help you manage the (virtual) datacenter and the deployment of your product.

On the Windows side there doesn’t seem to be that same rich diversity and choice. I use the verb ‘seem’ on purpose: I may well be wrong and discover as much as I spend more time with this platform. But for now it seems that Windows developers are missing out on the cross-pollination and opportunities to learn from each other that I see in the Java environment and even in the equally closed (as compared to Windows) iOS development environment. Maybe that is a good blog entry of its own?

But back to Windows Azure itself and my experiences so far.

Most tutorials, books, blogs and sample code I found approach Azure from the perspective of a web site: serving up web pages, storing and retrieving the information that comes along with it. And many assume Windows (a PC, a phone, a server) on both sides of the virtual wire.

Regarding the latter, it is a fact of the internet that you don’t know (or should not know or assume) the particulars of what is connecting to you. One entertaining result of this Windows-centric view is that while much of Azure is exposed as REST services, most of the sample code just makes .NET method calls, thus obscuring the benefits of the platform-agnostic API that Azure does expose.
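To illustrate how little a non-Windows client needs to know: a plain HTTP GET is enough to talk to a REST-style endpoint, whichever platform hosts it. The sketch below is generic Java; the endpoint URL and query parameter names in the usage example are made up for illustration, and real Azure storage calls additionally require authentication headers.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Map;

public class RestClientSketch {

    // Build a URL with a percent-encoded query string.
    public static String buildUrl(String base, Map<String, String> params) throws Exception {
        StringBuilder sb = new StringBuilder(base);
        char separator = '?';
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(separator)
              .append(URLEncoder.encode(e.getKey(), "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            separator = '&';
        }
        return sb.toString();
    }

    // Perform the GET and return the response body. Nothing here is
    // specific to the client's platform: an Objective-C, Python or
    // .NET client can issue exactly the same request.
    public static String get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/xml");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line).append('\n');
        }
        in.close();
        return body.toString();
    }
}
```

A client would call something like `get(buildUrl("http://example.cloudapp.net/rides", params))` – the hostname and resource path here are hypothetical.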

Regarding the former, a web site is not the context I am looking into Azure for: instead of serving up web pages, I need to serve up services to which different kinds of mobile devices will connect.

Azure distinguishes between web roles and worker roles. A useful simplification is to see a web role (or the collection of web roles in your project) as the web server and the worker roles as the background processes performing the computation, data mining and so on. This distinction is still something I’m getting to grips with: it seems you can do in a worker role what you can do in a web role and vice versa. So what does this architecture really give you?

This brings up a broader point: design patterns, or best practices. In my project I wanted the mobile devices to communicate with my Azure project through a REST API. I know how to do this in Objective-C, and in Java. But how do you expose a REST API in C#/.NET, and how do you receive GET, POST and PUT messages Azure-side? Much reading on MSDN, StackOverflow.com and other developers’ blogs showed that there are many ways to do this: as a WCF service, or using ASP.NET, or ASP.NET MVC, or…
Similarly for the question of where to keep the project’s data, there are many answers: local storage, Azure table or blob services, Azure SQL, or…

I admit that in the first two weeks I had a hard time seeing the forest for the trees. Now, a month or so further into the journey, I have a much better feel for Azure’s application model, but it was not easy arriving there from outside the Windows world.

There are two areas, though, where I would really like to see improvements: deployment and diagnostics.

Currently my project consists of a fairly simple worker role that exposes several entry points via a REST API and maintains two tables. Still, a build & deploy from Visual Studio takes 10 minutes. That’s just an eternity in a develop-build-deploy-test cycle. In comparison, from Eclipse I build and deploy a much larger GWT+GAE project in just a minute or so. I am curious what the deployment time will be once the effort grows to the projected several worker roles, Azure AppFabric bus, certificate store and more.

Sometimes after some development my worker role won’t instantiate and Visual Studio ends up in an endless creating, starting, stopping cycle. Yes, I can retrieve information about why it is unsuccessful via diagnostics, but I had really expected Visual Studio to just get that for me: it knows the worker role failed to deploy, so it should also be able to get me the failure.

To get trace and other diagnostic information from my Azure project involves sprinkling trace calls throughout my code, configuring and setting up the diagnostics framework, transferring the collected data to storage, and then retrieving that data. In my experience so far this is an error-prone process where I sometimes seem to spend as much time getting the right data collected and accessed as I spend fixing the particular bug I’m using diagnostics for. Together with the aforementioned deployment situation this makes for a slow and rather tedious development/testing experience. I miss the end-to-end debugging I can do with GWT+GAE, and I miss the GAE dashboard where I can see in one glance the main issues for my app, its resource usage and its data store.

So far I like that Azure makes much use of standard web methodology – REST most notably – and that existing .NET applications should migrate quite easily to Azure, thus gaining cloudiness. But that last point I also see as a weakness: there is little reason for non-Windows deployments to migrate to Azure; GAE and AWS will feel much more natural.

I’d love to hear your experiences with Azure. I’ll follow up with future entries as the project progresses. Possibly the next installment quite soon as later this week I’m attending the Azure bootcamp in Cambridge, MA!

Club Rides 2011

Mar 08, 2011 in Cycling, Technology

The new version of my bike ride scheduling iPhone app, Club Rides, is up in the iTunes App Store.

It has native support for both iPhone and iPad. In addition to the new schedule for 2011 there are several other enhancements:
– Faster launch time
– Displays the club’s RSS feed for club news
– Share your favorite rides via email (facebook and twitter to come in an update)
– Send the rides you plan to do directly to your calendar
– Tap the ride leader’s phone number to call

To enable posting your rides to the calendar on your device, tap the Settings icon in the top right corner of the screen and select which calendar you want to use.

On iPhone, to show a map of a ride’s starting location, tap the starting location in the ride view.

Club Rides comes preloaded with Rochester Bicycling Club’s schedule. It supports other clubs, like Northern California’s Western Wheelers, as well, and it can support yours via the customization guide. If you’d like to make Club Rides work for your cycling or hiking club, I would be happy to assist you.

(this post is a little late – couldn’t log in for days – called in my host’s customer support and lunarpages came through with flying colors – thanks guys!)

These are some of my favorite things

Dec 28, 2010 in Technology

Since starting iPad application development I am noticing an interest I never really had before: an interest in how other developers designed and laid out their apps.

Sure, I’ve been impressed with other people’s apps before, but then, especially in the case of desktop apps, it was mainly a technical intrigue: the pixel manipulations in Photoshop, all the lovely fractal work in Bryce, and how on earth can Shazam figure out any song from a 10-second sample!?

But with the iPad I find I am very intrigued by how fellow developers use the screen real estate, landscape vs portrait, the various gestures and swipes, and the user interface controls. And I have a great time browsing through the iTunes App Store and collecting apps.

Here are a few of my favorites.


Cinetap is a front-end to Netflix. There’s a Netflix iPad app too but Cinetap is more visually pleasing to me. I like the bookshelf presentation. You can browse Netflix’s library, manage your queue and play movies (for which it launches the Netflix app).


Sobees’ My Friends is a similar concept: it’s a Facebook front-end. A favorite for two reasons. First of all, Facebook hasn’t yet done an iPad app themselves. And perhaps more interestingly, My Friends gives you an entirely different view upon the same information by using a magazine or newspaper approach.

The next two are proof that all apps must have a cloud feature.


Evernote is my favorite note-keeping app. I have it on my MacBook, my iPhone and the iPad. And they all talk to each other! Notes and updates made in one place show up in the other.


I like how Amazon.com made use of the differences between their Kindle device and the iPad in how your books are presented. But what I like most of all is that I can read a book during lunch at work on the iPad and then continue reading it on the Kindle at home in the evening. It will ask me “hey, we noticed you’re further along in the book, shall we go there?” Well, yes!


IMDB is a great example of making things look good both on an iPhone and on an iPad. The developer makes great use of the extra real estate on the iPad vs the iPhone. Beautiful layout. And yes, Once Upon A Time In The West is my favorite movie.


Pulse is a newsreader. Most newsreaders are very list-oriented and linear. Pulse shows that it needn’t be so, and I think their horizontal presentation works very well. It enables the user to swipe up and down to check different sources, and left and right to see more of the same source. It also shows that all blog entries should have an image!


Not entirely sure whether I will use Clinometer much, but it is gorgeous! It gives a whole new view upon the world of bubble-level tools! And it makes me want to do something with the accelerometer in my iPad!

Speaking of favorite apps and cloud functions: Shazam is the former but doesn’t have the latter. I have Shazam on my iPhone as well, and the tags I create and collect on one shall never meet the other’s. So sad.

But, if you were as puzzled as I was on how Shazam does its magic then the answer is here.

iOS – Objective-C snippets

Dec 17, 2010 in Technology

A quick collection of code snippets, or: the joy of navigating a sometimes quirky SDK.

Setting a pattern as the background of a UIView, a category method I added to UIViewController:

- (void)makePatternedBackground {
    UIImage *tile = [UIImage imageNamed:@"viewBackground-tile.png"];
    UIColor *pattern = [[UIColor alloc] initWithPatternImage:tile];
    [self.view setBackgroundColor:pattern];
    [pattern release];
}

The png image is a 16×16 pixel pattern created in Photoshop.

Rounded corners for images (or any UIView subclass, really). Don’t forget to import QuartzCore.h:

UIImageView *icon;
...
icon.layer.cornerRadius = 10;
icon.layer.masksToBounds = YES;

Improving code reuse in a universal app for iPhone and iPad. This one has been bothering me for a while. Often you’ll significantly change how you present certain information on an iPad, with its much larger real estate, compared to an iPhone; in that case you do your best MVC separation. Table views are easy: you take care of the differences in your UITableView subclass. But in some cases the views will be largely similar. How then to avoid duplicating everything in two nib files and two UIViewController classes? Make a superclass that does most of the work and contains all the IBOutlet properties that are the same between the two presentations. Subclass it for iPad and for iPhone, and make a nib file for each. The superclass has all the manipulation code, while the two subclasses only have to take care of the small differences in displaying the view on iPad vs iPhone.

Notifications are your friend! NSNotification and NSNotificationCenter really help with code reuse between iPhone and iPad as well. With iPad’s UISplitViewController you often have both the table and a detail view representing a table cell’s drill-down visible and active at the same time. The delegate pattern breaks down in this scenario, because multiple places in your app are interested in the same events at the same time. Instead, send NSNotification messages from your data model object and have the interested parties add themselves as observers.

And one more on improving code reuse: don’t make your UIViewController class the delegate and data source for the UITableViews it contains. Instead, put that code in a separate object. Then you can set that object as the delegate and data source of the table in both your iPad view and your iPhone view.

Professional Networking

Aug 31, 2010 in Technology

One of my most popular talks, and a favorite of mine to give, is “You started an open source project, now what?” It aims to give advice on how to attract developers and others to your project – and how to chase them away – a practical hints-and-tips session with real world examples. This morning, while going through my LinkedIn inbox, I realized there is a variant on this topic specific to professional networking.

Previously I wrote about the professional networking environment in Rochester and I’ll use some of the organizations to illustrate my points.

Whether your networking opportunity today is online (eg LinkedIn or Plaxo) or in person (eg a Digital Rochester gathering) the first step is to be prepared.

For an in-person event, come with business cards and practice answers to questions you can predict: What do you do? What are you looking for?

It’s very easy and cheap these days to have cards printed. There’s really no excuse to be without. When we meet I will try to remember your name – really, I will try. Having your card makes it much, much more likely that the next day I will remember you and send you an invite to connect on LinkedIn. I will certainly not remember your email address. It also helps me network on your behalf. At a Digital Rochester gathering I spoke to a person looking for a software developer with particular skills. That same evening I ran into a friend with those skills. Neither had a business card. You know, you’re making this hard for me!

At a conference last year in San Jose, CA I met up with two ex-Sun colleagues. One started a new venture, the other was still looking. She and I trying to help the person still searching:
“What are you looking for?”, we asked.
“Oh anything really,” he responded.
“Yeah, but what in particular do you want to do?”, we tried again.
“Anything. If you look at my last couple of jobs at Sun and Apple I never had a clearly described job and did whatever needed doing.”

The above conversation also happens a lot in open source software projects: “Where do you need help?” asks a newcomer, “anywhere, pick any area you like” answers the project leader. This is miscommunication. In the San Jose conversation our friend tried to be helpful to us by not bounding his interest. The result of course is the opposite: he’s not giving us anything to go on. You need to make choices.

Now, one can also be too abrupt. At an August Group event someone came up to me; we didn’t know each other, and almost the first question was “are you hiring?” I was not, so my answer was “no” and the conversation, such as it was, ended. An opportunity lost. I may not be hiring, but maybe I know who is, or maybe I’ll be hiring in the future. But I don’t know who you are or what you’re looking for. First, start a conversation. Don’t throw your elevator pitch at me right away but start with a question. Once we’re talking, either I will ask you what you’re looking for or there will be a natural moment to perform your 20-second sales pitch.

For online networking it is equally important to be prepared and to observe some of the same points. I’ll stick with LinkedIn in my argument. Make sure there’s at least some professional background info in your profile: what have you done, where have you been? And if you set email or phone as your preferred means of contact, then make sure an email address or phone number is actually in your profile…

LinkedIn is a fantastic tool for reconnecting with old colleagues and friends from school or university, as well as for making new connections. When I receive an invite to connect on LinkedIn from someone I may have worked with in the past, it greatly helps if you put a little context in the message rather than LinkedIn’s default “I’d like to add you to my professional network on LinkedIn.” If you were my boss for seven years then you don’t need to elaborate on how we know each other – in all likelihood I do remember. But if we knew each other only casually, or we worked on a (short) project quite a while ago, then personalizing the invite and giving me some context really helps. Don’t make me hunt for it. Give me the impression that you value re-establishing our relationship by investing a little more time than the three mouse clicks it takes to send out the plain vanilla invite.

And your LinkedIn profile should have a good picture of you. When I am searching to reconnect, it can help me decide whether yours is the right John Smith I’m looking for. And when we meet in person for the first time, it helps us recognize each other at Starbucks, the Bagel Bin, Spin Caffe or wherever our favorite coffee spot is.

Share with me your hints and tips regarding successful networking.

Agility

Aug 25, 2010 in Technology

The summer of buzz words: Cloud computing, tablets, agile, scrum … others?

I plan to write about the first one in a later entry. Today I want to focus on Agile (see its manifesto). Looking around industry blogs, software teams’ mission statements and job descriptions, two trends seem to be taking hold: software development organizations need to be agile in order to attract top talent, and software developers need to master agility in order to be hired.

And yes, I am also a proponent of agile development methodologies, including variants like Scrum (see Jeff Sutherland’s blog) and Extreme Programming. One reason agile-like methods are taking hold not only in fast-paced startups but also in stately corporations is that they build upon the lifestyle of successful open source software projects: “release early and release often.”

In past years Big Bang approaches were common. Companies may have attempted to break down the scope and complexity of projects through variants of the basic iterative waterfall models, but the project would still always work towards the big, unique delivery of the final product to the customer.

Successful execution of an agile approach can achieve several results:

  • the product development as a whole becomes more transparent and manageable
  • developers and customer understand each other’s roles and responsibilities much better
  • the customer sees continuous progress towards a goal
  • adjusting to changing circumstances becomes easier
  • there is more opportunity to recognize and foster talent

Perhaps an unfair advantage Agile has over the traditional models of the past is that we now benefit from the web: an abundance of information, available through blogs and articles, about our collective experiences with applying this approach.

As illustration I offer a few favorite blog posts. Some offer guidance, some offer a word of caution:
Sumeet on “Agile is not…”
Adam on “Definition of Done in Agile development”
Venkatesh on “Agile Testing”

In the case of Sumeet’s post, I agree with his concerns regarding the focus on colocation. The Apache Software Foundation, Eclipse and others have shown that colocation is not a requirement for success. Even more, a strict focus on colocation limits your organization’s access to available talent. Wikis, social media tools, video chat and many more modern communication tools all help break down geographical distance and create rich communication and collaboration between dispersed team members – and even between those who are near.

3 Degrees of Open: Source vs Core vs Standards

Jun 17, 2010 in Technology

Turning a parking lot conversation with my friend Rob Phipps into a blog post.

The conversation took place after Oracle’s Cloud Computing event here in Pittsford. Rob observed that very little open source software featured in the 6-hour event: Xen (the underpinnings of Oracle VM), OpenSolaris (just a mention; it didn’t appear to feature in the strategy) and Linux. The rest – for all practical purposes the whole software stack – is Oracle’s own. Enter a philosophical discussion: does open source software decrease vendor lock-in, do open standards do that, and does open source software support or hinder the emergence and adoption of open standards?

Let me state upfront the conclusions we reached and then follow up with the reasoning that got us there:
- Open Standards decrease vendor lock-in;
- Open Source Software does not diminish customers' lock-in to the products of the big vendors;
- Open Source Software is damaging to the adoption of Open Standards.
And thus, customers should demand that their vendors collaborate on Open Standards first, rather than on Open Source Software.

Open Source Software (OSS) is software whose source code is licensed so that anyone can see it, make modifications, and redistribute the result. Roughly speaking, there are two main families of OSS licenses: GPL-style and BSD-style. GPL-style licenses require that when you redistribute the software, you make your modifications, including any software you bundle with it, available under a similar license. BSD-style licenses allow you to redistribute the software under a different license, including proprietary ones. Some BSD-style licenses require that you make modifications to the code before gaining that right, while others don't. Examples of OSS are Linux, OpenOffice, Eclipse and NetBeans.

The proclaimed benefits of OSS are many:
- better quality, as more eyes are looking (or can look) at the software
- increased security, for the same reason
- a level playing field
- protection against monopolies
- a low purchase price: $0
- lower development costs
- …
- and, avoidance of vendor lock-in

Then there are ideological principles such as “software wants to be free.”

This all sounds great, so where is the kink?

It is the key aspect of OSS, regardless of the license you pick: it enables your competition. The specific license makes this more or less so, but it is always there. For most businesses, recurring sales from existing customers are king, so a vendor must find ways to tie its customers to itself. If competitors have access to the same source code, the vendor must find a different way to achieve this. Many vendors do so by limiting the use of OSS to supporting components or commodity layers (e.g. the operating system) while keeping the core proprietary, or by modifying the open source software so heavily that the effect is the same: it becomes very expensive for the customer to switch vendors.

A new term is emerging in the industry to capture the presence of OSS within the software stack a vendor offers to customers: Open Core. The term attempts to communicate open-sourcy goodness to potential customers and to present the vendor in a positive light. Of course, it is the vendor who decides what in the core, or rather in the stack, is open source and what is available only under a proprietary license. I don't know of any vendor-led open source project that delegates that decision to its community. As Simon Phipps notes in his ComputerWorld blog, while this is a valid business model, it has little to do with software freedom anymore. The Open Core model differs significantly from a dual-licensing model: in the latter, the same code is offered under different licenses.

What surprises me in the ongoing discussion on Open Core is the absence of "Standards". Phipps and others argue that open source software enables software freedom. I believe this is at best only part of the story. Many OSS licenses, as well as communities like the Apache Software Foundation, are constructed at least in part to enable Open Core business models.

To move Software Freedom (to use, study, modify, and distribute the software without restriction) from an often theoretical freedom to a practical ability for customers, Open Standards or Open Interfaces are required. Let me give a brief definition of an Open Standard: a well-defined, publicly available specification together with a publicly available test suite that implementers are required to pass, both managed by an independent organization.

In the absence of Open Standards, open source software just functions as another means for vendors to lock in their customers, making it very expensive to replace one vendor's implementation with another.

In the computer and software industry, the Java Community Process (http://JCP.org) was going in the right direction for a while but ultimately still falls short. The JCP was the first to recognize that a meaningful standard needs three pieces: the specification, a reference implementation showing that the specification can be implemented, and a test suite proving a correct and complete implementation of the specification. Requiring these three pieces also positively impacts the quality of the standard: the spec is tested by the effort to create the reference implementation and the test suite, and the test suite and the reference implementation test each other. A positive feedback loop.
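The three-piece relationship can be sketched with a toy example in Java. Everything here is a hypothetical illustration, not an actual JSR: the "spec" is an interface, the "reference implementation" is one class implementing it, and the "test suite" is a set of checks any vendor's implementation must pass.

```java
// Hypothetical "specification": the contract every implementation must honor.
interface GreetingService {
    String greet(String name);
}

// Hypothetical "reference implementation": proves the spec is implementable.
class SimpleGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}

// Minimal "test suite": the same publicly available checks are run against
// any implementation, whether it is the reference one or a vendor's.
public class ConformanceCheck {
    static boolean passes(GreetingService impl) {
        return "Hello, World!".equals(impl.greet("World"))
            && "Hello, Ada!".equals(impl.greet("Ada"));
    }

    public static void main(String[] args) {
        System.out.println(passes(new SimpleGreetingService())); // prints true
    }
}
```

The point of the sketch is the feedback loop: writing `SimpleGreetingService` exercises the spec, and running `passes` exercises both the spec's testability and the implementation's correctness.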

For a while things worked quite well and successfully: the JCP's presence contributed to the success of Java EE. But eventually its shortcomings began to weigh it down. JCP.org is not an independent non-profit but is owned and managed by Sun and now Oracle: the fortunes of the standards effort are tied to the (mis)fortunes of the corporate owner. And while the JCP requires a test suite for each specification, it does not require that this test suite be publicly and freely accessible. This creates two problems. First, vendor claims of compliance can't be independently verified, leading to a nontransparent, trust-based system between the vendor and the owner of the test suite. Second, certain parties can be precluded from certifying their implementation (e.g. the years-long standoff over Apache's Harmony project).

I don’t believe the magical “truly independent standards body” can be created. If nothing else, a standard must leave certain areas unspecified in order to be commercially interesting, in other words, to protect some ability for differentiation among implementing products. In addition, creating standards is expensive, especially when a meaningful test suite is required. Some form of direct or indirect ability to recoup that investment needs to be available.

If the perfect standards body can’t be created, then what’s the best we can do?

First of all, customer or end-user involvement is a must. Vendors have no need to implement standards and adhere to them if their prospective and existing customers don’t require it; without customer pull no meaningful standard will emerge. Most standards organizations are weak on this point. Even the heralded ISO and ANSI organizations are dominated by the vendors.

Open Source vs Open Standards, in short:
- Vendor lock-in
- Enabling competition
- Market forces
- Customer pull
- Test suite
- Collaborate on standards, compete on implementations

Reference material:
http://www.computerworlduk.com/community/blogs/index.cfm?entryid=3047&blogid=41
http://lawandlifesiliconvalley.com/blog/?p=485
http://alampitt.typepad.com/lampitt_or_leave_it/2008/08/open-core-licen.html
http://www.compieresource.com/2010/06/compiere-open-source-failed.html

Fragmentation, Android, Facebook, Crowdsourcing news roundup

May 11, 2010 in Technology

wpid-crowdsourcing-2010-05-11-09-22.jpg

My pre-breakfast news and blog browsing stumbled over a couple of developing trends. Here is a rundown and my thoughts on them.

Fragmentation, Linux, Android
Matt Asay writes “Fragmenting Linux is not the way to beat Apple.”
In this commentary piece, Matt draws a comparison between today’s mobile Linux environment and the Unix wars of the ’80s, and argues that Motorola, Google, HP, Intel, Nokia and others should look at the Linux server playbook. There, companies like Red Hat, Canonical, Texas Instruments, IBM and Oracle are “working furiously to build a great core and then competing in the packaging, hardware, etc.”

Matt points out that the fragmentation could resolve itself, with Android dominating the market. That wouldn’t be dissimilar to those Unix wars, from which AIX, Solaris and HP-UX emerged as the main market players (albeit, in that case, fighting over a shrinking market share against Windows NT).

I agree with Matt that one wonders how many operating system variants these smartphones and other mobile devices really need. It seems like a lot of engineering investment that now can’t be used to compete against the iPhone, BlackBerry or Windows Mobile.

There is an important difference, though, between the server market and the mobile market. On the server side, each of the companies has direct access to the customer: the stack is already owned. The fight in the mobile market, especially since the introduction of the iPhone, is about who has access to the consumer and who owns the stack. For many years the manufacturers had to be content with a provider role to the likes of AT&T, Verizon and Vodafone, who owned the relationship with the consumers, the users of the devices. Each of these companies has since been working to own more of the software and hardware stack to gain leverage over the service providers in the battle for access to the consumer: hence the need for each to have its own OS.

Brian Prentice’s blog entry is wonderful if only for its title: “Android – The best laid open source plans of mice and Google.”
He raises very interesting points on how the recent patent saber-rattling will impact IP risk assessment in open source projects, and how their communities are organized and operate. It deserves a separate blog entry in response. I think, though, that the likes of Black Duck Software are, ehh, intrigued by the developments.

Android fragmentation really seems to be the coffee-corner discussion topic of the day, with more blog writing by Fabrizio Capobianco and GigaOM.

Facebook
Facebook has its own share of the press spotlight, with senators writing them letters. The topic of this attention: their continually changing management of users’ privacy.

While I am a heavy user of Facebook and I like how it helps me stay in touch with friends, I am a bit tired of the seemingly weekly changes to the privacy policy. The Like button, the ads’ access to my profile, the sharing of my content with other applications and websites: every week there appears to be a need to go into my account settings and make sure I still like what others can find out about me.

Perhaps what irks me the most is that most of these changes are opt-out instead of opt-in. As a community manager myself, I strongly believe you should not expose your inhabitants this way.

The Technically Incorrect blog notes that “delete facebook account” is becoming a top search on Google.

Crowdsourcing
This leads me to the last topic of this post. Social Times posted a great interview with Chelsa Bocci of Kiva, where she talks about the value of Facebook and crowdsourcing to the charity. I am a great fan of Kiva and how it allows me to lend support directly to people of my choosing.

Providing volunteers direct access to the loan profiles they help edit, review and translate, together with the launch of lending teams, greatly extends Kiva’s reach beyond its 40 or so employees.

Chelsa characterizes Kiva as social investment, and thus social networking tools like Facebook and Twitter are valuable. However, she also notes that while Facebook and Twitter have greatly improved brand awareness, it is not yet clear whether these social tools have increased the number of lenders. In other words, it is hard to establish a conversion rate.

And with respect to our previous topic, she also touches on the flip side of Facebook’s Like-button changes: its value to organizations like Kiva.
