As Chief Scientist for PRICE Systems, provider of world class cost estimating tools and services, I spend my days researching new technologies, processes and techniques associated with the development and production of hardware, software and systems. While the end goal is to develop an understanding of how various factors impact cost, the journey requires that I learn new things every day about how the world works. I frequently publish or present these findings to the cost and measurement communities in various venues around the world. The primary focus of my research (and my main passion) is all things related to Software Development and Information Technology, although this has not precluded occasional forays into topics such as hardware manufacturing, systems engineering and composite materials. The goal of my research is arming cost estimators with the best technology available to address their estimating challenges and help them achieve estimating accuracy.
I recently attended a webinar presented by David Herron of the David Consulting Group (DCG) discussing a recently released specification for the automation of function point counting, available on the Consortium for IT Software Quality (CISQ) site. Function point counting is a process through which software ‘size’ is measured by the amount of business value the software delivers to the end user.
Function Point counts are thought by many to be a far superior means of measuring software ‘size’ because they are technology neutral and not affected by factors such as programmer style. A major impediment to wholesale adoption of Function Point counting has been the fact that the process is manual, tedious and time consuming. Source Lines of Code (an alternative means of software measurement) have many critics, and yet many organizations still use them as their primary measure because software can be developed to count them consistently on finished applications. To achieve consistent Function Point counts one must study the counting practices or standards for function point counting (there are actually 5 standards for different types of Function Point counts – but we’ll cover that some other day!). The International Function Point Users Group (IFPUG), focused on the IFPUG Function Point counting method (the most widely used of the 5 available), develops and maintains a counting practices manual. To become a Certified Function Point Specialist one must pass an exam that IFPUG administers.
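For the curious, here is a rough sketch (a Python toy, with a made-up application) of the arithmetic behind an IFPUG unadjusted count. The weights are the standard IFPUG weights by component type and complexity rating; the hard part – the part that fills a counting practices manual – is classifying the components in the first place.

```python
# Rough sketch of an IFPUG unadjusted function point count.
# Standard IFPUG weights by component type and complexity rating;
# classifying each component correctly is the tedious, manual part.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # External Inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # External Outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7, "high": 10},   # External Interface Files
}

def unadjusted_fp(components):
    """components: (type, complexity) pairs from a manual count."""
    return sum(WEIGHTS[ctype][cplx] for ctype, cplx in components)

# A hypothetical small application: two input screens, a report,
# one inquiry and one internal file.
app = [("EI", "low"), ("EI", "average"), ("EO", "high"),
       ("EQ", "low"), ("ILF", "average")]
print(unadjusted_fp(app))  # 27 unadjusted function points
```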
In a previous life I thought that being a Certified Function Point Specialist would be a useful skill for a software estimation professional like me. I studied, took the exam and was pleased to learn that I had passed and could now add CFPS to my business card. Shortly afterward an opportunity presented itself for me to put my function point counting expertise to use on a relatively small software application (just a couple of days of counting). Those were, of course, several of the longest days of my life. (Begging the question – certified or certifiable?) Quite frankly, Function Point counting is tedious and boring (or maybe I just landed a particularly boring application); any thoughtful effort to automate it gets the thumbs up from me. Needless to say, I decided to forego the business card update and stayed under the function point counting radar until my certification expired.
I believe automation will be a good thing and that it will benefit our industry. If we can agree that the automated counting rules adequately represent most of the software we develop, while keeping sight of the situations where manual intervention is required, we have a decent chance of conquering some of the issues that have plagued our industry for years. From an estimation perspective this would certainly facilitate the drive to deliver estimates that are data driven.
I have not yet taken a detailed dive into the recently released specification, but it’s on my to-do list; I am very interested in which artifacts in software code will be examined to determine the number of Function Points. Clearly the developers of this standard had to adapt the actual rules in order to automate a very human-reliant process. There’s a lot of passion in our industry around Function Point counts because of the promise they deliver, so it will be interesting to learn where the industry experts fall on the feasibility and practicality of automating function point counting as they become familiar with the recommendations currently on the table. What are your thoughts on the subject?
This past year PRICE Systems entered into a partnership with the International Software Benchmarking Standards Group (ISBSG). As part of this partnership we have a corporate subscription to both of their databases – the Development and Enhancement database and the Maintenance and Support database. We can use these for analysis and to develop metrics that will help TruePlanning users be better software estimators.
The ISBSG is one of the oldest and most trusted sources of software project data. It is a not-for-profit organization dedicated to improving software measurement at an international level. Its commitment to protecting data and ensuring that the identities of the sources of individual data points are kept confidential encourages organizations to contribute to this non-partisan effort to improve the software community through better benchmarking and measurement. The ISBSG database is focused entirely on functional size measurements such as IFPUG Function Points, NESMA Function Points, COSMIC Function Points, etc. The data is more focused on commercial and business applications than on aerospace and defense – although there are still some nuggets in there that we all can learn from.
At PRICE, we have started several different initiatives to incorporate ISBSG lessons learned into our product offerings. We have done several productivity studies based on what we learned from this data. Table 1 shows a language productivity study conducted using the IFPUG Function Point data in the ISBSG database. For this study we looked at PDR (Project Delivery Rate), which is expressed in hours per FSM (Functional Size Measure) – so lower values mean higher productivity. Table 2 shows the results of a study highlighting productivity by industry type. Each of these tables provides useful general guidance as well as some insight into the types of data and industries covered by this database.
Table 1: Productivity by programming language
Table 2: Productivity by Industry Type
We are pursuing several other initiatives that utilize ISBSG data to improve the TruePlanning experience. We are studying the COSMIC data in an effort to develop a COSMIC Function Point estimating model; the database has sufficient COSMIC data points and the initial results are promising. Additionally, we are performing a True S calibration across Industry Sectors and Application Types to create Software Cost Object templates for representative types of applications within a sector, for both new developments and enhancement projects. We expect the deliverable from this study to be a collection of calibrated cost objects that will inform software estimates for specific industries and application types.
This week I got the new iPhone 5 – and I have to say that it’s a ton better than my old Blackberry (which – to be fair – was an old model and on its last legs with a battery that couldn’t hold a charge for more than an hour). After some consultation with the teenagers in my life, I have started to populate my phone with some of the latest and greatest apps. Hey I can now talk to my phone and it understands my commands and carries them out!!!
Being the software dork that I am, the presence of cool apps on my phone leads me to ponder how mobile application development differs from development of software hosted on more sedentary platforms. Mobile application development is still in its infancy so a lot of what’s going on in the industry has a bit of a Wild West feel to it. There are many ways to categorize mobile applications. One distinction is whether the application is native – which means the entire app runs on the smartphone or tablet – or whether it is a web application with a small client on the device interacting with an application running on a remote server. Another way to categorize mobile apps is by the types of functions they perform. Basically they can be lumped into several categories as follows:
- Basic table functionality – simple drill down to get information on a topic
- Data driven functionality – access and present data from a data source either local to the device or from an external source
- Device functionality – offering improved usability for the hardware features of the device such as the GPS or camera
- Fully dynamic functionality – relies on external information sources such as Twitter or the Weather Channel
- Custom utility function – allows the user to enter content in various forms – such as a sketchpad or document creation utility
So how different is mobile application development from traditional software development? In some ways, not so much – we still need to understand and execute requirements, design, code and test. There are, however, several areas where these activities need to be approached differently:
- Applications need to be developed for multiple versions of multiple mobile operating systems and need to be compatible with multiple different hardware devices
- Applications need to respond to various forms of external data from sensors, touch screen, a real or virtual keypad, GPS device, microphone, etc.
- Applications may need to respond to the movement of the actual device – so the screen adjusts when the user changes the orientation of the device.
- Often mobile applications will need to share elements of the user interface with other applications
- Developers of mobile apps need to be aware of resource consumption
- Testing of mobile applications offers new and unique challenges. Simulators and emulators exist and can be helpful in some circumstances but they are not always easy to use, effective or efficient
- Development platforms are at varying degrees of maturity
The relative newness of mobile app development, combined with rapidly emerging technological advances (I repeat – I can talk to my phone!), means that we are a long way from a full understanding of the costs and effort associated with such projects. We can certainly look to the similarities with traditional apps as a starting point and acknowledge the risk associated with the unknowns.
What do you see as the biggest challenges for mobile application developers?
I am frequently questioned by clients and prospects about the applicability of PRICE’s parametric software estimation model to agile software development projects.
There are several ways one could respond to this. My first thought is that if a shop is truly agile, they don’t need an estimation tool. They know their development team’s velocity because agile teams are committed to measurement. They also either know when they need to make a delivery – in which case whatever amount of software they’ve built by that point will be released – or they know the minimal feature set without which the product will not add value to the customer base – in which case however much time it takes to get to that minimal feature set is how much time will be spent (with the understanding that being agile means even this minimal feature set may be redefined by the end of the project). The nature of agile development requires estimation to be done at a very low level and applied only to the user stories involved in the current iteration.
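To make that concrete, here is a minimal sketch of the arithmetic an agile team might do in each of those two situations. Every number below is hypothetical:

```python
import math

# Minimal sketch: release forecasting from measured velocity, the way
# an agile team might reason. All numbers are hypothetical.
backlog_points = 240   # story points in the minimal feature set
velocity = 32          # points per iteration, from team measurement
iteration_weeks = 2

# Fixed-scope view: how long until the minimal feature set is done?
iterations = math.ceil(backlog_points / velocity)
print(f"~{iterations} iterations (~{iterations * iteration_weeks} weeks)")

# Fixed-date view: how much scope fits in the 5 iterations remaining?
print(f"~{5 * velocity} story points deliverable by the date")
```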
This answer, while generally acceptable to the development team, is often a bad answer for the business that needs to develop plans and create splash and sparkle around an upcoming software release. The business needs to have a good idea of when software will be delivered and what sets of features they can expect to be in that software. To folks with these requirements my answer to the above question is a resounding YES. In fact this is a perfect application of parametric estimating techniques because it allows for a union where the forecasting-minded business side of the house can combine what they learn from the measurement-committed development team with the experiential knowledge and requirements they bring to the table. This creates an environment where a plan can be formed based on the business’s best guess of what the final product will deliver, along with some hard data about the productivity the team has been able to deliver in the past.
The next obvious question is how to ‘tell’ the parametric model that an agile development technique is being employed, and how to translate from story points to a more traditional unit of software size measurement. To questions such as this I am quick to point out that agile development is a paradigm based on a set of tenets written by some pretty smart software dudes. Different agile shops employ different practices, and it is a thoughtful understanding of the software being developed and the practices being employed that will lead to success when applying parametric techniques to estimate software cost and effort. As with so many other aspects of software development, there really is no magic bullet for up-front estimating of a software project, whether or not it is an agile project. For more on this topic check out my paper “Are Parametric Techniques Relevant for Agile Development Projects”.
How does your business handle the potential conflicts between employing agile principles and creating credible plans for affordable and successful product deliveries?
Check out this article about the Defense Information Systems Agency (DISA) and their cloud computing strategy. With the DOD’s ever increasing focus on affordability, moving eligible capabilities to the cloud is an excellent plan for the government. DISA’s strategy includes the consolidation of data centers and network operations centers and the migration of 1.4 million Army email accounts to the cloud.
Cloud computing allows organizations to utilize applications, platforms and hardware through the Internet (or some other network) rather than having to purchase or lease these items. Cloud computing offers opportunities for cost savings through virtualization and consolidation. Using the public cloud (the Internet) offers additional cost savings as there are many users sharing the costs of services – driving down the cost per user.
DISA has been designated as the Enterprise Cloud Service Broker for the DOD. A cloud service broker is a third-party company that manages cloud services for a cloud service consumer across multiple vendors and platforms. Cloud service consumers generally have to deal with multiple cloud service providers – meaning they have to manage multiple relationships and multiple contracts, and they have to deal with interoperability issues. The cloud broker mitigates this by understanding the consumer’s requirements and tailoring a solution that meets those requirements, while dealing directly with the vendors to craft that solution. Cloud service brokers are experts in understanding cloud services and knowing the specifics of many cloud service providers. Daryl Plummer, managing vice president and Gartner Fellow at Gartner, sees cloud brokerage as a “must have” for most organizations.
Has your company started a migration into the cloud and are they taking advantage of Cloud Service Brokers?
The COSMIC method for counting function points arose out of concerns that IFPUG (and NESMA, FiSMA) function points are too oriented toward data-intensive business systems and consequently are not well suited to measuring the size of real-time systems. The COSMIC function point counting method has been designed to be applicable both to business systems, such as banking and insurance, and to real-time software, such as telephone exchanges and the embedded systems found in automobiles and aircraft.
The COSMIC method uses the Functional User Requirements as the basis for the function point count. A COSMIC Function Point count is based on a count of data movements – movements of a data group across the software boundary or to and from persistent storage. A data movement is defined as the base functional component that moves a single data group. There are four types of data movements:
- Entry – moves a data group from a functional user across the boundary into the functional process where that data is required
- Exit – moves a data group from a functional process across the boundary to the functional user that requires it
- Read – moves a data group from persistent storage to the functional process that requires it
- Write – moves a data group from a functional process to persistent storage
To perform a COSMIC function point count, each piece of software is broken down into functional processes, and for each functional process all of the associated data groups are identified. Then, for each data group within each process, data movements are identified and classified as one of the four types above. The purported benefit of COSMIC Function Points over IFPUG Function Points is that there is no upper bound on the number of data movements that can occur within a functional process.
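Here is a minimal sketch of the COSMIC arithmetic with a hypothetical set of functional processes; once the data movements have been identified (the real work), the size is simply one COSMIC Function Point per data movement:

```python
# Minimal sketch of COSMIC sizing: one CFP per data movement (Entry,
# Exit, Read or Write), totaled across functional processes. The
# processes and movements below are hypothetical.
processes = {
    "record temperature": ["Entry", "Write"],
    "report average":     ["Entry", "Read", "Exit"],
    "raise alarm":        ["Read", "Exit", "Exit"],
}
cfp = sum(len(movements) for movements in processes.values())
print(cfp)  # 8 COSMIC Function Points
```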
I did a productivity study for COSMIC Function Points based on the ISBSG database and found some interesting results. Table 1 shows the results of my analysis and Table 2 compares productivity rates for IFPUG and COSMIC function points.
What I found was that, on average, it takes almost twice as long to deliver a COSMIC Function Point as an IFPUG Function Point. This may be because the sample sizes for IFPUG are larger, or it could be because the types of projects using COSMIC are more complex – possibly lending credence to the notion that COSMIC function points are better suited to more complex systems. What experiences have you had with counting function points using different methods?
This week I’m attending the Better Software Conference in Vegas. I just attended a great keynote given by Patrick Copeland of Google. The topic was innovation. He talked about how innovators beat ideas, pretotypes beat prototypes, and data beats opinions. These sentiments are all part of the pretotyping manifesto.
He started with the truth that most new products and services fail, and proposed that while this is not unexpected, there is a good way to fail and a bad way to fail. The good way to fail is to fail fast. And this is where the idea of pretotyping comes in. The idea of pretotyping, or ‘pretendotyping’, is that you fake it before you make it. When a good idea hits, find the fastest, cheapest way to get something that will demonstrate and socialize the idea to at least some segment of the target market. A pretotype is different from a prototype in that a prototype is intended to prove that the product can be built, while a pretotype is intended to prove that the ‘it’ you’re building is the right ‘it’.
Here are some examples of pretotyping exercises. When the idea of the original Palm Pilot was first conceived, the inventor was concerned that the model wouldn’t catch on. Would people be comfortable carrying around a device in their pocket, taking it out to make notes during meetings and conversations? Before building anything, he went to the garage and created a wooden Palm Pilot and a wooden stylus. He walked around for days pretending to check and record appointments and log notes – gauging the reactions of people around him. When Google held a workshop to brainstorm ideas for Android, they handed out post-its and pencils. Apps were ‘papertyped’ and carried around as real Android apps in paper form to determine if the concept was feasible – were they something the end user would be likely to stop, pull out their phone and use? This notion of papertyping led to Androgen – an app that let innovators create quick and dirty implementations of their Android app ideas to test market them with minimal effort and get market feedback.
One example was presented where pretotyping would have helped: Thirsty Dog Bottled Water for Pets – no, I’m not messing with you – this product was actually launched and marketed. Maybe if there had been some pretotyping – taking some regular bottled water, changing the label and putting it on the shelves in a few pet stores – the producers would have realized that this was an idea that wouldn’t fly.
So the next time you have a good idea you want to productize – before you start writing code – look for some creative ways to let potential consumers assess whether it has the look, feel and comfort of use such that they might actually use it. What techniques do you use to market test your ideas? Leave a comment.
Ever wonder which programming languages are the most productive? I recently did a little research into this topic using the International Software Benchmarking Standards Group (ISBSG) database. The database contains over 5000 data points with size and effort data for projects from a wide variety of industries, applications and countries. Of course, not all 5000 data points were suitable for my investigation. Software size is measured using functional size metrics, but the database accepts projects that use various counting methods. I narrowed my search to projects that used the International Function Point Users Group (IFPUG) definition of a function point. The database also accepts incomplete project data, asking the submitter which of the following phases of software development are included in their submission: planning, specification, design, build, test, and implementation. While many of the data points reflected complete lifecycle effort, those that did not needed to be normalized to enable a side-by-side comparison.
I started by calculating average productivity (hours per function point) for each programming language that was suitably represented in the database. This led to results that were statistically all over the place. So I changed it up a bit, looking at average productivity rates within size stratifications. This led to results that appeared much more reasonable to me. For the most part the productivities were as expected, but one finding from this study was particularly interesting: for pretty much every language, the least productive projects were the smallest ones. This seems counterintuitive because most studies indicate a diseconomy of scale in software projects. The ISBSG database does distinguish between new projects and enhancement projects, but it is not easy, from the data that’s submitted, to determine concretely how much reuse a project includes, whether it’s new or an enhancement of existing software. So this may be an indication that larger projects have more opportunities for reuse. Whether or not this is true, the following table can certainly be used for sanity checks as one estimates software projects. It is also a window into the types of data available through the ISBSG. If you’re interested in a more detailed analysis of some or all of these 5000+ data points, check out what the ISBSG has to offer at www.isbsg.org.
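If you want to try this kind of analysis yourself, here is a sketch (in Python, using pandas) of the stratification step, assuming an ISBSG-style extract. The column names are placeholders of my own, not actual ISBSG field names:

```python
import pandas as pd

# Sketch of a size-stratified productivity analysis over an
# ISBSG-style extract. Column names are placeholders, not the
# actual ISBSG field names.
df = pd.read_csv("isbsg_ifpug_projects.csv")  # hypothetical extract
df["pdr"] = df["effort_hours"] / df["function_points"]  # hours per FP

# Stratify by size before averaging; a single overall average hides
# the size effect discussed above.
bins = [0, 100, 300, 1000, float("inf")]
labels = ["<100 FP", "100-300 FP", "300-1000 FP", ">1000 FP"]
df["size_band"] = pd.cut(df["function_points"], bins=bins, labels=labels)
print(df.groupby("size_band")["pdr"].agg(["mean", "median", "count"]))
```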
Recently I have been playing around with the International Software Benchmarking Standards Group (ISBSG) database for Development and Enhancement projects. In the interest of full disclosure, I admit that I am more than a little excited to have close to 6000 data points at my fingertips. I will further admit that there’s something quite daunting about having this much data: where to start, what should I be looking for, how can I best use this data to offer useful guidance to inform software cost estimation? For those of you not familiar with this data source, the ISBSG database contains data on software projects submitted by software development organizations all around the world.
Naturally, the first thing I wanted to do was look at Project Delivery Rate (PDR) and try to find some interesting information about what might drive it. Starting small(ish), I filtered out all the data items the ISBSG has rated as being of low or questionable quality, along with all the projects whose labor totals included hours for resources not strictly part of the development team. I started by trying to trend functional size against PDR. The data was all over the map. In an effort to get some context as to where the good data might be lurking, I began to look at average PDR for each Organization Type. I selected Organization Type because it seemed to be the most granular category that had some structure to it. Organization Type indicates the type of organization the software application is intended for. Although an Application Type is also provided, that value is free form, with each submitter choosing their own terminology. The same is actually true for Organization Type, but because of the submission process there was a finite set of responses that could be used as a basis for stratification. Although all the ISBSG data is measured using functional size measures, the list of acceptable functional size measures is long and includes IFPUG, NESMA, COSMIC, Mark II, FiSMA, etc. In order to compare apples to apples, my analysis needed to focus on each size unit individually. I started with IFPUG because this group contains significantly more data points than any other functional measurement category. I thought I would share some initial findings. The following table shows the productivity rates for various Organization Types.
So what does this table tell you? It certainly needs to be interpreted with care; you will note that for many of these productivity rates the distribution is all over the map. For industries where the sample size is significant, it gives some pretty interesting comparative information. It also provides some useful insight into the types of industries for which you can find data in the ISBSG database (not inclusive, because this analysis covered only the IFPUG data points). There is also the caveat that different submitters have different ideas about the definitions of things like industry type. Despite that, I think there’s something to learn here. If you’re interested in data (and who isn’t?), you should check out the ISBSG’s offerings – they have a pretty cool arrangement using an OLAP interface that allows you to find and pay for only the data you can use. Check it out at www.isbsg.org!
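For the curious, here is a sketch of that analysis (again in Python, with placeholder column names rather than actual ISBSG field names). Because the distributions are as spread out as noted above, it reports medians and quartiles alongside the sample counts:

```python
import pandas as pd

# Sketch of a PDR comparison by Organization Type. ISBSG rates the
# quality of each submission, so the first step keeps only the
# higher-rated records. Column names are placeholders.
df = pd.read_csv("isbsg_ifpug_projects.csv")  # hypothetical extract
df = df[df["data_quality_rating"].isin(["A", "B"])]
df["pdr"] = df["effort_hours"] / df["function_points"]

# Medians and quartiles expose a wide spread better than means alone.
stats = df.groupby("org_type")["pdr"].quantile([0.25, 0.5, 0.75]).unstack()
stats["n"] = df.groupby("org_type")["pdr"].count()
print(stats.sort_values("n", ascending=False))
```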
And I'm just getting started - check back for more observations on software productivity!
When software developers first started writing programs for the Windows® operating system, it wasn’t pretty. Everything had to be done from scratch – there was no easy access to tools, libraries and drivers to facilitate development. A similar tale can be told by the earliest web site developers. A web application framework is an SDK (Software Development Kit) for web developers. It is intended to support the development of web services, web applications and dynamic websites. The framework increases web development productivity by offering libraries of functionality common to many web applications. In the early days, web development was done in HTML, with CGI (Common Gateway Interface) making it possible to create dynamic content. As websites became more pervasive – in many cases quickly becoming critical to the success of a business – new languages were developed specifically for web development, such as PHP and ColdFusion.
Web application frameworks are simply the next generation of web development tooling. Instead of being just a compiler, the web application framework gathers libraries of functionality useful for web development into a single environment, offering developers one-stop shopping for the tools they need to develop applications for the web. There are many different web application frameworks for many different web development languages. Some examples include Java EE, OpenACS, Catalyst, Ruby on Rails and Symfony.
Web application frameworks offer a variety of features intended to increase the productivity of the web application development process. Not all frameworks contain all of these features; comparison tables analyzing which frameworks offer which features for many popular programming languages are available online. The most common features of a web application framework include the following (a short sketch after this list illustrates two of them):
- Caching – frameworks allow developers to build speed into their web applications by storing copies of frequently accessed data to speed up refreshes. This can make the website appear to load more quickly, while also reducing bandwidth and load.
- Security – frameworks offer tools to address user authorization and authentication along with the ability to restrict access based on established criteria
- Templating – frameworks offer the developer the ability to create templates for their dynamic content. The templates can then be used by multiple data sets.
- Data persistence – frameworks often contain a set of features to support persistence, such as a consistent Application Programming Interface (API) for accessing data from multiple storage systems, automated storage and retrieval of data objects, data integrity checks, and SQL support.
- URL mapping – frameworks often provide a mechanism for mapping clean, uncomplicated-looking URLs to the right handler inside the application.
- Administrative tools – such as common interface elements for form fields (for example, a date field with a calendar) and automatic configuration that eases the storage and retrieval of data objects from the database
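To make a couple of these features concrete, here is a minimal sketch using Flask, a Python microframework (my choice for illustration, not one of the frameworks named above), showing URL mapping and templating together:

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# Templating: one template, reusable with any product data set.
PRODUCT_TEMPLATE = "<h1>{{ name }}</h1><p>Price: ${{ price }}</p>"

PRODUCTS = {42: {"name": "Widget", "price": 9.99}}  # stand-in data store

# URL mapping: a clean URL like /products/42 is routed to the right
# handler rather than exposing a script path and query string.
@app.route("/products/<int:product_id>")
def show_product(product_id):
    product = PRODUCTS.get(product_id, {"name": "Unknown", "price": 0})
    return render_template_string(PRODUCT_TEMPLATE, **product)

if __name__ == "__main__":
    app.run()
```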
Web application frameworks are intended to increase the productivity of those folks who build websites. Other potential benefits include:
- Increased abstraction - business logic can be separated from implementation details
- Improved time to market and development productivity
- Increased reuse
- Enforced best practices
- Easier transition from one platform to another
Some or all of these features and benefits are realizable, depending on the specific web application framework employed. As with all advances in technology, there is necessarily an investment not only in the technology but, more importantly, in training and education. It is not realistic to expect an immediate payoff until developers move down the learning curve with the technology.
As always, it is wise to remember the lesson of Fred Brooks in “The Mythical Man Month” that there are no silver bullets. While web application frameworks’ enhanced tools and libraries will increase the productivity with which code is delivered, these address the accidental complexities of software engineering; the essential complexity of crafting a software solution to a problem is not going away.
* Test and simulation tools which apply model based testing and create simulated executions of the models
* Reverse engineering tools which transform legacy applications into models
* Portability is achieved as one PIM can be transformed into multiple PSMs for different platforms
* The use of standards and models ensures that solutions are interoperable and of high quality.
* MDA requires a specific and specialized skill set that may not be easy to find and may come at a high price
* Although platform independence is an important aspect of MDA, there are no universally applied standards for interoperability, so vendor lock-in is a possible problem
* In some cases there appears to be a gap between the vision of complete transformation and the reality of post-transformation adaptation of the generated artifacts.
As always, it is important to remember the lesson of Fred Brooks in “The Mythical Man Month” that there are no silver bullets. While the MDA promise of automated transformations across many levels of abstraction will increase the productivity with which code is delivered, these are the accidental complexities of software engineering; the essential complexity of crafting a software solution to a problem is not something that can be automated.
Software Pointers, “10 Hand-picked MDA Tools”, http://www.software-pointers.com/en-mda-tools.html (retrieved Feb 2012)
“Model Driven Development for J2EE Utilizing a Model Driven Architecture (MDA) Approach: Productivity Analysis”, The Middleware Company, June 2003
http://en.wikipedia.org/wiki/Model-driven_architecture (retrieved Feb 2012)
Brooks, Fred; “The Mythical Man Month: Essays on Software Engineering”, Addison-Wesley Publishing Company, Philippines, 1975
* Modularity minimizes the effect of size on quality. So while it has historically been true that larger software programs were likely to have higher defect densities, increases over time in the practice of high modularization have served to mute or mitigate this trend.
* The use of the waterfall development methodology produces code with better scores than agile for transferability and changeability – meaning these apps are likely to be easier to read, understand and maintain, and easier to address technical debt in.
* Business applications carry an average of $3.61 worth of technical debt per line of code – and this is, admittedly, a very conservative estimate if you review the methodology used to calculate it.
Low quality code must be refactored more frequently than high quality code, and there is substantial evidence that maintenance interventions tend to lead to even more degradation of the quality of the code. So not only are low quality applications more expensive to maintain, the unit maintenance cost also increases over time. The authors use the term entropy to describe this phenomenon and introduce the concept of entropy-time as a means of creating a standard for measuring this degradation over time due to subsequent refactoring activities.
The authors use this notion of entropy-time to conduct an empirical analysis comparing the maintenance costs for Open Source applications with the maintenance costs that come from a traditional maintenance cost estimating model developed through study of software maintenance efforts for proprietary software applications. The data studied confirmed their hypotheses. The study was rigorous and my intent in mentioning it is to pique your interest not to be comprehensive, so check it out if you’re interested.
What interests me are the conclusions the authors draw and their suggestions as to why these may be so. The authors posit that Open Source software may be higher quality because it is developed by people all over the planet with no (or little) direct communication, requiring the development to be modular and very loosely coupled. They further suggest that open source developers are motivated more strongly toward quality, because of the pride they take in their work and because they know their work will be viewed by the masses. The final factor they suggest might be responsible for this increase in quality is that Open Source schedules are self-imposed, whereas in proprietary efforts schedules are driven by customer or market demands.
I thought it was a great study with thought provoking results. The authors ended with several examples where companies adopting a mixed public/private model have produced high quality software successes. So maybe the software development community should look at where adopting open source practices might improve the quality of the software we are producing?
There is much about this discovery that is disturbing. Nothing good happens when large IT projects go off the rails. Money is lost, careers are ruined, and businesses tank. And going forward it’s not getting any better, because projects are getting more, not less, complex.
OK – while this study is eye opening in some sense, unless you’ve been living in a cave it’s not really news that lots of large IT projects fail. It seems to me the primary reasons for this are:
* Failure of business leaders and IT personnel to communicate successfully about the problem to be solved and the plans for how to solve it.
* Project plans that evolve from optimism, bravado, or capitulation
* Failure to understand that change is hard and that technology alone will not effect change
* Refusal or inability to learn from history
* Inability to accept that changing or adding requirements to a software project can have far reaching effects that cost money and take time (seems like a no brainer but it happens all the time)
* Leadership that acts without introspection, self awareness, courage or good sense.
In other words, there’s a very human element to most IT project failures. Some things that businesses can do to mitigate the likelihood of such failures:
* Business leaders should work collaboratively with IT on all aspects of a project – conversations should be two-way, with both sides listening to the issues and concerns
* Organizational history on like projects should be studied. If no history exists, look externally to learn what works and what doesn’t in your industry
* Tools and processes should be used wherever possible to support project estimation, planning and decision making without emotion or bias
* Change needs to be championed from the top down
* Evolve the project in small achievable chunks. Assess progress regularly. Have a plan for identifying problems as they arise and criteria for when it is time to cut your losses
* Business and IT leaders need to act with knowledge of the business, knowledge of their teams, honest and realistic progress assessment, and courage to make hard decisions.
Certainly none of this is rocket science. But it seems to me that any organization contemplating a large scale IT change initiative should first turn its eyes inward, on its own past history, to see how well or poorly it has addressed the issues outlined above.
The caution this article carries about the use of this metaphor is not around the realization that decisions made to shortcut the process or defer functionality will cost in the future, but rather around quantification of how much they will cost in the future. When we incur financial debt we basically know what we owe – within some uncertainty about future economic conditions. But as the author points out in order for technical debt to successfully facilitate conversations with the business there needs to be trust established about the software team’s ability to quantify the magnitude of the debt.
So how do software teams gain this trust and create successful negotiations around software project decisions? First of all, they need a good language in which to have the conversation. Using source lines of code or function points as a basis for describing the size of the debt is a good start. An even better measure would be software size augmented by factors that indicate innate complexity and quality level (each of these is basically unit-less in the large but can be unitized within an organization). Good historical data is a great place to start understanding how size, complexity and quality all contribute to technical debt. Teams that have shown success through sound software estimation are the teams the business will listen to when they suggest that this shortcut will create X hours of additional work effort in a future release, or that the additional overhead associated with not fixing this problem now will be Y hours.
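As an illustration only – not an established model – here is a sketch of what monetizing a single deferred fix along those lines might look like. Every factor and number below is hypothetical and would have to be calibrated from an organization’s own historical data:

```python
# Illustrative-only sketch of monetizing one deferred fix. All of the
# factors and numbers are hypothetical; a real team would calibrate
# them from its own measured history.
size_fp = 25             # functional size touched by the shortcut
base_hours_per_fp = 6.0  # team's measured rework delivery rate
complexity_factor = 1.4  # innate complexity of the affected code
quality_factor = 1.2     # penalty for current structural quality
loaded_rate = 95         # dollars per labor hour

debt_hours = size_fp * base_hours_per_fp * complexity_factor * quality_factor
print(f"Estimated debt: {debt_hours:.0f} hours (~${debt_hours * loaded_rate:,.0f})")
```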
How does your organization go about quantifying technical debt?
I have recently been following an animated thread on LinkedIn, “Death of a Metaphor – Technical Debt.” It has been live for two months, with over 200 contributions from dozens of different people. The discussion was launched by questioning whether continued use of this metaphor makes sense. The discussion thread weaves and bobs around actually answering this question, but it’s amazing how passionate the world is on this topic. My personal opinion is that it’s a perfectly adequate metaphor because it helps create a discussion between IT and the business leaders in terms both can understand – dollars and cents.
Sometimes during software development, decisions are made to forgo structural quality to meet some other objective – additional functional requirements, time to market, etc. How many of us have been involved in a development project where it was more important to get the product out the door than to get it done right? We take shortcuts with the intent to go back and do it right once the immediate fire drill is over. The problem is that once the product is out the door and meeting the customers’ expectations, how do we convince the business that there is as much (maybe more) long term value in fixing a product that is seemingly working as in investing in new features that will make the product more appealing to more users? What business leaders may not understand is the cost of continuing to maintain and grow a structurally questionable – sometimes brittle – application. Technical debt is the monetization of that cost, making it possible for IT to communicate to the business the value of creating and maintaining a structurally sound application. Check out this blog post for a good discussion of technical debt and how to prevent accruing it: “Technical Debt Gets the Message Across”.
The IEEE published “Top 11 Technologies of the Decade” in the January 2011 edition of IEEE Spectrum magazine. It should come as a surprise to no one that the Smartphone was number 1 on this list. The answer to the author’s question “Is your phone smarter than a fifth grader?” was a resounding YES!
In 1983 Motorola introduced the first handheld cellular phone. It weighed in at two and a half pounds, had memory capacity for 30 phone numbers, took 10 hours to recharge and had a selling price of $4000 ($8045 in 2006 dollars). The phone was the size of a man’s head and would sustain an hour of conversation before a recharge was required. In June of 2007 Apple announced that it was launching the iPhone, which would integrate all of the electronic gadgets teenagers carried around in their pockets into a complete package – phone, web browser, iPod and camera. The iPhone 5, which should be available by Fall 2011, is expected to incorporate NFC technology, an upgraded operating system with cloud integration, music streaming, a voice interface, 4G connectivity and an embedded social networking tool. NFC technology makes it possible to use the phone effectively as a credit card – making payments by swiping the phone near a device that can read its information.
The proliferation of smartphones and tablets has led (and will continue to lead) to the proliferation of mobile applications. And it seems as there are no bounds to the kinds of applications that are being developed for smartphones. A sampling of some popular applications is listed below:
* Chase Mobile – which allows users to check account balances and review transactions
* Angry Birds – very popular gaming software
* Facebook - allows users to report their status from anywhere
* Yelp – allows users to locate places to eat, shop, etc., along with reviews from local patrons of said establishments
The list goes on, but clearly if you can dream it, there’s an app for that (or at least there could be). As practitioners in the art of estimation, all this mobile application development leads to the inevitable question: what does it cost to develop mobile apps, and how is mobile application development different from more traditional forms of development?
Mobile apps can be categorized as either native applications – which run entirely on the device – or web applications, with small clients resident on the device that interact with applications running on a remote server. It appears that, in general, web applications are less complex than native ones, and thus less effort is required to build, test and deploy them.
Mobile application development is still in its infancy, so a lot of what is going on in the industry now has a bit of a Wild West feel to it. This stage of any technology is impacted by learning curve issues, which may dampen productivity; at the same time there is the newness factor, where smart people are excited about the promise of new technologies and are willing to work extra hard to make things happen. So while there is some effort/cost data available for mobile apps, we must temper our enthusiasm.
Another concern when developing mobile applications is which platform(s) the application is being developed for. If an application is being developed for iPhone, Android and Blackberry, the effort is significantly increased. Although there are elements of the design that can certainly transcend platforms, each of these platforms has its own operating system and development environment. There are additional potential compatibility issues in cases such as Android, where multiple companies manufacture devices. Additionally, application developers need to determine which versions of the OS on each platform the application will support. They also need to be aware of, and respect, the user interface guidelines developed for the device(s) for which they are building apps.
Mobile applications may need to respond to various forms of external data from sensors, a real or virtual keypad, a GPS, microphones, etc. They may need to respond to movements of the actual device as well – so the screen adjusts when the user changes the orientation of the device. There are also many instances where mobile apps will need to interact with other applications on the device. Often the mobile application will need to share elements of the user interface with other applications.
Mobile applications need to be developed in such a way as to limit the consumption of resources. No matter how good your killer app is, if it drains the phone battery in half an hour no one is going to use it. Like all other applications with access to the Internet, mobile apps should also be built with a focus on security, so that the user’s data is protected from malicious intrusions.
All of the issues listed above are likely to impact the complexity of the development effort of the mobile application – thus they have the potential to be an important part of the cost equation as well. In many cases mobile applications are smaller than traditional applications so increased complexity may be offset because the projects and corresponding project teams are small. Still this increased complexity is important to consider.
Another area where mobile apps are dramatically different from traditional apps is in the testing of the application. Simulators and emulators exist and can be helpful in some circumstances, but they are not always easy to use, effective or efficient. With some platforms there are also issues around the maturity of the technology that transfers applications from the development environment to the mobile device, further complicating the testing process. Finally, there is the sheer magnitude of making sure that the application functions correctly on all the combinations of hardware, operating systems and carriers on which it will be expected to perform. The cost and effort associated with testing may therefore differ significantly for mobile applications.
Smartphones are here to stay, and more and more businesses will want (or need) to develop apps for them in order to remain competitive. With the technology still relatively immature, there is limited data available to help us develop cost models, but there is feedback from the field on which issues are most likely to impact costs, and some data is available from commercially based mobile app developments – so at least we have a place to start.
Share your experiences with mobile application development by leaving a comment for this post.
Ross, Philip E., “Top 11 Technologies of the Decade”, IEEE Spectrum, January 2011
Check out this Report on Big Data from McKinsey & Company published in June 2011.
Back in the day, when personal computers were becoming widely accepted and Microsoft Windows® was the new cool thing, SneakerNet was a common means of sharing data. Certainly the introduction and subsequent improvements of networking technology and the Internet have made data sharing a whole lot easier and quicker. But the concept of Big Data creates a whole new level of opportunity and potential for collecting and using data in ways heretofore unthinkable.
So what is Big Data? According to the report referenced above, “Big Data” refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage and analyze. The authors are cautious not to declare a specific size, since lightning-fast technology advances may quickly make any number wrong, though they do propose that today’s Big Data datasets range from a few terabytes to several petabytes (thousands of terabytes). (Hmm… I just had to add petabytes to my Word® 2007 dictionary.) The concept of Big Data requires collecting data from multiple sources (sensors, smartphones, GPS devices, social media postings, data collected by government agencies, etc.) and analyzing that data in ways that will make life better and bring more value to businesses, consumers, citizens, etc.
Some examples of Big Data types of applications include:
* Marketing initiatives like Amazon.com’s “you might also want …” suggestions, based on information available about your buying patterns and the buying patterns of those purchasing the same item
* Applications like RedLaser which lets a shopper scan a bar code of an in-store item with their smartphone and get immediate price and product comparisons
* Improved supply chain management through access to data across the supply chain helping manufacturers optimize planning and delivery of new products
* Mobile phone apps that allow merchants to track customers from the moment they enter the store to determine traffic patterns and flows through shelves and displays
* Capabilities like OnStar®, where sensors in the automobile send real time data to the service if the airbags deploy, or alert the service that a system is malfunctioning.
* Capabilities like ShopAlert that sends coupons or offers to smartphones of subscribers when they are in the vicinity of a store, restaurant or bar
The report actually contains many more examples across various platforms and sectors. It specifically covers Health Care, Public Sector, Retail, Manufacturing and the ever growing business of using personal location data. Within each of these categories, opportunities for creating value as well as potential barriers to adoption are presented.
While the technology and the possibilities create lots of excitement, there are also some areas of concern: how much personal data is too much, and how close do we want “Big Data” to come to “Big Brother”? Certainly there are lots of issues that need to be addressed at all different levels before Big Data goes wildly mainstream – but it seems to me that the possibilities for value and capability will, in many cases, be worth it.
How have you seen Big Data used? What possibilities can you see for Big Data applications? Leave a comment to this post to share your Big Data stories.