Arlene Minkiewicz

As Chief Scientist for PRICE Systems, provider of world-class cost estimating tools and services, I spend my days researching new technologies, processes and techniques associated with the development and production of hardware, software and systems.  While the end goal is to develop an understanding of how various factors impact cost, the journey requires that I learn new things every day about how the world works.  I frequently publish or present these findings to the cost and measurement communities in various venues around the world.  The primary focus of my research (and my main passion) is all things related to Software Development and Information Technology, although this has not precluded occasional forays into topics such as hardware manufacturing, systems engineering and composite materials. The goal of my research is arming cost estimators with the best technology available to address their estimating challenges and help them achieve estimating accuracy.



NASA Cost Community Boldly Moving Forward...

I had the distinct pleasure last week of attending the 2014 NASA Cost Symposium.  While to the uninitiated this might sound like a bit of a snoozer, it was actually quite interesting and proved to be the source of a ton of valuable information.  The event took place at Langley Research Center in Hampton, VA – near Williamsburg and Newport News, and not too far from Virginia Beach.  My participation was somewhat self-serving in that I was there to talk about PRICE's new Space Missions Cost Model for TruePlanning®.  This model – discussed in an earlier blog – offers a version of the TruePlanning® Hardware and Systems cost methodology specifically tailored to estimate costs for the spacecraft and payloads developed for robotic Earth-orbital and scientific missions in space.

Despite the fact that shameless commercialism was one  motivation for my attendance, this in no way inhibited the geek in me from learning a lot from all the smart people who work for or provide services to NASA.  Here’s a sampling of the things presented and discussed:

  • NASA is just finishing up the PCEC (Project Cost Estimating Capability) model. This model, the next generation of NAFCOM (the NASA/Air Force Cost Model), has been developed based on the data that NASA collects on missions and their equipment.
  • NASA continues to collect cost and technical data on missions and equipment in order to drive continuous improvement of its cost estimating practices.
  • Cost and schedule risk and uncertainty analysis continue to be paramount in the space cost community.  While introducing Joe Hamaker's talk on Risk Analysis, the symposium organizer and MC was quick to highlight the agenda of the 4th Annual NASA Cost Symposium, held in 1984, where Joe Hamaker also gave a talk on Risk Analysis.
  • JCL (Joint Confidence Level) is still important in the NASA cost community.  JCL requires that cost and schedule estimates be prepared and presented with a specific confidence level.  If you present a cost estimate of X and a schedule estimate of Y with a JCL of 70%, this means you are 70% confident that the program will be completed at or below a cost of X and at or before a schedule duration of Y.  Over the last several years, I have watched NASA cost and schedule folks work hard to get their heads around the JCL challenge, and it's starting to look like it's working well for them.
  • Schedule estimation is hard and there are many complexities associated with it.  A constant concern for NASA with cost and schedule estimation is the impact of schedule growth.  Because so many of their missions depend on (literally) the alignment of the stars, being late with a spacecraft, payload or launch vehicle can have deleterious effects on the success of a mission.  It was good to hear that some pretty smart people are giving serious thought to ways to better estimate schedules and effectively assess the uncertainties and risks inherent in those schedules.
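The JCL arithmetic described above can be sketched with a small Monte Carlo simulation. This is only an illustration: the lognormal cost and schedule distributions and their parameters are made-up assumptions, and a real JCL analysis would drive the simulation from a correlated cost/schedule risk model rather than independent draws.

```python
import random

random.seed(42)

def simulate_jcl(cost_target, schedule_target, n=100_000):
    """Estimate the joint confidence level: the fraction of simulated
    outcomes finishing at or below BOTH the cost and schedule targets."""
    hits = 0
    for _ in range(n):
        # Hypothetical lognormal distributions for cost ($M) and
        # duration (months); parameters are illustrative only.
        cost = random.lognormvariate(4.6, 0.25)      # median ~ $100M
        duration = random.lognormvariate(3.7, 0.20)  # median ~ 40 months
        if cost <= cost_target and duration <= schedule_target:
            hits += 1
    return hits / n

jcl = simulate_jcl(cost_target=120.0, schedule_target=46.0)
print(f"JCL at ($120M, 46 months): {jcl:.0%}")
```

Because cost and schedule growth are usually positively correlated, treating them as independent (as this sketch does) distorts the joint confidence; handling that correlation honestly is one reason the JCL discipline is harder than it looks.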

It was a very full three days of presentations on cost estimation, schedule estimation, cost risk and uncertainty, and schedule risk and uncertainty, as well as the tools and databases available to support these estimates and analyses.  In addition to all this, the NASA Cost Symposium continues to be an excellent venue to get out and talk to people with very interesting cost and schedule estimation challenges.  Not only did I get the chance to catch up with some old friends, I got the opportunity to learn what's new and exciting in the world of space travel and exploration.
Click here to learn more about the NASA cost symposium.


Space Missions Cost Model

July 2014 marked the 45th anniversary of Neil Armstrong's historic stroll on the moon.  If you go to the NASA website and select Missions you'll probably be amazed at the number of missions in NASA's past, present, and future.  Unless you're living under a rock, you know about the International Space Station and the Hubble telescope, but I'm guessing there's a lot about space missions that many of us are unaware of.  The Dawn spacecraft, launched in 2007 from Cape Canaveral, was sent into space to help NASA scientists learn about the history of the solar system.  This spacecraft, expected to remain in flight for nearly a decade, will study the asteroid Vesta and the dwarf planet Ceres to learn how they have evolved over time.  Dawn has also sent some spectacular images to NASA scientists. The Orion spacecraft is scheduled for its first test flight later this year.  Orion is built to take humans farther into space – eventually all the way to Mars.  These are but a few examples.

My point is that there's some pretty cool stuff being accomplished in space. But this progress does not come for free.  In fact it comes at substantial cost; it is an incredibly important part of the NASA planning process to appreciate the costs of a mission long before all of the specifics of that mission are ironed out.  PRICE Systems has recently ported the Chicago Cost Model (formerly a complex spreadsheet implementation) into the dynamic TruePlanning® framework.  This solution couples a time-tested cost estimating methodology that has supported NASA mission analysis for over twenty-five years with the power and structure of the TruePlanning® framework.  The Space Missions Model for TruePlanning® combines PRICE's hardware estimation methodology with space-specific data and terminology, creating a space-focused solution. This model can be used to estimate costs from formulation through implementation for robotic Earth and space science missions through 10 new cost objects pictured below.

To learn more about this new Space Missions estimating capability – attend my webinar next week on July 30, 2014 at 1:00 PM EDT.  Follow this link to register.





IFPUG SNAP - maybe not so fast

Whether you’re doing a software cost estimate to support a bid and proposal effort, a software valuation, a should-cost analysis, or a detailed project plan, it is vitally important to understand the ‘size’ of the software you are estimating.  The problem with software size is that it is essentially intangible.  If you tell me you are building a widget that weighs 13 pounds, I can really start to get my head around the task at hand.  If I’m chatting about this with my European colleagues, I can apply a universally accepted conversion to put it into a context they are more familiar with.  Software offers no such tangible or easily translatable metric for size.  If you tell me you’re writing 100,000 lines of code, I am still left scratching my head wondering how big the project really is.

Many methods have been postulated over the years to help the software engineering community do a better job of assessing software size based on the functionality it delivers, in order to provide better cost realism (makes sense to me!). The International Function Point Users Group (IFPUG) has maintained and updated counting practices for IFPUG Function Points for well over 20 years.  IFPUG Function Points, likely the most popular and widely used functional size measure for software, have gained popularity in many communities, but detractors have focused on two areas.  The first is that they are costly and time consuming to count and that automated counting of Function Points is not possible.  The second is that they fail to take into account non-functional requirements when they are used as an estimation input.

IFPUG has recently introduced a potential answer to the non-functional requirements criticism.  IFPUG Software Non-functional Assessment Practices (SNAP) has been developed to act as a companion to the IFPUG Function Point count.  SNAP counts make allowances for things such as logical or mathematical operations, user interfaces, multiple platforms, multiple methods, etc.  The technique was only recently introduced and the jury is still out on how well it is working for folks.  There is, however, one caution I would put forth for those thinking of using SNAP.  Keep in mind this capability was introduced to help those who use Function Points directly as an estimation technique – in other words, those who assess project productivity by determining the average labor hours necessary to deliver a function point, then estimate forward using this average value.  If you are using Function Points in the context of a commercially available or homegrown software estimation tool, it is important to understand that some of the non-functional requirements may already have been accounted for through other model inputs or modeling techniques derived to work around the IFPUG limitation.  Make sure you understand what’s in there (your model and your SNAP count) to minimize the chance of double counting and maximize the chance of an accurate total cost of ownership model.
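To make the double-counting caution concrete, here is a minimal sketch of the productivity-based estimating approach described above. The hours-per-function-point and hours-per-SNAP-point rates are hypothetical placeholders, not published values; real rates would come from an organization's own history.

```python
def estimate_effort_hours(function_points, snap_points,
                          hours_per_fp=8.0, hours_per_sp=2.5,
                          model_covers_nonfunctional=False):
    """Sketch of a combined FP + SNAP effort estimate.

    If the estimation model already accounts for non-functional
    requirements through other inputs, the SNAP term should be
    dropped to avoid double counting.
    """
    effort = function_points * hours_per_fp
    if not model_covers_nonfunctional:
        effort += snap_points * hours_per_sp
    return effort

# 400 FP of functional work plus 120 SNAP points of non-functional work
print(estimate_effort_hours(400, 120))                                   # 3500.0
print(estimate_effort_hours(400, 120, model_covers_nonfunctional=True))  # 3200.0
```

The flag makes the point explicit: the same SNAP count either adds to the estimate or is already baked into the model's other inputs, but never both.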

For more information on Function Points and SNAP check out these resources


Enhance your Bid and Proposal Process by Calibrating your Experts

Proposal estimates based on grassroots engineering judgment are necessary to achieve company buy-in, but often are not convincing or are not in sync with the price-to-win.  This contention can be resolved by comparing the grassroots estimate to an estimate developed using data-driven parametric techniques.  Parametric estimates apply statistical relationships to project data to determine likely costs for a project.  Of course, for a parametric model to properly support this cross-check of the grassroots estimate, the proper data must be fed into the model.  This most likely requires the estimator to reach out to various subject matter experts.
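As a concrete illustration of "applying statistical relationships to project data", here is a minimal sketch of fitting a power-law cost estimating relationship (CER) by least squares in log-log space. The project history is invented for illustration; a real CER would be fit to an organization's own validated data.

```python
import math

def fit_power_cer(sizes, costs):
    """Fit a simple parametric CER of the form cost = a * size**b
    by ordinary least squares on (log size, log cost)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical history: (size in KSLOC, cost in labor hours)
sizes = [10, 25, 50, 100, 200]
costs = [1600, 4500, 9800, 21000, 46000]
a, b = fit_power_cer(sizes, costs)
print(f"cost ~ {a:.1f} * size^{b:.2f}")

# Predict a new 75-KSLOC project from the fitted relationship
predicted = a * 75 ** b
print(f"predicted effort: {predicted:.0f} hours")
```

An exponent b above 1 encodes diseconomies of scale (bigger projects cost disproportionately more), which is exactly the kind of relationship a grassroots estimate can be cross-checked against.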

Before reaching out to those subject matter experts, first read this blog post by Robert Stoddard of the Software Engineering Institute (SEI) on Quantifying Uncertainty in Early Life Cycle Cost Estimation (QUELCE), a research effort at the SEI.  The methodology they are developing relies heavily on domain expert judgment.  One of the first challenges they took on was improving the accuracy and reliability of expert judgment.  As a jumping-off point they relied on the work of Douglas Hubbard, author of How to Measure Anything.

The technique they adapted is referred to as “calibrating your judgment”.  Experts are given a set of questions and asked to provide an upper and lower bound such that they are 90 percent certain the answer falls within the bounds.  Feedback shows whether they are too conservative (always right because they set the bounds too wide) or overly optimistic.  Hubbard’s research indicates that most people start off highly overconfident but, through repeated rounds of feedback, become better at applying their expertise realistically.
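Scoring a calibration exercise is simple arithmetic: count how often the true answer falls inside the expert's stated 90% interval. A minimal sketch, with hypothetical responses:

```python
def calibration_score(answers):
    """Score an expert's 90% confidence intervals.

    `answers` is a list of (lower, upper, true_value) tuples. A
    well-calibrated expert captures the true value about 90% of the
    time; much less suggests overconfidence (bounds too narrow),
    much more suggests overly conservative (too wide) bounds."""
    hits = sum(1 for lo, hi, truth in answers if lo <= truth <= hi)
    return hits / len(answers)

# Hypothetical responses: (lower bound, upper bound, true answer)
responses = [
    (10, 20, 15),     # hit
    (5, 8, 9),        # miss: bounds too narrow
    (100, 200, 150),  # hit
    (0, 1, 2),        # miss
    (40, 60, 50),     # hit
]
rate = calibration_score(responses)
print(f"hit rate: {rate:.0%}")  # hit rate: 60%
if rate < 0.9:
    print("likely overconfident: intervals are too narrow")
```

A 60% hit rate on 90% intervals is the classic overconfidence pattern Hubbard describes; repeated rounds of this feedback are what push experts toward realistic bounds.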

This is a very interesting study, and I think that anyone who is relying on experts to guide important bid and proposal decisions should think about the confidence (or overconfidence) of their subject matter experts.


The estimator's conundrum

Here’s a conundrum.  You are a software estimator responsible for helping the decision makers in your company determine what business to pursue and what business to steer clear of.  You know that to win profitable business, your company first needs to decide which opportunities are golden and which should be avoided.  You also know that at the point at which this decision needs to be made, there is very little information available to support a quality estimate.  Add to this the fact that software estimation is hard at almost any stage.  What’s an estimator to do?

I think back to the takeaways from Fred Brooks’ tremendous book The Mythical Man-Month. As software development professionals and software estimators we should all occasionally revisit this book because Brooks’ observations, for the most part, still hold true.  Our languages and development environments have become much more sophisticated; we develop software for mobile devices instead of mainframes; we’ve replaced rigidly sequenced software development phases with agile practices and paradigms.  Despite the great strides we have made in addressing the accidental complexities of software development, we still get tripped up dealing with the essential complexity of crafting software solutions to increasingly challenging problems.  This is what makes software estimation particularly difficult. Basically, Brooks’ point was that advances in software development technology only really improve productivity for parts of the development process.  The pieces of software development that rely on invention and innovation remain challenging.

So back to the original question: what's an estimator to do?  The good news is that there are things you, the estimator, can do even in the early stages when rough order of magnitude (ROM) estimates are required to support bid/no-bid decisions.  First of all, as an experienced estimator you should be familiar not only with the accidental complexities associated with software development in your organization; you should also, through experience and data collection, have a good feel for the essential complexities associated with the types of software being developed.

Software estimation to support good decision-making should certainly not be done in a vacuum.  It should be based on mature estimation practices informed by data representative of software developed within your organization.  Estimation models, whether commercial or homegrown, are great tools to help guide the estimation process.  But a fool with a tool is still a fool.  A good software estimate should be informed by data-driven knowledge and a good understanding of both the accidental and essential complexities of the software project being considered.



It's 2013 - Why do our Software Projects Continue to Fail?

Unless you live under a rock, you are aware of the rollout disaster.  While similar IT failures are regularly in the news, this one's high profile has really mainstreamed awareness of the fragility of many IT projects.  Check out this article entitled ‘The Worst IT project disasters of 2013’.  It details IT project failures such as:
  •  IBM’s failure to deliver on a payroll system project that could potentially cost taxpayers up to $1.1 billion US
  •  SAP’s failure to deliver satisfactorily on requirements for a massive payroll project on which the state of California has spent over $250 million US since 2005
  •  Deloitte’s unemployment systems creating problems with the successful management of unemployment compensation in California, Maryland and Florida
And while the article points to the specific vendors as the root of the problem, it is safe to assume that the vendors may not completely agree and that the end customers may also have some culpability in the failures.  My mission here is not to judge either party but rather to take a minute to muse on why, in 2013, we continue to have these very public, very expensive failures to deliver software that meets requirements in a timely fashion.
In my quest to find an answer to this question I found myself reading the 2013 Chaos Manifesto, available at this link.  Each year the Standish Group takes a look at IT projects to identify the factors that appear to drive project success and how important each of these factors is relative to the others.  Although this year’s report focused on smaller IT projects, the success factors identified should resonate across the software industry.  In general, Standish has found that the chances of success for small projects are about 70%, while large projects are 10 times more likely to fail than small projects.  Among their recommendations is that large projects be broken down into small projects rather than tackled all at once.
The factors identified in this report include (in order of importance):
  •  Executive management support  - the executive sponsor should be the most important person in the project with the target for success visibly on his or her forehead
  •  User involvement – projects where the user is not involved perform poorly
  •  Optimization – projects that are properly optimized and take advantage of technology and other methods to improve efficiency are more likely to succeed than those that don’t 
  •  Skilled Resources  - good people increase the chances for project success
  •  Project Management Expertise – good people in the project and process management seats increase the chances of success
  •  Agile processes  - application of agile processes enforces the notion of executive support and user involvement
  •  Less important, but also mentioned:
    • Clear Business Objectives
    • Emotional Maturity of the project environment
    • Execution
    • Tools and Infrastructure
It is interesting to note that the same success factors were identified in the 2012 Chaos Manifesto, though the order of importance was slightly different. The first two items on the list (which were the first two in 2012 as well) tend to lend credence to my earlier speculation that the end customers for the above-mentioned project failures may have some culpability as well.
Maybe the Chaos Manifesto should be required reading for software contractors and their customers? 
What steps does your organization take to prevent IT Project failures?

At the Intersection of Big Data and the Cloud

Forrester defines big data as “the techniques and technologies that make capturing value from data at extreme scales economical”.  Wikipedia defines it as “a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.  The challenges include capture, curation, storage, search, sharing, analysis and visualization”.  Many use the 3Vs to describe the characteristics of big data – Volume, Variety and Velocity.  Basically, Big Data refers to number crunching of epic proportions, accomplishing in minutes what may have taken weeks several years ago.
So what does this have to do with cloud computing? Certainly the notion of Big Data can exist without cloud computing.  The question to ponder is whether the notion of Big Data would have been conceived without cloud computing.  The average teenager spends an inordinate amount of time sharing thoughts (text), videos, and photos with their friends via Facebook, Instagram, Google+, Twitter, Pinterest, etc.  In the US in 2011, retail shopping websites earned $162 billion, and the number of online shoppers is expected to grow from 137 million in 2010 to 175 million in 2016.  The average number of Google searches per day went from 60 million in 2000 to 4.717 billion in 2011.  These applications all exist in the cloud, and their providers take an Orwellian interest in every transaction that is made.  How else would Facebook know whom we might want to friend, and Amazon know what books to recommend we read?
So cloud computing is certainly an enabling technology for Big Data and has led to vast amounts of data being collected and stored.  Add to this the vast amounts of data collected from other sources through applications and devices designed to collect and transmit data.  Now consider the fact that this data is being collected in many formats: text, video, still images, audio, sensor readings, GPS coordinates, radio frequency identification (RFID) readers, etc. are all thrown into the pot.  Big Data is the set of tools and techniques that make it possible to process these large amounts of data in varying formats with lightning speed.
This brings us full circle back to cloud computing.  A recent Red Hat report indicates that many businesses implemented cloud-based environments last year as a way to manage the influx of structured and unstructured data.  The cloud not only provides storage solutions for the vast amounts of data being collected but also provides enough computing power to make analysis and visualization of the data possible.

Agile Development for Space

Agile development practices have enabled software development organizations to deliver quality software that optimizes the customer’s satisfaction with the value they receive for their money.  That being said, agile development may not be the best approach for every software development project.  Alistair Cockburn, agile development specialist and one of the initiators of the agile software development movement, acknowledges that “agile is not for every project”.  Further elucidating this point, Cockburn opines: “small projects, web projects, exploratory projects, agile is fabulous; it beats the pants off of everything else, but for NASA, no”.  While this is true when one considers a fully agile development, it may be too broad a generalization, as there are documented cases of some agile practices being successfully modified and applied to space system software.
Clearly space and other mission critical systems have high reliability, fault tolerance requirements with strict safety and performance criteria.  Chances of success for mission critical software development are “greatly increased” with an integrated, effective combination of good technology and solid process which supports the following key principles:
* Effective requirements management and analysis
* Reusable component libraries
* Object-oriented methodologies
* Prototyping
* Quality assurance

While the need for space and other mission critical solutions continues to prevail, market tolerance for long-running programs delivering monolithic solutions is waning.  New developments for space systems and other mission critical systems are looking for ways to create Faster, Better, Cheaper solutions that continue to satisfy the rigorous safety and performance requirements necessary to protect life and preserve value.  Such changes are likely to come about by finding ways to create hybrid solutions which integrate agile practices where possible with more stringent formal methods where necessary.  Some agile practices for consideration in such a hybrid solution include:

* Small teams evolving design in small visible steps
* Daily stand up meetings
* Pair programming
* Continuous automated testing
* Test driven development
* Collaborative planning (involving the customer)

Incorporation of practices such as evolutionary planning, refactoring and little to no documentation requires more careful consideration of their applicability; in many cases they should be dismissed outright, as they run at cross purposes with the fault tolerance and safety criticality goals common to most mission critical systems.

Check out [1] for a great discussion of the development of Mission Control Technologies at NASA Ames using a hybrid process including many agile practices.  This team has managed to segregate activities constrained by mission criticality from other development activities in such a way as to leverage the benefits of agile practices while still maintaining rigor and formality where required.

[1] Christopher Webster, Nija Shi, Irene Skupniewicz Smith, “Delivering Software into NASA’s Mission Control Center Using Agile Development Techniques,” Aerospace Conference 2012.  (Retrieved Sept 2013)



Certified or Certifiable - Is there benefit from Automated Function Point Counting?

I recently attended a webinar presented by David Herron of the David Consulting Group (DCG) discussing a recently released specification for the automation of function point counting (available on the Consortium for IT Software Quality (CISQ) site).  Function point counting is a process through which software ‘size’ is measured by the amount of business value that the software delivers to the end user.

Function Point counts are thought by many to be a far superior means of measuring software ‘size’ because they are technology neutral and not impacted by factors such as programmer style.  A major impediment to wholesale adoption of Function Point counting has been the fact that the process is manual, tedious and time consuming. Source Lines of Code (an alternative means of software measurement) has many critics, and yet many still use it as their primary measure because software can be developed to count lines consistently on finished applications.  To achieve consistent Function Point counts, one must study the counting practices or standards for function point counting (there are actually 5 standards for different types of Function Point counts – but we’ll cover that some other day!).  The International Function Point Users Group (IFPUG), focused on the IFPUG Function Point counting method (the most widely used of the 5 methods available), has developed and maintains a counting practices manual. To become a Certified Function Point Specialist one must pass an exam that IFPUG administers.
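Once each component has been identified and classified, the unadjusted function point count itself is simple arithmetic using the standard IFPUG complexity weights. The sketch below uses those published weights; the hard, manual part – identifying and classifying the components per the counting practices manual – is exactly what the automation specification aims to address. The tally shown is hypothetical.

```python
# Standard IFPUG weights for the unadjusted function point count;
# the rules for classifying each component as low/average/high
# complexity come from the IFPUG Counting Practices Manual.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # external inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # external outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7, "high": 10},   # external interface files
}

def unadjusted_fp(counts):
    """counts maps (component_type, complexity) -> number of components."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

# Hypothetical tally for a small application
tally = {
    ("EI", "low"): 8, ("EI", "average"): 3,
    ("EO", "average"): 5, ("EQ", "low"): 4,
    ("ILF", "average"): 3, ("EIF", "low"): 2,
}
print(unadjusted_fp(tally))  # 113
```

The arithmetic takes seconds; the days of counting come from deciding what goes into the tally.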

In a previous life I thought that being a Certified Function Point Specialist would be a useful skill for a software estimation professional like me.  I studied, took the exam and was pleased to learn that I had passed and could now add CFPS to my business card.  Shortly afterward an opportunity presented itself for me to put my function point counting expertise to use on a relatively small software application (just a couple of days of counting).  Those were of course several of the longest days of my life. (Begging the question – certified or certifiable?) Quite frankly, Function Point counting is tedious and boring (or maybe I just landed a particularly boring application); any thoughtful effort to automate it gets the thumbs up from me.  Needless to say, I decided to forego the business card update and stayed under the function point counting radar until my certification expired.

I believe automation will be a good thing and that it will benefit our industry.  If we are able to agree that the rules of automation are good enough to represent adequately most of the software that we develop, while also maintaining sight of the situations where manual intervention is required, we have a decent chance of being able to conquer some of the issues that have plagued our industry for years.  From an estimation perspective this would certainly facilitate the drive for delivering estimates that are data driven.

I have not yet taken a detailed dive into the recently released specification, but it’s on my to-do list, as I am very interested in which artifacts in software code will be examined to determine the number of Function Points.  Clearly the developers of this standard had to make some adaptations to the actual rules to automate a pretty human-reliant process.  There’s lots of passion in our industry for Function Point counts because of the promise they deliver; it will be interesting to learn where the industry experts fall on the feasibility and practicality of automating function point counting as they become familiar with the recommendations currently on the table.  What are your thoughts on the subject?


Lessons learned from the ISBSG Database

This past year, PRICE Systems entered into a partnership with the International Software Benchmarking Standards Group (ISBSG).  As part of this partnership we have a corporate subscription to both of their databases – the Development and Enhancement database and the Maintenance and Support database.  We can use these for analysis and to develop metrics that will help TruePlanning® users be better software estimators.

The ISBSG is one of the oldest and most trusted sources of software project data.  It is a not-for-profit organization dedicated to improving software measurement at an international level.  Its commitment to protecting data and ensuring that the identities of sources of individual data points are kept confidential encourages organizations to contribute to this non-partisan effort to help improve the software community through better benchmarking and measurement.  The ISBSG database is focused entirely on functional size measurements such as IFPUG Function Points, NESMA Function Points, COSMIC Function Points, etc.  The data in the database is more focused on commercial and business applications than on aerospace and defense – although there are still some nuggets in there that we all could learn from.

At PRICE, we have started several different initiatives to incorporate ISBSG lessons learned into our product offerings.  We have done several productivity studies based on what we learned from this data.  Table 1 shows a language productivity study conducted using the IFPUG Function Point data in the ISBSG database.  For this study we looked at PDR (Product Delivery Rate), which is in units of hours per FSM (Functional Size Measure).  Table 2 shows the results of a study highlighting productivity by industry type.  Each of these tables provides useful general guidance as well as some insight into the types of data and industries covered by this database.

Table 1: Productivity by programming language

Table 2: Productivity by Industry Type
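A PDR study of the kind shown in Table 1 boils down to computing hours per function point for each project and summarizing by group. Here is a minimal sketch; the project records are invented placeholders, not actual ISBSG data (which is available only under subscription).

```python
from collections import defaultdict
from statistics import median

# Illustrative project records: (language, labor hours, function points)
projects = [
    ("Java", 4200, 520), ("Java", 3000, 410), ("Java", 5100, 480),
    ("C#", 2600, 350), ("C#", 3900, 430),
    ("COBOL", 6800, 500), ("COBOL", 7500, 610),
]

pdr_by_language = defaultdict(list)
for language, hours, fp in projects:
    pdr_by_language[language].append(hours / fp)  # PDR = hours per FP

for language, rates in sorted(pdr_by_language.items()):
    print(f"{language}: median PDR = {median(rates):.1f} hours/FP")
```

Using the median rather than the mean keeps a single unusually troubled project from dominating the productivity figure, which matters with the small per-language sample sizes typical of benchmark slices.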


We are pursuing several other initiatives that utilize ISBSG data to improve the TruePlanning® experience.  We are studying the COSMIC data in an effort to develop a COSMIC Function Point estimating model; the database has sufficient COSMIC data points and the initial results are promising.  Additionally, we are doing a True S calibration across industry sectors and application types to create Software Cost Object templates for representative types of applications within a sector, for both new developments and enhancement projects.  We expect the deliverable from this study to be a collection of calibrated cost objects that will inform software estimates for specific industries and application types.


iPhone Inspired Musings on Mobile App Development

This week I got the new iPhone 5 – and I have to say that it’s a ton better than my old Blackberry (which – to be fair – was an old model on its last legs, with a battery that couldn’t hold a charge for more than an hour).  After some consultation with the teenagers in my life, I have started to populate my phone with some of the latest and greatest apps.  Hey, I can now talk to my phone and it understands my commands and carries them out!

Being the software dork that I am, the presence of cool apps on my phone leads me to ponder how mobile application development differs from development of software hosted on more sedentary platforms.  Mobile application development is still in its infancy, so a lot of what’s going on in the industry has a bit of a Wild West feel to it. There are many ways to categorize mobile applications.  One distinction is whether the application is native – meaning the entire app runs on the smartphone or tablet – or a web application, with a small client on the device interacting with an application running on a remote server.  Another way to categorize mobile apps is by the types of functions they perform.  Basically they can be lumped into several categories as follows:

  • Basic table functionality – simple drill down to get information on a topic
  • Data-driven functionality – access and present data from a data source either local to the device or from an external source
  • Games
  • Device functionality – improved usability for hardware features of the device such as the GPS or camera
  • Fully dynamic functionality – relies on external information such as Twitter or the weather channel
  • Custom utility functions – allow the user to enter content in various forms, such as a sketchpad or document creation utility

So how different is mobile application development from traditional software development?  In some ways not so much – we still need to understand and execute requirements, design, code and test.  There are, however, several areas where these activities need to be approached differently:

  • Applications need to be developed for multiple versions of several mobile operating systems and must be compatible with many different hardware devices
  • Applications need to respond to various forms of external input – sensors, touch screens, real or virtual keypads, GPS devices, microphones, etc.
  • Applications may need to respond to movement of the actual device, so the screen adjusts when the user changes the device’s orientation
  • Mobile applications often need to share elements of the user interface with other applications
  • Developers of mobile apps need to be aware of resource consumption
  • Testing mobile applications presents new and unique challenges – simulators and emulators exist and can be helpful in some circumstances, but they are not always easy to use, effective or efficient
  • Development platforms are at varying degrees of maturity

The relative newness of mobile app development, combined with rapidly emerging technological advances (I repeat – I can talk to my phone!), means that we are a long way from a full understanding of the costs and effort associated with such projects. We can certainly look to the similarities with traditional apps as a starting point and acknowledge the risk associated with the unknowns.

What do you see as the biggest challenges for mobile application developers?

(0) Comments

Parametric Estimation for Agile Projects

I am frequently questioned by clients and prospects about the applicability of PRICE’s parametric software estimation model to agile software development projects. 

There are several ways one could respond to this.  My first thought is that if a shop is truly agile, they don’t need an estimation tool.  They know their development team’s velocity because agile teams are committed to measurement.  They also either know when they need to make a delivery – in which case whatever amount of software they’ve built by that point will be released – or they know the minimal feature set without which the product will not add value to the customer base – in which case however much time it takes to reach that minimal feature set is how much time will be spent (with the understanding that being agile means even this minimal feature set may be redefined by the end of the project).  The nature of agile development requires estimation to be done at a very low level and applied only to the user stories involved in the current iteration.

This answer, while generally acceptable to the development team, is often a bad answer for the business, which needs to develop plans and create splash and sparkle around an upcoming software release.  The business needs a good idea of when software will be delivered and what set of features it can expect that software to contain.  To folks with these requirements my answer to the above question is a resounding YES.  In fact this is a perfect application of parametric estimating techniques, because it allows a union in which the forecasting-minded business side of the house can combine what it learns from the measurement-committed development team with the experiential knowledge and requirements it brings to the table.  This creates an environment where a plan can be formed based on the business’s best guess at what the final product will deliver, along with hard data about the productivity the team has delivered in the past.
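As a toy illustration of that union, here is a minimal sketch in Python of how measured team velocity and a business-side backlog estimate can be combined into a schedule forecast with a risk buffer.  The numbers and function names are invented for illustration – nothing here comes from TruePlanning or any particular agile shop:

```python
import statistics

def forecast_sprints(backlog_points, past_velocities, buffer=0.15):
    """Estimate sprints needed to burn down a backlog from observed velocity.

    backlog_points: the business's best guess at the size of the release backlog
    past_velocities: story points the team actually delivered in past sprints
    buffer: schedule reserve acknowledging the risk of the unknowns
    """
    velocity = statistics.mean(past_velocities)   # the team's measured velocity
    sprints = backlog_points / velocity
    return round(sprints * (1 + buffer), 1)

# A team that has delivered 28-34 points per sprint, facing a 240-point backlog:
print(forecast_sprints(240, [30, 28, 34, 32]))    # 8.9 sprints with a 15% buffer
```

The buffer parameter is where the experiential, forecasting side of the house earns its keep – it encodes a judgment about risk that the raw velocity data cannot supply.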

The next obvious question is how to ‘tell’ the parametric model that an agile development technique is being employed, and how to translate from story points to a more traditional unit of software size measurement.  To questions such as this I am quick to point out that agile development is a paradigm based on a set of tenets written by some pretty smart software dudes.  Different agile shops employ different practices, and it is a thoughtful understanding of the software being developed and the practices employed that will lead to success when applying parametric techniques to estimate software cost and effort.  As with so many other aspects of software development, there really is no magic bullet for up-front estimating of a software project, whether or not it is an agile project.  For more on this topic check out my paper "Are Parametric Techniques Relevant for Agile Development Projects?"

How does your business handle the potential conflicts between employing agile principles and creating credible plans for affordable and successful product deliveries?

(0) Comments

Cloud Computing and the DoD

Check out this article about the Defense Information Systems Agency (DISA) and their cloud computing strategy.  With the DoD’s ever increasing focus on affordability, moving eligible capabilities to the cloud is an excellent plan for the government.  DISA’s strategy includes the consolidation of data centers and network operations centers and the migration of 1.4 million Army email accounts to the cloud.

Cloud computing allows organizations to utilize applications, platforms and hardware through the Internet (or some other network) rather than having to purchase or lease these items.  Cloud computing offers opportunities for cost savings through virtualization and consolidation.  Using the public cloud (the Internet) offers additional cost savings, as many users share the costs of services – driving down the cost per user.

DISA has been designated as the Enterprise Cloud Service Broker for the DoD.  A cloud service broker is a third-party company that manages cloud services for a cloud service consumer across multiple vendors and platforms.  Cloud service consumers generally have to deal with multiple cloud service providers – meaning they have to manage multiple relationships and multiple contracts, and they have to deal with interoperability issues. The cloud broker mitigates this by understanding the consumer’s requirements and tailoring a solution that meets those requirements, while dealing directly with the vendors to craft this solution.  Cloud service brokers are experts in understanding cloud services and knowing the specifics of many cloud service providers.  Daryl Plummer, managing vice president and Gartner Fellow at Gartner, sees cloud brokerage as a “must have” for most organizations.

Has your company started a migration into the cloud and are they taking advantage of Cloud Service Brokers?

(0) Comments

COSMIC Function Points - how do they stack up?

The COSMIC method for counting function points arose out of concerns that IFPUG (and the related NESMA and FiSMA) function points are too focused on data-intensive business systems and consequently are not adequate for measuring the size of real-time systems.  The COSMIC function point counting method has been designed to be applicable both to business systems, such as banking and insurance applications, and to real-time software, such as telephone exchanges and the embedded systems found in automobiles and aircraft.

The COSMIC method uses the Functional User Requirements as the basis for the function point count.  A COSMIC Function Point count is based on a count of the data movements across the application boundary.  A data movement is defined as the base functional component that moves a single data group.  There are four types of data movements:

  • Entry – moves a data group from a functional user across the boundary into the functional process where that data is required
  • Exit – moves a data group from a functional process across the boundary to the functional user that requires it
  • Read – moves a data group from persistent storage to the functional process that requires it
  • Write – moves a data group from a functional process to persistent storage

To perform a COSMIC function point count, each component is broken down into functional processes, and then for each functional process all of the associated data groups are identified.  For each data group within each process, data movements are identified and classified as one of the four types above.  The purported benefit of COSMIC Function Points over IFPUG Function Points is that there is no upper bound on the number of data movements that can occur within a functional process.
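To make the counting procedure concrete, here is a small illustrative sketch in Python.  The process names and data groups are invented, and real COSMIC counting involves far more measurement judgment than this suggests – it simply shows that the count is one CFP per data movement, with no per-process ceiling:

```python
VALID_MOVEMENTS = {"E", "X", "R", "W"}   # Entry, Exit, Read, Write

def cosmic_count(processes):
    """Total a COSMIC Function Point count: one CFP per data movement.

    processes maps each functional process name to a list of
    (data_group, movement_type) tuples.
    """
    total = 0
    for name, movements in processes.items():
        for data_group, movement_type in movements:
            if movement_type not in VALID_MOVEMENTS:
                raise ValueError(f"unknown movement type {movement_type!r} in {name}")
            total += 1   # no upper bound on movements per functional process
    return total

# A hypothetical temperature-logging component with two functional processes:
example = {
    "record reading": [("sensor id", "E"), ("reading", "E"),
                       ("calibration data", "R"), ("reading", "W")],
    "report fault": [("fault code", "X")],
}
print(cosmic_count(example))   # 5 CFP
```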

I did a productivity study for COSMIC Function Points based on the ISBSG database and found some interesting results.  Table 1 shows the results of my analysis and Table 2 compares productivity rates for IFPUG and COSMIC function points.

[Table 1: COSMIC Function Point productivity analysis results]

[Table 2: Productivity rates – IFPUG vs. COSMIC function points]

What I found was that, on average, it takes almost twice as long to deliver a COSMIC Function Point as an IFPUG Function Point.  This may be because the sample sizes for IFPUG are larger, or it could be because the types of projects using COSMIC are more complex – possibly lending credence to the notion that COSMIC function points are better suited to more complex systems.  What experiences have you had with counting function points using different methods?

(0) Comments

If you have to fail, at least try to fail fast!

This week I’m attending the Better Software Conference in Vegas.  I just attended a great keynote given by Patrick Copeland of Google.  The topic was innovation.  He talked about how innovators beat ideas, pretotypes beat prototypes, and data beats opinions.  These sentiments are all part of the pretotyping manifesto.

He started with the truth that most new products and services fail, and proposed that while this is not unexpected, there is a good way to fail and a bad way to fail.  The good way to fail is to fail fast.  And this is where the idea of pretotyping comes in.  The idea of pretotyping, or ‘pretendotyping’, is that you fake it before you make it.  When a good idea hits, find the fastest, cheapest way to get something that will demonstrate and socialize the idea to at least some segment of the target market.  A pretotype is different from a prototype in that a prototype is intended to prove that the product can be built, while a pretotype is intended to prove that the ‘it’ you’re building is the right ‘it’.

Here are some examples of pretotyping exercises.  When the idea of the original Palm Pilot was first conceived, the inventor was concerned that the model wouldn’t catch on.  Would people be comfortable carrying around a device in their pocket, taking it out and making notes during meetings and conversations?  Before building anything, he went to the garage and created a wooden Palm Pilot and a wooden stylus.  He walked around for days pretending to check and record appointments and log notes – gauging the reactions of people around him.  When Google held a workshop to brainstorm ideas for Android, they handed out post-its and pencils.  Apps were ‘papertyped’ and carried around as real Android apps in paper form to determine if the concept was feasible – were they something the end user would be likely to stop, pull out their phone and use?  This notion of papertyping led to Androgen – an app that let innovators create quick and dirty implementations of their Android app ideas with minimal effort to get market feedback.

One example was presented where pretotyping would have helped: Thirsty Dog Bottled Water for Pets – no, I’m not messing with you, this product was actually launched and marketed.  Maybe with some pretotyping – taking some regular bottled water, changing the label and putting it on the shelves in a few pet stores – the producers would have realized that this was an idea that wouldn’t fly.

So the next time you have a good idea you want to productize – before you start writing code – look for some creative ways to let potential consumers assess whether it has the look, feel and comfort of use such that they might actually use it.  What techniques do you use to market test your ideas?  Leave a comment.

(0) Comments

Programming Language Productivity

Ever wonder which programming languages are the most productive?  I recently did a little research into this topic using the International Software Benchmarking Standards Group (ISBSG) database. The database contains over 5000 data points with size and effort data for projects from a wide variety of industries, applications and countries.  Of course, not all 5000 data points were suitable for my investigation.  Software size is measured using functional size metrics, but the database accepts projects that use various counting methods.  I narrowed my search to projects that used the International Function Point Users Group (IFPUG) definition of a function point.  The database also accepts incomplete project data, asking the submitter which of the following phases of software development are included in their submission: planning, specification, design, build, test, and implementation.  While many of the data points reflected complete lifecycle effort, those that did not needed to be normalized to enable a side-by-side comparison.

I started by calculating average productivity (hours per function point) for each programming language that was suitably represented in the database.  This led to results that were statistically all over the place.  So I changed it up a bit – looking at average productivity rates within size stratifications.  This led to results that appeared much more reasonable to me.  For the most part the productivities were as expected, but one finding from this study was particularly interesting: for pretty much every language, the least productive projects were the smallest ones.  This seems counterintuitive because most studies indicate a diseconomy of scale in software projects.  The ISBSG database distinguishes between new projects and enhancement projects, but it is not easy to determine concretely from the submitted data how much reuse a project includes, regardless of whether it’s new or an enhancement of existing software.  So this may be an indication that larger projects have more opportunities for reuse.  Whether or not this is true, the following table can certainly be used for sanity checks as you estimate your software projects.  It is also a window into the types of data available through the ISBSG.  If you’re interested in a more detailed analysis of some or all of these 5000+ data points, check out what the ISBSG has to offer.
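For the curious, the size-stratification calculation itself is simple to sketch.  The code below uses made-up project data and arbitrary size bands, not the actual ISBSG numbers or strata:

```python
def stratified_productivity(projects, bands=((0, 100), (100, 1000), (1000, None))):
    """Average productivity (hours per function point) within size strata.

    projects: list of (function_points, effort_hours) tuples.
    bands: (low, high) size ranges; high=None means unbounded.
    """
    results = {}
    for low, high in bands:
        rates = [hours / fp for fp, hours in projects
                 if fp >= low and (high is None or fp < high)]
        if rates:   # skip empty strata rather than divide by zero
            results[(low, high)] = round(sum(rates) / len(rates), 1)
    return results

# Made-up projects: (size in IFPUG FP, total effort hours)
sample = [(50, 600), (80, 880), (400, 3200), (2500, 15000)]
print(stratified_productivity(sample))
```

In this toy data, as in the finding above, the smallest stratum shows the highest hours per function point.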



(0) Comments

Check out something I learned from the ISBSG database

Recently I have been playing around with the International Software Benchmarking Standards Group (ISBSG) database for Development and Enhancement projects.  In the interest of full disclosure, I admit that I am more than a little excited to have close to 6000 data points at my fingertips.  I will further admit that there’s something quite daunting about having this much data: where to start, what should I be looking for, and how can I best use this data to offer useful guidance for software cost estimation?  For those of you not familiar with this data source, the ISBSG database contains data on software projects submitted by software development organizations all around the world.

Naturally, the first thing I wanted to do was look at Project Delivery Rates (PDR) and try to find some interesting information about what might drive PDR.  Starting small(ish), I filtered out all data items the ISBSG has rated as of low or questionable quality, and filtered out all projects whose labor totals included hours for resources not strictly part of the development team.  I started by trying to trend Functional Size against PDR.  The data was all over the map. In an effort to get some context as to where the good data might be lurking, I began to look at average PDR rates for each Organization Type.  I selected Organization Type because this seemed to be the most granular category that had some structure to it.  Organization Type indicates the type of organization the software application is intended for.  Although an Application Type is also provided, that value is free form, with each submitter choosing their own terminology.  The same is actually true for Organization Type, but because of the submission process there was a finite set of responses which could be used as a basis for stratification.  Although all ISBSG data is measured using functional size measures, the list of acceptable Functional Size Measures is long and includes IFPUG, NESMA, COSMIC, Mark II, FiSMA, etc.  In order to compare apples to apples my analysis needed to focus on each size unit individually.  I started with IFPUG because this group contains significantly more data points than any other functional measurement category.  I thought I would share some initial findings.  The following table shows the productivity rates for various Organization Types.

So what does this table tell you?   It certainly needs to be interpreted with care, as for many of these productivity rates the distribution is all over the map.  For industries where the sample size is significant it gives some pretty interesting comparative information.  It also provides useful insight into the types of industries for which you can find data in the ISBSG database (not all-inclusive, because this covered only the IFPUG data points).  There is also the caveat that different submitters have different ideas about definitions of things like industry type.  Despite that, I think there’s something to learn here.  If you’re interested in data – and who isn’t? – you should check out the ISBSG’s offerings.  They have a pretty cool arrangement using an OLAP interface that allows you to find and pay for only the data you can use.  Check it out!!
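The filter-then-stratify workflow described above is easy to sketch in Python.  The field names, quality codes and sample records below are illustrative stand-ins, not the actual ISBSG column definitions:

```python
def pdr_by_org_type(projects):
    """Mean Project Delivery Rate (hours per function point) per Organization Type.

    Each project is a dict; the keys used here are invented for illustration.
    """
    groups = {}
    for p in projects:
        # drop low/questionable quality ratings and labor totals that include
        # hours for resources outside the development team
        if p["quality"] in ("C", "D") or not p["dev_team_only"]:
            continue
        groups.setdefault(p["org_type"], []).append(p["hours"] / p["fp"])
    return {org: round(sum(rates) / len(rates), 1) for org, rates in groups.items()}

data = [
    {"quality": "A", "dev_team_only": True,  "org_type": "Banking",   "fp": 200, "hours": 1600},
    {"quality": "B", "dev_team_only": True,  "org_type": "Banking",   "fp": 100, "hours": 1200},
    {"quality": "D", "dev_team_only": True,  "org_type": "Banking",   "fp": 100, "hours": 400},
    {"quality": "A", "dev_team_only": False, "org_type": "Insurance", "fp": 300, "hours": 900},
]
print(pdr_by_org_type(data))   # {'Banking': 10.0}
```

Note how the low-quality record and the project with non-development labor are excluded before any averaging happens, mirroring the filtering step described above.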

And I'm just getting started - check back for more observations on software productivity!


(0) Comments

Web Application Frameworks

When software developers first started writing programs for the Windows® operating system, it wasn’t pretty.   Everything had to be done from scratch – there was no easy access to tools, libraries and drivers to facilitate development.  A similar tale can be told by the earliest web site developers.   A web application framework is an SDK (Software Development Kit) for web developers.  It is intended to support the development of web services, web applications and dynamic websites.  The framework is intended to increase web development productivity by offering libraries of functionality common to many web applications.  In the early days, web development was done in HTML, with CGI (the Common Gateway Interface) making it possible to create dynamic content.  As websites became more pervasive – in many cases quickly becoming critical to the success of a business – new languages such as PHP and ColdFusion were developed specifically for web development.

Web application frameworks are simply the next generation of web development.  Rather than offering just a language and compiler, a web application framework gathers libraries of functionality useful for web development into a single environment – offering developers one-stop shopping for the tools they need to develop applications for the web.  There are many different web application frameworks for many different web development languages.  Some examples include Java EE, OpenACS, Catalyst, Ruby on Rails and Symfony.

Web application frameworks offer a variety of features intended to increase the productivity of the web application development process.  Not all frameworks contain all of these features – [3] contains a fairly comprehensive analysis of which frameworks offer which features for many popular programming languages.  The most common features of a web application framework include:


  • Caching – frameworks allow developers to build speed into their web applications by storing copies of frequently accessed data.  This can make a website load more quickly while also reducing bandwidth and server load
  • Security – frameworks offer tools to address user authentication and authorization, along with the ability to restrict access based on established criteria
  • Templating – frameworks offer the developer the ability to create templates for dynamic content; the same template can then be used with multiple data sets
  • Data persistence – frameworks often contain a set of features to support persistence, such as a consistent Application Programming Interface (API) for accessing data from multiple storage systems, automated storage and retrieval of data objects, data integrity checks, and SQL support
  • URL mapping – frameworks often provide a mechanism to map a clean, uncomplicated-looking URL to one that leads to the right place
  • Administrative tools – common interface elements for form fields, such as a date field with a calendar, and automatic configuration which eases the storage and retrieval of data objects from the database
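As a concrete taste of one of these features, here is a minimal sketch of URL mapping in Python – a regex-based router of the sort many frameworks provide under the hood.  The patterns and handler names are invented for illustration, not taken from any particular framework:

```python
import re

# Route table: a regex for the clean, public-facing URL, and the handler
# the framework should dispatch to when that URL is requested.
ROUTES = [
    (re.compile(r"^/articles/(?P<year>\d{4})/$"), "year_archive"),
    (re.compile(r"^/articles/(?P<slug>[\w-]+)/$"), "article_detail"),
]

def resolve(url):
    """Return (handler_name, captured_params) for the first matching route."""
    for pattern, handler in ROUTES:
        match = pattern.match(url)
        if match:
            return handler, match.groupdict()
    return None, {}   # no route matched – a framework would return a 404 here

print(resolve("/articles/2012/"))             # ('year_archive', {'year': '2012'})
print(resolve("/articles/web-frameworks/"))   # ('article_detail', {'slug': 'web-frameworks'})
```

Route order matters: the more specific year pattern is checked before the general slug pattern, which is why "/articles/2012/" is an archive rather than an article named "2012".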

Web applications frameworks are intended to increase the productivity of those folks who build websites.  Other potential benefits include: 

  • Increased abstraction – business logic can be separated from implementation details
  • Decreased time to market and increased development productivity
  • Increased reuse
  • Enforced best practices
  • Easier transition from one platform to another

Some or all of these features and benefits are realizable, depending on the specific web application framework employed.  As with all advances in technology, there is necessarily an investment not only in the technology but, more importantly, in training and education.  It is not realistic to expect an immediate payoff before developers move down the learning curve with the technology.

As always, it is wise to remember the lesson of Fred Brooks in "The Mythical Man-Month" that there are no silver bullets.  While web application framework features such as enhanced tools and libraries will increase the productivity with which code is delivered, these address only the accidental complexities of software engineering; the essential complexity of crafting a software solution to a problem is not going away.

(1) Comments

Random Musings on Model Driven Architecture

Model Driven Engineering is a software development methodology focused on creating domain models that abstract the business knowledge and processes of an application domain.  Domain models allow the engineer to pursue a solution to a business problem without considering the eventual platform and implementation technology.  Model Driven Development is a paradigm within Model Driven Engineering that uses models as a primary artifact of the development process, using automation to go from models to actual implementation.  Model Driven Architecture (MDA) is an approach for developing software within the Model Driven Development paradigm.  It was launched in 2001 by the Object Management Group and provides a set of standards for creating and transforming models, generally using UML as the modeling standard. The intent of MDA is to separate business logic from its implementation in a way that is platform and vendor neutral while transcending technology.
MDA takes the software design through various levels of abstraction, employing a series of transformations.  It starts with a Computation Independent Model (CIM), which essentially represents the user’s requirements with no reference to, or concern for, how or where those requirements may be implemented.  The CIM represents the business logic through models such as use case diagrams and activity diagrams.  The CIM is then transformed into a Platform Independent Model (PIM), which depicts the implementation of the business logic using a domain specific language (DSL) in a form that is not tied to a specific technology platform.  This provides a less abstract model of the behavior of the software without committing to the specifics of a particular platform. Sources of information for the PIM include artifacts such as class diagrams and sequence diagrams. The PIM can then be transformed into one or more Platform Specific Models (PSMs) for specific platforms, technologies or vendors.  Finally, the PSM is translated into actual code that can be compiled and deployed.  Transformations rely on a set of mappings that supply the transformer tool with technology or implementation patterns.
Remember that MDA is an approach; it is not in and of itself a tool.  There are, however, many commercial and open source tools available to support all aspects of MDA. Some examples of these can be found at [1].  The more automated the process is, the more likely it is that productivity and quality benefits will be realized.   MDA tools come in many flavors and are used for all facets of model creation, manipulation, transformation and validation.  Types of MDA tools available include:
* Creation tools, which create new models from user requirements
* Analysis tools, which check models for completeness, perform cross checks and compute metrics
* Transformation tools, which apply patterns to transform one model into another or to transform a model into code
* Test and simulation tools, which apply model based testing and create simulated executions of the models
* Reverse engineering tools, which transform legacy applications into models
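To make the transformation idea concrete, here is a toy sketch in Python: a dictionary stands in for a platform-independent class model, and a simple mapping pattern generates platform-specific source code from it.  Real MDA tools apply far richer mappings and standards than this, and the model format here is entirely invented:

```python
# A dict stands in for a platform-independent model (PIM) of one class.
PIM = {"class": "Invoice", "attributes": ["number", "customer", "total"]}

def transform_to_python(model):
    """Apply a simple PIM-to-code mapping pattern, emitting Python source."""
    params = ", ".join(model["attributes"])
    body = "\n".join(f"        self.{attr} = {attr}" for attr in model["attributes"])
    return (f"class {model['class']}:\n"
            f"    def __init__(self, {params}):\n"
            f"{body}\n")

source = transform_to_python(PIM)
print(source)                         # the generated platform-specific code
namespace = {}
exec(source, namespace)               # 'deploy' the generated artifact
invoice = namespace["Invoice"]("0001", "ACME Corp", 99.5)
print(invoice.total)                  # 99.5
```

The mapping function plays the role a transformation tool's technology patterns play in MDA: point it at a different emitter and the same PIM could yield Java or C# instead.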
It’s not a huge leap to see how MDA could lead to improvements in both productivity and software quality.   Automation, a high degree of abstraction, reusable artifacts and the use of standards all have potentially significant productivity implications.  It also seems obvious that these benefits will not be realized without a substantial investment in tools, talent and training.  Some of the areas of potential benefit of MDA include:
* There are various reports that using MDA increases the productivity of software projects. [2] reports on several experiments that for the most part found productivity increases, though the range of results across the experiments ran from -27% to 69%.  The same paper cites Motorola results over 15 years of MDA practice of 2x to 8x, though these are not measured against a common baseline. [3] cites a study in which a 3-person development team using OptimalJ showed a 35% increase in productivity over an equally experienced team using more traditional methods
* Portability is achieved, as one PIM can be transformed into multiple PSMs for different platforms
* The use of standards and models encourages solutions that are interoperable and of high quality
As with all process and productivity improvement initiatives, there are also some potential pitfalls associated with MDA:
* In order to achieve productivity gains there is a necessary investment in tools and training
* MDA requires a specific and specialized skill set that may not be easy to find and may come at a high price
* Although platform independence is an important aspect of MDA, there are no universally applied standards for interoperability, so vendor lock-in is a possible problem
* In some cases there appears to be a gap between the vision of complete transformation and the reality of post-transformation adaptation of transformation artifacts [4]

As always, it is important to remember the lesson of Fred Brooks in [5] that there are no silver bullets.  While the MDA promise of automated transformations across many levels of abstraction will increase the productivity with which code is delivered, these address only the accidental complexities of software engineering; the essential complexity of crafting a software solution to a problem is not something that can be automated.

[1] Software Pointers, "10 Hand-picked MDA Tools" (retrieved Feb 2012)
[2] Mohagheghi, Parastoo & Dehlen, Vegard, "Where Is the Proof? A Review of Experiences from Applying MDE in Industry", Quality in Model Driven Engineering project at SINTEF (retrieved Feb 2012)
[3] "Model Driven Development for J2EE Utilizing a Model Driven Architecture (MDA) Approach: Productivity Analysis", The Middleware Company, June 2003
[4] (retrieved Feb 2012)
[5] Brooks, Fred, "The Mythical Man-Month: Essays on Software Engineering", Addison-Wesley Publishing Company, Philippines, 1975
(0) Comments

Who Knew - COBOL tops in security according to the CAST CRASH Report

This week CAST released their second annual CRASH (CAST Report on Application Software Health) report.   The summary findings can be found here, where you will also find a link to the Executive Summary.   The report highlights trends based on a static analysis of the code from 745 applications in 160 organizations.  The analysis is based on five structural quality characteristics: security, performance, robustness, transferability and changeability.  Some of the more interesting findings include:
* COBOL applications have higher security scores than the other languages studied (meaning they have better security).  I personally found this finding surprising, though it seems that the COBOL applications in their data set are mostly associated with banking and financial services, so I suppose the code has been fine-tuned for security concerns throughout its life

* Modularity minimized the effect of size on quality.  So while it has historically been true that larger software programs were likely to have higher defect densities – increases over time in the practice of high modularization have served to mute or mitigate this trend.

* The waterfall development methodology produces code with better scores than agile for transferability and changeability – meaning these apps are likely to be easier to read, understand and maintain, and their technical debt easier to address

* Business applications carry an average of $3.61 worth of technical debt per line of code – and this is, admittedly, a very conservative estimate if you review the methodology used to calculate it

And these are only a few of the findings.  The report provides findings around technology, development process, modularity, software size, type of industry, release frequency and number of users.  You should check the link above to read the entire eye-opening report, or check out this webinar that summarizes it.
(0) Comments