For Data Science or Software Development, Agile just makes sense.
The Standish Group’s annual CHAOS® reports show that over the last 25 years the software industry has doubled the percentage of projects that are completed successfully, where success is defined as on time and within budget.
Sounds great! But only because the 16% success rate of the early ’90s makes the 29% success rate of recent years look good.
In 2001, The Agile Manifesto® was published and our industry changed how it executes software projects. We can credit a great deal of the improvement in project outcomes to the adoption of Agile methodologies. In a nutshell, we shifted from Waterfall to Agile so that we could respond to new priorities or change direction based on what we learned during the project.
A software or data science project isn’t analogous to building a house. Homebuilders can calculate upfront how much lumber, drywall, and even nails the project will need. Instead, software and data science projects are a process of exploration. We’re not following the switchbacks of a groomed trail with clear signs and known distances. Instead we’re blazing a trail to a peak no one has ever stood atop. We have a description of what the destination looks like but the terrain between is shrouded in fog.
Everyone wants to know, how far is it? How long will it take to get there? It’s impossible to answer those questions at the beginning of a project. Agile provides a framework to start exploring, adjust to changing conditions, and plan a series of iterative steps toward our goal.
Given that less than 30% of projects succeed, what can you do to make sure your software or data science project meets scope, schedule, and budget expectations?
To start, focus on meaningful but achievable objectives. Instead of kicking off the “I’m running the Boston Marathon” project, start with the “I’m going to run a mile twice a week” initiative. The solution you put in place should align with the core problem. What issue are we trying to solve? Do we want to improve fitness? Lose weight? Participate in an activity consistent with our lifelong love of running?
At the end of the week, evaluate the results (retrospective), and based on what you’ve learned, plan the output (features) for the next two weeks (sprint). Depending on the problem, the next step may be to up the mileage, increase the frequency, or bring on more runners. Or we may find running doesn’t address our core issue. It’s a good thing we didn’t already tweet about our marathon plans.
When the path to success is unclear, Agile allows us to plan small but meaningful groups of work in time-boxed iterations. This allows the team to consistently move closer to solving the core problem.
Fixed Fee / Fixed Deliverable Contracts Aren’t Very Agile
Agile was born out of frustration with the failure of traditional project planning and execution methodologies to deliver software projects.
The main components of Waterfall (Plan, Develop, Test, Deliver) work well with deliverables such as a house, a bridge, or a battleship, but fail when applied to software.
You could be working through the phases successfully. You're meeting milestones and phase gates. But what if the market changes, or the core problem is suddenly not a priority any longer? The result is months of effort with no return on investment.
That's why we use Agile methodologies for software development and data science projects.
While delivery methods have changed in the last 20 years, vendor contracting still treats a data science consultancy like an office-goods supplier. The conversation isn’t quite “Yeah, can I get a quote on a case of data science? What’s the quantity discount if I buy an entire pallet of data science?” — but there is no answer to that kind of question. Data science or application development is only the tool a consultancy uses to deliver solutions to real-world business problems; it is not the overall program goal.
How does Agile delivery work in a fixed fee, deliverable, or milestone consulting contract? A fixed fee contract is designed to transfer risk from buyer to seller. To mitigate that risk, the seller tightly constrains the deliverables and places a risk buffer on top of the cost estimate. The result is that both parties spend weeks negotiating the initial contract, then more weeks negotiating change orders, delaying the project further and adding overhead for everyone. It’s not very Agile. In fact, it contradicts one of the four values of the Agile Manifesto: customer collaboration over contract negotiation.
If we accept that Agile is the preferred method for delivering data science and software projects because it is more likely to be successful than other methods, doesn’t it follow that we should incorporate the core principles into the entire business relationship? That we should collaborate rather than negotiate? If so, we need a contract framework that provides a win-win solution for customer and consultant.
At Blue Vector, that framework is the Agile delivery team model.
We bring a flexible team of resources that can be rotated on or off the project as Sprint objectives dictate. This allows multiple team configurations to address program goals such as data modernization, application development, or data science and analytics.
In this fixed capacity, fixed fee model, both parties work together to plan the work, and the outputs of each sprint are the value of the engagement.
It Doesn’t Take Much Effort to Realize Story Points Are a Measure of Time
At the risk of being excommunicated from the Agile community: we can insist that story points are a measure of relative effort as much as we want, but everyone (and I mean all of us) converts effort into time, since effort divided by capacity is duration. What’s the golden story against which we rank the relative effort of all other stories? How is the effort of a story about setting up SQL tables relevant to estimating the effort of a data science story? And what about new team members who just left a team whose golden story represented a completely different one-point baseline?
I like to use a natural estimation approach, with an estimating technique everyone can relate to:
People naturally estimate the amount of time they think it will take them to perform a task. When we try to disconnect effort from time we create an artificial construct that many people, especially analytical people, struggle with.
To deal with the issues created by pretending story points aren’t an estimate of time, we use workarounds like planning poker, t-shirt sizing, voting, or voting by ticket.
Try the “That will take me” technique and see how much faster your team comes to agreement on story points. Using this method, it becomes easy to calculate baseline velocity for your team before the first sprint begins.
A team member can't work only on stories; there are meetings, email, and ad-hoc discussions. Accounting for that, a team member dedicated 100% to the project should have a capacity of roughly 8-13 story points per week, or 16-26 per two-week sprint. Start somewhere around 18 points per team member.
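The starting point above (roughly 18 points per fully dedicated member per two-week sprint) makes a baseline velocity a simple weighted sum. Here is a minimal sketch; the team composition, allocation fractions, and the 18-point constant are illustrative assumptions, not a prescription:

```python
# Sketch: estimate a team's baseline sprint velocity before the first
# sprint, using the ~18 points per fully dedicated member per two-week
# sprint starting point. All numbers here are illustrative.

POINTS_PER_SPRINT_FULL_TIME = 18  # assumed starting point per the text

def baseline_velocity(allocations):
    """allocations: fraction of each member's time on the project (0.0-1.0).

    Returns the expected story points per two-week sprint, scaled by
    how much of each person's time is actually available to the team.
    """
    return round(sum(a * POINTS_PER_SPRINT_FULL_TIME for a in allocations))

# A hypothetical four-person team: two full-time members, one at 50%,
# and one at 75% allocation.
team = [1.0, 1.0, 0.5, 0.75]
print(baseline_velocity(team))
```

With those assumed allocations the team would plan its first sprint against a baseline of about 58 points, then adjust the per-member constant as actual velocity data comes in.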
The reason we estimate in the first place is to determine which features we can deliver in each sprint, so the most important characteristic of any estimation system is accuracy. Compare this method to traditional sizing methods and see if your team isn’t meeting its commitments more often.
Pretending story points aren’t about time directly contradicts the Agile Manifesto’s value of individuals and interactions over processes and tools.