Like any other product company, we use objectives and key results, better known as OKRs. They help us keep the focus on what is essential and drive us to go the extra mile.
For Q4 2019, we set out to massively improve the overall performance of our application. These were the key results we aimed for:
- Reduce time in Gorilla UI by 75%
- Make calculations five times faster
MASSIVELY IMPROVE THE OVERALL PERFORMANCE
Performance is at the heart of Gorilla, so we decided to make it our focus again for this quarter. The results we wanted to achieve were not solely focused on the performance of the calculation engine but also on the performance of the rest of the application.
REDUCE TIME IN GORILLA UI BY 75%
Our first challenge was to find a way to measure this. We could go for traditional metrics like session times, but we decided to take another route, one that would more efficiently pinpoint the inefficiencies we had built into our product.
LOOKING BACK AT HOW USERS INTERACTED WITH GORILLA
We mapped all the user flows our users had gone through over the past year and identified their preferred paths within our application. We also counted the number of iterations or repetitions users would typically perform.
We then discussed these findings with our users and checked them against our analytics to confirm them. Because Gorilla is an integrated solution, we looked across all environments and throughout the full implementation process, up until everything is live and managed in production.
Once we mapped and validated everything, we identified three ways to make things more performant:
1. AUTOMATE THE MONKEY-JOBS
For every process or flow that was done multiple times with limited differences, we questioned whether we could automate it or provide bulk functionality.
- 34% of performance improvements
We found 34% of performance improvements, but not all features have made it into production yet.
2. KEEP PEOPLE WITHIN THEIR CURRENT CONTEXT
We identified where people left a certain context to perform another task, only to come back later and continue. We made sure users could also perform those tasks without leaving the original context.
- 27% of performance improvements
We only counted the difference in the number of steps users had to take. We believe keeping people within a context allows them to keep their focus better and thus lose less time getting back into it, so we added 2% for this, bringing this item to 29%.
3. REMOVE BALLAST
This one was hard, as we questioned whether everything we had built so far was achieving its purpose. We found features that users were not using, or were using incorrectly. We saw that users didn't need a certain feature to achieve their goals, so we decided to downscale it (and probably remove it in the future) for the good of the product.
- 7% of performance improvements
With all this, we only managed to find 34% + 29% + 7% = 70% of performance improvements.
So there are still things to be done.
We found steps we could remove, like the “Are you sure you want to publish this?” confirmation, but we felt we needed to keep them because our application is business-critical. We want users to feel confident pushing buttons and taking actions because they know we will warn them when there is a significant impact.
Instead, we decided to look ahead and improve the processes of the future.
LOOKING FORWARD TO HOW USERS WILL INTERACT IN THE FUTURE
To do this, we ran time-travel workshops. We extrapolated the content and processes a full year forward (for Gorilla this meant content x100 for some parts of the application and x1000 for others). When travelling forward, some yearly milestones will also occur, like the end of the financial year, audits, and new prices published by the industry, so we looked at these as well.
First internally, and later together with clients, we presented this volume of content and events. In workshops, we went through the motions of imagining how to tackle them using today’s functionality in Gorilla. We captured a lot of hurdles, difficulties, and future inefficiencies and translated them into features that will clear the roadblocks before they appear.
We are close to the end of the quarter, so these new features will unfortunately not be released this year, but they are defined and ready for development.
CONCLUSIONS KEY RESULT 1 – REDUCE TIME IN GORILLA UI BY 75%
- 52% out of 75% achieved and proven
- 18% on its way and will be released soon
- improvements for future usage roadblocks defined
To conclude, we can say this key result was very successful, and the team went above and beyond to achieve it!
MAKE CALCULATIONS FIVE TIMES FASTER
This one proved to be a lot harder than the first. Measuring it was easier, as we already had a set of benchmark calculations ready; improving those results and getting the improvements into production in a stable way was not easy to achieve.
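As a side note on how such a target can be checked: below is a minimal, purely illustrative sketch of a speedup measurement. The calculation entry points and the benchmark case are hypothetical placeholders, not our actual benchmark suite or engine.

```python
import time
from statistics import median

def speedup(benchmark_case, baseline_fn, candidate_fn, runs=5):
    """Return how many times faster candidate_fn is than baseline_fn on one benchmark."""
    def timed(fn):
        durations = []
        for _ in range(runs):
            start = time.perf_counter()
            fn(benchmark_case)
            durations.append(time.perf_counter() - start)
        return median(durations)  # the median is less sensitive to one-off outliers

    return timed(baseline_fn) / timed(candidate_fn)

# Hypothetical usage, assuming old and new calculation entry points exist:
# factor = speedup(benchmark_case, old_engine.calculate, new_engine.calculate)
# print(f"{factor:.1f}x faster (target: 5x)")
```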
We have been working on engine performance since the inception of Gorilla, as it is part of our core offering, so we had already run into some of the difficulties of building a calculation engine that is both very performant and flexible. Some of the lessons we had previously learned were:
1. DON’T TRUST YOUR ASSUMPTIONS
We are working with some very bright minds here, who all have excellent ideas to make everything more performant. The easy wins in performance are already the basis of Gorilla, so everything we add from now on has to be based on deeper, more detailed knowledge of how things operate and interact, as well as a theoretical understanding of the foundations. These conceptual improvements have, however, often failed to deliver in practice, and resources were wasted on thoroughly implementing improvements only to then notice that theory and the real world are not the same. To avoid this trap, we made this one of our guiding principles going forward.
2. EXPECT THINGS NOT TO WORK AS ADVERTISED
We are working with cutting-edge technologies, and we learned they have a lot of good things to offer, but they are also entirely new, so we encountered many issues when trying to implement them at scale. These technologies make a lot of promises, but we noticed that sometimes those promises only hold in very controlled environments for particular use cases, making them unusable for Gorilla, which requires a lot of flexibility. This became our second guiding principle.
Based on these two guiding principles, our approach to achieving the key result was pretty straightforward.
THE ASSUMPTION TREE
We started by creating an assumption tree with different paths to better performance. Each node in the tree is an assumption about something that could improve performance. If we get positive results, we can move on to a subsequent assumption and build further on that improvement. If the hypothesis proves wrong, we move to a different one.
Each assumption translates into a Spike with a clear hypothesis and a set of expected outcomes. We chose Spikes because they are meant for research rather than development. The Spikes would then be picked up by our engineers in two stages. First, validate with desk research whether it would work: find others who have tried it and see whether they succeeded or what difficulties they ran into (validate our assumptions). After that, they would build a small proof of concept to prove it would work in practice for our use case (validate whether things work as advertised).
After every Spike, we also updated the assumption tree with new assumptions derived from what we had learned.
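Purely as an illustration of the idea (not our actual tooling), an assumption tree can be modelled as a recursive structure in which each node carries a hypothesis and the status of its Spike, and a branch is only explored further once its parent assumption is validated. A minimal sketch, with hypothetical example assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class SpikeStatus(Enum):
    UNTESTED = "untested"    # not yet researched
    VALIDATED = "validated"  # desk research and proof of concept were both positive
    REJECTED = "rejected"    # the hypothesis proved wrong

@dataclass
class Assumption:
    hypothesis: str
    expected_outcome: str
    status: SpikeStatus = SpikeStatus.UNTESTED
    children: list["Assumption"] = field(default_factory=list)

    def next_spikes(self):
        """Assumptions worth picking up next: untested nodes whose parents are validated."""
        if self.status == SpikeStatus.REJECTED:
            return []        # a rejected branch is abandoned; move to a different path instead
        if self.status == SpikeStatus.UNTESTED:
            return [self]    # test this node before building further on it
        return [spike for child in self.children for spike in child.next_spikes()]

# Hypothetical example of one branch of such a tree:
root = Assumption(
    hypothesis="Caching intermediate formula results speeds up calculations",
    expected_outcome="At least 2x faster on the benchmark set",
    children=[Assumption("Cached results can be shared across workers", "No extra memory peaks")],
)
print([a.hypothesis for a in root.next_spikes()])
```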
That part went quite well. Where we initially failed is that we stayed too long in this mindset and were not bringing any of these improvements into the actual product. There is always one more assumption to test, and always the thought of “if we implement this now, we will need to change it later because of this other improvement”. We went a little too deep for a moment and needed to zoom out. Fortunately, we realised it eventually and decided to release some improvements, knowing they might change in the future. At the time it seemed like a hard decision, but looking back, everyone was relieved we could ship something and had a bit more peace of mind to continue with the next assumptions.
Below are the four areas we wanted to improve on, with some of the questions we asked ourselves.
1. FORMULAS
How can we make the formulas more efficient and performant without losing their transparency? What are the bottleneck operations with regard to memory usage and performance, and how can we improve them?
2. MEMORY
How can we optimise (and thus lower) memory usage to increase performance? What operations cause memory peaks, and how can we improve them? Do we need all the data we get? Can we limit the data we take in? How can we automate that?
3. WORKERS
What are the ideal workers to use for each use case? What is the perfect configuration and number of workers, based on the calculations? How can we automate this? What is the ideal starting amount versus scaling up for performance? (See the sketch after this list.)
4. CODE
How can we improve the code underpinning the formulas and make that even more performant? What code is optimal for what type and size of calculation? How can we automate this?
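For the worker questions, one way to explore the “ideal starting amount versus scaling” trade-off is to sweep a few worker counts over a benchmark workload and keep the fastest configuration. The sketch below uses Python’s standard process pool and a dummy calculation purely as stand-ins; Gorilla’s actual workers and scaling mechanism are not shown here.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def dummy_calculation(x):
    """Hypothetical stand-in for one of the benchmark calculations."""
    return sum(i * i for i in range(x))

def run_benchmark(calculation, inputs, workers):
    """Spread the benchmark inputs over `workers` processes and time the whole run."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(calculation, inputs, chunksize=10))
    return time.perf_counter() - start

def best_worker_count(calculation, inputs, candidates=(1, 2, 4, 8)):
    """Sweep candidate worker counts and return the fastest one for this workload."""
    timings = {w: run_benchmark(calculation, inputs, w) for w in candidates}
    return min(timings, key=timings.get)

if __name__ == "__main__":
    inputs = [50_000] * 200
    print("fastest configuration:", best_worker_count(dummy_calculation, inputs), "workers")
```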
CONCLUSIONS KEY RESULT 2 – MAKE CALCULATIONS FIVE TIMES FASTER
- 5X performance improvement achieved for some of the benchmarks
- Smaller performance improvements achieved on other benchmarks
- Learned so much while doing this and realised we have much more to learn
We didn’t quite reach the overall 5x performance improvement we were aiming for, but we feel confident in what we have delivered so far and know what to do to keep moving this forward.
It had its ups and downs, but the clear objectives and key results kept us on track and made sure we did not lose our focus. It pushed us to deliver a set of features that bring enormous value to our customers.
All credit to everyone on the fantastic Gorilla team; without you, this would not have been possible.
And a big thank you to our customers and partners for the great collaboration.
As we speak, we are shaping our OKRs for Q1, and exciting things are coming! We will have our work cut out for us!