Why are we talking about pilot projects? Because in legal technology, sadly, a lot of software is bought but never used. That's why we recommend that firms try before they buy. We do this by encouraging law firms to take part in a 4-week structured pilot programme with us, so their teams can get a good sense of whether Donna is the right tool for them.
Today we want to share our best tips on how to run successful pilots and evaluate software like a pro.
But first, does this sound familiar?
John, one of our M&A lawyers, has expressed a very specific problem, and it looks like your tool could be a good solution. Can we do a 6-month pilot with all our lawyers? We will just need all our partners to sign off on it. By the way, can you do onsite deployments? Oh, and John won't actually be able to join; he's currently working on another project for the next few months.
It might seem obvious, but we've heard this one plenty of times. That's why we've collected these tips to help you try before you buy your next piece of software. We've even arranged them into an easy-to-remember mnemonic: PILOT (People, Incentives, Limit scope, Outcomes and Transparency).
People are the engine 🏋️♀️
People are the single biggest factor in the success of your pilot. Are you picking participants, or are users self-volunteering? There's a big difference in results between the two. We usually see the best outcomes when users self-volunteer, because they are normally more passionate about the problem. Funnily enough, when you let users self-volunteer, you might also be surprised to see assistants, paralegals and others ready to engage and become really valuable pilot participants. So how do you get your staff to volunteer? Have the vendor record a 1-2 minute pitch video and share it across your firm. We also sometimes combine this with a very short (3-question) survey to get the best results.
Once you have your volunteers, make sure they are clearly given the status of pilot user and have the time to commit to testing the product. We do this by excluding anyone who is unable to complete a short onboarding call with us in the first week of the pilot. If someone can't dedicate 15 minutes to a call, they probably don't have the capacity to really test a tool and provide the valuable feedback you need to evaluate the product.
Incentives are the fuel ⛽
If you want your firm to be more innovative, look at what systemic incentives might be working against it. One example we see is with IT teams, who are often incentivised by their response time to support tickets. In this case, bringing in a new tool might actually have a negative impact on their metrics. The same goes for your lawyers: are there incentives beyond the billable hour that encourage innovation? Remember, incentives don't always have to be monetary - we've seen firms send their IT staff to industry conferences or let their lawyers work on a pro bono case, to name just a few examples.
Limit the scope 📅
The goal of the pilot is to learn, not to implement a solution; that comes later, if ever. That's why we encourage a 4-week pilot programme. We usually spend the first week getting everyone on board, weeks two and three really testing the tool, and the final week gathering feedback.
You can also make it easier for your team to test without worrying about oversharing confidential information. Spend some time cleaning up some sample documents and distribute them as test documents. This makes it easy for anyone to go out and try new tools.
It's better to spend a few weeks playing around with 3 different products than to spend 6 months piloting a single product you may never end up using. You'll rarely find that single unicorn solution that solves everyone's problems. Instead, focus on smaller solutions that solve someone's problems.
Outcomes that are trackable 📝
If you want to see real impact from your pilot, figure out what you want to measure before you start, because what gets measured gets managed. At a minimum, track usage during the pilot. But a real, tangible outcome might look something like "those who used this product for more than 5 hours a week saw a 2% improvement in their utilisation rate". Now, 2% might seem small, but think about it compounded over the course of a year and across an entire department. And remember, big vendor promises often fall short.
Transparency is key 👓
Transparency matters both with the vendor and with your own team. The great thing about software today is that you can see all sorts of metrics, from which features are used right down to who is using them. So hold your vendors accountable and insist that they share these key usage metrics with you.
Don't forget to encourage transparency within your team; it will help foster innovation in the long run. Make sure you share key metrics and learnings from a pilot, especially when something goes wrong. Involve your team in decisions big and small; you might be pleasantly surprised by the suggestions and outcomes.
We hope you get a chance to read this before you start evaluating the next new legal tech tool, so you can be confident in the success of your trial and in the decision you've made.