Digital transformation in the delivery of public services is occurring at pace. It has huge potential to unlock efficiencies and modernise how government works. This series of six mini-webinars is aimed at legal, commercial and policy teams within the public sector that have historically focused on non-technology requirements but that are increasingly finding themselves being pulled into the digital sphere when procuring and delivering public services.

This series will help equip you with the key concepts, risks and challenges that you are likely to face on your journey to digitalisation. We will look at some of the nuances that apply when procuring and contracting for Software as a Service (SaaS), Artificial Intelligence (AI), agile development, disaggregating IT services and data sharing.

Transcript

Alexi Markham: Hi everyone, thanks for joining us today. We are focussing on digitalisation within the public sector. In terms of why we picked this topic for this audience at this particular time, what we are seeing as legal advisors to the public sector is a general uptick in the number of procurements that we are being asked to support which relate to digitalisation programmes. And that is not really a surprise given the efficiencies that digitalisation can unlock and its ability to modernise how Government works. However, what we are also seeing is that digitalisation is becoming an increasingly important component in services that we would not have traditionally thought of as technology focussed requirements or technology procurements. And as a result, we are seeing legal, commercial and policy teams who have historically focussed on more generic services being pulled into that digital sphere and coming up against the difficulties and nuances of contracting for technology, which are fairly unique to the technology market.

So what we wanted to do is basically focus on the basics. We are going to be looking at a range of topics to try to equip you in going into technology based procurements or procurements with a large technology component. So we will be looking at things like: what do we actually mean by digitalisation; what are the types of products that you are likely to need in order to support a digitalisation programme; and where typically are you likely to purchase those. We will look in more depth at things like SaaS, disaggregation, agile development and the procurement of AI, and then finally we are going to finish up on data.

So without further ado I will be passing over to one of my co-presenters, Jocelyn Paulley who is a Partner in our IT and Outsourcing Practice. We have also got Matt Harris who is a Principal Associate in our IT and Outsourcing Practice who will be joining us on some of the later sessions in the series. Joss.

Jocelyn Paulley: Thanks very much Alexi. So as Alexi said, we are taking this back to basics, looking at what we mean by 'digitalisation'. A lot of people do not like technology terminology because they always wonder what is behind it, but I think the comfort with this term is that it is not really one of science, it is more one of art, although when you start to dig into it (if you just move on to the next slide), people like Gartner have put more structure around it and I think it is helpful just to look at some of this terminology so we have a collective idea of what we are focussing on.

So the first step people often talk about is simply digitising the information that you have. So it simply means taking something that used to be in paper form or analogue as opposed to digital and putting it into a digital form, entering it onto a computer. So you could think of something like the digitising of medical records which has been happening over the last few years, simply turning the paper notes that doctors used to take into information that is typed straight into a computer and therefore a digital system. And if you think of this as a journey and as a progression to making more and more things that we do digital, the next step is then what Gartner calls 'digitalisation', which means taking that digital information you now have on a computer and putting more processes and information around it to actually transform a whole operation, so the way that something works. So an example there might be something like e-commerce. You used to walk into a shop to buy something but now, as an alternative route, you can carry out that same process, that same transaction, but through a digital medium, through using a computer and software.

Digitalisation generally allows processes to be automated and streamlined because it is all happening over a digital interface, and then the fullest realisation of that process is what we tend to refer to as 'digital transformation'. So this is really deep, embedded, coordinated change across multiple aspects and facets of an organisation that transforms user journeys and business propositions and values. So it is really taking a holistic view of an organisation and changing lots of things all at once.

Who is driving that within Government? Well, the Government Digital Service is charged with digital transformation and realising those ambitions. So they are looking at better connected platforms, common platforms for better flow of data, to bring more efficiencies and to make processes flow better – platforms like GOV.UK Notify, GOV.UK Pay and the GOV.UK Design System. Those are all Government led initiatives to bring digital transformation into Government.

So onto the next slide. As I say digital transformation is not just about using software, it is taking a holistic view of an organisation so that you use digital tools, software and cloud and digital interfaces. You change the processes so that they are digitally led, they are based around how the software works and that flow between individuals and different departments. And you give the people within your organisation the right digital skills to interact with those digital tools so that they can change the way they work and have it based around using digital interfaces.

In these sessions we are going to focus just on the tools aspect of digital transformation that you need to bear in mind if your organisation is saying 'we are going to undergo a digital transformation project'. But the tools are only one part, and actually if the processes and the people do not change as well then it is not going to be a successful project, because this is about all those initiatives working in tandem.

So I said we are going to focus on the tools, but we are going to take a really sky high view of this at first because, if you move to the next slide, the point here is that there are lots of different types of contracts related to technology that you might need in the course of digitalisation. This is by no means an exhaustive list, it is just some of the terminology that you are likely to hear, so I wanted to give you a bit of familiarity with it. So if you work from left to right, starting on the left with what I might call a traditional software licence and support agreement, where the software used to come on disks and you would get a copy of a disk to install your software onto your laptop. You do not see so many of those nowadays. Through to design, develop and operate agreements, by which I mean specific bespoke pieces of software that are designed for very specific requirements – and within Government you are one of those sectors where you have some truly unique requirements for digital processes – so you might have an agreement that has some design workshops and building of requirements, there is then a build phase where the supplier creates what you want, you test it and then it falls into operate, i.e. the ongoing running, hosting and maintaining. You could slice and dice that in different ways. You could have an agreement that is simply about the support of a digital system or simply about hosting it. I am sure you will be familiar with Microsoft Azure and AWS as significant hosts of applications and systems.

And at the right hand end we have the type of agreement that governs a lot of technology procurements now, which is as a service – software as a service, platform as a service, infrastructure as a service, or indeed XaaS signifying anything as a service. It is the modern way that most technology is bought, so that you are not owning assets and the costs involved are not CAPEX costs, they are OPEX costs where you pay to have access to a digital service but where all the assets and ownership sit with a third party. There are pros and cons to that of course. It means giving some suppliers maybe too much control, or a lot of control, and it means that these services are commoditised, so there is limited ability to change the risk balance or the way that service operates. So whether or not you buy a particular technology solution becomes a procurement decision as opposed to something you can negotiate. But that is the way a lot of modern technology is built, and increasingly it also has the ability to integrate very easily with other solutions. So whereas we have seen trends over time from owning everything yourself to having everything outsourced and in the cloud, I think we are now coming back into a smarter space where people procure in different ways for different types of technology depending on the risk profile of the software and the data involved.

And then along the bottom we have some other types of agreements. We have got a classic IT outsourcing – we are all very familiar with outsourcing trends and criteria – so it is saying what it is and the kind of IT function that you wish to outsource. Master services agreements for professional services, consultancy and migration projects, and then systems integrator agreements, which is where you have bought lots of different elements of technology and you need to stitch it all together to give you the architecture that you are going to operate from. Very relevant in the old world, and still relevant in a cloud world: even though services are more interoperable and use API technology, you often still need someone in the middle to bring the systems together to make sure all the data flows connect correctly and work as they should.

Alexi: Right, thanks Joss. So having had a look at the different types of contracts that you may find yourself procuring, we thought it would be helpful just to have a quick look at some of the bigger routes to market for Government IT requirements. Hopefully you are all very familiar with the Crown Commercial Service frameworks, but what we are touching upon here is some of the key Crown Commercial Service digital and technology frameworks. There is a whole raft of them – I think there are over 64 currently – but the spend tends to be concentrated in around six key frameworks and we have got the top four of those highlighted on the slide here. So G-Cloud is by far the largest of the frameworks and that is the go-to framework for purchasing cloud based computing services, particularly COTS type solutions, and that could be cloud hosting, cloud software or cloud support. The number there, £3.5 billion, is the number published by CCS as the current spend in the financial year, so it is a really large and busy route to market.

The second one there, Digital Outcomes, is the go-to framework where you need someone to design and build a bespoke digital product for you, typically using an agile approach. There are other lots on this agreement which you can use to find studio space to conduct user research, or indeed to find users with the right characteristics to conduct your research and test your service. And the spend published by CCS for this particular framework this year is £1.6 billion, so still a really chunky budget being spent through that framework.

Technology Products and Associated Services caters essentially for quite a wide range of hardware products and software products and also associated services, so things like end user support, service desks and integration. Unfortunately CCS only publishes data for G-Cloud and Digital Outcomes, so I did scratch around the internet to try to work out the spend on an annual basis, but the best figure that I could come up with is the amount put against the Technology Products and Associated Services 2 framework, where over four years there is an estimated spend of £8 billion, so you could take that as £2 billion a year potentially.

And then finally Network Services. Again, this one is projected to have a £5 billion spend against it if you look at the amount advertised for Network Services 3, and this is the route for accessing network infrastructure and communication services, so that might be cloud services, audio and video conferencing, radio or satellite networking. But I think the main takeaway here is that there are some very heavily trodden paths to market for some of the key requirements that you are going to procure through the CCS frameworks.

Matt Harris: Thanks Alexi. I am going to provide a high level overview of cloud technology before focussing on the key features and principles of SaaS and closing with a section on key issues in SaaS contracts.

Broadly speaking, cloud computing describes the provision of a service over an internet connection. On the slide you will see three boxes. Each box includes a reference to a form of cloud computing. In this introductory session I will deal with each model at a conceptual level. In practice, the way in which each model is deployed may be more nuanced.

The first box is infrastructure as a service. In the context of infrastructure as a service, the customer is procuring the use of a third party's hardware and computing capacity. The customer will deploy its own applications and will be responsible for updating and operating those applications.

Moving along to the next box we have PaaS or Platform as a Service. Here the third party is providing a more nuanced service in that it is providing more than infrastructure as a service: not only hardware and computing capacity but also, for example, the operating system, execution environments and databases. The customer will provide the applications.

Finally, Software as a Service or SaaS. The third party is providing the hardware, storage, operating system and the applications. The customer simply receives outputs or the benefits of those applications.

The key takeaway here is this: generally speaking, as you move from infrastructure as a service through to SaaS, the customer's input and level of control regarding infrastructure and software applications decreases. This invariably affects the content of the applicable contract.
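To make that takeaway a little more concrete, here is a rough sketch of who typically supplies what under each model. This is an assumed, conventional reading of the three models rather than anything taken from the webinar slides, and real deployments are often more nuanced.

```python
# Illustrative only: the conventional split of responsibilities across the three
# cloud models discussed above (assumed, not taken from the webinar).
responsibility = {
    #        (what the provider typically supplies,                what the customer typically supplies)
    "IaaS": ("hardware, storage, networking",                      "operating system, applications, data"),
    "PaaS": ("hardware plus operating system, runtime, databases", "applications, data"),
    "SaaS": ("hardware, operating system, runtime, applications",  "data and configuration only"),
}

for model, (provider, customer) in responsibility.items():
    print(f"{model}: provider -> {provider}; customer -> {customer}")
```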

In the next portion of the session we are going to focus in on SaaS. There are various forms of SaaS, from full or pure SaaS through to hybrid. For the purpose of this session we are going to focus on the concept of pure SaaS. Software as a service is not new; the concept has been with us for many years. Market standard approaches to contracting have developed to reflect the way in which the technology operates, and it is important to keep these in mind when drafting or reviewing contracts.

On the screen is a summary of some of the basic principles that apply to SaaS. Firstly, the customer has a right to use a service that relies on third party hardware, computing capacity and applications. Secondly, the provider has control over those applications. This enables the potential for a provider to offer a one to many service. And finally, the provider may store and process customer data within their own systems in order to provide the service to the customer.

In this final section I am going to focus on key issues to look out for when drafting or reviewing SaaS contracts. Building on our discussion around principles, the customer is not going to own or access the software. For this reason the customer simply requires a narrow licence or a right to benefit from the services provided over the internet. This is a key difference between an agreement for SaaS and installed software.

The provider is responsible for the development of the SaaS and is the owner of the elements required to provide it. It is this approach that allows the provider to offer a one to many service that develops over time. Therefore a SaaS agreement is highly unlikely to include a detailed specification and commitments by the provider to provide the services in accordance with that specification. That approach would potentially constrain the extent to which the provider can offer a one to many service and also develop their SaaS offering. For this reason it is more typical for a provider to commit to provide the SaaS in accordance with a broad functional spec and to have rights to augment the service over time. Now this approach can be troubling to customers. Customers can be worried about the way in which the provider will develop the SaaS, and for this reason it has become commonplace for customers to seek a commitment from the provider that changes to the SaaS will not materially degrade or weaken the functionality offered by the service.

In the context of pure SaaS the provider is offering a one to many service. Now it might be possible for the customer to configure that service but development of the core SaaS for a particular customer is less likely to be accommodated unless that customer is willing for the developments to be made available to all of the provider's customers.

Customers may well upload confidential information into SaaS platforms therefore it is really important to ensure that any agreement for SaaS contains robust confidentiality obligations. And further, depending on the purpose of the SaaS, personal data may well be uploaded and processed through the SaaS platform in question. Any agreement for SaaS which involves the processing of personal data must include appropriate provisions to govern the processing carried out by the SaaS provider. And by extension this may mean that a customer wishes to explore the security commitments that a SaaS provider can offer for example requirements for the SaaS Provider to hold certain accreditations such as ISO or Cyber Essentials. The purpose being to give the customer comfort about the security measures that the provider will take to protect the data that it uploads.

And finally on this slide, SaaS providers are effectively offering an online service through which a customer's data can be processed and those providers are broadly speaking not going to moderate the content that is uploaded or take responsibility for that content. So therefore it is commonplace in SaaS contracts to see obligations on the customer to avoid or prevent the introduction of certain information into the SaaS platform for example offensive or illegal material or material that infringes the intellectual property rights of a third party. And it is pretty common as well for a provider to seek indemnity protection against claims arising from the introduction of such material into the platform. In the context of SaaS the customer has no access to the software itself. For example it cannot influence the code that drives the SaaS and in turn cannot control whether that code infringes the IP rights of a third party. Control over the code sits squarely with the SaaS provider. For this reason it is market practice for the provider to indemnify the customer against third party claims that the SaaS infringes the IP rights of a third party. Now the devil will be in the detail and the scope of the indemnity will determine whether the protection provided will be sufficient.

The customer's ability to access the service is largely dependent on the provider. For this reason it is important that any SaaS agreement includes appropriate commitments from the provider in relation to the availability of the service. It is market standard to see service commitments concerning availability, i.e. a commitment from the provider that the SaaS will be available x% of the time. It is important for a customer to check how availability is defined and also the carve outs in any availability commitment. Robust SaaS contracts will also detail the remedies that apply in the event of a service level breach, such as service credits. Customers may also look to secure termination rights that apply in the event of persistent or catastrophic breaches of the service levels.
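As a purely illustrative sketch of why the definition of availability matters, the figures below show what different headline percentages actually permit in downtime, and how a hypothetical service credit might be calculated. None of the numbers or mechanisms here come from any particular framework or contract.

```python
# Illustrative only: what an availability commitment allows in downtime, and a
# hypothetical service credit calculation. Real contracts define "availability",
# the measurement window and carve-outs (e.g. planned maintenance) themselves.
HOURS_PER_MONTH = 730  # approximate

for availability in (99.0, 99.5, 99.9):
    allowed_downtime = HOURS_PER_MONTH * (1 - availability / 100)
    print(f"{availability}% availability allows ~{allowed_downtime:.1f} hours of downtime per month")

def service_credit(monthly_charge, committed_pct, achieved_pct, rate=0.05):
    """Hypothetical credit: 5% of the monthly charge per 0.1 percentage point shortfall."""
    shortfall_tenths = max(0, round((committed_pct - achieved_pct) * 10))
    return monthly_charge * rate * shortfall_tenths

print(service_credit(10_000, committed_pct=99.9, achieved_pct=99.5))  # 2000.0
```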

Due to the nature of the service a customer will be reliant on the provider to offer support should an issue arise in relation to the SaaS. A comprehensive SaaS agreement should include detailed provisions relating to support for example detail on how to log a case and even who might log that case through to commitments from the provider on response times.

Finally, the agreement needs to consider what happens when the relationship ends; after all, the SaaS may contain the customer's confidential information or personal data for which it is a controller. A customer needs to have certainty over how its data will be returned on the expiry or termination of the agreement, for example the mechanism for the return of data and the timescales. It may be that assistance is required from the SaaS provider to facilitate this process or that self-service extraction tools are available. Either way the customer needs to think ahead and ensure the agreement captures the appropriate process.

Thanks very much for listening.

Alexi: We are going to be looking at programmes which disaggregate technology requirements.

So Government policy as to how it likes to purchase technology requirements has evolved over the years, and that is largely to reflect market risks and changing technology. Ten years ago, if you had looked at how Government was purchasing its IT requirements, the trend was to look to a single provider and outsource the entire service to them, and often those were for quite long periods – contracts of 10 or 12 years were not unusual.

However, back in 2010 there were a series of quite high profile failures, and I think the largest of those probably was the £12.7 billion National Health Service National Programme for IT, and that caused the Government to pause and carry out a review into how it procured IT. The conclusion from that review was that in many cases these huge monolithic contracts no longer offered value for money and they often constrained the relevant departments or other public sector organisations from modernising their environments and services in the way that they wanted to.

So this then led to a change of policy back in early 2010, and instead of looking to a single prime contractor to deliver their entire estate the move was to a much more multi-vendor, disaggregated environment and the adoption of a cloud first principle. So what you can see on this slide is a breaking up of that one big outsourcing contract into a number of smaller pieces. Often these were arranged in what were called 'towers', broken down by different technology types, so you would have a hosting tower and a connectivity tower for example, and you might have a lead provider sitting on top of those towers managing the different services within them.

As managing multiple suppliers and ensuring seamless service integration is a really difficult task, it became quite popular to have a service integration and management provider, who was tasked with helping the Government to integrate and manage these siloed, smaller outsourcing contracts, and that is what you can see on this slide.

And then more recently the trend has evolved again to bring that service integration role in-house so now we are seeing a trend towards having an intelligent customer function sitting within Government so they are much closer to the individual services.

So that is the summary on the policy. In terms of disaggregation versus non-disaggregation, i.e. multi-vendor versus single-vendor environments, I think it is fair to say there are advantages and disadvantages to both approaches.

Picking up on the advantages first, being a glass half full type of person, breaking that large requirement down into smaller requirements certainly creates a lower barrier to entry to the market, so it is much better for SMEs and it allows Government and the public sector to access some best in class specialist providers that would have been put off or unable to compete due to the size of the requirements historically. Also it is good for reducing risk exposure. If we think back to 2015, 2017 and indeed 2018, we saw a lot of issues caused by the collapse of Carillion and the number of public services that were being delivered by that entity. So by splitting the contracts down across lots of smaller providers you are essentially spreading that risk of insolvency and market risk. It also introduces more flexibility, so in a disaggregated model you are better able to switch elements of supply as technologies evolve or if you have got concerns with any particular supplier. Also, having that more direct access to the supplier base can give benefits like reductions in costs, so you are cutting out the middle man and reducing hidden costs. And it also allows people within the department or within the Authority better visibility of the underlying service, which is better as they understand the problems and have more control over the systems.

So those are the advantages. In terms of the disadvantages, the obvious one is that when you have one aggregated prime contract you essentially get one neck to throttle and you are putting the risk of performance across the whole of the end to end solution with that one supplier. However, when you disaggregate you get the opposite: essentially the customer is taking the risk of ensuring that it has selected the correct products and services and the risk of making sure that all of the different products and services within its entire end to end IT solution work together. So disaggregation leads to a customer risk in product selection, and it also leads to increased management and monitoring burdens. So here, instead of having one neck to throttle you essentially find you have got lots of necks to throttle, and there may be more necks than you have hands. So there is a bit of a mindset shift that has to happen on a disaggregation: instead of running one procurement every ten years you end up continually reviewing, managing and amending contracts and indeed procuring new contracts. And that takes a lot of time, energy and a very specific skillset, so you will find that you will have to deal with and answer detailed questions, which can be very technical questions, and you are going to need a multi-disciplinary team to do this – you need experts in the fields of IT, commercial, project management, legal, HR, finance, etc. And what experience has shown is that it can take up to four years to achieve a stable end to end service provision, so it is not to be underestimated in terms of time commitments and technical capability requirements.

The third issue here is around the lack of contractual proximity between providers. So as we have said, you will have a series of different contracts for different products, but what you really want is an end to end solution, so you need those suppliers to work together. However, they are not contractually in a relationship with each other, and there are various ways to get around that in terms of trying to encourage collaboration. That might be things like putting in place partnership charters or putting in place common KPIs, but equally there are technical things that need consideration, such as sharing code, so you need to make sure that you are getting the correct intellectual property rights into your contracts to allow for that. And equally data flow, so building APIs that allow one product to talk to the next product.

The fourth issue, which is related, is around integration and interoperability. So we touched on the fact that we need our data to flow throughout our system, and to do this you need integration and you need interoperability. Integration we normally achieve through APIs, and interoperability comes through having common standards.
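As a very small sketch of what "building APIs that allow one product to talk to the next" can look like in practice, the snippet below passes an incident from one supplier's system to another's over a web API. The endpoint, field names and token handling are all hypothetical and purely for illustration.

```python
# Hypothetical example: a service desk supplier raising an incident in a hosting
# supplier's system over an agreed API, so that data flows between the two towers.
import json
import urllib.request

HOSTING_API = "https://hosting.example.gov.uk/api/v1/incidents"  # hypothetical endpoint

def raise_incident(summary: str, severity: str, api_token: str) -> dict:
    payload = json.dumps({"summary": summary, "severity": severity}).encode("utf-8")
    request = urllib.request.Request(
        HOSTING_API,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # e.g. the incident reference assigned by the receiving supplier's system
        return json.load(response)
```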

The fifth point then is around operational, security, support and service gaps between providers. And it's a similar point to the last one: everything needs to work seamlessly and that just takes quite a lot of thought. So in things like support, when issues come into your helpdesk, have you planned out how it is going to work and is that reflected in your contracts? It's one thing building it into your contracts now, but in four years' time will people still know how everything is designed to work? In terms of things like security, do we know who is doing what? If this is not managed carefully you can find that you get data loss or security breaches, especially where you have got multiple points of access to customer data or you need to transfer data amongst several suppliers. Also, having numerous suppliers increases your vulnerability to potential cyber and other security threats. And there is that operational risk around your end to end stability and the management of multiple different service level regimes. So quite a lot to be thinking of when embarking on disaggregation.

Matt: Thanks Alexi. To help us get to grips with what we mean by agile, let us start with the traditional waterfall approach. Essentially waterfall development relies on the completion of a series of steps cascading from one to the next. The approach is linear meaning that the current step needs to be completed before you can move on to the next. We have put on the screen five steps that are commonly associated with waterfall development.

Firstly requirements – defining requirements for the project or the software and setting them out.

Secondly design – taking those requirements and developing a specification effectively agreeing what is going to be built and how it will be built.

Implementation – this is essentially the building phase against the spec and from that phase follows verification and testing. Essentially verifying or testing that what has been built aligns and complies with the spec.

Then finally, deployment and maintenance of what has been built.

Now there are a number of limitations associated with waterfall. Due to the linear development path there is a requirement for the customer to have a clear understanding of what should be developed at the outset of the project. This limits the scope for flexibility, adaptation and creativity. Another challenge is that verification and testing is not necessarily continuous, meaning that issues may not be discovered until a significant amount of development has occurred. Late discovery could result in the need for significant corrective action.

That's waterfall, but what is agile and how does it compare?

Well, first of all, agile is a collective term for a number of different development methodologies. Scrum is one of the most common but there are others, including Kanban and XP, just to name a few. None of this is new – it was around 20 years ago that a group of software developers came together to create the Agile Manifesto. The manifesto sets out the values that underpin agile, and it is that manifesto that promotes an iterative approach to development and promotes collaboration as a more effective way of working.

It values individuals and interactions over processes and tools, and it prioritises the production of working software over comprehensive documentation. It also promotes collaboration over contract negotiation, and responding to change over following a plan. I suppose at their core, agile development methodologies all employ an iterative approach, relying on short, frequent development cycles with the aim of delivering working software at the end of each cycle, not at the end of the project.

It is this focus on individual development cycles that sets it apart from waterfall and it's the completion of these individual cycles that contribute to the achievement of the overall project goal or vision.

In the next section we are going to explore some issues associated with agile contracts. Now we don't have time today for a deep dive, but further guidance can be found in the Digital, Data and Technology Playbook and also the guidance note on contracting for agile which has been issued by the Government.

So, as touched upon earlier, in an agile context development will not be against a rigid specification but rather against a set of requirements, outcomes and a project vision. Hopefully, by the time we start talking about contracts, the customer will have outlined a number of those items already. They should be attached to the contract in question, and it is from those documents that the further collateral required for the agile process can be developed, for example the product backlog.

I would like to play Devil's Advocate at this point: if, pre-contract, the customer can document all of its requirements and all of the steps that the provider must take in order to actually complete the development, then query whether agile is actually needed, because where agile is used additional risks are accepted in return for the benefits that agile brings.

For example, greater flexibility and collaboration will mean the need for greater involvement by the customer and the customer may be required to accept contractual liability in relation to its actions that form part of this process.

In line with Government guidance, the contract needs to protect against vendor lock-in, for example by ensuring the customer owns or has appropriate rights to use IPR developed in the engagement. Also, there should be a requirement to develop through the agile process in a way that does not constrain the ability of the customer to engage in future development work outside of the incumbent provider, for example to carry out future development itself or to use another third party.

Now, as we have already touched upon, agile contracts and the agile process need to embody and reflect the collaborative nature of the methodology. A robust agile contract should provide a framework to manage the interactions between the parties in order to achieve the project vision. In simple terms, the document says how the project should run from day to day and week to week.

For example, it should detail how the parties enter into and exit development cycles and the key meetings that form part of the process. This is important because the parties are not following a spec; they are following a process, carrying out iterative development to achieve a vision.

Moving on to delivery model: the playbook sets out and advocates a phased delivery approach utilising a discovery phase, an alpha phase, a beta phase and a live phase, and also, where applicable, the use of pilot phases. I would like to pick up on discovery in the context of agile. It is sometimes advocated that a discovery phase should be used, the logic being that if a developer has a better understanding of the work required following a discovery phase, they can price more accurately and allocate resource to increase efficiency.

Now, the final item on this slide is particularly important. The customer cannot be a passenger in an agile project. This isn't like waterfall where the spec is drawn up and the supplier builds to the spec. The customer is going to be involved throughout. The customer needs to ensure that it has appropriate resource in place to participate properly and to understand how much resource can be committed at any given time. It is important that the contract documents the roles and responsibilities of key individuals, for example the product owner on the customer side and the key individuals on the supplier side, which may include a scrum master.

The customer also needs to carefully consider which elements of the agile process it is happy to contractually commit to. There are varying approaches that can be taken, but perhaps one that a customer may wish to consider is only to commit to elements of the process in which it is actively required to engage and participate. When drafting or reviewing agile contracts there will be a need for a public sector customer to ensure that the provider is committed to developing in accordance with government guidance, for example to develop in a platform-agnostic fashion and, where relevant, to use APIs which conform to the Central Digital and Data Office API technical and data standards, and further to satisfy the requirements of the Technology Code of Practice.

It is possible that an agile development may not yield the results required, therefore a customer should explore the inclusion of provisions that will allow it to cease work at particular points within a project. For example, a customer may feel that it would be beneficial to have a right to terminate and walk away at the end of the alpha phase, and in turn not incur the costs of a beta phase, if the project is not going to achieve the results required.

As with every software development contract there is a need for clarity in the charging model. There are a lot of different approaches that can be taken in relation to charging. Here T&M, or time and materials, is the most common and aligns most neatly with the principles of agile: the development path is iterative and T&M allows costs to be incurred in relation to items that may require more or less investment, which might only become clear as the project progresses. But many customers still want fixed pricing for agile. Now, any form of fixed pricing is going to be hard for a supplier to accept because, unlike waterfall, development is not completed against a fixed spec and series of steps. But there are models available: for example, the parties could look at fixed prices per iteration or sprint, fixed prices for outcomes, or perhaps a fixed price for must-haves and T&M for should-haves. Another area to explore is outcome based pricing, i.e. the customer will pay 'x' when the software can do 'y'. There are also incentive based measures, for example to allow T&M but provide a bonus if the supplier completes on budget, or to allow T&M but link payment to completion by a defined date.
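As a purely illustrative comparison of two of the models just mentioned, the sketch below contrasts a fixed price per sprint with time and materials plus an on-budget bonus. The day rate, sprint count, budget and bonus are invented numbers, not guidance on pricing.

```python
# Invented figures, purely to illustrate how two agile charging models behave.
DAY_RATE = 800                   # hypothetical supplier day rate (GBP)
FIXED_PRICE_PER_SPRINT = 40_000  # hypothetical fixed price per sprint (GBP)

def fixed_per_sprint(sprints: int) -> int:
    return sprints * FIXED_PRICE_PER_SPRINT

def time_and_materials(days_worked: int, budget: int, bonus: int = 20_000) -> int:
    charges = days_worked * DAY_RATE
    # incentive-based measure: bonus only if the supplier completes within budget
    return charges + (bonus if charges <= budget else 0)

print(fixed_per_sprint(sprints=10))                         # 400000
print(time_and_materials(days_worked=430, budget=340_000))  # 344000 (over budget, no bonus)
print(time_and_materials(days_worked=420, budget=340_000))  # 356000 (on budget, bonus paid)
```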

Now, all of these models have their positives and challenges, and each needs to be looked at on a case by case basis. Due to the nature of an agile project, an agile contract is unlikely to contain a number of the tools typically used to manage provider performance in software development, for example milestones and remedies if they are missed. For this reason it is important for the agreement to contain a robust governance regime and reporting requirements that will allow the customer to have full visibility of progress and allow for intervention and management as and when required. Moving on from this, there should be a clear process for resolving operational and legal disputes.

And finally, at the end of the project, knowledge transfer is extremely important. This can help to alleviate the need for further engagement with the supplier following project completion and allow for a greater ability to work with different suppliers or solve future issues in relation to the software in-house. And it is not only at the end of the project that this is relevant; appropriate knowledge transfer points should be planned into any project.

Thanks for your time.

Jocelyn: So for anyone who has been following AI there is an awful lot going on at the moment as Alexi has said and I could roll all the way back to the 1970s and give you a history of the development of large language models and neural networks and foundation models that have led us to where we are today dealing largely with generative AI or frontier AI. But given the time we have available I don't think that's the best use of time but I think the point is that this technology has really been building for a long time. What has happened in this year and particularly since March when Chat GPT version 3.5 was released is that some of the potential of AI and the way it can be harnessed has really come to the fore and actually been capable of being used in real life commercial operational settings and that is why there has suddenly been so much buzz around it and so much focus on it and particularly also because Chat GPT is a generative AI, it is obviously not the only one out there, it is the most well known, and the benefit of generative AI is that it is so broadly applicable. It can be used in so many different ways.

There are a few examples there on the screen, and the application to which you put the generative AI then makes a huge difference to the risks that are associated with the use of AI in that particular circumstance. So if you are using AI as an office assistant to help with diary management and standard responses, the risks around that are going to be very different from using it in a customer facing scenario with chatbots, or using it in any kind of decision-making, particularly around security or diagnosing illnesses in individuals. That is part of the challenge that comes with the power of generative AI, and part of that challenge is due to the way that generative AI works.

I am no mathematician or computer scientist, but I understand that the way the actual compute works behind generative AI is that there are weights and mathematical calculations and the software is generating the likelihood of the next word it needs to produce. So in the way that it scans and understands its training data, that is what it is actually doing: assigning mathematical values to each word so that it can work out, in a given context, what the next most likely word is – which is why AI can get it wrong and AI can imagine things that have never in fact happened. It is important to have a level of awareness and understanding that that is how this technology works, to then get behind some of the risks and work out what we can do about them, and I think both the risks and the benefits of AI are emerging as we use the technology more in real life.
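To put that description in slightly more concrete terms, here is a toy sketch of the idea: the model holds a likelihood for each candidate next word and then picks, or samples from, that distribution. The words and probabilities are invented purely for illustration and bear no relation to how any real model is weighted.

```python
# Toy illustration of next-word prediction: invented probabilities for words that
# might follow the prompt "the applicant submitted a ...".
import random

next_word_probabilities = {
    "form": 0.55,
    "claim": 0.30,
    "complaint": 0.10,
    "giraffe": 0.05,  # low-probability continuations are one route to odd or untrue output
}

most_likely = max(next_word_probabilities, key=next_word_probabilities.get)
sampled = random.choices(
    list(next_word_probabilities),
    weights=list(next_word_probabilities.values()),
)[0]
print(most_likely, sampled)
```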

There are definitely some fantastic benefits and positive stories, looking at the use of AI in medical research, where some of the things it can discover and work out and the patterns it can see either would not be possible just using humans, or would be but only over the course of a very long period of time. And then – there is a lot of scaremongering – but there are also some real life stories of actual harms that have already come to pass. I think one of the best known is from the Netherlands, where AI was being used to assess whether people were fraudulently claiming child benefit. There was an AI running behind that assessment, screening payments that were being made and applications. That AI was not very transparent, it was a black box, it wasn't possible to tell how the AI had come to the decisions that it had, and people acted on those decisions without having that level of understanding. It is a very sad case because it did involve thousands of children being taken away from families and put into foster care, and there were obvious and very sad knock-on consequences for the health of some of those children and families.

So a cautionary note – I don't mean to be negative because there is lots to be excited and very positive about with AI – but really just to bring to life the risks that we must be aware of when engaging with new technology like this and the level of understanding that's needed.

So if we have such a powerful tool with such big upsides and downsides, what is being done to legislate? There is a lot going on, and that is one of the reasons why it is so hard to keep up to date with. Probably the best known is the AI Act in the EU. It has been drafted for some time and those drafts have been well publicised, but it is not yet finalised, it is still under discussion, and in fact just before we were recording this there was news in the press about disagreement among some of the key member states about who the AI Act should actually apply to. The AI Act categorises different sorts of AI into prohibited AI, so situations in which AI cannot be used; high-risk AI, to which the majority of the obligations in the Act apply; and then lower risk AI, where the level of compliance required is much lower. And there were disagreements among the member states about where to draw the line between that high-risk AI and low-risk AI, possibly with a political overlay because a lot of the high-risk AI has emanated from the US rather than the EU.

The way that legislation is structured, it is looking at the effect of the use of the AI, so it is judged high-risk by the scenarios in which it is used and the effect it could have on people and the environment, human rights, health and safety – a whole wide range of factors. So the rest of the legislation is very much about transparency and assessments, accountability and record-keeping. And even though that is not in force yet, a lot of that language and those ideas are taking hold and we are seeing them being used in other places.

In the UK we've had a really clear steer from the White Paper in March earlier this year that the UK is not going to produce a general piece of legislation like the AI Act. Instead the UK wants to take a pro innovation approach to encourage the creation of AI and the benefits that the government sees will flow from that, and it has focused on a cross-sectoral set of principles, so putting a framework in place to enable people to build responsible AI whilst mitigating and managing potential risks and harms. So the principles in the framework are things like being safe and secure, being transparent, ensuring there is explainability of how a system works so it is not a black box, fairness, accountability and governance, contestability and also redress. And the government believes that these principles, combined with a desire to be pro innovation, adaptable, clear and collaborative, will mean that we can better respond to AI as it develops, if the rate of development and change is as quick as we've seen over the last nine or ten months, and that this will be a more flexible and better approach overall rather than having a fixed Act which has already taken a long time to come into force and will almost inevitably fall behind the pace fairly quickly. The UK has also focused on a sectoral approach with particular focus on sectors like automotive, medical devices and financial services where it sees AI having a particularly early and significant impact.

Some might think the US is behind in this conversation about the regulation of AI. I think those in the US would argue that's not actually right. There is no federal legislation but, like the UK, there is a whole range of laws that do already apply to the way that AI works and the effect that it might have. There's a very recent executive order from Joe Biden and that has the effect of promoting safe, secure and trustworthy development and use of AI, so very similar sounding principles to the UK. That executive order is also strongly linked to the NIST AI Risk Management Framework. This has been around for a short while and is a really comprehensive structure to enable you to look at AI and manage the risks that come out of it, and the executive order refers heavily to that Risk Management Framework as best practice around the management of AI.

So you've got different approaches being taken in different countries, and I could have gone on and listed others, but I think it is fair to say there is a general global consensus around some of the key issues with the use of AI and why it might need to be regulated or governed, and those are there on the screen. It is around transparency and explainability, fairness, accountability, safety, privacy and inclusivity. We'll have a look at some more of those on the next slide, because some of those are legal issues and some are more moral, ethical issues. I think it is certainly fair to say that with anything around AI, ethics does play a key part in working out whether the use of AI in a particular scenario is the right thing to do from a moral point of view. Just because there is no law that says you can't do something doesn't mean you should – there may still be moral reasons not to.

But if we are looking at legal issues related to AI, there are a series of themes that run through the use of AI. As I said, because generative AI can be deployed in so many different contexts and the risks are different depending on whether you are training an AI, using a pre-trained model or simply taking outputs and using them in some way, you are going to have to flex. But I think it is helpful to have these legal issues in your mind as you look at different use cases.

So the first and foremost, and maybe most obvious, is the intellectual property issue. There is already litigation in the US because the way OpenAI have trained their models is to effectively scrape all the information that they could off the internet, whether that was copyrighted information or not, and use it to train their models, and there is now evidence that a model has in effect remembered some of its input and training material and can regurgitate it. So there's an issue that the model itself potentially infringes intellectual property, and the outputs from the models can infringe intellectual property as well. A key area we're working with clients on is implementing AI policies, so that when their people are using AI you can set up guidelines to try and manage some of those risks.

Similarly to intellectual property – privacy. Again, if you are training a model based on the entirety of data on the internet, that involves a lot of information about individuals, and no-one ever foresaw that information about them on the internet might be used in this way. So there is again the issue about training the model: is there, in EU and UK language, a lawful ground for processing that information for that purpose, and what are the potential risks to the rights and freedoms of individuals? And there is again the risk in the use of the model that, if you ask it to produce some information or an image, it could have remembered something from its input data and then regurgitate information about individuals or images of individuals, and what effect could that have on them?

Transparency as a legal issue. By transparency here we mean understanding that you are in fact dealing with an AI and that an AI is being used, because there are plenty of scenarios – if you're using online chatbots or voice-powered AIs – where as a human you are no longer necessarily aware that you are in fact interacting with an AI. Similarly we have seen it in photography competitions or painting competitions, imagery competitions, where entries are being submitted that have been generated by AI, and with that lack of understanding on the part of the judges of those competitions, it is somewhat awkward when they then award a prize to something that has been generated by AI.

There is a link between privacy and transparency, because clearly one of the principles of the GDPR is fair and lawful use of data and being clear to people about what you are doing with their data and the effect it might have on them. It also links to fairness, because in privacy we think about whether this is a fair use of data, and fairness is partly about how clear it is to people that you are doing this; it also links to the ethical side, fairness in a much broader sense. I think that helps to explain why a lot of thinking around managing AI and its risks comes down to doing risk assessments, where you can look at these really broad, big picture issues, play devil's advocate, think through what effect the AI could have and what uses it could be put to even if they were not the intended use, and then use some risk-scoring frameworks and principles to work out the likelihood of that happening, the severity if it does, and then how you might try and mitigate it. For anyone who is a privacy practitioner there are some links and familiarities with how we would carry out a DPIA and some of the thinking there around the use of personal data.
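As a minimal sketch of the kind of likelihood and severity scoring just described – similar in spirit to a DPIA, but with scales and example risks that are entirely invented – it might look something like this.

```python
# Invented example of a simple likelihood x severity risk score; real assessments
# use whatever scales and criteria the organisation's framework prescribes.
def risk_score(likelihood: int, severity: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale; higher scores need stronger mitigation."""
    return likelihood * severity

risks = {
    "model regurgitates personal data in an output": (2, 5),
    "chatbot gives a member of the public incorrect guidance": (4, 3),
    "staff paste confidential material into a public AI tool": (3, 4),
}

for description, (likelihood, severity) in sorted(
    risks.items(), key=lambda item: risk_score(*item[1]), reverse=True
):
    print(f"{risk_score(likelihood, severity):>2}  {description}")
```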

You then have an issue around discrimination or inclusivity, depending on how you are looking at it. There is the issue that the training data for a model might itself be discriminatory in the types of materials it has been fed, or in what the AI then learns from those materials if they weren't balanced and fair materials in the first instance. There's an issue about whether people who have hearing or sight impairments are able to use AI and interact in the same way as everyone else, and also the issue that if you can't see how an AI has arrived at a decision then it may in fact have discriminated against someone in the course of making that decision, so you need to be able to assess that to make sure that there aren't any harms being caused.

Hallucination has become a legal issue because if you are trying to rely on anything an AI is telling you and it tells you something that's not true, then clearly you're going to have an issue relying on that material. We've already seen that happen; it happened very, very soon after Chat GPT 3.5 went live when, in the US, a lawyer tried to use it to write his pleadings. The AI made up a case that it was referring to and of course, when that went through a legal process and a judge saw it, it came to light that this case did not in fact exist, and the judge made clear in no uncertain terms what he thought about the lawyer relying on AI to help write pleadings. That's a really good example of where a hallucination has happened and then been used in real life, and the kind of consequences that flow from it, but clearly that same issue would replicate in almost any scenario with AI: you cannot rely on the outputs that it gives you, you still need a human there to check the facts and see that it hasn't in fact made anything up.

And then an issue of explainability. It's similar to transparency in a way but this is the point about AI being a black box or you not wanting it to be a black box. From a privacy point of view, you have to be able to explain to people how their data is being used and if something is making a decision about them you have to be able to tell them how that decision has been arrived at to see that the use of their data is fair and lawful, but that goes much broader if you look at AI in security scenarios or any kind of evaluation or in health and safety, it is critical that you can peel back the layers so that a human can see how the software has arrived at the decision or outcome that it has if you then want to be able to action that and take further steps based on it.

So what about AI in the public sector? What guidance is out there? Because if you start Googling and looking on the internet in this area there is endless guidance from a huge number of organisations on AI in their business or sector or market, so it is actually quite overwhelming; there is almost too much guidance and it's maybe difficult to work out what the best and most up to date source of guidance is. And that being up to date is, I think, the hardest piece. So here we have the guidelines for AI procurement that have been issued for use in the public sector. They do now feel quite dated when you look at them – they were released in 2020 – and they are dealing with broad issues at quite a high level: thinking about the relevance of AI to a project, so as a buying decision should we be using AI, which is still a good question; the fact that AI requires a multi-disciplinary approach – and as I think that previous slide demonstrated, you've got IP issues, you've got privacy, you've got some ethics, you've got some discrimination and equality type issues, so there is that challenge around having the right people in the room to assess and manage the risks; and then those guidelines also talk about routes to market and different ways of buying it. I think it's fair to say these are guidelines for the procurement of AI; they don't get beneath the skin of the actual issues with using AI itself. There is actually an even earlier guide to using AI in the public sector from 2019, and it talks about how to implement AI fairly, safely and ethically at a very high level, and it doesn't pick up on generative AI – back in 2019/2020 we were looking at quite narrow and specific-use AI that was trained to do a very particular thing, which still had some risk associated with it but not to the degree that we now have with generative AI.

The Digital, Data and Technology Playbook is much more up to date and has, I think, a really good section on AI. It focuses on what exactly it is that you are buying. Are you buying a model that is pre-trained to do a specific job? Are you buying an algorithm, i.e. software that you can then train on your own data? Or are you simply buying a dataset that you want to use to train your own AI? It is really helpful to think about those different aspects of exactly what you are procuring. It then goes into some of the specific issues with buying AI, like the data-specific issues we have spoken about, explainability, ethics and bias. There is some discussion of the intellectual property issues and, really importantly, of risk appetite and risk assessment. There is an acknowledgement from sources like the Playbook and regulators like the ICO that no use of AI is going to be completely without risk. What is important is that, as the organisation buying or using the AI, you have thought about all of those risks. Increasingly we are talking about really big-picture, wide-ranging risks: not just using the tool in the way it is intended to be used, but the other nefarious uses it could be put to; democratic rights; health and safety; sustainability issues, because these large models require a huge amount of power every time they are queried and prompted to do something. You need to look at what the risk is, the likelihood of it occurring and therefore how you can manage and mitigate it in the context of what you particularly want to do. It is important to have that view before you go to market, or certainly before you make a procurement decision on a particular product and a particular supplier, to ensure that what they are providing aligns with your risk profile. A lot of that thinking really needs to happen before the market engagement and the buying take place.

And then there are other frameworks as well. There is the Data Ethics Framework from 2020 which, even coming back to it now, still looks like a really helpful framework for working through ethical issues around transparency, accountability and fairness, and which would support some of the decisions around buying and using AI. There are others that are sector specific, such as a buyer's guide to AI in health and care, again a little dated now, from 2020. There is certainly a significantly increased focus on looking at the data an organisation already holds and how it can make better use of it. That could be about internal sharing between departments within an organisation, it could be sharing with others in a sector, it could be creating a pool of data between multiple organisations within a sector to look at what they can do on an industry-wide basis, or it could be pure commercialisation of data, putting it out there on a licensed basis for a fee because it can be of use to other organisations. I am talking here about data in its broader sense, not just personal data but all kinds of data about the world around us. Data is an ideal asset in a sense because it is non-rivalrous and non-depletable: if I share it with people A, B and C, I have still got the same data I started with, and it still has the same value to me, to them and to other users in the future.

Data in the world around us is exploding exponentially, partly as a result of increasing digitalisation, of which data is a by-product: if things are recorded digitally then you can extract data about them, and then information and metadata, and create statistics. Sometimes it works the other way round, and people actually want the data and digitalise the process in order to be able to get it. Either way you end up with an awful lot more data. The statistics I have seen say that around 33 zettabytes of data were created in 2018 and that it will be more like 175 zettabytes by 2025, and a zettabyte has 21 zeros after it, so those are huge, huge volumes of data. There is also significant untapped potential, because reports say that about 80% of industrial data is never touched, never looked at and never used in any sense. So there is a strong sense that if we actually used the data we already have, let alone the data we will generate through increasing digitalisation, there are all kinds of process efficiencies, even if nothing else, that we could realise.

The public sector is ideally placed in this data conversation. It has more data about citizens than anyone else, it can be more joined up than anyone else, and when you think about using data to realise a public benefit, government departments are in the strongest position to do that: to create efficiencies for public services, to create better processes and flows for people to engage with, and to better inform policy in government. The National Data Strategy from 2020 recognises the power of sharing data in terms of efficiency, value and public benefit, but it really took the pandemic to act as a catalyst, to bring some of that into sharp relief and to show what can be achieved when industry, government, local authorities and communities share data better to help us arrive at solutions for containing, treating and living with a threat like Covid. That use case is, we hope, somewhat unique, but it really demonstrates what happens when data can move more freely between organisations to produce a public benefit for all of us. The risk is that if organisations put their arms around data and treat it in silos, we will not realise any of those benefits.

The National Data Strategy, which as I said is from 2020, contains an infographic showing how the government believes data can be better used, and what needs to happen to use it better across the economy and across society. You can see that mission three of the strategy talks about transforming government's own use of data to drive efficiencies and improve public services.

What the National Data Strategy goes on to say is that you need a whole-government approach to data, led by a government Chief Data Officer and in strong partnership with organisations from the private sector. This is about transforming the way that data is collected, stored, analysed, shared and used across government and across sectors, so that you create a joined-up and interoperable infrastructure. That will result in more efficiencies in public services, better value being realised from what we already have and, hopefully, new services that better serve citizens and individuals.

Using data, though, is not something that just happens because it is a by-product. Organisations are set up with a particular use case, operation, product or service in mind; they are not set up to marshal, understand and use data. So whilst data is a by-product of lots of other things, using it in the right way requires thought, and different skills and processes to what an organisation might do as part of its business as usual.

In order to share data, you really need to think about each of the elements in this wheel. First of all you need quality data in the first place; it really is a case of rubbish in, rubbish out. There is a significant exercise here, and organisations should not underestimate the amount of time, effort and money that goes into making sure data is cleansed and retains its integrity, that it is all in the right format, both in terms of things like data values and currency formats and in terms of formats that can be shared, and that it is accessible: in the right place at the right time, with the right people having the access they need to do their job.
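
To make that cleansing work concrete, here is a minimal, purely illustrative sketch assuming a hypothetical extract with made-up column names, mixed date formats and inconsistently recorded currency amounts. Real cleansing exercises are far larger, but the shape of the work is the same: normalising values into one consistent, shareable format.

```python
# Illustrative sketch of routine data cleansing. All column names and values
# below are hypothetical examples of the kind of inconsistencies found in
# real extracts.
import pandas as pd

raw = pd.DataFrame({
    "record_id": [1, 2, 3],
    "date_received": ["2023-01-05", "05/01/2023", "5 Jan 2023"],  # mixed date formats
    "amount": ["£1,200.00", "1200", "GBP 1,200"],                 # mixed currency formats
})

# Normalise dates to a single consistent representation (dayfirst for UK-style dates).
raw["date_received"] = raw["date_received"].apply(
    lambda s: pd.to_datetime(s, dayfirst=True)
)

# Strip currency symbols and thousands separators so amounts become numeric.
raw["amount"] = (
    raw["amount"]
    .str.replace(r"[£,]|GBP", "", regex=True)
    .str.strip()
    .astype(float)
)

print(raw)
```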

And that quite neatly bleeds into having the right systems in place. If you are going to be dealing with data, you ideally want a single source of truth, a single repository, a data lake or a data warehouse where all the data resides, so that you do not have conflicts between what should be the same data value but which actually has different values in different repositories. You want to make sure that data is properly structured, that it retains its integrity and that you have the right tools to analyse, classify and use the data in different scenarios. So you need the right tools and systems, but also standards around data classification, both within systems and in terms of understanding how to classify data and which data gets which classification, together with policies around retention, around security and around what data you can use in what way, thinking about some of the ethical issues we have covered in other sessions. All of that needs to be built around the data, to support it within the systems.
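
As a very small sketch of what recording those classification and retention standards might look like in practice, the example below uses hypothetical dataset names, illustrative classification labels and made-up retention periods. The point is simply that once policies are written down in a structured way, a system can apply them consistently rather than relying on each person remembering the rules.

```python
# Illustrative sketch of recording classification, retention and permitted-use
# policies so they can be checked consistently. All names and labels are
# hypothetical.
from dataclasses import dataclass

@dataclass
class DataPolicy:
    classification: str       # illustrative label, e.g. "sensitive" or "general"
    retention_years: int      # how long records are kept before review or deletion
    permitted_uses: tuple     # purposes the data may be used for

POLICIES = {
    "citizen_contact_details": DataPolicy("sensitive", 6, ("service delivery",)),
    "anonymised_usage_stats": DataPolicy("general", 2, ("service delivery", "policy analysis")),
}

def check_use(dataset: str, purpose: str) -> bool:
    """Return True only if the proposed use falls within the recorded policy."""
    policy = POLICIES.get(dataset)
    return policy is not None and purpose in policy.permitted_uses

print(check_use("citizen_contact_details", "policy analysis"))  # False: not a permitted use
print(check_use("anonymised_usage_stats", "policy analysis"))   # True
```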

You also have to get the data into the systems, and partly that is about culture within an organisation: a culture that recognises data as being important, that recognises the value you can get from data when it is used in the right way, and that makes sure that where the systems you are engaging with require data, that data is inputted correctly and treated so that it ends up in the right place in the right system.

That culture would normally be driven by leadership: someone at a senior level within the organisation who has a clear vision of what they want to achieve in using or generating data, because you have to know what you want to achieve in order to set up the rest of the business, the processes and the skills of the people you need to achieve that aim. Because data is multi-faceted and multidisciplinary, you are thinking about legal issues, ethical issues, a fairly technical side and a commercial element as well; all that storage, and all those skills and people, do not come for free, so you have to have the right leader in place who can understand all of that and bring it together. That leader also needs a team of people with the right skills, the right data scientists, to be able to interrogate, analyse and use the data. And that leadership then rolls up into accountability: having the right governance around the data, which is partly the policies and standards we were talking about earlier, but also the legal aspects, complying with laws, confidentiality obligations and privacy obligations, thinking about IP, keeping records and, really importantly, keeping the risk assessments we talked about in the AI session, or data protection impact assessments, to show that your use of data is both ethical and legal.

And that then neatly rolls up into the ethical piece. Increasingly, you cannot have a conversation about using or sharing data without talking about ethics. There are lots of laws that apply to data in different ways, but there is not a lot of law that says what you fundamentally can and cannot do with data, so ethics steps in to fill that gap, because often just because you could do something does not mean that you should do it from a wider public-benefit, ethical point of view. So having someone in the organisation with the skills, the language and the frameworks around ethics is also increasingly important.

So you have got all of that set up around your systems to enable you to share data; now let's look at some of the legal structures you might use to share it. The first thing I want to do is bust a myth. I often hear people say, "that's my data, I own that data". I am afraid no one owns data; there is no legal concept of ownership of data. Data is not a tangible asset, it is not like a house or a car or money, and it is not even like an intellectual property right: no one can own data itself. There has been case law around theft of data, and that has reinforced this view. What you do have are rights over data, and those rights come from different areas of law. You have things like confidentiality rights in common law and in contract, and you have privacy, where you think about who is the controller of the data and what they can and cannot do with that particular piece of data. And then largely you have contractual restrictions, which means that when you are dealing with data it is fundamentally important to look at the contractual permissions and restrictions around its use, because they will determine to a very large degree what you can and cannot do with it.

Once you have established what you can and cannot do, you then have to think about how you are actually going to share that data and what legal structures you are going to use, and there is a range here. We could think, particularly in the public sector, about open data sharing. When I say open data, I mean data that is accessible, available and permitted to be freely used, the idea being that government has access to data on all aspects of society and that making it publicly available will encourage innovation. Clearly you still have all the issues about data being in the right format, structured correctly and held in a shareable system and language that other systems can use, so there is still a cost involved in open data. Open data is also still shared on some kind of licence basis, to make clear that it is in fact open and to set out any conditions that go with its use, such as publishing notices acknowledging the source of the data.

You could use a corporate structure, and there are some specific structures people talk about in the context of data sharing: data trusts, data commons, data cooperatives, or just the normal joint ventures with which we are all very familiar. A lot of those terms are not used with any great precision. When someone says "data trust", some people mean an actual trust structure where you have a trustee and fiduciary duties; others just mean the general concept of stewarding data and looking after it for some kind of public or wider benefit beyond the individual organisation holding it. There are some real-life examples of each of those structures, data trusts, commons and cooperatives, but they are not yet widely used or widely tested, and fundamentally, if you have a corporate vehicle holding the data, you are still going to need licences to move the data in and out of it.

A really interesting one from a legal point of view is the role that statute plays in this area, to encourage and facilitate data sharing and to make clear that it is permitted in some instances. There are actually more examples than people might at first think: schemes like open banking, the Pensions Dashboards Regulations, the market-wide half-hourly settlement programme, the Retail Energy Code, the digital verification and identity scheme that DCMS has a framework for, and the National Underground Asset Register. All of those have been implemented, facilitated or initiated by statute on the basis that there is a public benefit, for example in allowing individuals to make their banking details open so that other companies can come along and innovate off the back of that to provide a better service, or in having a national register of all the assets that are under the ground, which would cut the time it takes anyone wanting to put new assets into the ground to obtain those plans. There is a significant saving to industry there, and it will avoid accidents like people cutting through other people's assets already in the ground and causing operational issues for tenants in buildings and so on.

So it is really interesting that some of the largest data sharing schemes we see in this country have actually been initiated by statute, giving everyone the comfort that using and sharing data in this way is not only permitted but is the right thing to do, and that the risks around it have already been assessed and managed, with structures put in place to enable it.

For a lot of other data sharing, you are looking at some kind of contractual structure or licence, and increasingly we are seeing the emergence of data marketplaces. Those can take various forms. It could be as simple as a noticeboard, an index of data that is available from a range of sources. It could be a more sophisticated marketplace where the operator in the middle adds value by facilitating the exchange of the data, so the data is contracted between a source and a recipient and the marketplace provides the pipes to send it securely from one to the other. Or you could have a marketplace that is more like a distributor or a reseller, where the marketplace takes some contractual risk and sits in the contractual chain from source to recipient. We are also seeing organisations trying to streamline that even further by coming up with standardised licence terms, so that it is very easy for both source and recipient to select the kind of terms they need or want, reducing negotiation of the licence agreements in the middle.
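
A small, purely illustrative sketch of the simplest "noticeboard" form of marketplace might look like the example below: an index of datasets on offer, each recording its source and the standardised licence terms under which it is shared. The dataset names, organisations and licence labels are all hypothetical.

```python
# Illustrative sketch of a noticeboard-style data marketplace: an index of
# available datasets with standardised licence terms. All entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    dataset: str
    source_org: str
    licence: str                  # a standardised licence label selected by the source
    conditions: list = field(default_factory=list)

CATALOGUE = [
    CatalogueEntry("road-maintenance-2023", "Highways Team",
                   "open", ["must acknowledge source"]),
    CatalogueEntry("footfall-sensors-q1", "Town Centre Partnership",
                   "commercial", ["fee payable", "no onward sharing"]),
]

def find(licence: str) -> list:
    """List datasets available under a given standardised licence type."""
    return [entry.dataset for entry in CATALOGUE if entry.licence == licence]

print(find("open"))  # ['road-maintenance-2023']
```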

Thanks very much. That is everything in our brief round-up of data as part of digitalisation projects in the public sector.

Alexi Markham: That wraps up the final of our six sessions on digitalisation within the public sector. Hopefully it has been helpful and has given you an overview of some of the issues you are likely to encounter in the legal sphere as you go through your digitalisation programmes. Feel free to reach out to us at any point to talk about any of the issues we have covered over the series, or equally about other issues crossing your desk. And if there are other topics you would like us to cover in future sessions, or to go into in more detail, please do let us know, as that feedback is really helpful for us.

Thanks for tuning in.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.