There’s no doubt that it’s super trendy to say you have adopted Agile and work with Agile project delivery methods. Certainly in our world of software development it’s expected this will be a way of working, and we wholeheartedly agree it can have huge benefits for your organisation when done right. But Agile was never designed to work without structure.
Prompted by that general feeling, and by some rumblings of distrust in Agile, we commissioned an independent research report exploring Agile software development – and the results are in.
Although Agile is widely used and still fairly well regarded, there is a feeling that it’s not all it’s cracked up to be, with 74% of those asked stating that they had experienced a negative impact from adopting agile delivery methods. The most common issues were around difficulties in predicting timelines and costs.
Why is this?
One theory is that agile can create small silos of activity: with one person working on one area, the big picture is not always taken into account. What impact does that piece of work have on the overall project? It’s all well and good having one part complete, but not if something in a future build will impact what has just been done. One analogy we’ve heard here at Pulsion (and our owner likes an analogy) is that of a car that needs an accelerator pedal. Someone could go off and build it with no indication of the full picture – that the pedal can’t be so big it takes up the full space. Then picture the scene two weeks later when it’s realised that a brake was also needed. Oh dear – the accelerator pedal is too big and there’s no room for the brake. The result? Rework. The first pedal will need to be rebuilt smaller to make room for the second requirement. Rework makes it impossible to predict timelines and costs, which is exactly what the results of our report point to.
Ok – that’s an extreme example, but you get our point. Our research revealed that 61% of Public Sector organisations believe that predicting timelines is more difficult, and 50% of mid-size companies have seen projects over-run.
Agile can’t work with people working in isolation. It can only work if the full end result has been fully specced out, outcomes agreed, and the work then broken out into the tasks required to deliver, with related tasks identified. The analogy we give, albeit extreme, does highlight the issue of rework, and the results would appear to support that, with 45% of mid-to-large sized companies stating there has been a slump in morale within their development teams since adopting agile. In the Public Sector this has led to a decline in inter-departmental relationships, with 44% stating this was the case.
It doesn’t have to be like this. Agile may have its detractors, but done right it can be a highly effective tool in project delivery.
Our blog in the coming weeks will focus on more results from the report with our next focus on the impact agile has had on documentation.
When concerns over cloud migration are discussed, it invariably brings up the issue of security and concerns over “will my data be safe out there somewhere and not totally under my control”. This was raised as one of the reasons people delay moving their business operations to the cloud during our recent Pulsion Talks series.
The purpose of this blog is to alleviate some of those concerns by looking at some real-world examples to illustrate why I believe the cloud is secure. Yes, there will always be risks, but equally you could argue the risks are greater if you stick with on-premise solutions because of a misconception over cloud security. There is no argument over the benefits that can be gained by moving business operations to the cloud: lower costs, flexibility, software updates, the ability to work and access data from anywhere on any device, and the opportunity for increased collaboration with colleagues, particularly when the workforce is spread across multiple locations.
However, stacked against those benefits are some of the biggest concerns we hear about cloud migration: loss of data, accessibility and cyber-attacks. All genuine concerns, and reasons to look for more information to validate that the cloud is right for your business.
Can the benefits outweigh those concerns?
Let me try, through some practical examples, to address those concerns and show that when it comes to security, an on-premise solution will not be able to compete with large cloud providers such as Amazon, Microsoft and Google.
The UK Government’s 2019 report on Cyber Security Breaches points to a decline in security breaches, stating that approximately one third of UK businesses reported a breach of some kind. That may still seem a high number, but the report goes on to explain that this is the lowest figure they have reported – down 11% since the 2018 report. One reason put forward for this decline is that companies are becoming more aware of the threats they are open to, but very interestingly, another is an increase in businesses migrating their data to the cloud rather than maintaining their on-premise solutions. That would indicate that the message is starting to get through that cloud solutions might be more secure than on-premise.
Another perfect example of the growing recognition that cloud is more secure is the US Federal Government’s plan to ramp up cloud migration projects for its internal departments, putting cloud adoption at the heart of its IT Modernisation Strategy. This included moving the Department of Homeland Security to a cloud environment. Personally, I think if the Department of Homeland Security is happy to keep sensitive information in the cloud, then maybe security shouldn’t be a huge concern or blocker to making the move.
One point to note is that cloud migration in no way means giving up responsibility for your own company data. For that reason, Amazon Web Services (AWS) operates a Shared Responsibility Model, which means:
- Customers are responsible for choosing how their data is handled IN the cloud; and
- AWS is responsible for the security OF the cloud
This means that customers decide how data is managed through choices on things like client or server-side encryption, platforms, operating systems and accessibility, while AWS looks after the security of the infrastructure on which the data is held. The shared responsibility model maintains a level of ownership over the data itself and how it is handled, so anyone reluctant to give up full ownership can rest assured that they will still be involved and make decisions on how that data is handled and who has access. Equally, anyone who thinks moving to the cloud absolves them of responsibility is wrong: there are still security measures organisations need to ensure are in place, and that is what makes the AWS Shared Responsibility Model a good example of ownership and accountability for security.
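As a concrete illustration of the customer’s half of that split, here is a sketch of an S3 bucket policy that refuses any upload which doesn’t request server-side encryption. The bucket name is hypothetical; the `s3:x-amz-server-side-encryption` condition key is part of AWS’s documented policy language.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
```

AWS will happily store unencrypted objects if you let it; a policy like this is the customer exercising their side of the shared responsibility.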
AWS places security at the heart of every offering to help you fully realise the speed and agility of the cloud
Despite more and more evidence suggesting that cloud is the best option, there are still reports which throw that debate wide open. Take, for example, this article from BBC News in October 2019, which examines the issues around bank account accessibility and argues that IT failures at banks have reached unacceptable levels. One thing the article points to is concern over the increasing use of:
Third party providers of cloud services for computing power and data storage. The consequences of a major operational incident at a large cloud service provider, such as Microsoft, Google or Amazon, could be significant. There is, therefore, a considerable case for the regulation of these cloud service providers to ensure high standards of operational resilience
From this report, “cloud services stood out as such a source of systemic risk for the financial system”
While I understand the concerns raised in the article and the report, is it fair to point the finger at the main service providers? It doesn’t state that these cloud service providers were at fault for the issues – it states that there is a concern about the risk moving forward. I would ask: is that risk real, or perceived out of the sensitivity of this being about people’s money? Rightly so – who wouldn’t be concerned about that? But who is at fault here, the banks themselves or cloud providers? The report doesn’t address that, and if the issues have occurred due to internal system failure then surely that makes the case for cloud providers being more involved even greater. Regardless of who is at fault, there is no doubt that the two need to work together to improve current systems and ensure these issues are reduced to an absolute minimum to alleviate public concern. Don’t get me wrong, I’m not dismissing these concerns to help my argument – if I didn’t have access to my own money due to an IT issue with my bank, I would be as angry as anyone else. But in spite of the findings of the report, I would still make an argument for cloud over internal systems – even for banks.
One final real-world example is from an email verification tool company (yes, even in the world of GDPR they still exist). Verifications.io suffered a major data breach in early 2019 which resulted in the details of over 800 million people becoming widely available to anyone with the knowledge to access the database – it was wide open with no encryption or security. Articles on the breach state that the data was held across four databases, all located on one server. They don’t state where the server was hosted, but I can guarantee that none of the big cloud service providers had any involvement or we would have known all about it. As soon as the breach was highlighted the data was removed, and the verifications.io website quickly followed, but by then it was too late. The company did argue that much of this data, such as email addresses and social media accounts, was already in the public domain, but that’s not much of an excuse when the numbers show the huge amount of data made readily available by one company. A search today shows that the domain for their website is for sale, pointing to the fact that a lack of security has far-reaching consequences for those responsible, who, it would appear, are now out of business.
That, to me, is the ultimate argument in favour of the cloud. It is in the interests of the large service providers to ensure your data is secure – they can’t afford the publicity if it goes wrong, and for that reason they employ an army of experts to ensure risk is mitigated to as low as reasonably practicable. No on-premise solution could ever meet that level of security; organisations simply don’t have the manpower or budget to maintain systems to the same standard.
Nothing in life or business is without risk, and a cloud solution will continue to have its detractors. My opinion, though, is that cloud is the better option over on-premise. However, you need to consider what is best for you and weigh up the pros and cons of each to ultimately decide what you believe to be the best solution for your business.
Wherever you are in your cloud journey, speak to us and we will have an informal discussion to offer advice on the best way forward for you.
What is Digital Transformation, and what exactly does it mean for you? Is this something you should be considering, or is it just the latest industry fad and buzzword that everyone is claiming to be an expert in?
Wikipedia define Digital Transformation as:
the use of new, fast and frequently changing digital technology to solve problems
Is that definition really helpful in terms of what it means for business?
Let’s break it down – there are two words in the phrase: digital and transformation. Digital implies you’re going to do something with technology; transformation implies change. So is digital transformation simply a technological change? Partly, but there is more to it than that. Let me focus on the transformation element first.
Transformation is a change, and change in business almost always comes from a driving need – something failing or not working as effectively as it should. That could be an operational change or a change of process. The use of technology is what facilitates the change – to improve ways of working. So the starting point shouldn’t be “I’m going to implement digital transformation in my business”; the starting point is “something isn’t working as effectively as it could – how do we improve it?”. Step one is to set out what isn’t working and what you want to achieve. Only after that is established should you look at technology and how it can help you reach the outcome you want.
By that I mean digital transformation isn’t saying “I want to implement some form of technology” – it’s recognising that something fundamentally needs to change in process and in mindset, with technology as the facilitator to reach that outcome.
As an example, let’s look at cloud migration.
Again, it’s a term that’s been kicked around a lot and one many organisations have successfully implemented within their business. To apply what I said at the start of this blog: what’s the driving need for change in a business that would lead to the conclusion that it needs to move to the cloud? (And “because everyone else is doing it” isn’t the right answer!)
Issues that could drive a need for cloud migration include outdated manual processes, expensive and outdated server architecture, consolidating data centres, flexibility of workforce location, responsiveness to customers, security and efficiency improvements. The list is not exhaustive, but it points to one thing: moving ways of working away from outdated and often expensive processes. The outcome is modernised business operations that make you more competitive as a supplier and more attractive as an employer.
As a practical example of improved business productivity, AWS reports that its customers say their workforces are 30-70% more productive as a result of migrating to the cloud.
In summary, yes Digital Business Transformation is the key term used today around technology and isn’t going away anytime soon – but the fundamental shift must be to recognise what the end goal is and then work out what the technological facilitator is. That is the key to making digital transformation a success.
Today, in a rainy Las Vegas, we got to find out more about some of the other new services released at AWS re:Invent.
One particularly interesting offering is Amazon Kendra, an enterprise search service powered by machine learning. The service indexes all sorts of information – files, websites, SharePoint, databases, etc. – and allows the user to search using natural language.
A demo of the system showed a search containing the phrase “Where is the IT Helpdesk”. A normal enterprise search system returns various keyword matches but does not understand that we are looking for the location of the IT Helpdesk. When the same search was run on Amazon Kendra, it searched some company documents and returned text stating that the IT Helpdesk was on the 1st floor of the demo company. The machine learning capabilities seem to provide significant improvements in returning search results which are more relevant to the user. It will be an interesting technology to investigate further.
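To make the contrast concrete, here is a toy sketch (our own illustration, nothing to do with Kendra’s actual implementation – the documents and filenames are invented) of what plain keyword scoring does with that query: every document sharing a few words with the question gets a score, including ones that only match filler words like “where” and “the”, and the result is a ranked list of documents rather than the answer itself.

```python
# Toy keyword search: rank documents by how many query words they contain.
# A hypothetical three-document corpus for illustration.
DOCS = {
    "it-support.txt": "The IT Helpdesk is located on the 1st floor, next to reception.",
    "expenses.txt": "Where possible, submit expenses by the 25th of the month.",
    "onboarding.txt": "Ask the IT Helpdesk for your laptop and badge on day one.",
}

def keyword_score(query: str, text: str) -> int:
    """Count how many of the query's tokens appear in the document text."""
    tokens = query.lower().replace("?", "").split()
    words = text.lower().split()
    return sum(1 for t in tokens if t in words)

query = "Where is the IT Helpdesk"
ranked = sorted(DOCS, key=lambda d: keyword_score(query, DOCS[d]), reverse=True)
# Every document scores above zero -- even expenses.txt, which matches only
# the filler words "where" and "the" -- and the user still has to open the
# top hit to find the actual location.
```

The natural-language approach Kendra demonstrated instead extracts the answer sentence itself, which is exactly what keyword counting cannot do.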
Another interesting service, again using machine learning, is Amazon Fraud Detector, which simplifies detecting fraud in transactions. The user provides email addresses, IP addresses and other historical transaction and account registration data, along with indicators of which transactions were fraudulent and which were not. A machine learning model, trained on this historical data, is then fed new transactions and indicates whether each one is likely to be fraudulent. The technology is built around the same concepts Amazon uses in its own e-commerce business.
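As a back-of-the-envelope illustration of that train-then-score idea (purely our own sketch – none of these function names or data come from the Fraud Detector API, which trains far richer models), you could learn historical fraud rates per email domain and IP address and use them to score new transactions:

```python
from collections import defaultdict

def train(history):
    """Learn fraud rates from labelled history: (email_domain, ip, is_fraud)."""
    stats = defaultdict(lambda: [0, 0])  # feature -> [fraud_count, total]
    for domain, ip, is_fraud in history:
        for feature in (("domain", domain), ("ip", ip)):
            stats[feature][0] += int(is_fraud)
            stats[feature][1] += 1
    return stats

def fraud_score(stats, domain, ip):
    """Average the historical fraud rate of the transaction's features."""
    rates = []
    for feature in (("domain", domain), ("ip", ip)):
        fraud, total = stats.get(feature, (0, 0))
        if total:
            rates.append(fraud / total)
    return sum(rates) / len(rates) if rates else 0.0

# Hypothetical labelled history.
history = [
    ("mailinator.com", "203.0.113.9", True),
    ("mailinator.com", "203.0.113.9", True),
    ("example.co.uk", "198.51.100.7", False),
    ("example.co.uk", "203.0.113.9", False),
]
stats = train(history)
risky = fraud_score(stats, "mailinator.com", "203.0.113.9")
normal = fraud_score(stats, "example.co.uk", "198.51.100.7")
```

The managed service packages this loop – ingest labelled history, train, then score new events – without the customer building the model themselves.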
Again, with Amazon Fraud Detector we’re seeing the packaging of complex machine learning applications into easier-to-use, packaged services.
Finally, another new service, slightly longer-term in its ambition, is Amazon Braket. This service allows quantum computing algorithms to be programmed and simulated in the cloud. We’re beginning to see quantum computing make progress, and services like Braket allow interested parties to gain experience and prepare for this new computing paradigm.
A three-hour keynote speech to kick off day 2 of AWS re:Invent might sound long, but in reality it felt very short indeed. Day 2 started with the keynote from AWS CEO Andy Jassy and, as you would expect, a large number of new services were announced. For us, as an AWS partner who develops serverless cloud and machine learning solutions, there were a couple of standouts.
A new machine learning service, CodeGuru, was introduced for performing code reviews on AWS code. Code reviews are always good practice, but waiting for developers to perform them, or for code reviewers to become available within an organisation, can delay finishing and deploying code. CodeGuru can integrate into the continuous integration/continuous delivery (CI/CD) pipeline and perform automated code reviews to what seems like a fairly deep level, identifying style issues, performance problems, resource leaks and concurrency issues. It currently supports Java, with other languages available ‘soon’.
Several new applications were announced around SageMaker, Amazon’s service for building machine learning (ML) models in the cloud. The stand-out for us was SageMaker Studio, an IDE to make it easier for developers and data scientists to build, train, debug, deploy and monitor machine learning models. This certainly fills a gap in Amazon’s ML offering, and the demos of the product looked like a major step forward in productivity for ML model developers.
Machine Learning Summit
In the afternoon we attended the Amazon Machine Learning Summit, where various organisations outlined how they use machine learning. We have always been of the opinion that machine learning and artificial intelligence can be applied to practically any organisation, and the diversity of applications presented at the summit supported that view.
The first speaker, from the Fred Hutch Cancer Research Center, said he was of the firm belief that cancer would be ‘cured within our lifetime’ and that ML/AI would be at the centre of making this possible. He then described how machine learning was being used to analyse and categorise the correct T-cells (cells which attack foreign bodies in the body) so that those T-cells can be boosted to attack specific tumours. He outlined how ML research and new drugs had drastically improved two-year survival outcomes for patients with the disease: thanks to the work undertaken, outcomes had improved to 50%, with hopes of a higher survival rate with further research.
Another speaker discussed machine learning in fish farming, where cameras and AI are used to detect sea lice, weigh fish and decide feed quantities. Other subjects covered included improving vertical farm efficiency to address the world’s looming food crisis, AI within the Internet of Things (IoT), using AI to combat deep fakes, and bias in AI.
The overall takeaway from today’s sessions is the sheer amount of work being put into artificial intelligence, and how the technology is being applied across a huge range of industries to improve outcomes – be it healthier humans, healthier fish or healthier crops.
Looking forward to what day 3 has in store.