Secure software delivery models

Jeff Shepherd
6 min read · Jul 1, 2021

Security is no longer a Non-Functional Requirement (NFR) tagged onto the end of a software project a few weeks before Go Live. If you treat security that way, you end up with ‘point-in-time’ security and an ever-decreasing level of security quality throughout the product’s lifecycle. New vulnerabilities emerge that put a hole in the side of your ship, and if you don’t patch them, the ship sinks, your customers suffer and ultimately your business suffers. Multiple strategies are currently being adopted across the industry to mitigate continuous security risk, but there is no silver bullet. Hiring, or training, people with security skills is part of every plan, but there are different ways of doing that. Below I describe and compare some different operating models, for technology leaders and developers alike, to consider if delivering secure software matters to you.

Note: Security Operations Centres (SOCs), looking at the wider business cyber risk, are out of scope in these models. These experts should be consulted by software delivery teams as part of a ‘just-enough’ governance and assurance process, but the models here are about finer-grained security controls and much more frequent engagement with, and by, developers.

Centralised Security Model

Centralised team delivering a service to other internal teams

A central pool of security talent, with skills and experience in software development. This team can relate to the challenges of day-to-day software delivery and works to provide automated processes and tooling that help delivery teams adopt secure practices easily and quickly. The advantage is a dedicated team lowering the barrier to security adoption for the wider community. However, building tooling that doesn’t provide value, or is too complex to use, would lead to a considerable amount of wasted effort and money.

This setup requires strong product ownership to mitigate those pitfalls, which might not be immediately apparent. Additionally, this model requires more capital expenditure with respect to hiring people, an investment your organisation may or may not be in a position to make.

Decentralised Security Model

Embedded security expert in dev team

Two options fit here: an embedded security expert in each dev team (or perhaps only in the most critical services), or a single security expert splitting their time across a couple of teams (akin to one person per Tribe, if you subscribe to the popular Spotify model of development team structure).

Security expert shared between multiple teams

This enables slightly more focus from the security side on aspects that affect the team directly, and enables natural team engagement without the product ownership of the centralised model. Whilst this looks like ‘team ownership of security’ on paper, in practice it could create a culture of ‘throwing security over the wall to the designated security developer’, which would be counter-productive and would not realise all the benefits of this setup. To mitigate this, the team lead would need to set clear expectations that developers include security in their way of working and are responsible for security as an aspect of delivery. The embedded security expert provides consultation and automated tooling as the team needs them.

This model also involves capital expense and, depending on the size of your organisation, could cost more than the centralised model. You also introduce the possibility of the extra team capacity being used for something else in the future, depending on who owns and decides what roles the team needs. This option can perhaps be considered an interim step, which also lends itself to using contractors over permanent staff to fill the role, and which might align with the skills injection desired.

T-shaped Developer Model

Individual security capability increased in dev team

Conversations about T-shaped developers go hand-in-hand with a ‘shift-left’ culture (you know the one: dev teams want autonomy, and everyone agrees it could be a cheaper, better and faster option, until the rubber hits the road, the team has a ton of extra work to do, and people are left surprised by the impact on delivery… written with realism, not cynicism, in mind!).

This approach focuses on increasing the security competency of each developer in the team, to improve the security posture of the software delivered over time. There is less capital expenditure here, and the return on investment comes through later than in the other models, but this is about long-term investment in people and the benefits that come through individual accountability, supported by continuous professional development.

Organisations might need to find security-conscious developers who already exist in the business and encourage them to spread the good word and build a community from the ground up. There should also be provision for formal training material (e.g. online learning) and the time to conduct that training in the normal working week (we aren’t asking people to upskill outside work time here!). There should also be opportunity to learn through doing. This might take the form of team sessions, for example hacking purposefully vulnerable web applications, taking time to conduct a Threat Modelling workshop, or reviewing existing source code and test coverage and adding cases for input validation, gaps in permission boundaries or encryption improvements.

Whilst I believe a reasonable approach to ensuring long-term success in this model relies upon security forming part of individual personal objectives (assuming performance management is a thing in your organisation, irrespective of its formality and regularity), it needs to be balanced against the expectations of individuals as a whole. For example, it’s unreasonable to expect security to be at the forefront of every developer’s mind after a couple of months. However, it’s not unreasonable for the team as a whole to have implemented some additional security controls after 6 months (for example, encryption at rest everywhere). It’s also not unreasonable to expect individuals in the team to be suggesting security controls that could be implemented within 12 months of introducing the objective.

If you follow this model through, you may reach a point where the existing codebase has been updated to a baseline level of security (most likely via some explicit technical direction setting) and the team then moves to a position where they autonomously suggest improvements, prototype them, put them into production and demonstrate iteration towards specific security goals. As a Technical Lead, I believe the effort required to guide and support the team to get to that place is infinitely worthwhile, both for the personal development of individuals along the way and for the improved quality of customer service delivery.

Final, final point: if you’re a developer looking to increase your individual contribution, impact and value in a professional environment, I’d recommend putting some time and energy into practical applications of secure development. Some things to consider, learn and implement are below (with short illustrative sketches after the list):

  • Encryption in transit: light reading on encryption that might be applicable in your current, or desired, domain. Try out some in-transit encryption (e.g. RSA encryption is common and a good foundation; see the first sketch after this list)
  • Encryption at rest: do your homework on the cloud providers’ options for encrypting data at rest (e.g. AES; an at-rest sketch follows the list)
  • Detecting vulnerabilities in third-party software: a common attack vector in today’s world. There are tools for each popular programming language (e.g. npm audit for JavaScript, pip-audit for Python), so find them and try them out (see the CI sketch after this list).
  • Configuration review: tighten up the permissions on system components (principle of least privilege), only open necessary ports (a port-check sketch follows the list), and consider who can access system logs (and what you are logging!)
  • Unit testing: review your unit tests (you have them, right?!) and include input validation checks (e.g. file naming conventions, file extensions, zip contents, whitelist of headers, user input, special characters), authentication and authorisation (e.g. JWT verification, user group membership validation), and SQL injection protection (if not covered by your chosen framework); a sample test follows the list
  • Integration testing: automated API tests to verify information exposure, HTTP header flags, authentication method validation, rate limits and usage plans (see the header-check sketch after this list)
  • End-to-end testing: deploy automated scanning tools (try OWASP ZAP and the Top Ten) in a continuous integration environment and act on findings; there are options for your APIs too, not just web apps (a ZAP sketch closes out the examples below).
  • Manual testing: you don’t have to be a penetration tester by trade to have a crack at breaking a system. You do, however, need permission from the system owner (and potentially other internal teams or external partners and IaaS providers). You can develop your skills against specific websites (OWASP Juice Shop) or sign up to bug bounty programs.
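
To make some of these concrete, here are a few illustrative sketches in Python. First, encryption in transit: a minimal RSA-OAEP example using the open-source cryptography package (my choice of library, not a requirement). In production, in-transit encryption is usually delegated to TLS; this simply shows the asymmetric foundation underneath.

```python
# Minimal RSA encrypt/decrypt sketch using the `cryptography` package
# (pip install cryptography). TLS normally handles in-transit encryption;
# this illustrates the asymmetric building block.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generate a 2048-bit key pair (the private key stays with the receiver)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The sender encrypts with the receiver's public key...
ciphertext = public_key.encrypt(b"a small secret payload", oaep)
# ...and only the private key holder can decrypt
assert private_key.decrypt(ciphertext, oaep) == b"a small secret payload"
```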
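
Encryption at rest: a minimal AES-256-GCM sketch, again with the cryptography package. In a real system the key would live in a cloud KMS or secret store; generating it locally here is purely for illustration.

```python
# AES-256-GCM at-rest encryption sketch using the `cryptography` package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in production: fetch from a KMS, never hard-code
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # a GCM nonce must be unique per encryption under the same key
ciphertext = aesgcm.encrypt(nonce, b"customer record", None)

# Persist the nonce alongside the ciphertext; both are needed to decrypt
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"customer record"
```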
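
Detecting vulnerable dependencies: a small CI-style wrapper around pip-audit (a real dependency-audit CLI for Python projects; the equivalent tool for your stack will differ). It leans on pip-audit’s documented behaviour of exiting non-zero when known vulnerabilities are found.

```python
# CI gate sketch: fail the build if pip-audit reports known vulnerabilities.
# Assumes pip-audit is installed in the environment (pip install pip-audit).
import subprocess
import sys

result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)

# pip-audit exits non-zero when vulnerable dependencies are found
if result.returncode != 0:
    sys.exit("Vulnerable dependencies found - upgrade before release")
```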
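
Configuration review: a quick port-exposure check. The hostname and expected-port set are placeholders I’ve invented; the point is to turn “only open necessary ports” into something a script can assert.

```python
# Sketch: flag open ports that are not on the expected list.
import socket

HOST = "myservice.internal"    # placeholder: the host under review
EXPECTED_OPEN = {443}          # assumption: only HTTPS should be exposed
PORTS_TO_CHECK = {22, 80, 443, 5432}

for port in sorted(PORTS_TO_CHECK):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        is_open = s.connect_ex((HOST, port)) == 0  # 0 means the connection succeeded
    if is_open and port not in EXPECTED_OPEN:
        print(f"Port {port} is open but not expected - review the configuration")
```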
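
Unit testing input validation: a pytest sketch for file-name checks. validate_filename is a hypothetical helper invented for illustration; the parametrised cases show the kinds of hostile inputs worth covering.

```python
# Sketch of unit tests for file-upload input validation (pytest style).
# `validate_filename` is a hypothetical helper, not from any framework.
import re
import pytest

ALLOWED_EXTENSIONS = {".csv", ".txt"}

def validate_filename(name: str) -> bool:
    # Reject path traversal and separators outright
    if "/" in name or "\\" in name or ".." in name:
        return False
    # Allow only simple names: word characters, one dot, one extension
    if not re.fullmatch(r"[A-Za-z0-9_\-]+\.[A-Za-z0-9]+", name):
        return False
    return any(name.endswith(ext) for ext in ALLOWED_EXTENSIONS)

@pytest.mark.parametrize("name,expected", [
    ("report.csv", True),
    ("notes.txt", True),
    ("../../etc/passwd", False),   # path traversal
    ("evil.csv.exe", False),       # double extension
    ("script.js", False),          # extension not in allowlist
    ("na me;rm -rf.txt", False),   # special characters
])
def test_validate_filename(name, expected):
    assert validate_filename(name) is expected
```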
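
Integration testing: checking security-relevant HTTP response headers and authentication behaviour with the requests library. BASE_URL and the endpoints are placeholders for your own API, and the exact header policy is yours to decide.

```python
# Integration-test sketch for security-relevant API behaviour.
import requests

BASE_URL = "https://api.example.com"  # placeholder for your service

def test_security_headers_present():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    headers = resp.headers
    assert "Strict-Transport-Security" in headers            # enforce HTTPS
    assert headers.get("X-Content-Type-Options") == "nosniff" # block MIME sniffing
    assert "Server" not in headers                            # avoid version disclosure

def test_unauthenticated_request_rejected():
    resp = requests.get(f"{BASE_URL}/orders", timeout=5)  # no auth token supplied
    assert resp.status_code in (401, 403)
```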
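
End-to-end scanning: driving OWASP ZAP from CI via its zapv2 Python client (pip install python-owasp-zap-v2.4). This assumes a ZAP daemon is already running and reachable on localhost:8080; the target URL and API key are placeholders.

```python
# Sketch: spider and actively scan a target with OWASP ZAP, then fail
# the pipeline on high-risk findings. Assumes a running ZAP daemon.
import time
from zapv2 import ZAPv2

TARGET = "http://localhost:3000"  # placeholder: the app under test
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://localhost:8080",
                     "https": "http://localhost:8080"})

# Spider the target to build the site tree, polling until complete
scan_id = zap.spider.scan(TARGET)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Run the active scan and wait for it to finish
scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Fail the CI job on any high-risk alerts
high_alerts = [a for a in zap.core.alerts(baseurl=TARGET) if a["risk"] == "High"]
assert not high_alerts, f"{len(high_alerts)} high-risk findings - fix before release"
```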
