Monday, 8 May 2017

REST Web Service API Guidelines

When building web services, one of the primary benefits of using REST over SOAP is the intuitive nature of service interfaces.  However, this simplicity can very easily be eroded.  Below are some suggested (and hence flame-proof) guidelines that could be followed to ensure continued interface simplicity:

Resource-Oriented
  • REST is resource-oriented, not service-oriented.  Resources are nouns, not verbs.

Addressable
  • Every resource must be addressable by means of at least one URI (name).  Names must be meaningful.
  • Clients cannot access resources directly - they deal in representations of the resource (e.g. XML, JSON, ...).
  • Resource representations would ideally themselves be addressable, so that they can be passed around as URIs (e.g. /rest/bookmarks.xml).  Using HTTP Accept headers to specify the representation is also acceptable, but should be offered in addition to the addressable URI, as sketched below.
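
For instance, a minimal Spring MVC-style sketch of both options might look like this (the BookmarkController, Bookmark and BookmarkRepository names are illustrative, not from any particular codebase):

```java
// Exposes the same resource both as addressable representations
// (/rest/bookmarks.xml, /rest/bookmarks.json) and via Accept-header negotiation.
// XML rendering assumes an XML message converter (e.g. Jackson's) is on the classpath.
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BookmarkController {

    private final BookmarkRepository repository;   // illustrative data access component

    public BookmarkController(BookmarkRepository repository) {
        this.repository = repository;
    }

    // Addressable representations: the format is part of the resource name.
    @GetMapping(value = "/rest/bookmarks.xml", produces = MediaType.APPLICATION_XML_VALUE)
    public List<Bookmark> bookmarksAsXml() {
        return repository.findAll();
    }

    @GetMapping(value = "/rest/bookmarks.json", produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Bookmark> bookmarksAsJson() {
        return repository.findAll();
    }

    // Accept-header negotiation on the plain resource URI, offered as well as the above.
    @GetMapping(value = "/rest/bookmarks",
                produces = { MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE })
    public List<Bookmark> bookmarks() {
        return repository.findAll();
    }
}

// Illustrative domain types, purely for the sketch (record requires Java 16+).
record Bookmark(String name, String url) {}
interface BookmarkRepository { List<Bookmark> findAll(); }
```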


Sunday, 7 May 2017

The Importance Of Simplicity

I remember, way back when, a number of colleagues engrossed in the latest Obfuscated C++ challenge: an exercise in making code so concise, so cryptic, that even hardened coders paled.

This always struck me as an odd thing to want to do.  Why make things more complicated than they have to be?  Are code simplicity and readability not the important things?  After all, software only spends a small fraction of its life being created.  The rest of the time it is being maintained, usually by different people who are not going to appreciate your obscure brilliance.

Keep it simple, smart-arse.

Generalising Specialists

There is this rumour going around that good architects need to write (production) code.  I think the term was coined by Scott Ambler of agilemodeling.com, or was it ThoughtWorks?  Anyway, I don't care, because I disagree, quite a lot.

See, I did write code, a lot of it.  I was good at it, and I still like to write code for my own pleasure, but my current, full-time role as an enterprise architect does not involve much coding.  As a result, I have become slightly detached from the way code is written in our company (tools, processes, etc.), and when I do write code, it is laborious and frustrating.  In fact, I would respectfully suggest that letting me near production code might: a) not be the best use of my time, and b) be dangerous.

It's not because I'm older, or wiser, or slower, or too important to code.  It's because I spend my time thinking about different abstractions: higher-level ones. The principles are the same, but the moving parts are larger.

See, I write enterprise code.

My IDE is PowerPoint ... or Word, if I'm feeling dangerous.

I am like the town planner.  Sure, I could have a go at building a house, but would you really want to live in it?

I know I wouldn't.

Black Ops Projects

I think good project portfolio management is an essential discipline for software development.

You have to know what you want to do, why you want to do it (in terms of measurable benefits), how much it is likely to cost, who will do it, and when it will be done; and afterwards, how much it actually cost and how much benefit was actually realised.

Then you have a defined pipeline of work, and software developers wake up in the morning knowing what they are supposed to be doing.

Enter the villain.

Our villain has a vague but very senior role.  He is responsible for improving something or other, and is brimming with vision, personality and the gift of the gab.  He has no time for pipelines.  His ideas are hot, relevant and need to be done now!  (Before he forgets them, or moves on to the next grand scheme.)

Our villain likes to pick on vulnerable teams, all smiles, and uses his charm and lofty position to lure or bully them into his cause.  He gets things done. Under the radar.

He is dangerous.

He needs to be taken out.

Illusory Comforts

I like process.

I appreciate that this is probably because it's in my nature to like order and predictability, but good process adds value, can often be automated, and frees up developers to do what they do best: create excellent software.

Bad process on the other hand is a killer.

It's a killer because it slows everything down.

It's a killer because it doesn't work, and is often bureaucratic.

But mostly it's a killer because it gives those in charge the illusion of progress.

I worked with a place once that outsourced their software development capability to an outfit that had CMM level 5 accreditation, and could prove it ... but developed the worst software I had ever encountered.  Yet senior management were blissfully unaware of the fact, or perhaps they were aware and didn't care.  They had achieved their outsourcing objectives, reduced resource costs, and achieved CMM level 5 in the process!  I bet the bonuses were good that year.

A bad leader does not trust his team to do their job.  He tries to measure them, to enforce success via metrics and bad process.  It offers him the illusory comfort of progress to see the hours-worked and lines-of-code-per-day metrics increasing, but damn those inconvenient holidays!

A good leader understands that his team consists of people, not resources. Humans who need autonomy, mastery and purpose. Motivated individuals who will take pride in their work, and do their best to get the job done well.

And that's all you can ask of them, really.

Saturday, 6 May 2017

Correctly Assessing Security Risk

I found a security issue some time back: the logging of a long-lived security session token.  This token enabled me to access an internal client data service, which only checked the validity of the token, not its right to perform the operation on the data I was accessing.  We had two types of token, managed by the same session token management service: one for clients (the type I had found in the logs), and one for trusted internal service accounts.  The client data service did not care about the difference, and as a consequence I was able to access all client data with my token.

I raised a critical incident, but surprisingly, nobody raised an eyebrow, not even InfoSec.  In fact, I was asked why I (little old me) was raising a critical incident.  After all, was there a crisis?  The incident was closed immediately and I had to instead raise a work item against the relevant team, who only fixed it some weeks later - there were more pressing delivery deadlines to consider.

Then, a week later, I found that some system was logging client passwords when they were changed.  This was clearly a critical incident, though I thought twice about raising it.  I did anyway, and boy did that stir the pond!  InfoSec were all over it, sending out a flurry of messages to senior management, and the issue was fixed that day.

Yet both incidents were critical.  Had I not used my trusty OWASP risk rating methodology spreadsheet to come up with an objective risk assessment for both?  You see, the ability to leak all our customer data is just as bad as the ability to log in to a few accounts and cause mayhem.  We had strict money-laundering controls, so financial theft was not the main threat; brand reputation damage was.
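
For reference, the arithmetic behind that spreadsheet is straightforward.  Below is a minimal sketch of the OWASP Risk Rating Methodology calculation; the factor scores in main are purely illustrative and are not the actual assessments from either incident:

```java
// OWASP Risk Rating Methodology, in brief: each factor is scored 0-9, likelihood and impact
// are each the average of their factors, averages are bucketed (<3 LOW, <6 MEDIUM, else HIGH),
// and the two buckets are combined via the likelihood x impact matrix into an overall severity.
import java.util.stream.IntStream;

public class OwaspRiskRating {

    static double average(int... factors) {
        return IntStream.of(factors).average().orElse(0);
    }

    static int bucket(double score) {            // 0 = LOW, 1 = MEDIUM, 2 = HIGH
        return score < 3 ? 0 : score < 6 ? 1 : 2;
    }

    static String level(double score) {
        String[] levels = {"LOW", "MEDIUM", "HIGH"};
        return levels[bucket(score)];
    }

    static String severity(double likelihood, double impact) {
        // The OWASP overall-severity matrix collapses neatly onto the sum of the two buckets.
        String[] overall = {"NOTE", "LOW", "MEDIUM", "HIGH", "CRITICAL"};
        return overall[bucket(likelihood) + bucket(impact)];
    }

    public static void main(String[] args) {
        // Illustrative scores. Likelihood factors: threat agent (skill, motive, opportunity,
        // size) and vulnerability (ease of discovery, ease of exploit, awareness, detection).
        double likelihood = average(6, 5, 9, 6, 7, 8, 6, 3);
        // Impact factors: technical (confidentiality, integrity, availability, accountability)
        // and business (financial damage, reputation damage, non-compliance, privacy).
        double impact = average(9, 7, 3, 7, 5, 9, 6, 8);

        System.out.printf("likelihood=%.2f (%s), impact=%.2f (%s), severity=%s%n",
                likelihood, level(likelihood), impact, level(impact),
                severity(likelihood, impact));
    }
}
```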

So what the hell was going on?

I think the issue is that of understanding security risk.  Most people can relate to a password breach, but the other breach was too technical for the average punter.  And there was the rub.

I think the answer is to establish rules like: if the risk rating is objectively determined to be critical, then it is critical; not: do I think it is critical?  Granted, some of the inputs into the OWASP risk rating model are subjective, but they are less technical, and thus harder to get completely wrong.

It has been suggested to me that the business needs to accept the risk, but are they really qualified to do so?  Surely it is InfoSec who, on behalf of the business it serves to protect, has to define and enforce SLAs for security risks?




Friday, 5 May 2017

Uniform Service Authentication and Authorisation

The problem with using short-lived access tokens for the authentication and authorisation of service requests is that they only really work for synchronous interactions which are themselves short-lived.  Access tokens in potentially long-lived asynchronous messages run the risk of expiring before the message is processed.

An interesting alternative is to use certificates to sign (and optionally encrypt) messages.  For example, service A sends a message via some number of queues to service B.  Service A signs the message with its private key.  Service B receives the message and uses service A's certificate (i.e. its public key) to validate the signature; since only service A's private key could have produced a signature that verifies with that public key, service B knows the message originated from service A, and can authorise the request accordingly.  This works equally well for synchronous calls.
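
As a sketch of the signing step, the JDK's java.security API is sufficient.  The choice of RSA ("SHA256withRSA") is an assumption, and obtaining the private key and certificate (e.g. from a keystore issued by a central certificate service) is left out:

```java
// A minimal sketch of the sign-and-verify step using the JDK's java.security API.
// "SHA256withRSA" assumes RSA keys; obtaining the private key and certificate
// (e.g. from a keystore issued by a central certificate service) is not shown.
import java.security.PrivateKey;
import java.security.Signature;
import java.security.cert.X509Certificate;

public final class MessageSigner {

    // Service A: sign the serialised message with its private key before sending it.
    public static byte[] sign(byte[] message, PrivateKey serviceAPrivateKey) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(serviceAPrivateKey);
        signer.update(message);
        return signer.sign();   // carried alongside the message, e.g. as a header property
    }

    // Service B: verify the signature using service A's certificate (public key).
    // A valid signature proves the message came from service A and was not altered in transit.
    public static boolean verify(byte[] message, byte[] signature,
                                 X509Certificate serviceACertificate) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(serviceACertificate.getPublicKey());
        verifier.update(message);
        return verifier.verify(signature);
    }
}
```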

Of course, this means a lot of certificates deployed to a lot of places, but certificates could be obtained from a central service, and access control applied.

So service identity is sorted, but what about user identity?

I suggest that users and core services need to be separated by a gateway layer (application or service), which would be responsible for establishing user identity by some means (basic auth, OAuth, Kerberos, or whatever), and then sending a signed user identity assertion (along with any roles the user might have) with requests and messages, much as is done in SAML.  Downstream services would trust the gateway to establish this identity appropriately, and that trust would be ensured by means of the certificate mechanism above.
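
Roughly, the gateway could attach something like the following to each request or message; the header layout, field names, expiry and the reuse of the MessageSigner helper from the sketch above are all illustrative:

```java
// A sketch of the gateway building a signed identity assertion to forward with a request.
// The field layout and five-minute expiry are illustrative; signing reuses the certificate
// mechanism sketched above (MessageSigner is the hypothetical helper from that sketch).
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.time.Instant;
import java.util.Base64;

public final class IdentityAssertionGateway {

    public static String buildAssertionHeader(String userId, String roles,
                                              PrivateKey gatewayPrivateKey) throws Exception {
        // Identity already established upstream via basic auth, OAuth, Kerberos, etc.
        String assertion = String.format(
                "{\"sub\":\"%s\",\"roles\":\"%s\",\"iss\":\"gateway\",\"exp\":\"%s\"}",
                userId, roles, Instant.now().plusSeconds(300));

        byte[] signature = MessageSigner.sign(
                assertion.getBytes(StandardCharsets.UTF_8), gatewayPrivateKey);

        // Downstream services verify the signature with the gateway's certificate,
        // then trust the asserted user and roles when authorising the request.
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        return b64.encodeToString(assertion.getBytes(StandardCharsets.UTF_8))
                + "." + b64.encodeToString(signature);
    }
}
```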


If you're looking for an identity and access management system, then check out OpenAM.  ForgeRock are doing an excellent job of supporting it, and the major releases are open sourced and free to use.  We use it at IG for SAML, OAuth, Windows SSO (Kerberos) and LDAP authentication, amongst other things.  It is very easy to configure, either manually or scripted (e.g. via Puppet), and provides us with a resilient, scalable, standards-based authentication capability.  It also provides policy-based access management, but we've not ventured there yet.

And there it is.


I Hate Documentation!

For a software architect, and someone who tinkers with words, this is perhaps an odd thing to say, but it is true: I hate documentation.

I hate it because it is laborious, very often serves no purpose other than to have been produced, and rapidly gets out of date.  I have followed methodologies where every model under the sun is produced, code is generated, and round-trip engineering is attempted.  What were we thinking?

But documentation can add value.  Code is a very low-level, cumbersome way to get to grips with a system.  A few high-level component or sequence diagrams can add a lot of value.  Documentation helps you to navigate a system, and understand why something was built.

So what to do?

I think the first thing to do is to distinguish between transient and permanent documentation.

Transient documentation is correct at a point in time, and very often only exists on a whiteboard.  It serves to help think about a solution, but has a very short lifespan.  After it has served its purpose, it gets to rest in peace in some archive.

Permanent documentation, however, continues to exist alongside the system: a living description that helps one to understand what was built.  Permanent documentation needs to be owned and managed, and, being managed, needs to be kept up to date.  This means that if a project impacts a system, the owner(s) of that system's permanent documentation need to be notified, so that the documentation can be updated.  Of course this is costly, so you do it only for the documents you care about.

But this is not the end of the story.  Generating navigable documentation from your systems is a powerful way of supplementing your formal documentation.  For example:

  • using tools like Swagger to generate invokable service interface documentation.  
  • using deployment scripts to publish deployment information to configuration management database systems such as ServiceNow.
  • using application performance monitoring technology like AppDynamics to show system deployments and interactions.  
  • using standardised logging and log aggregation (operational intelligence) tools like Splunk to help trace message flows through systems.
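
To illustrate the last point, here is a minimal sketch of standardised logging using a correlation id and SLF4J's MDC; the X-Correlation-Id header name and the MDC key are illustrative choices:

```java
// A sketch of a correlation-id filter (Servlet 4.0+, where Filter.init/destroy have defaults).
// Each request carries an id in an illustrative X-Correlation-Id header; it is placed in SLF4J's
// MDC so every log line for that request can be joined up in Splunk and the flow traced.
import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;

public class CorrelationIdFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String correlationId = ((HttpServletRequest) request).getHeader("X-Correlation-Id");
        if (correlationId == null) {
            correlationId = UUID.randomUUID().toString();    // start of a new flow
        }
        MDC.put("correlationId", correlationId);             // referenced by the log pattern
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("correlationId");                     // avoid leaking into other requests
        }
    }
}
```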




Evil Session Tokens

So we build a new web application, SIMPLES.COM.  Clients log in, over HTTPS of course, and a session token is issued.  Since we don't want the client to have to log in frequently, we give the token a long lifetime, or at least a way of using it to get a new one.

The application has to persist the token in the browser, so cookies are used.  We considered browser local storage, but cookies seemed the best way to guarantee wide browser compatibility.

The application then validates every HTTP operation, using the session token in the cookie, and authorises access as appropriate.

Sorted.

Enter the hacker.

Client Joe Bloggs receives an email with a phishing link to the dodgy website S1MPLES.COM (note the name is different), and clicks on it.  Joe is taken to a page which fires a few sneaky GET requests at the real SIMPLES.COM website; Joe's browser helpfully attaches the session cookie (it is, after all, the correct domain), and the hacker now has access to Joe's data.

How do we fix this?

The first thing to do is to stop using cookies for authentication.  The web app must instead attach a session token header to each request.  That way the above cross-site request forgery (CSRF) attack is not possible, because the browser will not attach the header automatically.
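
Sketched below is the server-side counterpart of that scheme: authorisation is derived only from an explicit header, and any cookie the browser attaches is ignored.  The X-Session-Token header name and the SessionStore lookup are illustrative:

```java
// A sketch of header-based session validation. Cookies are deliberately ignored for
// authentication, so a forged cross-site request (which can only ride on cookies) is rejected.
import java.util.Optional;
import javax.servlet.http.HttpServletRequest;

public class HeaderSessionValidator {

    /** Illustrative lookup of server-side session state by token. */
    public interface SessionStore {
        Optional<String> userIdForValidToken(String token);
    }

    private final SessionStore sessions;

    public HeaderSessionValidator(SessionStore sessions) {
        this.sessions = sessions;
    }

    /** Returns the authenticated user id, or empty if the request is not authorised. */
    public Optional<String> authenticate(HttpServletRequest request) {
        String token = request.getHeader("X-Session-Token");   // attached by the web app itself
        if (token == null) {
            return Optional.empty();                            // no header, no access
        }
        return sessions.userIdForValidToken(token);             // expired/unknown tokens rejected
    }
}
```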

However, we now have a long-lived session token being passed around, but that's OK, because we use HTTPS.

Except we don't.  Not all the time.

SIMPLES.COM is accessible over HTTP.  Only the login page and the secure site are served via HTTPS.

So Joe, being a lover of coffee and free WiFi, gets caught by a man-in-the-middle attack by Mr Evil and his portable WiFi router.  Mr Evil intercepts the requests, and with them the session token sent with each one.

Mr Evil now has full access to Joe's account, for a long time.

The answer is to make SIMPLES.COM HTTPS only, and to use HSTS to ensure no opportunity for a man-in-the-middle attack exists.  And to add all the recommended security headers, e.g. a Content-Security-Policy.
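
The response-header side of this is small enough to sketch; the values below are common recommendations (a one-year HSTS max-age and a restrictive Content-Security-Policy), not a definitive policy:

```java
// A sketch of adding the recommended security headers to every response
// (Servlet 4.0+ Filter; the specific header values are illustrative defaults).
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SecurityHeadersFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse http = (HttpServletResponse) response;
        // HSTS: browsers will refuse to talk to the site over plain HTTP for a year.
        http.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        // Content Security Policy: only load content from our own origin.
        http.setHeader("Content-Security-Policy", "default-src 'self'");
        http.setHeader("X-Content-Type-Options", "nosniff");
        http.setHeader("X-Frame-Options", "DENY");
        chain.doFilter(request, response);
    }
}
```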

Cool.  Mr Evil shrugs, and picks on someone else.

Much later, Mr Bored Developer is browsing through the application logs (helpfully made available via a log aggregator), when he notices that the session tokens are logged for all to see.

Not so cool.

So what do we do?

One way is to use an OAuth-style approach and issue not one session token but two: a short-lived access token, and a long-lived refresh token.  The refresh token is stored by the application in the browser, but is never used for access, only to request new access tokens.
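
A sketch of that refresh flow, assuming a standard OAuth 2.0 token endpoint (the endpoint URL and client id below are illustrative):

```java
// A sketch of exchanging a long-lived refresh token for a new short-lived access token,
// assuming a standard OAuth 2.0 token endpoint. The endpoint URL and client id are illustrative.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class TokenRefresher {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static String refreshAccessToken(String refreshToken) throws Exception {
        String form = "grant_type=refresh_token"
                + "&refresh_token=" + URLEncoder.encode(refreshToken, StandardCharsets.UTF_8)
                + "&client_id=simples-web";                              // illustrative client id
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://simples.com/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        // The response body is JSON containing the new access_token (parsing omitted here).
        return response.body();
    }
}
```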

Of course these access tokens could still be leaked, but, being short-lived, they should expire very quickly.

Much better.

P.S. Please, please don't build your own security solutions!  Spring Security and mature open source identity and access management systems such as OpenAM are a much better way to go.


Application Security Function

Application Security is the software development concern of proactively ensuring that the applications being built, and integrated with, are secure.  This will require that application security becomes a standard focus for all software development teams, along with delivery, architecture, and quality assurance.

I suggest that this needs to be achieved through:

Education

Everybody (including the business and delivery) needs to understand the importance of security.  At IG we recently had a very positive experience with a consultant who came into our offices, spent time with our development teams educating them and instilling enthusiasm for the subject, and then closed with a company-wide demonstration of our application vulnerabilities at the time.  The presence of C-level executives at these demonstrations led ultimately to the creation of an application security function (in addition to our already quite mature InfoSec function).

Standardisation

Adopt industry guidelines such as those from OWASP to ensure a consistent, best-practice approach to security.

Organisation

Security, like quality, does not happen by accident; it requires organised effort.  Create a team of security champions, whether physical or virtual, to:
  • collaborate on application security decisions
  • raise awareness of application security best practice in development teams
  • help teams understand application security threats via threat modelling
  • help teams secure their applications via security test suites
  • provide a developer communication and feedback loop on security matters
  • collaborate closely with InfoSec, PMO and Operations to ensure appropriate goal alignment - resourcing security work will be a key challenge

Process

Integrate security with your software development lifecycle, specifically:
  • create an effective security monitoring, incident tracking and resolution process
  • prioritise issues using the OWASP risk rating framework
  • require teams to maintain security threat models for their applications
  • create security cheat sheets and code review checklists
  • create automated security test suites

Testing

Testing is the only way to confidently assert that an application meets its requirements, and this is no different for application security.  All applications should be required to have automated security test suites with adequate coverage.  In addition, periodic, independent third-party penetration tests and architecture reviews should be performed.
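
A couple of such tests are sketched below (JUnit 5, with an illustrative base URL and endpoint); a real suite would go much further, e.g. covering the OWASP Top 10:

```java
// A sketch of an automated security smoke test: unauthenticated requests are rejected,
// and the expected security headers are present. The base URL and endpoint are illustrative.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class SecuritySmokeTest {

    private static final String BASE_URL = "https://test.example.com";   // illustrative
    private final HttpClient http = HttpClient.newHttpClient();

    @Test
    void unauthenticatedRequestsAreRejected() throws Exception {
        HttpResponse<Void> response = http.send(
                HttpRequest.newBuilder(URI.create(BASE_URL + "/api/accounts/42")).GET().build(),
                HttpResponse.BodyHandlers.discarding());
        assertEquals(401, response.statusCode());   // no token, no access
    }

    @Test
    void securityHeadersArePresent() throws Exception {
        HttpResponse<Void> response = http.send(
                HttpRequest.newBuilder(URI.create(BASE_URL + "/")).GET().build(),
                HttpResponse.BodyHandlers.discarding());
        assertTrue(response.headers().firstValue("Strict-Transport-Security").isPresent());
        assertTrue(response.headers().firstValue("Content-Security-Policy").isPresent());
    }
}
```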

The Challenge

Doing security is like taking out an insurance policy.  You don't have to do it, and you might get away without it, but can you afford the consequences if you don't?

What do you have to lose?

Probably a lot.
