Category Archives: API Development

The Backends for Frontends (BFF) Pattern

Many textbooks assume that your frontend will just call a single, beautiful, and secure API, but in the real world this is rarely the case…

For example, imagine a bank that offers both banking and insurance to its customers. The bank has acquired the insurance business, which already had an insurance solution. The customer master data is stored in a CRM system. So if we were tasked with developing a customer self-service portal for the bank, the portal would need to call three different systems (CRM/Bank/Insurance), with three different protocols, and three different security schemes, just to get a basic overview of a customer’s engagement with the bank.

A common way to overcome this imperfect setup is to create a new backend in front of the real backend(s) and then design the perfect API for the frontend.

This is known as the Backends for Frontends (BFF) pattern and an example (using our imaginary bank) is shown below:

The Web BFF in the diagram above can expose a simple GET /customers/{id} REST operation that our frontend can call instead of dealing with the complexity of calling and integrating three different systems.

Besides respecting the separation of concerns principle by separating our presentation logic from our integration logic, we get many more benefits from using a BFF:

  • Call the backends in parallel: We can call the backends in parallel and perhaps respond faster to the end user.
  • Filter the responses: We can remove internal or sensitive data, such as an unwanted customer flag, or unnecessary data that just adds complexity to the frontend and drains battery power when parsing on mobile devices.
  • Transform the responses: We can transform the responses to the frontend into something more usable, such as translating internal codes to something more descriptive. For example, a code in a job field can be translated from SH_CLERK to Shipping Clerk.
  • Enhanced security: We can add extra security in the BFF, such as OAuth2, to protect the insecure backend solutions, and we can implement single sign-on, so the frontend doesn’t need to deal with different authentication methods in different backend systems.
  • Handle different protocols: The BFF can call FTP, SOAP, REST, GraphQL, and other types of services, but still use a single protocol when interacting with the client.
  • Encapsulate advanced business logic: Issuing a new insurance policy may be a complex operation that requires multiple service calls, but this can be simplified for the clients into a single endpoint with only the absolute minimum data in a flat structure.
  • Caching across clients: Some of the backends may be slow, so caching across different clients may be an effective way to deliver acceptable response times to the end users. Moreover, if we need to call usage-based external APIs, it may also be worth caching those responses across clients to reduce the cost.
  • Protect the client from changes to the backend: Changes in the backends’ APIs need not result in changes in the UI; they can be handled in the BFF.
  • It’s still just code: A BFF doesn’t involve canonical data models, ESBs, or other old-school integration patterns that are complex and time consuming to deal with; it’s just plain code. Moreover, a change in the BFF will not affect other systems, like with an ESB, so it is better aligned with DevOps and microservice thinking.
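
To make a couple of these benefits concrete, here is a minimal Node.js sketch of the GET /customers/{id} operation. The backend calls are mocked, and every name (functions, fields, codes) is invented for illustration:

```javascript
// Sketch of a hypothetical BFF operation behind GET /customers/{id}.
// In a real BFF these three calls would go to the CRM, Bank, and Insurance systems.
const fetchCrmCustomer = async (id) =>
  ({ id: id, name: "Ada Smith", jobCode: "SH_CLERK", unwantedCustomer: true });
const fetchBankAccounts = async (id) => [{ iban: "DK5000401234", balance: 42 }];
const fetchInsurancePolicies = async (id) => [{ policyNo: "P-7", type: "Car" }];

const JOB_TITLES = { SH_CLERK: "Shipping Clerk" };

async function getCustomerOverview(id) {
  // Call the three backends in parallel.
  const [customer, accounts, policies] = await Promise.all([
    fetchCrmCustomer(id),
    fetchBankAccounts(id),
    fetchInsurancePolicies(id),
  ]);
  // Filter out internal fields (unwantedCustomer) and translate internal codes.
  return {
    id: customer.id,
    name: customer.name,
    job: JOB_TITLES[customer.jobCode] || customer.jobCode,
    accounts: accounts,
    policies: policies,
  };
}
```

The frontend now makes one call and gets one tidy JSON object back, regardless of how messy the systems behind it are.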

The main drawback of the BFF pattern is that, at least in the beginning, it can seem like extra work for the frontend team (which should be the one developing and owning the BFF), but it usually pays off if the underlying backends are non-trivial. Remember that there are no technical restrictions on what language/framework we can use for the BFF; if the frontend team is skilled in JavaScript, they may be more comfortable writing the BFF in Node.js, and that’s perfectly fine.

Don’t Limit Your REST API to CRUD Operations

I think one of the best things about RESTful web services is the Collection Pattern. It’s a really smart and developer-friendly way of designing a REST service.

For example, the Task REST API below, which is taken from my company’s REST API, is a typical example of how the collection pattern looks:

The collection pattern is so widely adopted that even a REST newbie, who has never seen this API before, will be able to guess the description in the Task column based on the content in the Method and Path columns.
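
For reference, a typical collection-pattern table for a Task API looks roughly like this (reconstructed here as an illustration; it is not the exact table from the original):

```
Method  Path        Task
------  ----------  --------------------
GET     /tasks      List all tasks
POST    /tasks      Create a new task
GET     /tasks/17   Get task 17
PUT     /tasks/17   Update task 17
DELETE  /tasks/17   Delete task 17
```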

The collection pattern is also really smart from a code perspective, because many frameworks have some sort of Active Record implementation on top of the collection pattern, so the framework can automatically wrap the whole REST service in a convenient way.

For example, in the old AngularJS framework they had this wonderful $resource factory that you could simply give the URI of a REST service that followed the collection pattern – and then the $resource factory would automatically figure out the rest (no pun intended!)

var Task = $resource('/tasks/:taskId', {taskId: '@id'});

var task = new Task();
task.description = "Put a man on the moon.";
task.assignee = "James Webb";
task.$save();  // POSTs the new task to /tasks

While the collection pattern is really clever and so easy to use – and the best solution for almost all REST services – there are some edge cases where its CRUD approach just doesn’t make sense and other patterns should be considered.

Now it gets controversial…

When the web was the new big thing – and online pet shops were worth hundreds of millions of dollars – you would see web pages with HTML forms like this: 

<form method="POST" action="send_mail.cgi">
    <input type="text" name="subject">
    <input type="text" name="message">
    <input type="submit" value="Send Message">
</form>

While you can long for those innocent days, when you could publish code like that on the web and not be flooded with spam, the important thing here is that you can also call the send_mail.cgi script directly and use it as a web service; for example, using the small jQuery script below:

var mail = {
    subject: "Man walks on the moon",
    message: "Armstrong and Aldrin become the first men on the moon..."
};

$.ajax({
    type: "POST",
    url: "/send_mail.cgi",
    contentType: "application/x-www-form-urlencoded",
    data: mail,
    success: function() {
        console.log("Mail sent!");
    }
});
Now, I will argue that send_mail.cgi is a RESTful web service (!) even if it’s a really poorly designed one and a simple POST /mails service would have been a lot nicer!

If you finish the demanding, yet satisfying task of reading Roy Fielding’s PhD thesis, which defines the REST architectural style, you will see that it says nothing about limiting our REST services to CRUD operations and it also says nothing about limiting ourselves to the collection pattern…

In fact, Fielding later wrote a blog post about the use of the POST method in REST, and said, “As long as the method is being used according to its own definition, REST doesn’t have much to say about it.”  And if we read the HTTP specification we can see that it doesn’t limit the use of POST to adding new items to collections – and if we read the URI specification we can see that it doesn’t limit our URI naming to plural nouns…

So POST /send_mail.cgi is OK from a specification point of view and can be considered RESTful…

So what are you saying?

So what am I saying? Is this the sacking of Rome? Can we now all go crazy with POST /add-new-order.cgi and GET /find-my-orders.xml? No laws! No limits!

Of course not. I still think that the Collection Pattern is the right choice for almost all RESTful web services – and it should be the default choice for any new RESTful web service – because it’s so widely adopted and easily recognizable by most API users.

However, there are edge cases where it makes sense to use other patterns, such as the Controller Pattern. For example, if I have a REST service for rockets, then one does not simply launch a rocket (or walk into Mordor!): you need to provide launch codes, and the rocket needs to go through multiple stages before actual take-off, which goes way beyond just changing the value of a field in the resource representation. So for this scenario I would add a controller subresource:

POST /rockets/43/launch-rocket

When breaking with the Collection Pattern I really like the verb-noun naming of the URI, such as launch-rocket. This is no doubt because I read Code Complete way too many times (!) but also because it makes it obvious to the API user that it isn’t part of the Collection Pattern. On top of that, remember to add a link to the controller subresource in the resource representation to make the API user aware that the subresource exists:

    "id": 43,
    "name": "Apollo 11",
    "state": "Ready for launch",
    "_links": {
        "self": {"href": "/rockets/43"},
        "launch-rocket": {"href": "/rockets/43/launch-rocket"} 

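To sketch what such a controller operation might do on the server (the launch codes, stages, and function names here are all invented):

```javascript
// Hypothetical handler behind POST /rockets/{id}/launch-rocket.
// Launching is a guarded, multi-stage process -- not a simple field update.
function launchRocket(rocket, launchCodes) {
  if (rocket.state !== "Ready for launch") {
    throw new Error("409 Conflict: rocket is not ready for launch");
  }
  if (launchCodes !== rocket.launchCodes) {
    throw new Error("403 Forbidden: invalid launch codes");
  }
  // In reality each stage would involve further checks and delays.
  for (const stage of ["Countdown", "Ignition", "Lift-off"]) {
    rocket.state = stage;
  }
  return rocket;
}
```

The point is that the controller subresource wraps logic that has no natural home in a pure CRUD update.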
The point of this post isn’t that the Collection Pattern is bad. In fact, it’s the right choice for almost all REST APIs and everybody will love you for using it! 🙂 The point is that Collection Pattern != REST, and you still have some wiggle room for edge cases that don’t fit neatly into the Collection Pattern, without losing your API’s RESTfulness or the desirable properties that come with this architectural style.

How to Model Workflows in REST APIs

REST Services are awesome for performing basic CRUD operations on database tables, but they get even more exciting when you realize that they can also be used for modeling workflows and other advanced scenarios.

To show this point, let’s take the blog post workflow below and expose it as a RESTful Web Service:

There are lots of ways you can implement this, so in the following sections we will try three different approaches and look at the pros and cons of each.

Let’s get started!

1. Use an Attribute for the Workflow’s State

The easiest way to model the blog workflow is just to store the workflow state as an attribute on the blog post resource:

    "id": 54301,
    "status": "Draft",
    "title": "7 Things You Didn’t Know about Star Wars",
    "content": "George Lucas accidentally revealed that…"

If a client wants to move the blog post to a new state, it just updates the status attribute to the desired state.

It’s an easy solution and the popular blogging software WordPress is basically using this approach in their REST API.

But when you start to dig deeper into it, you realize that it comes with some serious drawbacks.

The first is that the front-end engineer who writes the client code is forced to search through the API documentation to see what values can be used in the status attribute. This conflicts with the idea that REST Services should be self-describing and not rely on out-of-band documentation.

But this drawback can be fixed by adding a metadata service where the client can get a list of all legal values for the status attribute.

A more serious drawback is that the engineer also needs to look in the API documentation to see what workflow transitions are possible (i.e. can you jump directly from Draft to Published?) and code all these workflow rules in the client code.

This means that business logic is leaking into the client, so if there are many different types of clients (mobile apps, websites, etc.) then each client will be forced to re-implement the workflow logic in their own code, which is not a cost-effective way to do software development.

But even worse, it breaks the fundamental software engineering principle of Don’t Repeat Yourself (DRY) and it violates the separation of concerns between the client and server, which makes it even harder to maintain and evolve the software.
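
To see what this duplication looks like in practice, here is the kind of transition table every client would have to hard-code (the states and rules below are assumptions based on the workflow described in this post):

```javascript
// Workflow rules duplicated in the client -- every client needs its own copy,
// and all copies must be kept in sync with the server's actual rules.
const LEGAL_TRANSITIONS = {
  Draft: ["Review"],
  Review: ["Published", "Draft"],
  Published: [],
};

function canMoveTo(currentStatus, newStatus) {
  return (LEGAL_TRANSITIONS[currentStatus] || []).includes(newStatus);
}
```

Every mobile app and website calling the API would carry its own version of this table, and a change to the workflow on the server breaks them all.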

2. Use Hyperlinks for Workflow Transitions

So what should you do if you have transition rules in your workflow, but you don’t want all the bad stuff I mentioned in the previous section?

An alternative approach is to model each workflow transition as a subresource and let clients use HTTP’s POST method on these subresources to perform the transition. This is inspired by the action pattern in PayPal’s API Standards.

On top of the subresources, you add hyperlinks in the response to let the client know what workflow transitions are possible in the current state.

So with this approach, the response will look like this:

    "id": 54301,
    "status": "Draft",
    "title": "7 Things You Didn’t Know about Star Wars",
    "content": "George Lucas accidentally revealed that…",
    "_links": {
        "sendToReview": {
            "description": "Send to Review",
            "href": "/posts/54301/review"

The smart thing is that the _links section is automatically updated to show what workflow transitions are available in the current state. So in the example above, you can see that the blog post is in the Draft state, and from there you can make the Send to Review transition to move the post to the Review state.

So if you call POST /posts/54301/review, you move the blog post to the Review state, and then the server will update the _links section to show what workflow transitions are possible in this new state:

    "id": 54301,
    "status": "Review",
    "title": "7 Things You Didn’t Know about Star Wars",
    "content": "George Lucas accidentally revealed that…",
    "_links": {
        "publishPost": {
            "description": "Publish Post",
            "href": "/posts/54301/publish"
        "rejectPost": {
            "description": "Reject Post",

The benefit of this solution is that clients no longer need to implement workflow logic in their own code, which means that business logic is no longer leaked into them.

It also reduces the risk of poorly constructed links in the client code — which is a frequent cause of defects in REST clients — because the clients get the links from the server.

Another really cool thing is that if the client needs to display a Next Action menu, it can simply loop through the values in the _links section and use them as menu items.
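
A client-side helper that builds such a menu could look like this (a sketch; the link format follows the examples above):

```javascript
// Build a "Next Action" menu from the _links section of a response.
// Every entry except "self" becomes a menu item.
function nextActionMenu(resource) {
  return Object.entries(resource._links || {})
    .filter(([rel]) => rel !== "self")
    .map(([rel, link]) => ({ label: link.description, href: link.href }));
}
```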

Finally, it also fits nicely with REST’s goal of self-discovery and HATEOAS.

The drawback compared to the approach in the previous section is that the response is bigger, because it includes the _links section. Another drawback is that the interaction between the client and server has become a little more chatty, because you now need two requests if you want to update a blog post and send it to review.

But I think both of these drawbacks are pretty minor compared to what you get out of it.

A more serious concern is that if you have an advanced workflow, you might end up with a massive number of subresources (i.e. one for each workflow state), which might look a bit messed up.

Another concern is that the server will need to know all states at design time to create subresources for them. This won’t be a problem for most workflows, but if you offer sophisticated workflow functionality where users can customize the states and transitions to fit their special needs it could be problematic.

3. Use a Subresource for Workflow Transitions

So how do you model customizable workflows?

For inspiration, let’s take a look at the issue-tracking tool JIRA, which allows (admin) users to configure their own workflows at runtime. How do they expose this in their REST API?

On their issue resource, they added a transitions subresource where the client can get a list of possible transitions from the issue’s current state. The client can then take one of these possible transitions and make a POST to the same subresource to transition to that state.

I like their approach, but think it’s a little naughty that they use the same subresource for two different things (i.e. list potential states, and change the current state).

So to use this approach for our blog post workflow, you can add a subresource that lists the potential transitions (you could also call it “transitions” or “actions”, depending on your preference):

GET /posts/{id}/availableTransitions

For a blog post in the Review state, it will return something like this:

    {"transition":"Publish Post"},
    {"transition":"Reject Post"}

If you want to do a transition, you grab one of the possible transitions from the array and POST it to the transitions subresource:

POST /posts/{id}/transitions

    "transition": "Publish Post"

A cool thing about this approach is that if you need more advanced workflow transitions, you can add more attributes to the transition subresource. For example, when you transition to the Review state, you might also want to specify a reviewer and a comment:

POST /posts/{id}/transitions

    "transition": "Send to Review",
    "reviewer": "Han Solo",
    "comment": "Plz review this faster than you did the Kessel Run!"

Another interesting possibility is that if the history of transitions is important (for audit), you could enable a GET method on the transitions subresource to get a full list of all transitions that have been performed on the resource.
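
Pulling these ideas together, a minimal sketch of such a transitions subresource, including the audit trail, could look like this (the state names follow the blog post workflow; everything else is invented):

```javascript
// The workflow definition is plain data, so (admin) users can change it at runtime.
const workflow = {
  Draft: [{ transition: "Send to Review", to: "Review" }],
  Review: [
    { transition: "Publish Post", to: "Published" },
    { transition: "Reject Post", to: "Draft" },
  ],
  Published: [],
};

// Backs GET /posts/{id}/availableTransitions
function availableTransitions(post) {
  return (workflow[post.status] || []).map((t) => ({ transition: t.transition }));
}

// Backs POST /posts/{id}/transitions
function performTransition(post, name) {
  const match = (workflow[post.status] || []).find((t) => t.transition === name);
  if (!match) {
    throw new Error("409 Conflict: illegal transition from " + post.status);
  }
  post.history.push({ transition: name }); // audit trail for GET /posts/{id}/transitions
  post.status = match.to;
  return post;
}
```

Because the rules live in one data structure on the server, users can reconfigure states and transitions without any client changing a line of code.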

You could also decide to execute the transitions asynchronously by returning a 202 Accepted status code and a link where the client can poll for the latest status. This could be useful in a money transfer between banks where the actual transfer happens in a nightly batch.

Prakash Subramaniam even goes as far as playing with the idea that you should drop PUT altogether, and only allow changing resources through a transitions subresource. The good thing is that it neatly separates the interface into a query part and a command part (as per the CQRS pattern), and you get a strong audit trail of what has happened to the resource.

The drawback is that there are (many) scenarios where it’s total overkill to perform a transition just to edit the title of a blog post, for example; a live blog post editor would probably end up generating so many transitions that it would overwhelm any kind of history log. But for something like a bank account, it makes good sense to make each update a transition, so you have a complete audit trail.

So which one should I pick?

So we came to the unavoidable question: What approach is the best one?

As always, it depends on the context, but here are some quick guidelines:

1. State Attribute: When there are no restrictions on the transitions; you can go from any state to any state at any time. The states are basically nothing more than a list of values.
2. Transition Links: When there are limits to which states you can go to, depending on the current state.
3. Transition Subresource: When the workflow is configurable by users, so the states and the transitions among them are not fixed, but can be changed at runtime.

So use the Einstein rule (i.e. make things as simple as possible, but not simpler) and start with the first approach, and only consider number two or even number three if there is an undeniable need for them.

That’s all for now. Thank you for reading!

4 Must-Read Articles on Developer Experience (DX)

We live in an API Economy where APIs are more important to business success than ever before. Companies are digitalizing their businesses at a breathtaking rate, and they use APIs to integrate customers and partners into their new digital business processes.

A result of this is that Developer Experience (DX) — which is all about using User Experience (UX) techniques to make life easier for third-party developers calling your public APIs — is fast becoming essential to remain competitive in the digital economy.

But how in the world do you apply UX techniques, such as personas, prototypes and usability testing, to the developer experience?

I personally think this is a really exciting topic and I’ve read hundreds of good articles about the topic, but if I had to choose the best of the best, I would pick these four:

  1. Why API Developer Experience Matters More Than Ever — This is the best intro that I have seen to developer experience.
  2. User Personas for HTTP APIs — A collection of example personas for a REST API. It is really an eye opener to see how many different types of users there can be of a single API, and how different their needs are.
  3. Patterns of Developer Experience — Great post about the principles and patterns for effective DX. The DX Pattern collection at the end of the post is a must-have reference library.
  4. Building Effective API Programs: Developer Experience (DX) — If you want to go all the way and create a Developer Program then this article lists what such a program should include and gives an example of what it looks like in the real world.

Happy reading!

Image credit: Tomek Paczkowski

Write Beautiful REST Documentation with Swagger

Swagger is the most widely used standard for specifying and documenting REST Services.

The real power of the Swagger standard comes from the ecosystem of powerful tools that surrounds it.

For example, there’s Swagger Editor for writing the Swagger spec, Swagger Codegen for automatically generating code based on your Swagger spec, and Swagger UI for turning your Swagger spec into beautiful documentation that your API users will love to read.

Why use Swagger?

But why not use another standard (like RAML) or simply open your favorite word processor and start hitting the keys?

There are 5 good reasons for using Swagger:

  1. Industry Standard: Swagger is the most widely adopted documentation and specification standard for REST Services. This means that it’s already used in real production APIs, so you don’t have to be the beta tester. It also means that the API user probably already has experience with Swagger, which dramatically reduces the learning curve.
  2. Designed for REST: Swagger is really easy to use, because it’s a single-purpose tool for documenting REST Services. So most of the complicated things, like security or reusing resource definitions across several methods, are already handled gracefully by the standard.
  3. Huge Community: There’s a great community around Swagger, so when you face a problem, you can usually just Google the solution.
  4. Beautiful Documentation: The customer-facing documentation looks really nice. Plus there is a built-in way to actually call the services, so the API user won’t need to use an external tool to play around with the services, but can just do it inside the documentation.
  5. Auto-generate Code: You can auto-generate client and server code (interface part) based on the Swagger spec, which makes sure that they are consistent. You could even make your own tools.

How to get started with Swagger?

To start writing a Swagger spec, you simply open the online Swagger Editor and start writing according to the Swagger specification.

You can see a screenshot of the Swagger Editor below. You write your spec on the left-hand side, and you can see the resulting documentation on the right-hand side:

For this post, I’ve created a Swagger specification for the Movie REST Service, which Sandeep Panda developed as part of his post on Angular’s $resource.

If you want to play with the example I use in this section:

  1. Open the Swagger Editor.
  2. Open the “File” menu, and select “Import URL…”
  3. Paste the URL of the example spec into the box.

Now let’s walk through the example spec!

Part 1: General Information

The first thing that you will notice is that Swagger is written in YAML, which is a format that is very easy to read — even for non-technical people.

In the top part of the Swagger specification, you write all the general stuff about your API:

swagger: '2.0'

#                            API Information                               #
info:
  version: "v1"
  title: REST API for 'The Movie App'
  description: |
    This is a demo Swagger spec for the sample REST API used by The Movie App
    that Sandeep Panda developed as part of his great blog post "Creating a
    CRUD App in Minutes with Angular's $resource".

basePath: /api

Here is an explanation of some of the properties:

  • swagger: This is to say we use Swagger 2.0. It should always be “2.0”.
  • title: The title of your API documentation.
  • description: A description of your API. It’s always nice to include examples.
  • version: The version of your API (remember that for APIs a low version number is always more attractive, because a high number indicates an unstable interface and hence an extra burden on the clients using it.)
  • host: The server where your REST API is located.
  • basePath: The path on the server where your REST API is located.

Part 2: REST Services

In the middle part, you define the paths and HTTP Methods.

I have only included PUT below, but you can see the rest in my Swagger file.

#                                     Paths                                #
paths:
  /movies/{id}:
    put:
      summary: Update a movie
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        - in: path
          name: id
          type: number
          description: The id of the movie you want to update.
          required: true
        - in: body
          name: movie
          description: The movie you want to update with.
          required: true
          schema:
            $ref: '#/definitions/Movie'
      responses:
        200:
          description: The movie has been successfully updated.
          schema:
            $ref: '#/definitions/Message'

Below paths you define a path (e.g. /movies/{id}) and then you define the HTTP methods (e.g. PUT) that the path can be used with.

  • summary: A short description of the service. There is also a description property for a more lengthy description, if necessary.
  • consumes: The content type of the data that the service consumes (you can have multiple types). The most common is application/json.
  • produces: The content type of the data that the service produces (you can have multiple types). The most common is application/json.
  • parameters: The different parameters that the service accepts. These can be located in the HTTP header, the URI path, the query string, or the HTTP request body.
    • in: Where is the parameter located? In the path, in the body, in a header, or somewhere else?
    • name: The name of the parameter.
    • type: The data type of the parameter. The common types are number and string.
    • description: A short, user-friendly description of the parameter.
    • required: Is the parameter required or optional?
  • responses: The possible responses that the service can return.
    • (HTTP Status Code): You first specify the HTTP Status Code (e.g. 200).
      • description: A short description of when this response happens.
      • schema: A definition of the response object (see next section for details).

Part 3: Resource Definitions

In the last part of the Swagger spec, you have shared resource definitions.

Given that the movie resource representation is used in almost all methods, it makes sense to write the resource definition in a single place and reuse it across the methods.

#                               Definitions                                #
definitions:
  Movie:
    type: object
    properties:
      id:
        type: number
        description: A unique identifier of the movie. Automatically assigned by the API when the movie is created.
      title:
        type: string
        description: The official title of the movie.
      releaseYear:
        type: string
        description: The year that the movie was released.
      director:
        type: string
        description: The director of the movie.
      genre:
        type: string
        description: The genre of the movie.
      version:
        type: number
        description: An internal version stamp. Not to be updated directly.

Below definitions you define a resource type (e.g. Movie) and then its properties:

  • type: The data type of the property. The common ones are string and number. The advanced types are objects and arrays.
  • description: A description of the property.
  • properties: If the data type is an object, you specify the object’s properties below.

If you need to define complex JSON objects, you can be inspired by the great examples found in Swagger Editor. You can find them by opening the “File” menu, and selecting “Open Example…”

How to turn your Swagger spec into API Documentation

Once your Swagger spec is stable — and your REST API is operational — you can publish your Swagger spec as customer-facing documentation.

For this purpose you can use Swagger UI, which converts your Swagger spec into a beautiful, interactive API documentation (you can see an online example here).

You can download Swagger UI from here. It is just a bundle of HTML, CSS and JS files, which doesn’t require a framework or anything, so it can be installed in a directory on any HTTP server.

Once you have downloaded it, put your swagger.yaml file into the dist directory, then open index.html and change it to point at your swagger file instead of the default example spec.

Then you can open index.html in your browser, and see your new beautiful, interactive API documentation:

That’s it! Now you have learned all the basic elements of Swagger. Don’t forget to read the Swagger specification if you really want to become a Swagger expert.

The SQL Developer’s Guide to REST Services

This is a practical guide (with lots of examples) to help SQL developers quickly learn the basics of RESTful Web Services.

Data Storage: Tables versus Resources

Both SQL and RESTful Web Services are centered around data.

In SQL, data is normally stored in tables, but in REST Services it is stored in resources.

For example, in a database you could have a customer table:

SQL> SELECT * FROM customer;

ID FIRST_NAME LAST_NAME OCCUPATION
-- ---------- --------- -----------
1  Luke       Skywalker Jedi Master
2  Leia       Organa    Princess

In REST Services, you would have a /customers resource instead of a customer table.

For example, if you want to get all customers (similar to the SQL statement above), you do it like this:

GET /customers

The response to this request would be a JSON array with an object for each customer:

    "id": 1,
    "firstName": "Luke",
    "lastName": "Skywalker",
    "occupation": "Jedi Master"
    "id": 2,
    "firstName": "Leia",
    "lastName": "Organa",
    "occupation": "Princess"

CRUD Operations

The HTTP methods used for RESTful Web Services map neatly to the common SQL statements:

CRUD Operation   HTTP Method   SQL Statement
--------------   -----------   -------------
Create           POST          INSERT
Read             GET           SELECT
Update           PUT / PATCH   UPDATE
Delete           DELETE        DELETE

The following sections will explain each of them in more detail.

Create
To create a new customer, you use the INSERT statement in SQL. For example:

INSERT INTO customer (first_name, last_name, occupation) 
     VALUES ("Han", "Solo", "Smuggler");

In REST, you create a new customer by sending a POST request with the new customer as a JSON object:

POST /customers

  "firstName": "Han",
  "lastName": "Solo",
  "occupation": "Smuggler"

Read
To read data in SQL, you use the SELECT statement.

For example, to get a complete list of all customers, you simply call:

SELECT * FROM customer;

The corresponding HTTP method is GET, which you can call like this to get the same result:

GET /customers

If you want to lookup a specific customer using the primary key, you would do it like this in SQL:

SELECT * FROM customer
 WHERE id = 2;

In REST you would append the id to the REST resource:

GET /customers/2


But what if you want to lookup something using a non-primary key?

In SQL you would just add a WHERE clause to your SELECT statement:

SELECT * FROM customer
 WHERE first_name = "Luke";

In REST, you append a query parameter to the GET statement:

GET /customers?firstName=Luke


Note: The specific query parameters available depend on the REST service you are using.

You may want to limit the number of fields returned by a query, because you don’t need to display all the fields, or because you want to improve performance.

In SQL, you just specify what columns should be returned:


SELECT first_name, last_name
  FROM customer;

In REST, you request a partial response:

GET /customers?fields=firstName,lastName


Note: Partial responses are not available in all RESTful Web Services, but usually in those where performance is key. For example, mobile apps that may need to operate in an environment with limited bandwidth.

Update
If you want to update all columns on a customer via SQL, you use the UPDATE statement:

UPDATE customer
   SET id = 2, 
       first_name = "Leia", 
       last_name = "Organa", 
       occupation = "General"
 WHERE id = 2;

In REST, you do the same by using the PUT method:

PUT /customers/2

{
  "id": 2,
  "firstName": "Leia",
  "lastName": "Organa",
  "occupation": "General"
}

But what if you only want to update some of the fields?

In SQL you simply limit the fields to those you want to update:

UPDATE customer
   SET occupation = "General"
 WHERE id = 2;

In REST, you use the PATCH method:

PATCH /customers/2

{
  "occupation": "General"
}

Note: The difference between PUT and PATCH is that PUT must update all fields to make it idempotent. This fancy word basically means that you must always get the same result no matter how many times it is executed. This is important in network traffic, because if you’re in doubt whether your request has been lost during transmission, you can just send it again without worrying about messing up the resource’s data.
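To make the difference concrete, here is a minimal in-memory sketch (illustrative only, not tied to any framework): a PUT replaces the whole resource, while a PATCH merges in only the supplied fields, and repeating the same PUT leaves the resource in the same state.

```javascript
// Minimal in-memory sketch of PUT vs PATCH semantics.
const store = {
  2: { id: 2, firstName: "Leia", lastName: "Organa", occupation: "Princess" }
};

// PUT: replace the entire resource with the supplied representation.
// Sending the same PUT twice leaves the resource unchanged (idempotent).
function put(id, body) {
  store[id] = { ...body };
  return store[id];
}

// PATCH: merge only the supplied fields into the existing resource.
function patch(id, body) {
  store[id] = { ...store[id], ...body };
  return store[id];
}
```

Running `put(2, {...})` twice with the same body gives the same stored state both times, which is exactly the idempotency guarantee described above.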


Delete

If you need to delete a customer, you use the DELETE statement in SQL:

DELETE FROM customer WHERE id = 2;

Similarly, in REST you use the DELETE method:

DELETE /customers/2

That’s it! This is my attempt to map the key concepts in RESTful Web Services to the corresponding key concepts in SQL. If you understand these, you already have a pretty good head start on learning REST services.

What are RESTful Web Services?

To put it mildly, the World Wide Web was an unexpected success.

What had started out as a convenient way for research labs to connect with each other suddenly exploded in size. Jakob Nielsen estimated that between 1991 and 1997 the number of web sites grew by a staggering 850% per year!

This incredible growth worried some of the early web pioneers, because they knew that the underlying software was never designed with such a massive number of users in mind.

So they set out to define the web standards more clearly, and enhance them so that the web would continue to flourish in this new reality where it was suddenly the world’s most popular network.

One of these web pioneers was Roy Fielding, who set out to look at what made the internet software so successful in the first place and where it was lacking, and in his fascinating PhD dissertation he formalized his findings into six constraints, which he collectively called REpresentational State Transfer (REST).

Fielding’s observation was that if your architecture satisfies these six constraints then it will exhibit a number of desirable properties (like scalability, decoupling, simplicity), which are absolutely essential in an Internet-sized system.

His idea was that the constraints should be used as a checklist to evaluate new potential web standards, so that poor design could be spotted early, and way before it was suddenly deployed to millions of web servers.

He successfully used the constraints to evaluate new web standards, such as HTTP 1.1 (where he was one of the principal authors) and URI (where he was also one of the authors). These standards have both stood the test of time, despite the immense pressure of being essential protocols on the web and used by billions of people each day.

So a natural question to ask is: if following these REST constraints leads to such great systems, why only use them for browsers and web sites? Why not also create web services that conform to them, so we can enjoy the desirable properties that they lead to?

This thinking led to the idea of RESTful Web Services, which are basically web services that satisfy the REST constraints, and are therefore well-suited for Internet-scale systems.

So what are these 6 REST constraints?

1. Client-Server

The first constraint is that the system must be made up of clients and servers.

Servers have resources that clients want to use. For example, a server has a list of stock prices (i.e. a resource) and the client would like to display these prices in some nice graphs.

There is a clear separation of concerns between the two. The server takes care of the back-end stuff (data storage, business rules, etc.) and the client handles the front-end stuff (user interfaces).

The separation means that there can be many different types of clients (web portals, mobile apps, BPM engines, etc.) that access the same server, and each of these can evolve independently of the other clients and the server (assuming that the interface between the clients and server is stable).

The separation also seriously reduces the complexity of the server, as it doesn’t need to deal with UI stuff, which improves scalability.

This is probably the least controversial constraint of REST as client-server is so ubiquitous today that we almost forget that there are other styles to consider (like event-based protocols).

It is important to note that while HTTP is almost always used when people develop RESTful Web Services, there is no constraint that forces us to use it. We could use FTP as the underlying protocol if we really wanted, even though intellectual curiosity is probably the only good reason for trying that.

2. Stateless

To further simplify interactions between clients and servers, the second constraint is that the communication between them must be stateless.

This means that all information about the client’s session is kept on the client, and the server knows nothing of it (so no cookies, session variables, or other naughty stuff!) The consequence is that each request must contain all information necessary to perform the request (i.e. it cannot rely on any context information).

The stateless constraint simplifies the server, as it no longer needs to keep track of client sessions or hold resources between requests, and it does wonders for scalability because the server can quickly free resources after each request has finished.

It also makes the system easier to reason about as you can easily see all the input data for a request and what output data it resulted in. You no longer need to lookup session variables and other stuff that makes the system harder to understand.

In addition, it will also be easier for the client to recover from failures, as the session context on the server has not suddenly gotten corrupted or out of sync with the client. Roy Fielding even goes as far as writing in an old newsgroup post that reliance on server-side sessions is one of the primary reasons behind failed web applications and on top of that it also ruins scalability.
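As a small illustration (hypothetical function and data shapes), a stateless handler gets everything it needs from the request itself, credentials and paging included, and remembers nothing between calls:

```javascript
// Hypothetical stateless handler: no session variables, no server-side context.
// The request itself carries the credentials and the paging information.
const customers = [
  { id: 1, name: "Han" },
  { id: 2, name: "Leia" },
  { id: 3, name: "Luke" }
];

function isValidToken(token) {
  return token === "secret-token"; // stand-in for real token verification
}

function getCustomers(request) {
  // All authentication info travels with the request...
  if (!isValidToken(request.headers.authorization)) {
    return { status: 401, body: null };
  }
  // ...and so does the paging state -- nothing is kept between calls.
  const page = request.query.page || 1;
  const pageSize = 2;
  return {
    status: 200,
    body: customers.slice((page - 1) * pageSize, page * pageSize)
  };
}
```

Because the handler reads only its input, any server instance can answer any request, which is what makes it so easy to scale out.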

So far nothing too controversial in the constraints. Many RPC implementations could probably satisfy both the Client-Server and Stateless constraints.

3. Cache

The last constraint on the client-server communication is that responses from servers must be marked as cacheable or non-cacheable.

An effective cache can reduce the number of client-server interactions, which contributes positively to the performance of the system, at least from a user’s point of view.

Protocols, like SOAP, that only use HTTP as a convenient way to get through firewalls (by using POST for all requests) miss out on the improved performance from HTTP caching, which reduces their performance (and also slightly undermines the basic purpose of a firewall).

4. Uniform Interface

What really separates REST from other architectural styles is the Uniform Interface enforced by the fourth constraint.

We don’t usually think about it, but it’s pretty amazing that you can use the same Internet browser to read the news and to do your online banking, despite these being fundamentally different applications. You don’t even need an extension to the browser to do any of this!

We can do this because the Uniform Interface decouples the interface from the implementation, which makes interactions so simple that it’s easy for somebody familiar with the style to understand it, even automatically (like Googlebot).

The Uniform Interface constraint is made up of 4 sub-constraints:

4.1. Identification of Resources

The REST style is centered around resources. This is unlike SOAP and other RPC styles that are modeled around procedures (or methods).

So what is a resource? A resource is basically anything that can be named, from a static picture to a feed with real-time stock prices.

But in enterprise software the resources are usually the entities from the business domain (i.e. customers, orders, products, etc.). On an implementation level, it is often the database tables (with business logic on top) that are exposed as resources. But you can also model a business process or workflow as a resource.

Each resource in a RESTful design must be uniquely identifiable via a URI (Uniform Resource Identifier), and the identifier must be stable even when the underlying resource is updated (i.e. “Cool URIs don’t change”).

This means that each resource you want to expose through a RESTful web service must have its own URI. Normally, you would use the first URI below to access a collection of resources (i.e. several customers) and the second URI to access a specific resource inside that collection (i.e. a specific customer):

/customers
/customers/{id}
Some well-known APIs that claim to be RESTful fail this sub-constraint. For example, Twitter’s REST API uses RPC-like URIs, such as statuses/destroy/:id, and it’s the same with Flickr.

The problem is that they break the Uniform Interface requirement, which adds unnecessary complexity to their APIs.

4.2 Manipulation of Resources through Representations

The second sub-constraint in the Uniform Interface is that resources are manipulated through representations.

This means that the client does not interact directly with the server’s resource. For example, we don’t allow the client to run SQL statements against our database tables.

Instead, the server exposes a representation of the resource’s state. It can sound complicated, but it’s not.

It just means that we show the resource’s data (i.e. state) in a neutral format. This is similar to how the data for a web page can be stored in a database, but is always sent to the browser as HTML.

The most common format for RESTful web services is JSON, which is used in the body of the HTTP requests and responses:

{
  "id": 12,
  "firstname": "Han",
  "lastname": "Solo"
}

When a client wants to update the resource, it gets a representation of that resource from the server, updates the representation with the new data, sends the updated representation to the server, and asks the server to update its resource so it corresponds to the new representation.

The benefit is that you avoid a strong coupling between the client and server (like with RMI in Java), so you can change the underlying implementation without affecting the clients. It also makes it easier for clients as they don’t need to understand the underlying technology used by each server that they interact with.

4.3 Self-Descriptive Messages

The third constraint in the Uniform Interface is that each message (i.e. request/response) must include enough information for the receiver to understand it in isolation.

Each message must have a media type (for instance, application/json or application/xml) that tells the receiver how the message should be parsed.

HTTP is not formally required for RESTful web services, but if you use the HTTP methods you should follow their formal meaning, so the user won’t rely on out of band information to understand them (i.e. don’t use POST to retrieve data, or GET to save data).

So for the Customer URIs, which we defined earlier, we can expose the following methods for the client to use:

Task Method Path
Create a new customer POST /customers
Delete an existing customer DELETE /customers/{id}
Get a specific customer GET /customers/{id}
Search for customers GET /customers
Update an existing customer PUT /customers/{id}

The benefit is that the four HTTP methods are clearly defined, so an API user who knows HTTP but doesn’t know our system can quickly guess what a service does by looking only at the HTTP method and URI path (i.e. if you hide the first column, a person who knows HTTP can guess what it says based on the other two columns).
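To sketch how little is needed to map these five operations (hypothetical handler descriptions; a real framework would do this for you), a dispatcher only has to look at the method and the path:

```javascript
// Minimal dispatcher sketch: the HTTP method plus the URI path is enough
// to identify the operation -- no out-of-band information needed.
function route(method, path) {
  const parts = path.split("/").filter(Boolean); // "/customers/7" -> ["customers", "7"]
  if (parts[0] !== "customers" || parts.length > 2) return "unknown";
  const id = parts[1];
  if (method === "POST" && !id) return "create a new customer";
  if (method === "GET" && !id) return "search for customers";
  if (method === "GET" && id) return "get a specific customer";
  if (method === "PUT" && id) return "update an existing customer";
  if (method === "DELETE" && id) return "delete an existing customer";
  return "unknown";
}
```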

Another cool thing about self-descriptive messages is that (similar to statelessness) you can understand and reason about a message in isolation. You don’t need out-of-band information to decipher it, which again simplifies things.

4.4 Hypermedia as the Engine of Application State

The fourth and final sub-constraint in the Uniform Interface is called Hypermedia as the Engine of Application State (HATEOAS). It sounds a bit overwhelming, but in reality it’s a simple concept.

A web page is an instance of application state, and hypermedia is text with hyperlinks. The hypermedia drives (i.e. is the engine of) the application state. In other words, we click on links to move to new pages (i.e. application states).

So when you are surfing the web, you are using hypermedia as the engine of application state!

So it basically means that we should use links (i.e. hypermedia) to navigate through the application. The opposite would be to take a customer ID from one service call and then use it as an input parameter to another service call.

It should work like a good web site where you just enter the URI and then you just follow the links that are provided on the web pages. You don’t need to know more than the initial URI.

For example, inside a customer representation there could be a links section with links to the customer’s orders:

{
  "_links": {
    "self":   { "href": "/customers/123" },
    "orders": { "href": "/customers/123/orders" }
  }
}
The service can also provide the links in the Link HTTP header, and the IANA maintains a registry of standardized link relation types, so we can use standardized meanings, which further helps the user.

An enormous benefit is that the API user doesn’t need to look in the API documentation to see how to find the customer’s orders, so he or she can easily explore while developing without having to refer to out-of-band API documentation.

It also means that the API user doesn’t need to hardcode (and manually construct) the URIs that he or she wants to call. It might sound like a trivial thing, but Craig McClanahan (co-designer of The Sun Cloud API) wrote in an informative blog post that in his experience 90% of client defects were caused by badly constructed URIs.
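A tiny sketch of what this looks like on the client side (the `_links` shape matches the representation above; the paths are assumed examples): instead of constructing URIs, the client just follows the relation it wants.

```javascript
// Sketch: the client follows a link from the "_links" section
// instead of hand-building the URI itself.
function followLink(representation, rel) {
  const link = representation._links && representation._links[rel];
  if (!link) {
    throw new Error("No link with relation: " + rel);
  }
  return link.href; // this URI came from the server, not from client code
}

const customer = {
  id: 123,
  firstname: "Han",
  _links: {
    self:   { href: "/customers/123" },          // assumed example paths
    orders: { href: "/customers/123/orders" }
  }
};
```

If the server later moves its orders under a different path, this client keeps working unchanged, which is precisely the robustness McClanahan’s statistic points at.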

Roy Fielding didn’t write that much about the hypermedia sub-constraint in his PhD dissertation (due to lack of time), but he later wrote a blog post where he clarified some of the details.

5. Layered System

The fifth constraint is another constraint on top of the Uniform Interface, which says that the client should only know the immediate layer it is communicating with, and not be aware of any layers behind it.

This means that the client doesn’t know if it’s talking with an intermediary or the actual server. So if we place a proxy or load balancer between the client and server, it wouldn’t affect their communications, and we wouldn’t need to update the client or server code.

It also means that we can add security as a layer on top of the web services, and then clearly separate business logic from security logic.

6. Code-On-Demand (optional)

The sixth and final constraint is the only optional constraint in the REST style.

Code-On-Demand means that a server can extend the functionality of a client at runtime, by sending code to it that it should execute (like Java Applets or JavaScript).

I have not heard of any RESTful web services that actually send code from the server to the client (after deployment) and get it executed on the client, but it could be a powerful way to beef up the client.

A really nice feature of the simplicity that is enforced by these six constraints (especially, uniform interface and stateless interactions) is that the client code becomes really easy to write.

Most modern web frameworks can figure out what to do if we follow the conventions above, and they can take care of most of the boilerplate code for us.

For example, in the new Oracle JET toolkit, we simply need the JavaScript below to create a customer (and it would be just as easy in AngularJS):

// Only code needed to configure the RESTful Web Service
var Customer = oj.Model.extend({
  urlRoot: "/customers",  // assumed endpoint; the original value was lost
  idAttribute: "id"
});

// Create a new customer representation
var customer = new Customer();
customer.attributes.firstName = "Han";
customer.attributes.lastName  = "Solo";

// Ask the server to save it;
And it’s just as easy to call the other HTTP methods.

So the front-end engineer just needs to add a few more lines for an HTML form where the user can enter the values, and voilà, we have a basic web app!

And if we use one of the many gorgeous UI frameworks (like Twitter’s Bootstrap or Google’s Materialize), we can quickly develop something really nice looking in a really short time.

That’s it for now. Thank you for reading and take good care until next time!

Boost Your REST API with HTTP Caching

It’s a core part of the REST architectural style to use caching!

That’s nice, you might think, but why should I use it?

Because it will allow you to show off against other API Designers by claiming that your REST services are twice as RESTful as theirs 😉

But more seriously, Roy Fielding, who invented the REST architectural style, didn’t add caching as a requirement just for the fun of it! He added it because it can seriously boost performance, which is also shown in the numbers in Tom Christie’s great post on performance tuning of Django REST services.

So, how do you get started with HTTP caching?

It’s only for HTTP GET requests!

At first it may seem like an overwhelming task to implement HTTP Caching; especially if you have already developed a huge number of services.

The good news is that it’s only for GET requests that you need to think about caching, as it doesn’t really make much sense to cache POST, PUT or DELETE responses.

The even better news is that if you simply specify the right HTTP header then the browser will do all the heavy lifting for you!

Code, please!

That’s all very nice! But can you please show us the code?

Definitely, but the only code you need is the Cache-Control header in your HTTP response. There are a number of directives in this header you can use to control the caching:

max-age
The maximum time that the cached response should be used (in seconds). A common upper bound is 1 year.

Cache-Control: max-age=3600

Kyle Young writes that a rule of thumb is to use between 60 seconds and 1 hour for most content, but for pseudo-dynamic content, use less than 60 seconds (or don’t cache it at all).

s-maxage
This directive overrides max-age for shared caches, such as proxy servers. You usually have more control over the proxy cache than the client’s local cache, so you can use longer values here.

Cache-Control: max-age=0, s-maxage=3600

Thuva Tharma has some interesting thoughts on why s-maxage may be better than max-age.

public / private
Is the response specific to the client, so it cannot be reused for other clients? For example, /tasks/myTasks is client-specific.

If the response is client-specific, use private. Otherwise, use public.

Cache-Control: private, max-age=3600
Cache-Control: public, max-age=3600

no-store
This is used for sensitive data (like credit card details) that must not be stored in caches or proxies under any circumstances.

Cache-Control: no-store

no-cache
The client must not use a cached response without first revalidating it with the server (e.g. a conditional GET with an ETag) to check that the data hasn’t been updated in the meantime.

Cache-Control: no-cache

must-revalidate
If the cached response has expired, it must be revalidated at the server.

HTTP may under some circumstances serve cached responses that have expired (for instance, under poor network connectivity), but using this directive ensures that this won’t happen.

Cache-Control: max-age=3600, must-revalidate

proxy-revalidate
Same as must-revalidate, but for proxy servers.

Cache-Control: s-maxage=3600, proxy-revalidate

So let’s say that the client sends a request for some metadata, and we want the client to cache it for 1 hour:

GET /customers/metadata HTTP/1.1
Accept: application/json
Accept-Language: en

To do this, we just add the Cache-Control header to our response:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=3600
Content-Length: 88
ETag: "6d82cbb050ddc7fa9cbb659014546e59"

{
  "languageCodes": [...]
}

As you can see it’s pretty easy to add caching to your RESTful services…

So if you have performance issues then HTTP caching could be the power tool you are looking for to seriously reduce your response times!
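Under the hood, a cache deciding whether a stored response can still be reused simply compares the response’s age against its max-age directive. A simplified sketch of that freshness check (ignoring the other directives):

```javascript
// Simplified sketch of an HTTP cache's freshness check: a cached response
// may be reused while its age (in seconds) is below max-age.
function parseMaxAge(cacheControl) {
  const match = /max-age=(\d+)/.exec(cacheControl || "");
  return match ? Number(match[1]) : 0; // no max-age -> treat as immediately stale
}

function isFresh(cachedAtMs, nowMs, cacheControl) {
  const ageSeconds = (nowMs - cachedAtMs) / 1000;
  return ageSeconds < parseMaxAge(cacheControl);
}
```

With `Cache-Control: max-age=3600`, a response cached 30 minutes ago is still fresh, while one cached 2 hours ago is stale and must be fetched (or revalidated) again.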

Avoid Data Corruption in Your REST API with ETags

There are few things worse than a really nasty data corruption issue.

Especially if it has occurred silently over a long period of time, so when it’s discovered it’s too late to roll back to a backup from before the defect was introduced. It’s even worse if it has also occurred randomly, so there is no pattern to base your fix upon.

Yet an awful lot of REST APIs ignore concurrency control, so if they are used by multiple clients who modify the same data at the same time then it can lead to lost updates and stalled deletes, which slowly ruins the data in the database.

This is totally unacceptable in most enterprise applications where data integrity is something you just don’t fool around with…

So how can you avoid this messed up situation?

You use the concurrency control designed into the HTTP protocol as a simple, yet effective way to protect the integrity of your data.

Meet the ETag Header

If you want to use the concurrency control in the HTTP Protocol, you need to use the optional Entity Tag (ETag) header in the HTTP request.

The ETag is kind of like a version stamp for a resource and it’s returned as part of the HTTP response.

For example, if you send a request to get a specific customer:

GET /customers/987123 HTTP/1.1

Then the ETag (if used) is included in the header in the response:

HTTP/1.1 200 OK
Date: Sat, 30 Jan 2016 09:38:34 GMT
ETag: "1234"
Content-Type: application/json
Each time the resource is updated on the server, the ETag header will be changed to reflect the content of the new version of the resource.

So to avoid lost updates, you simply take the value of the ETag header and put it into the If-Match header on the PUT request:

PUT /customers/987123 HTTP/1.1
If-Match: "1234"
Content-Type: application/json
The PUT request above says that you want to update the customer resource on the server, but only if the ETag matches 1234, to make sure that the customer hasn’t been updated since you sent the GET request. In this way, your request won’t accidentally overwrite other users’ updates.

Over at the Server

When the server gets the PUT request, it will execute the logic below:

First, the server checks that the If-Match header is included in the request. If not, it tells the client that the resource cannot be updated without the If-Match header (for example, with 428 Precondition Required).

Second, the server checks if the resource actually exists, as it must exist before it can be updated.

Third, it checks if the ETag supplied in the If-Match header is the same as the latest ETag on the resource. If not, it tells the client that the precondition has failed (412 Precondition Failed).

Finally, if the request passes the three validations, then the server updates its resource.

If the server uses the same approach on DELETE requests, you can also avoid stalled deletes.

Note: The logic above assumes that the server doesn’t allow you to create new resources with a PUT request. If this is allowed then the first step should be to check if the resource exists, and if not then branch out and create it and skip the other steps.
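The steps above can be sketched like this (hypothetical in-memory store; the status codes are the standard HTTP ones: 428 Precondition Required, 404 Not Found, 412 Precondition Failed):

```javascript
// Sketch of the server-side PUT validation described above.
// "resources" maps id -> { etag, data }; creating resources via PUT is not allowed here.
function handlePut(resources, id, ifMatch, body) {
  if (!ifMatch) return { status: 428 };                  // 1. If-Match is required
  const resource = resources[id];
  if (!resource) return { status: 404 };                 // 2. the resource must exist
  if (resource.etag !== ifMatch) return { status: 412 }; // 3. precondition failed
  resource.data = body;                                  // 4. all checks passed: update
  resource.etag = String(Number(resource.etag) + 1);     // new version stamp
  return { status: 200 };
}
```

Note that after a successful update the ETag changes, so a second client that still holds the old ETag gets a 412 instead of silently overwriting the first client’s change.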

Implementation Hints

There are several ways to implement ETags on the server.

One way is to make a hash of the resource and put it in the ETag header. But you need to make sure that the hash includes all updatable fields in the response, minimize the risk of hash collisions, and find a hashing algorithm that doesn’t impact performance too much.

Another really simple way to implement ETags is to add a read-only etag column to the underlying database table and add a trigger that increases the value each time the row is updated. Of course, the server needs to be aware that the database changes this value behind the scenes, so the server always uses the latest version.

Why not timestamps instead of ETags?

An easier way to implement concurrency control in HTTP is to use the Last-Modified and If-Unmodified-Since headers instead of ETag and If-Match. The difference is simply that these two headers use timestamps instead of ETags.

So, if it’s easier why not use it?

The problem is that the timestamps use seconds as their finest precision, so if you have fast, high-frequency updates then there is a risk that two updates occur within the same second and you lose one of them.

So in enterprise software, where data integrity is an absolute requirement, ETags are the safe choice.

7 Tips for Designing a Better REST API

If you need to develop a REST API for a database-driven application, it’s almost irresistible to just use the database tables as REST resources, the four HTTP methods as CRUD operations, and then simply expose your thinly-wrapped database as a REST API:

The mapping between SQL and HTTP is deceptively simple.

The problem is that one of the foundations of the REST architecture is that the client-facing representation of a resource must be independent of the underlying implementation, and implementation details should definitely not be leaked to the client, which is all too easy with the database-driven approach.

It’s also important to ask yourself whether an almost-raw database is the best interface you can offer your API users. I mean, there is already a near-perfect language for doing CRUD operations on database tables: it’s called SQL… And you probably have some business logic on top of those tables that your API users would appreciate not having to re-implement in their own code.

So how do you move beyond this database-oriented thinking and closer to a more RESTful design for your API?

Let’s find out…

1. Begin with the API User in Mind

Bestselling author and architect Sam Newman’s great book on microservices provides a powerful alternative to the database-driven approach for designing REST web services. It’s useful even if you don’t plan to use microservices.

Newman suggests that you divide your application into bounded contexts (similar to business areas). Each bounded context should provide an explicit interface for those who wish to interact with it. Implementation details of the bounded context that don’t need to be exposed to the outside world are hidden behind the interface.

You should use this explicit interface as the basis for your API design. Start by asking yourself what business capabilities the API user needs, rather than what data should be shared. In other words, first ask yourself what does this bounded context do? and then ask what data does it need to do that?

The promise is that if you defer thinking about shared data until you know what business capabilities you need to offer, you will end up with a less database-oriented design.

I think his approach is a good way to jolt you out of the database-driven mindset, but you need to be careful that you don’t end up designing a REST-RPC hybrid.

What I also like about this approach is that it minimizes the interface and doesn’t expose all data by default, but hides internal data (like logging and configuration tables) from the client and instead focuses on what the client actually needs.

This also fits beautifully with veteran API designer Joshua Bloch’s maxim saying that When in doubt, leave it out (from his highly popular presentation on API design), and it also harmonizes with the REST principle that a representation of a resource doesn’t need to look like the underlying resource, but can be changed to make it easier for the client.

So feel free to think about what would be the easiest interface for the API user, and then let your resource take data from multiple tables and leave out columns that are irrelevant to the job that clients need to perform.
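For instance (hypothetical column names), the mapping from a raw database row to the client-facing representation might look like this:

```javascript
// Sketch: build the client-facing representation from a raw database row,
// renaming columns and leaving out internal ones.
function toRepresentation(row) {
  return {
    id: row.customer_id,
    firstName: row.first_name,
    lastName: row.last_name
    // internal columns (created_by, audit_ts, ...) deliberately left out
  };
}
```

The representation is free to draw from several tables and to drop anything the client doesn’t need, without the client ever knowing what the underlying schema looks like.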

2. Use Subresources to Show Relationships

An attractive alternative to only using top-level resources is to use subresources to make the relationships between resources more obvious to the API user, and to reduce dependencies on keys inside the resource representation.

So how do you decide what resources should be subresources? A rule of thumb is that if the resource is a part of another resource then it should be a subresource (i.e. composition).

For example, if you have a customer, an order and an order line, then an order line is a part of an order, but an order is not a part of a customer (i.e. the two exist independently and a customer is not made up of orders!)

So the URIs would look like this:

/customers/{id}
/orders/{id}
/orders/{id}/orderlines/{id}

A different rule of thumb is to also include aggregations as subresources. That is, a belongs to relationship. If we use this rule then an order belongs to a customer, so the path would look like this:

/customers/{id}/orders/{id}/orderlines/{id}
So what rule should you pick?

The idea with subresources is to make your API more readable. For example, even if you don’t know the API, you can quickly guess that POST /customers/123/orders will create a new order for customer 123.

However, if you end up with more than about two levels then the URI starts to become really long and the readability is reduced.

You also need to be aware that subresources cannot be used outside the scope of their parent resource. In the second example, you need a customer id before you can look up an order, so if you want a list of all open orders (regardless of customer) then you cannot do it in the second example.

Ehh, so what to pick?

If you want a flexible API, aim for fewer subresources. If you want a more readable API, aim for more subresources.

The important thing is that whatever rule of thumb you pick, be consistent about it. I mean the API user might disagree with your decision, but if you are using it consistently throughout your API, he or she will probably forgive you.

3. Use Snapshots for Dashboard Data

A deal-breaker for using subresources is that the client might need to access data across subresources to get data for a dashboard or something similar. For example, a manager might want to get some statistics about orders across all customers.

Before you go ahead and flatten your whole API, there are two alternatives you should consider.

First, remember that there is nothing that prevents you from having multiple URIs that point to the same underlying resource, so beside /customers/{id}/orders/{id}, you could add an extra URI to query orders outside of the customer scope:

/orders
/orders/{id}
To minimize the duplication of functionality, you can limit the top-level /orders URI to only accept GET requests, so if clients want to create a new order, they will always do it in the context of a customer.

Of course, you need to be careful not to duplicate resources unnecessarily, but if there is a customer need, then there is a customer need, and we need to find the best possible solution.

In RESTful Web Services Cookbook, Chief Engineer at eBay (and former Yahoo Architect) Subbu Allamaraju suggests an alternative approach called snapshots.

For example, if an order manager wants to see some specific statistics (5 latest orders, 5 biggest clients, etc.) then we can create a snapshot resource that finds and returns all this information:

/snapshots/orders
I personally like the snapshot approach better, because it doesn’t feel like querying a database. But with that said, the snapshot approach requires an intimate knowledge of the API user, and the extra order top-level resource will offer more flexibility.

4. Use Links for Relationships

Another way to show relationships between resources, without falling back on using keys in an SQL-like manner, is to embed links inside your responses:

  "id": 123,
  "title": "Mr.",
  "firstname": "Han",
  "surname": "Solo",
  "emailPromotion": "No",
  "_links": {
    "self": {
      "href": ""
    "contactDetails": {
      "href": ""
    "orders": {
      "href": ""

A cool thing about links is that they allow autodiscovery by clients! When the client gets the response back, it can see in the _links section what other actions it can follow from there. This is just like surfing the web: you land on a page and then follow its links to new pages.

Another nice thing is that clients will have fewer hard-coded links in their code, which will make the code more robust. If the client wants to see the customer’s orders, it can just follow the orders link to get them.
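As a sketch, a client that follows links from a HAL-style response instead of building URIs itself could look like this (the response dict is hypothetical; in practice it would come from the API):

```python
def link_href(response: dict, rel: str) -> str:
    """Look up the href for a link relation in a HAL-style '_links' section."""
    try:
        return response["_links"][rel]["href"]
    except KeyError:
        raise KeyError(f"response has no '{rel}' link")

# Hypothetical response; in practice this comes from GET /customers/123.
customer = {
    "id": 123,
    "_links": {
        "self": {"href": "/customers/123"},
        "orders": {"href": "/customers/123/orders"},
    },
}

orders_uri = link_href(customer, "orders")  # the client never builds this URI itself
```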

However, there are different opinions about whether you should use links or not…

Vinay Sahni writes in his excellent blog post Best Practices for Designing a Pragmatic RESTful API that links are a good idea, but that we are not ready to use them yet. On the other hand, the Richardson Maturity Model says that when you start using links, you have reached the highest level of REST maturity.

So what to do?

Well, Dr. Roy Fielding, an expert on software architecture and the inventor of the REST architectural style, flatly states on his blog that if you don’t use links, it isn’t a REST service, and he kindly encourages you to use another buzzword for your API!

5. Hide Internal Codes

In an earlier post, I inadvertently leaked internal codes from the job_id column of the employees table:

  "jobId": "SH_CLERK"

Needless to say, this is an implementation detail that gives away that we are using a relational database, and experienced Oracle users would instantly spot that it’s Oracle’s sample HR schema. This leak also makes it harder to switch to a document-oriented database, like MongoDB, where there is no concept of foreign keys.

But even if the chance of switching to MongoDB is zero, it still makes the response harder to read.

So a better approach is to let the REST API translate the internal code to the human-readable value that the code represents (i.e. “Shipping Clerk”) and then also remove the Id part of the field name.

  "job": "Shipping Clerk"

This version is definitely more readable, but a fair concern is whether the service will be slower now that it needs to look up the value. I used to be an avid reader of Tom Kyte, the Oracle DB expert, and still remember his advice to always optimize from measurements. There’s a good chance that the HTTP cache will help us out and make the lookup less of a bottleneck than it appears at first glance.

As a rule of thumb, if performance means everything to you (or you have a lot of lookup fields) then you might consider leaking the internal codes. Otherwise, you should provide a more readable API by hiding them.

6. Translate Automatically

But what about translations? What if you have a multilingual application that has translations of the internal code in multiple languages? How do you handle that?

Simple! You let the API user specify a preferred language in the Accept-Language HTTP header. For example, with Accept-Language: da, the REST API should automatically translate the value into Danish:

  "job": "Shippingmedarbejder"

7. Create a Resource for Metadata

So what if the API user needs to show a drop-down list that shows all possible jobs? How will he or she get a complete list of all possible values for the job field?

The easy solution is simply to write the list of possible values in your API documentation and let the API user hardcode them in the drop-down list. But this leads to fragile client apps that need to be updated whenever new job types are added, and it just doesn’t feel very web-like to rely on offline metadata.

A more robust solution is to create a metadata subresource that provides lists of values and other metadata needed when using the resource. For example, an /employees/metadata subresource could provide the API user with all the metadata needed to interact with the employees resource.
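A hypothetical response from such a subresource might look like this (structure, codes, and labels are illustrative):

```
{
  "job": {
    "values": [
      { "code": "SH_CLERK", "label": "Shipping Clerk" },
      { "code": "ST_CLERK", "label": "Stock Clerk" }
    ]
  }
}
```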

This solution is similar to what Atlassian does in the JIRA API. If you make sure that the response from the metadata subresource is cached properly, it shouldn’t adversely affect performance, and you will provide a more flexible API that leads to more stable client apps.

That’s it for now. I really hope that some of these tips will help you design better REST APIs. Thanks for reading and take good care until next time!