Don’t Limit Your REST API to CRUD Operations

I think one of the best things about RESTful web services is the Collection Pattern. It’s a really smart and developer-friendly way of designing a REST service.

The Task REST API below, which is taken from my company’s REST API, is a typical example of how the collection pattern looks:
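
Such a collection-pattern Task API can be sketched as the following route table (an illustrative sketch — the paths and descriptions are typical assumptions, not the actual API):

```javascript
// Illustrative collection-pattern routes for a Task resource
// (typical sketch; not the real API from the post)
var taskRoutes = [
  { method: "GET",    path: "/tasks",    task: "List all tasks" },
  { method: "POST",   path: "/tasks",    task: "Create a new task" },
  { method: "GET",    path: "/tasks/42", task: "Get task 42" },
  { method: "PUT",    path: "/tasks/42", task: "Update task 42" },
  { method: "DELETE", path: "/tasks/42", task: "Delete task 42" }
];
```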

The collection pattern is so widely adopted that even a REST newbie, who has never seen this API before, will be able to guess the description in the Task column based on the content in the Method and Path columns.

The collection pattern is also really smart from a code perspective, because many frameworks have some sort of Active Record implementation on top of the collection pattern, so the framework can automatically wrap the whole REST service in a convenient way.

For example, the old AngularJS framework had this wonderful $resource factory: you could simply give it the URI of a REST service that followed the collection pattern, and the $resource factory would automatically figure out the rest (no pun intended!).

// Bind a Task resource to the collection URI; ':taskId' is
// filled in from the 'id' field of each task object
var Task = $resource('/tasks/:taskId', {taskId: '@id'});

var task = new Task();
task.description = "Put a man on the moon.";
task.assignee = "James Webb";
task.$save(); // a new task has no id, so this issues POST /tasks

While the collection pattern is really clever and so easy to use – and the best solution for almost all REST services – there are some edge cases where its CRUD approach just doesn’t make sense and other patterns should be considered.

Now it gets controversial…

When the web was the new big thing – and online pet shops were worth hundreds of millions of dollars – you would see web pages with HTML forms like this: 

<form method="POST" action="send_mail.cgi">
    <input type="text" name="subject">
    <input type="text" name="message">
    <input type="submit" value="Send Message">
</form>

While you can long for those innocent days, when you could publish code like that on the web and not be flooded with spam, the important thing here is that you can also call the send_mail.cgi script directly and use it as a web service; for example, using the small jQuery script below:

var mail = {
    subject: "Man walks on the moon",
    message: "Armstrong and Aldrin become the first men on the moon..."
};

$.post({
    url: "/send_mail.cgi",
    contentType: "application/x-www-form-urlencoded",
    data: mail,
    success: function() {
        console.log("Mail sent!");
    }
});

Now, I will argue that send_mail.cgi is a RESTful web service (!) even if it’s a really poorly designed one and a simple POST /mails service would have been a lot nicer!
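
For comparison, that nicer collection-style design could be invoked like this (a sketch — the /mails endpoint and its JSON body are assumptions for illustration, not an existing service):

```javascript
var mail = {
    subject: "Man walks on the moon",
    message: "Armstrong and Aldrin become the first men on the moon..."
};

// The request we would send, e.g. via fetch() or $.ajax()
// (/mails is a hypothetical collection endpoint)
var mailRequest = {
    method: "POST",
    url: "/mails",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(mail)
};
```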

If you finish the demanding, yet satisfying task of reading Roy Fielding’s PhD thesis, which defines the REST architectural style, you will see that it says nothing about limiting our REST services to CRUD operations and it also says nothing about limiting ourselves to the collection pattern…

In fact, Fielding later wrote a blog post about the use of the POST method in REST, and said, “As long as the method is being used according to its own definition, REST doesn’t have much to say about it.”  And if we read the HTTP specification we can see that it doesn’t limit the use of POST to adding new items to collections – and if we read the URI specification we can see that it doesn’t limit our URI naming to plural nouns…

So POST /send_mail.cgi is OK from a specification point of view and can be considered RESTful…

So what are you saying?

So what am I saying? Is this the sacking of Rome, where we can now all go crazy with POST /add-new-order.cgi and GET /find-my-orders.xml? No laws! No limits!

Of course not. I still think that the Collection Pattern is the right choice for almost all RESTful web services – and it should be the default choice for any new RESTful web service – because it’s so widely adopted and easily recognizable by most API users.

However, there are edge cases where it makes sense to use other patterns, such as the Controller Pattern. For example, if I have a REST service for rockets, then one does not simply launch a rocket (or walk into Mordor!): you need to provide launch codes, and the rocket needs to go through multiple stages before actual take-off. This goes way beyond just changing the value of a field in the resource representation. So for this scenario I would add a controller subresource:

/rockets/43/launch-rocket
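
Invoking such a controller could look like this (a sketch — the launchCodes field and its value are purely illustrative assumptions, not a real API contract):

```javascript
// Hypothetical request to the launch-rocket controller subresource
var launchRequest = {
    method: "POST",
    url: "/rockets/43/launch-rocket",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ launchCodes: "CPE1704TKS" }) // illustrative value
};
```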

When breaking with the Collection Pattern, I really like the verb-noun naming of the URI, such as launch-rocket. This is without doubt because I have read Code Complete way too many times (!), but also because it makes it obvious to the API user that the URI isn’t part of the Collection Pattern. On top of that, remember to add a link to the controller subresource in the resource representation to make the API user aware that the subresource exists:

{
  "id": 43,
  "name": "Apollo 11",
  "state": "Ready for launch",
  "_links": {
    "self": {"href": "/rockets/43"},
    "launch-rocket": {"href": "/rockets/43/launch-rocket"} 
  }
}
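
With that link in place, a client can discover the controller’s URI from the representation instead of hard-coding it. A minimal sketch, assuming the representation above:

```javascript
// Discover the controller URI by following the link relation
var rocket = {
    id: 43,
    name: "Apollo 11",
    state: "Ready for launch",
    _links: {
        "self": { "href": "/rockets/43" },
        "launch-rocket": { "href": "/rockets/43/launch-rocket" }
    }
};

var launchUri = rocket._links["launch-rocket"].href;
// launchUri is now "/rockets/43/launch-rocket"
```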

The point of this post isn’t to say that the Collection Pattern is bad. In fact, it’s the right choice for almost all REST APIs and everybody will love you for using it! 🙂 The point is that Collection Pattern != REST: you still have some wiggle room for edge cases that don’t fit neatly into the Collection Pattern, without losing your API’s RESTfulness or the desirable properties that come with this architectural style.

Team Pattern #4: The Self-Managed Product Team

This is the last post in the blog series about team patterns and it’s all about self-managed product teams.

At first this team pattern looks similar to the product team pattern, but unlike the product team pattern there is no engineering manager directly responsible for the team’s work.

This pattern also looks a little like the matrix team, as the people on the team report to a manager outside of the team, but unlike the matrix team there is no common theme among the engineers who report to a given engineering manager.

That is, a single manager is responsible for a diverse group of engineers working with different technologies (e.g., C++, Android, Node.js) and different product areas (e.g., billing, orders, customers) — and there’s no guarantee that two Android engineers will report to the same manager or that all people on the Orders product team will report to the same manager.

In this team pattern, the manager is less of a traditional manager who directs and supervises work — and more of a career counselor and coach who can advise the engineer.

In this team pattern the leadership of a team can be split into three separate roles:

  • Product Manager: This role is focused on product leadership, such as talking with customers and preparing product road maps.
  • Engineering Manager: This role is focused on people management, such as recruitment, career guidance, building culture, and finding and fixing interpersonal issues.
  • Lead Engineer: This role is focused on technical leadership, such as technical best practices, mentoring, and anticipating the future technical needs of the team.

A real-world example of this team pattern is described by Josh Tyler, EVP Engineering at Course Hero, in his fine book Building Great Software Engineering Teams. At Course Hero, an online learning website, they’ve organized their teams according to their product areas:

On each product team they have a Lead Engineer who takes care of technical mentoring and guidance, defines best practices, and makes sure they are followed.
The Lead Engineer works closely with the Product Manager to scope projects, prioritize tasks, and give the team the context necessary to be successful.

Where it starts to diverge from the product team pattern is that the Engineering Manager’s role is independent of the Lead Engineer’s.
The Engineering Manager is responsible for a group of engineers across multiple product teams. The Engineering Manager is no longer involved in technical leadership and
doesn’t supervise the work of the Lead Engineer, but is focused on people management, such as finding and fixing people-related problems.

The roles of Engineering Manager and Lead Engineer are completely independent. So it is possible for a Lead Engineer on one product team to report to an Engineering Manager
who is part of another product team — as a hands-on engineer — and who reports to another Lead Engineer for technical matters on that team.

But what does Course Hero do when they have a person who is talented in both people management and technical leadership? They consider those rare people precious gems who should be treasured, and see them as people with executive potential. But even then, they recommend that the person start by focusing on one of the two paths (i.e., Lead Engineer or Engineering Manager) and then master the other one later.

Strengths

The self-managed product team shares many of the strengths we have seen in the other team patterns.

But a distinct strength is that splitting the engineering manager role in two opens up a genuine two-track career model, with one track for managerial leadership
and another for technical leadership. This is important because most first-rate engineers are not really interested in traditional management but are deeply passionate about technology, and in this
team pattern they have a career path where they can grow without losing their technical edge. This will also make it easier to attract and retain highly skilled technical people,
as you don’t expect them to become traditional managers (and lose that all-important technical edge) or force them to report to an incompetent manager who overrides
their technical decisions based on authority rather than merit.

It may also make hiring easier, which should not be underestimated in a competitive job market, as finding a single candidate who excels in both technical leadership and people management is really tough.

Weaknesses

At first sight, the clear definition and separation of product, technology, and people leadership roles looks attractive — as it makes it really easy to figure out who to ask:

  • Should you upgrade to the latest version of Angular? Ask the Lead Engineer!
  • Do you dream of becoming a Full-Stack Developer? Talk with your Engineering Manager!
  • What’s the most important feature to work on right now? Ask the Product Manager!

The risk is that in real life many problems don’t fit neatly into one of these categories…

For example, the CEO thinks that the team is not developing fast enough:

  • Is that a product thing? Maybe the team is focusing on low-value, low-visibility features?
  • Is it an engineering thing? Maybe there is too much technical debt or poor design?
  • Is it a people thing? Maybe the people on the team lack skills in the technologies used or their motivation is low?
  • Is it a mix of these? All of these? Some of these?

This makes debugging the problem harder, as the debugging may span multiple areas of responsibility owned by different people — and finding and fixing the problem will take longer.

From a manager perspective: The primary weakness is that the manager is often no longer involved in engineering, neither as a lead nor as an individual contributor. If that is the case, his or her engineering skills can quickly deteriorate, as things evolve pretty fast within computing these days (it’s a Red Queen’s race!), and the manager is in great danger of quickly becoming a dinosaur who suggests solutions that made perfect sense… on mainframes! Or who thinks that ASP Classic and React are more or less the same… I mean, they are just web development frameworks, right? 😉

The knock-on effect of losing one’s technical instinct is that it can lead to poor judgement.

Of course, these risks can be mitigated — especially with good collaboration and some general curiosity about what’s happening in one’s field — but left unattended they may turn into issues.

That’s it! I hope you enjoyed this little blog series on team patterns!

Team Pattern #3: The Product Team

It’s time for the next episode in this blog series on team patterns!

It has been slightly delayed, because I got a last-minute invite to present at TCC 2018 (our annual conference for customers and partners) and to host a roundtable there as well. Both great experiences 🙂 You can see a nice photo on LinkedIn where I’m rehearsing together with Christian (our CEO).

But enough about my everyday work and back to the blog series 🙂

This post is about the product team pattern!

At first sight a product team is very similar to a matrix team. You organize a team around a product area and take all the different roles needed to build the product and put them inside this team.

The big difference from a matrix team is that all team members, regardless of their role, report to the same line manager. So it doesn’t matter whether a team member is a designer, frontend developer, or backend developer: everybody on the team reports to the same line manager.

The motivation for doing this is to simplify decision making (i.e., the buck stops at the line manager regardless of the functional area) and to encourage people on the team to learn more about the business that the product area serves (as they will work within that area for a long time, rather than on the matrix team, where they are more likely to be moved to a completely new product area).

When Instagram moved away from the technology team patterns, which we saw in an earlier post in this series, they adopted the product team pattern, as you can see in the org chart below:

As you can see in the org chart above, Instagram organized their teams around product areas (such as content creation). To handle general stuff that doesn’t fit neatly into a product area, they added two platform teams: the Core Client team, which develops the container (or app shell) that the product teams build their product areas within, and the Core Infrastructure team, which handles servers and other infrastructure.

At Airbnb they have made an interesting variation on this pattern: They let their product teams focus on a specific persona, such as guest or host, instead of more traditional product areas, such as billing or booking. This makes it really easy for Airbnb to establish business-related KPIs for the team.

The line manager in this team pattern is often called an engineering manager to show that it is not a manager for a specific technology area (such as mobile) or a specific discipline (such as QA), but rather a manager responsible for all engineering within a product area.

The product team pattern tends to grow leaders who can bring different disciplines together and make them build a unified product where all the pieces fit nicely together. And it encourages leaders to focus on building a product that actually solves a business problem. This is also the motivation for many organizations that use this team pattern: it aligns the engineering teams’ success much more closely with the company’s success, and it becomes much easier to define a business-related KPI for the team (compared to the technology team pattern).

Another interesting dynamic when companies move away from technology teams to product teams is that full-stack developers with a good understanding of the product area tend to replace the technical specialists (e.g., experts in one layer of the tech stack) as the rock stars of the development organization.

The reason is that a specialist can typically only build partial features. For example, a Django developer can develop the backend functionality but doesn’t know React, and so cannot finish the frontend part of the feature.

But the developer will report to a line manager who is responsible for the whole product area (and not just a single technology) and who is motivated towards shipping whole features — and hence is more likely to reward people who can actually deliver them.

This dynamic is further accelerated if the company uses continuous deployment and is in a business where speed to market matters (which is most businesses).

Strengths

An advantage of a product team over a technology team is that the team is much more closely aligned with business success. The team will not feel like a success if their product has just flunked a major public review — even if their code is so beautiful that it would make Jon Bentley shed a tear.

In continuation of this, it is also my experience that more and more software engineers no longer fit the old computer-geek stereotype of someone who just wants to be left alone to code. Now they also want to see their product succeed in the market and make meaningful contributions towards that. As in the old parable about the three stonecutters, they are no longer satisfied with merely cutting stones; they want to build grand cathedrals!

To an even higher degree than the matrix team, the product team is likely to build a unified product, to have better collaboration across disciplines, and to run a lower risk of an “us-versus-them” culture.

Another advantage of the product team over the matrix team is that decision making becomes much easier as there is only a single line manager involved when a decision needs to be taken or impediment needs to be escalated.

All this usually contributes greatly to the team’s ability to iterate fast and launch new product features quickly — and it contributes positively to the team’s autonomy.

A caveat is that many of these strengths may be nullified if the team has strong dependencies outside the team’s control. These dependencies could be organizational (like external reviews or approvals) or technical (like an architecture that is a big ball of mud, where any change to the codebase can have side effects anywhere else, so all teams must coordinate their work).

Weaknesses

A serious risk with product teams — especially compared to technology teams — is that they may pay less attention to technical excellence.

There can be several reasons for this:

  1. Engineers may become too focused on market success at the expense of engineering excellence. Especially if there is a strong and opinionated product manager or customer.
  2. Each team may become a silo (reporting to a single line manager), and there can only be so many senior engineers within a single team. So, for example, if the team only has a single junior Android engineer, who will make sure that the quality of his or her code is satisfactory? Or that he or she knows the best Android blogs to follow?

Similarly, there is also a higher risk of code duplication. That is, several product teams may code the same functionality in their individual codebases (which may or may not be a problem depending on your belief system).

Another risk compared to matrix teams is that it may become more difficult to move people between teams. When an engineer changes teams in the matrix, he or she keeps the same line manager. But in the product team pattern, he or she will change line manager as well, which may be a major change. So there may be more resistance: the engineer might like his or her current line manager and not want to start all over with a new one, and some managers have empire-building tendencies and may be less willing to “give away” an engineer for good. The consequence may be that the company is not allocating its people to its highest priorities or biggest opportunities.

Another risk compared to technology teams is that recruitment may be tougher. It is easier to explain to an Angular engineer why it would be great to be part of an Angular team than to be part of a Life Insurance product team. Even with the matrix organization you can at least tell the engineer that he or she will report to a manager well-versed in his or her area. You can mitigate this by explaining why the product area is interesting from a technical point of view or important to society at large.

Some companies brand their product teams (at least in job ads) as full-stack teams, and given that it is cool to be a full-stack developer, the thinking is that it will be easier to recruit people for a full-stack team than for a Life Insurance team.

From a manager perspective, my experience (having been both a development manager and an engineering manager) is that being an engineering manager is the more demanding job. It is not that the job is difficult from a technical point of view; it is just that the responsibility is broader. You will essentially become a mini VP of Engineering for a small development department.

You will also have a more direct impact on the business, which you cannot shy away from. That is, as a development manager (for a technology team) you can say that the product is perfect from a technical point of view and it is not your fault that it doesn’t sell. Due to the broader scope of the role, there are also many more things that can go wrong: things will fall outside of your primary area of expertise, but you will be responsible for them anyway.

Finally, as a manager your technical skills will most likely erode faster than in the other team patterns. You will be responsible for multiple technologies, such as backend, frontend, and data — and have people management on top of that. The rapid pace of technological progress only accelerates this: for example, you were an expert in AngularJS, and then they released Angular 2 and all your hard-earned skills became obsolete. And this is not only happening in one layer of the stack, but in all layers, so keeping up with everything can become pretty tough.

Given this broad scope of responsibility for the engineering manager — from people management to technical leadership — some companies keep the product team but split the engineering manager role into two: One person will be responsible for technical leadership (i.e., a lead engineer) and one person will be responsible for people management (i.e., a people manager).

This pattern, where the engineering manager role is split into two, will be the topic for the next post in this blog series. Stay tuned!

Team Pattern #2: The Matrix Team

This is the third post in my blog series on team patterns, and this time we will look at the matrix team pattern.

A matrix team is a temporary product (or project) team, which is made up of specialists from different functional areas. The idea with the cross-functional nature of the team is to increase collaboration between different functions to create better products and faster releases.

An old-school example of this team pattern is Microsoft Solutions Framework (MSF), which was hot back when Microsoft ruled the (software) world:

A feature team is usually focused on a product area and will typically last for the duration of a product release or longer. For example, at Microsoft you would have multiple feature teams working on Microsoft Excel, and one of them would focus on Excel Macros.

An example of the members of a feature team could be a Program Manager (responsible for functional specifications and project management within the team), 4 developers, and 2 testers. The Program Manager will tell the developers and testers which feature is the most important to work on and handle the planning and coordination of the feature team’s work.

However, developers report to a development manager who provides guidelines on how they should do their job (e.g., job descriptions, development process, engineering practices, coding standards) and is responsible for people management (e.g., promotions, training, moves to a new team) — and it is the same for the other roles: program managers report to a group program manager, and testers report to a test manager.

The underlying idea is that it will encourage cross-functional collaboration when specialists are literally on the same team, but the specialists will continue to report to a functional manager who is an expert in their area of expertise.

Tuning the Matrix

The most common parameters to tune in this team pattern are the influence of the line manager on the team’s work (degree of team autonomy) and the duration of the matrix team (short-lived versus long-lived).

Yammer: Very short-lived matrix teams

A modern variation of this team pattern is used by Yammer, a social network for enterprises. A key difference between Yammer and Microsoft is that Yammer continually deploys new features to production and doesn’t have major product releases like Microsoft used to have for its shrink-wrapped products.

Yammer’s developers share ownership of the entire codebase, so there is no such thing as “my code” or “my module”. Each time a task needs to be performed on the codebase, they establish a temporary team for that specific task. When the task is done, the code is released to production, the task force is disbanded, and the developers are free to join new ad-hoc teams to address new tasks.

The thinking is that this is highly agile: people are not limited to working on a single product area but can quickly go wherever the need for them is biggest.

The development manager is responsible for the developers within his or her technical area, such as Ruby on Rails, Java, or React. But the development manager no longer defines guidelines for how the developers should work; he or she is instead a coach focused on turning the developers into top-notch experts in their given technology.

Spotify: Long-lived, autonomous matrix teams

At Spotify, an online music player, they take a different approach and encourage long-lived, stable matrix teams (which they call “squads”). Their reasoning is that it takes a long time to master a product area, such as Spotify Radio, and mastery is needed to build an awesome product for their users.

They also empower their matrix teams and give them greater autonomy than many more traditional matrix organizations.

However, Spotify still has line managers (which they call “chapter leads”), but with the important twist that the line manager is also an active member of a matrix team (for example, as a back-end developer) to make sure he or she stays in touch with reality.

Strengths

The primary strength of the matrix team (compared to the technology team pattern from the previous post) is that it fosters much closer collaboration across functional disciplines: the developers and testers are now part of the same team — especially if the program manager successfully creates a shared vision for the product area.

On top of that, being a cross-functional team means that all the necessary skills are immediately available within the team, so there won’t be the gaps between teams that we saw in the technology team pattern. This improves time to market.

It also encourages developers and testers to learn more about the business domain, such as banking or medicine, that the matrix team is working in. This is really useful when it is a highly complex business domain with lots of counter-intuitive business rules. For simple domains, such as social networks or blogging, it may be less important.

It also brings engineers closer to the business and makes it easier for them to see how they contribute to the success of the company, while they can still seek mastery within their chosen technology and continue to report to a line manager who appreciates and understands their technical work. The line manager also enforces alignment and quality across teams, and gives the engineers a second opinion (and a supporter) in case there is a powerful and persuasive program manager on the team or they feel uncomfortable with decisions taken within the team.

The matrix organization also scales well and can be used for delivering very large products. Microsoft released many of their greatest hits, such as Windows and Microsoft Office, using this team pattern. They were even able to compete with young startups, such as Netscape, while using this model. Obviously, Microsoft used some dirty business tricks to win the browser war against Netscape, but they would not have been able to compete with Netscape if they had not been able to keep up with Netscape’s development speed.

Weaknesses

In theory, a matrix team has a high degree of autonomy, but in practice multiple line managers will often enforce controls that limit the team’s autonomy and the team will need to consult with the line managers before trying anything too radical.

There is also a risk that the work process inside the team will turn into small waterfalls with extensive handovers between the disciplines inside the team. This can happen when the line manager is not actually part of the team but defines the process that his or her people must follow within it.

There is also a risk that the line managers may not see the big picture and start to suboptimize for their functional area. For example, the test manager wants to introduce NASA-like quality controls, while the customers are actually happy with the current quality level and are much more interested in getting new features quicker (at the current quality level).

Many developers who have worked in a matrix team feel like they have two managers (i.e., the development manager and the program manager), and they often receive conflicting signals about what is important. For example, the program manager says that the developer can skip the unit testing to meet the deadline, but the development manager says that unit tests must be written for all new code — and the developer is caught in the middle. Unclear or overlapping responsibilities are a frequent source of conflict and frustration in a matrix organization.

Decision making related to how the team works may even turn into a lengthy process, as multiple line managers may need to be involved in a single decision. For example, the development manager wants to introduce static code analysis (and pay down the technical debt it reveals), which should be a pure development activity. But the program manager feels that it will delay the development activities already on the team’s roadmap, so she wants to be involved. The test manager feels that it is an initiative related to quality, and hence that he should have a say in it and incorporate it as part of an overall test strategy.

To address these weaknesses, some engineering organizations introduced product teams: they would continue to organize teams around product areas to harvest the benefits of cross-functional teams, but to boost the team’s autonomy and speed up decision making, they decided to drop the matrix organization (with multiple line managers) and instead have all the people on a product team — regardless of their specialties — report to the same line manager.

We will look at this pattern in the next post in this blog series on team patterns. Stay tuned!

Team Pattern #1: The Technology Team

This is the second post in my blog series about team patterns, and it’s about the technology team pattern.

A technology team is focused on a technical area, such as front-end or back-end, and all members of the team are specialists in that area. So there will be no product managers, QA specialists, or engineers specialized in other technologies on the team. All members of the team report to a manager who is also a specialist in this particular technical area.

For example, it could be a mobile team with a number of mobile developers reporting to a mobile development manager:

When there are multiple technology teams, such as a mobile team and a back-end team, there is usually very little (if any) code shared between the teams. This is because the code is often written in different languages/frameworks (e.g., Swift versus Python) and exists in different code repositories. The interaction between the teams will often happen through REST APIs or similar.

A real-world example of this team pattern is early Instagram (around 2015), where they had split their engineering organization into three development teams (they eventually moved away from this setup, but more about that in a later post):

The reporting lines in such an organization are based on expertise. In other words, a mobile developer will not report to the back-end development manager, but to the mobile development manager.

The manager in such an organization is likely to be a senior engineer who has been promoted into management, and now also takes care of people management on his or her team. The manager may still be writing code (or at least have the ability to do so). It is also likely that the manager will handle project management for the team’s work and coordination with other teams.

The rock star in such an engineering organization is likely to be a technical specialist measured by some technical standard, such as mastering all the advanced features of the chosen language, or writing the most elegant, concise code.

Strengths

The primary strength of the technology team is its technical mastery, which is likely to be higher than in any of the other team patterns which we will explore in the following posts in this blog series.

The team’s codebase is likely to be of a high quality and take full advantage of the latest advancements within the chosen technology — and there is likely to be little technical debt.

It can also be easier to recruit technical experts for such a team. For example, if an engineer is intensely passionate about Django then the idea of working in a Django team, reporting to a Django manager and being surrounded by Django engineers is pretty attractive by default.

Finally, the team’s manager is likely to be highly competent in the work that the team performs, which is key to high job satisfaction according to recent research. This also means that the manager can evaluate the engineers based on merit, and not some measure which can be easily faked (like who stays the longest in the office). The manager will also be able to provide detailed coaching, such as how to write better code, and notice when an engineer is ready to be promoted to the next level.

Weaknesses

A common problem for engineering organizations which use the technology team pattern is that time to market (for new features) tends to be slow.

The reason is that one team may finish its part of the feature fast, but the next team might be busy with something else and cannot pick up the task any time soon — as shown in the diagram below:

This means that work queues up between the teams. The cost of a feature may not be higher in actual development time, but it can be very expensive in calendar time.

This problem should not be underestimated. Time to market is hugely important for most businesses. In Lean thinking, unfinished features are expensive inventory that costs the company money, because they don’t generate business value until they are in production and being used by end-users.

This team pattern will also nudge you towards phased (or waterfall) development where each team finishes its part of the work before passing it on to the next team. This will discourage iterative development and feedback loops — and when mistakes start to occur due to limited communication between the teams, the handovers are likely to become elaborate and time consuming — and it can even lead to a destructive “us-versus-them” mentality between the teams.

In an attempt to overcome the weaknesses of technology teams, some engineering organizations introduced cross-functional matrix teams. The thinking was that this would lead to improved collaboration between functions (e.g., product management, development, testing) because they would now literally be the same (matrix) team. On top of that, the expectation was also that time to market would improve, because when a matrix team takes on a new feature it will have all the necessary skills inside the team to finish it!

So in the next post in this blog series, we will explore the matrix team pattern. Stay tuned!

Team Patterns: How to Structure an Engineering Team?

This is the first post in a blog series about how top-performing software companies are organizing their engineering teams.

You don’t need to network a lot before you realize that software companies are organizing their teams in very different ways.

But after you’ve heard about a few dozen companies, you start to detect patterns. You start to realize that even though there are lots of small variations, their team structures can all be boiled down to a handful of general patterns.

In my experience there are four general team patterns that most companies follow. Yes, they have tweaked them to fit their circumstances, but the overall idea behind the pattern remains the same:

  1. Technology Team: The team is formed around a technology, such as Android. For example, a team of mobile developers who build and maintain a mobile app.
  2. Matrix Team: The developers report to a Development Manager, but they are “lent out” to cross-functional product or project teams where they do their daily work.
  3. Product Team: The team is oriented around a product area, such as billing. It’s cross-functional, but all people on the team, regardless of their specialization, report to the same line manager.
  4. Self-Managed Product Team: The team is oriented around a product area. But the management of the team is divided into technical leadership, typically handled by an Engineering Lead on the team, and people management, typically handled by an Engineering Manager outside the team.

My plan is that each post in this blog series will explore one of these team patterns in depth to identify its strengths and weaknesses — and then spice the whole thing up with plenty of examples of top-performing companies who are actually using the team pattern in the real world.

The blog series will continue over the next month. I’ll add links to the posts in the series below when they have been published:

  1. Team Patterns: How to Structure an Engineering Team?
  2. Team Pattern #1: The Technology Team
  3. Team Pattern #2: The Matrix Team
  4. Team Pattern #3: The Product Team
  5. Team Pattern #4: The Self-Managed Product Team

You can subscribe to my blog to automatically get an email when the next post is published.

If you have any questions or comments about this blog series, feel free to send me an email.

How to Model Workflows in REST APIs

RESTful Web Services are awesome for performing basic CRUD operations on database tables, but they get even more exciting when you realize that they can also be used for modeling workflows and other advanced stuff.

To show this point, let’s take the blog post workflow below and expose it as a RESTful Web Service:

There are lots of ways you can implement this, so in the following sections we will try three different approaches and take a look at the pros and cons of each of them.

Let’s get started!

1. Use an Attribute for the Workflow’s State

The easiest way to model the blog workflow is just to store the workflow state as an attribute on the blog post resource:

{
  "id":54301,
  "status":"Draft",
  "title":"7 Things You Didn’t Know about Star Wars",
  "content":"George Lucas accidentally revealed that…"
}

If a client wants to move the blog post to a new state, it just updates the status attribute to the desired state.

It’s an easy solution and the popular blogging software WordPress is basically using this approach in their REST API.

But when you start to dig deeper into it, you realize that it comes with some serious drawbacks.

The first is that the front-end engineer who writes the client code is forced to search through the API documentation to see what values can be used in the status attribute. This conflicts with the idea that REST Services should be self-describing and not rely on out-of-band documentation.

But this drawback can be fixed by adding a metadata service where the client can get a list of all legal values for the status attribute.

A more serious drawback is that the engineer also needs to look in the API documentation to see what workflow transitions are possible (i.e. can you jump directly from Draft to Published?) and code all these workflow rules in the client code.

This means that business logic is leaking into the client, so if there are many different types of clients (mobile apps, websites, etc.) then each client will be forced to re-implement the workflow logic in their own code, which is not a cost-effective way to do software development.

But even worse, it breaks the fundamental software engineering principle of Don’t Repeat Yourself (DRY) and it violates the separation of concerns between the client and server, which makes it even harder to maintain and evolve the software.
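To make this drawback concrete, here is a sketch (in TypeScript; the state names and rules are my assumptions based on the blog workflow in this post) of the transition logic that every single client would have to hard-code under this approach:

```typescript
// Hypothetical transition rules for the blog workflow in this post.
// Under approach 1 the server does not expose these rules, so every
// client has to hard-code its own copy of them.
const allowedTransitions: { [state: string]: string[] } = {
  "Draft":     ["Review"],
  "Review":    ["Published", "Draft"],  // publish, or reject back to Draft
  "Published": [],
};

// Returns true if the workflow allows moving from one state to another.
function canTransition(from: string, to: string): boolean {
  return (allowedTransitions[from] || []).indexOf(to) !== -1;
}
```

If the workflow changes on the server, every copy of this table silently drifts out of sync, which is exactly the DRY violation described above.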

2. Use Hyperlinks for Workflow Transitions

So what should you do if you have transition rules in your workflow, but you don’t want all the bad stuff I mentioned in the previous section?

An alternative approach is to model each workflow transition as a subresource and let clients use HTTP’s POST method on these subresources to perform the transition. This is inspired by the action pattern in PayPal’s API Standards.

On top of the subresources, you add hyperlinks in the response to let the client know what workflow transitions are possible in the current state.

So with this approach, the response will look like this:

{
  "id":54301,
  "status":"Draft",
  "title":"7 Things You Didn’t Know about Star Wars",
  "content":"George Lucas accidentally revealed that…",
  "_links": {
    "sendToReview": {
      "description":"Send to Review",
      "href":"/posts/54301/review"
    }
  }
}

The smart thing is that the _links section is automatically updated to show what workflow transitions are available in the current state. So in the example above, you can see that the blog post is in the Draft state, and from there you can make the Send to Review transition to move the post to the Review state.

So if you call POST /posts/54301/review, you move the blog post to the Review state, and then the server will update the _links section to show what workflow transitions are possible in this new state:

{
  "id":54301,
  "status":"Review",
  "title":"7 Things You Didn’t Know about Star Wars",
  "content":"George Lucas accidentally revealed that…",
  "_links": {
    "publishPost": {
      "description":"Publish Post",
      "href":"/posts/54301/publish"
    },
    "rejectPost": {
      "description":"Reject Post",
      "href":"/posts/54301/reject"
    }
  }
}

The benefit of this solution is that clients no longer need to implement workflow logic in their own code, which means that business logic is no longer leaked into them.

It also reduces the risk of poorly constructed links in the client code — which is a frequent cause of defects in REST clients — because the clients get the links from the server.

Another really cool thing is that if the client needs to display a Next Action menu, it can simply loop through the values in the _links section and use them as menu items.
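For example, a client could build such a menu with just a few lines of code (a sketch assuming the response shape shown above):

```typescript
// Shape of one entry in the _links section shown in the responses above.
interface Link {
  description: string;
  href: string;
}

// Turn a _links section into menu items for a "Next Action" menu.
function buildMenu(links: { [rel: string]: Link }): { label: string; href: string }[] {
  return Object.keys(links).map(rel => ({
    label: links[rel].description,
    href: links[rel].href,
  }));
}

// The _links section from the Review-state response above:
const menu = buildMenu({
  publishPost: { description: "Publish Post", href: "/posts/54301/publish" },
  rejectPost:  { description: "Reject Post",  href: "/posts/54301/reject" },
});
```

The client never has to know which transitions exist; it simply renders whatever the server sends.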

Finally, it also fits nicely with REST’s goal of self-discovery and HATEOAS.

The drawback compared to the approach in the previous section is that the response is bigger, because it includes the _links section. Another drawback is that the interaction between the client and server has become a little more chatty, because you now need two requests if you want to update a blog post and send it to review.

But I think both of these drawbacks are pretty minor compared to what you get out of it.

A more serious concern is that if you have an advanced workflow, you might end up with a massive number of subresources (i.e. one for each workflow transition), which might look a bit messy.

Another concern is that the server will need to know all states at design time to create subresources for them. This won’t be a problem for most workflows, but if you offer sophisticated workflow functionality where users can customize the states and transitions to fit their special needs it could be problematic.

3. Use a Subresource for Workflow Transitions

So how do you model customizable workflows?

For inspiration, let’s take a look at the issue-tracking tool JIRA, which allows (admin) users to configure their own workflows at runtime. How do they expose this in their REST API?

On their issue resource, they added a transitions subresource where the client can get a list of possible transitions from the issue’s current state. The client can then take one of these possible transitions and POST it to the same subresource to move the issue to the new state.

I like their approach, but think it’s a little naughty that they use the same subresource for two different things (i.e. list potential states, and change the current state).

So to use this approach for our blog post workflow, you can add a subresource that lists the potential transitions (you could also call it “transactions” or “actions” depending on your preferences):

GET /posts/{id}/availableTransitions

For a blog post in the Review state, it will return something like this:

[
  {"transition":"Publish Post"},
  {"transition":"Reject Post"}
]

If you want to do a transition, you grab one of the possible transitions from the array and POST it to the transitions subresource:

POST /posts/{id}/transitions

{
  "transition":"Publish Post"
}
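A nice property of this design is that the server can keep the entire workflow as plain data and interpret it at request time, which is what makes user-configurable workflows possible. Here is a minimal sketch (the data structure and function names are my own invention, not JIRA’s):

```typescript
// A workflow kept as data on the server, so (admin) users can
// reconfigure the states and transitions at runtime.
interface TransitionDef {
  name: string;
  from: string;
  to: string;
}

const blogWorkflow: TransitionDef[] = [
  { name: "Send to Review", from: "Draft",  to: "Review" },
  { name: "Publish Post",   from: "Review", to: "Published" },
  { name: "Reject Post",    from: "Review", to: "Draft" },
];

// Backs GET /posts/{id}/availableTransitions
function availableTransitions(workflow: TransitionDef[], state: string): string[] {
  return workflow.filter(t => t.from === state).map(t => t.name);
}

// Backs POST /posts/{id}/transitions: validate the transition
// against the current state and return the new state.
function applyTransition(workflow: TransitionDef[], state: string, name: string): string {
  const match = workflow.filter(t => t.from === state && t.name === name)[0];
  if (!match) {
    throw new Error("Illegal transition '" + name + "' from state '" + state + "'");
  }
  return match.to;
}
```

Because the workflow is just data, updating it requires no new subresources or redeployments.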

A cool thing about this approach is that if you need more advanced workflow transitions, you can add more attributes to the transition subresource. For example, when you transition to the Review state, you might also want to specify a reviewer and a comment:

POST /posts/{id}/transitions

{
  "transition":"Send to Review",
  "reviewer":"Han Solo",
  "comment":"Plz review this faster than you did the Kessel Run!"
}

Another interesting possibility is that if the history of transitions is important (for audit), you could enable a GET method on the transitions subresource to get a full list of all transitions that have been performed on the resource.

You could also decide to execute the transitions asynchronously by returning a 202 Accepted status code and a link where the client can poll for the latest status. This could be useful in a money transfer between banks where the actual transfer happens in a nightly batch.

Prakash Subramaniam even goes as far as playing with the idea that you should drop PUT altogether, and only allow changing resources through a transition subresource. The good thing is that it neatly separates the interface into a query and command part (as per the CQRS pattern) and you have a strong audit trail of what has happened to the resource.

The drawback is that there are (many) scenarios where it’s total overkill to perform a transition. For example, to edit the title of a blog post. A live blog post editor would probably end up performing so many transitions that it would overwhelm any kind of history log. But for something like a bank account, it makes good sense to make each update a transition, so you have a complete audit trail.
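To see why the audit trail is so strong under this idea, here is a toy sketch (my own, not from Prakash’s post) of a bank account that can only be changed through transitions:

```typescript
// Toy command-style resource: the balance can only change through
// recorded transitions, so the history is a complete audit trail.
interface AccountTransition {
  name: string;
  amount: number;
}

class Account {
  private history: AccountTransition[] = [];

  // Backs POST on the transitions subresource.
  applyTransition(t: AccountTransition): void {
    this.history.push(t);
  }

  // The current state is derived by replaying the transition log.
  get balance(): number {
    return this.history.reduce((sum, t) => sum + t.amount, 0);
  }

  // Backs GET on the transitions subresource (the audit view).
  get transitions(): AccountTransition[] {
    return this.history;
  }
}
```

Every change to the resource is, by construction, also an entry in its history.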

So which one should I pick?

So we came to the unavoidable question: What approach is the best one?

As always, it depends on the context, but here are some quick guidelines:

  1. State Attribute: There are no restrictions on the transitions. You can go from any state to any state at any time. The states are basically nothing more than a list of values.
  2. Transition Links: There are limits to which states you can go to depending on the current state.
  3. Transition Subresource: The workflow is configurable by users, so the states and transitions among them are not fixed, but can be changed at runtime.

So use the Einstein rule (i.e. make things as simple as possible, but not simpler) and start with the first approach, and only consider number two or even number three if there is an undeniable need for them.

That’s all for now. Thank you for reading!

4 Must-Read Articles on Developer Experience (DX)

We live in an API Economy where APIs are more important to business success than ever before. Companies are digitalizing their businesses at a breathtaking rate, and they use APIs to integrate customers and partners into their new digital business processes.

A result of this is that Developer Experience (DX) — which is all about using User Experience (UX) techniques to make life easier for third-party developers calling your public APIs — is fast becoming essential to remain competitive in the digital economy.

But how in the world do you apply UX techniques, such as personas, prototypes and usability testing, to the developer experience?

I personally think this is a really exciting topic and I’ve read hundreds of good articles about the topic, but if I had to choose the best of the best, I would pick these four:

  1. Why API Developer Experience Matters More Than Ever — This is the best intro that I have seen to developer experience.
  2. User Personas for HTTP APIs — A collection of example personas for a REST API. It is really an eye opener to see how many different types of users there can be of a single API, and how different their needs are.
  3. Patterns of Developer Experience — Great post about the principles and patterns for effective DX. The DX Pattern collection at the end of the post is a must-have reference library.
  4. Building Effective API Programs: Developer Experience (DX) — If you want to go all the way and create a Developer Program then this article lists what such a program should include and gives an example of what it looks like in the real world.

Happy reading!

Creating a Web App in Angular 2.0

Angular is a highly popular framework for developing dynamic web applications. It’s developed and maintained by Google and a community of open-source developers.

The framework greatly simplifies front-end development work, increases developer productivity, and it’s even fun to program in!

What’s so different about Angular 2.0?

The next major version of Angular will not just be an upgrade, but a complete rewrite of the entire framework, and it will not be backwards compatible with previous versions of Angular.

It’s never a popular decision to break backwards compatibility, and the Angular community has not held itself back from telling Google this… repeatedly!

So why did the “don’t be evil” folks at Google decide to do it anyway?

The short story is that the web has evolved a lot since Angular was conceived in 2009, so there are a lot of new web standards (e.g. ECMAScript 6, TypeScript, Web Workers, Web Components) that Angular v1 is not using…

So instead of trying to force these new standards into an old framework, the Angular team thought it would be better to redesign the framework from scratch to truly embrace the new standards and reap all the benefits (e.g. speed of development, better support for large code bases, faster applications) and at the same time also rethink some of the less attractive elements of the framework (e.g. Angular v1 has 5 different ways to model a service and nobody really understands why).

Goodbye MVC! Hello Components!

One of the really big changes in Angular 2.0 is that it’s no longer based on the MVC architecture, but has moved on to a component-based architecture.

What does that mean?

In a component-based architecture you vertically divide your application into (UI) components. For example, in Facebook the Timeline could be one component, and the Chat Sidebar could be another one.

The idea is that each component contains all the stuff you would normally put in the different parts of the MVC.

Components can also be nested. So the Timeline component could be a root component, and then under it there could be a component for showing the posts on the timeline, and another component for showing a box to post new messages to the timeline.

Besides components, there are also modules for grouping a number of related components. For example, to represent a functional area within your application.

Finally, there are services (not to be confused with web services) that provide advanced functionality to the components. For example, handling communication back and forth with a REST Service.

Our Simple App

To get some hands-on experience, I thought it’d be fun to re-implement the Movie App in Angular 2.0. I have previously implemented the same app in Oracle JET, and Sandeep Panda originally coded it in Angular 1.3.

The Movie App is a simple CRUD web app where you can maintain your movie database. While it’s obviously a demo app, which could easily be beaten by an Excel spreadsheet, I think that for demo purposes it has several interesting features:

  • A Single Page App (SPA) with multiple states (i.e. pages).
  • Reuse of functionality across states.
  • Passing parameters between states.
  • Integration with RESTful web services.

You can see a screenshot below of the Movie App implemented in Angular 2:

I have used Twitter Bootstrap for the (minimal) UI. The simple reason is that Material Design (which is Google’s CSS/UI framework) for Angular 2.0 is still in alpha state, which is still a bit too early for me… Yes, I’m a chicken 😉

Getting Started

I used Angular 2.0 Release Candidate 5, which was the latest version at the time of writing.

For writing the code, I used the (free) Visual Studio Code editor, because it has excellent support for TypeScript, which is the default language for Angular 2.0.

If you want to run the Movie App example from this post, just follow these steps:

  1. Download and install Node.js and NPM (if you don’t already have them). I’m using Node.js 4.5.0 and NPM 2.15.9.
  2. Download Angular2-movie-app.zip and extract it to a local folder.
  3. Open a command prompt and go to the angular2-movie-app folder.
  4. Run npm install in the command prompt.
  5. Run npm start in the command prompt.

And the Movie App should be automatically opened in your browser.

Designing the Movie App in Angular 2.0

The code structure of the Movie App is shown below:

/app
  main.ts
  movie.component.ts
  movie.module.ts
  movie.routing.ts
  /movies-overview
    movies-overview.component.ts
    movies-overview.component.html
  /movie-creator
    movie-creator.component.ts
    movie-creator.component.html
  /movie-editor
    movie-editor.component.ts
  /movie-viewer
    movie-viewer.component.ts
    movie-viewer.component.html
  /shared
    movie.ts
    movie-data.service.ts 

The app is placed in a folder called app and inside it there’s a main.ts class as the entry point to the application.

There is also the movie root component (movie.component.ts), a routing file (movie.routing.ts) that handles navigation between the components, and a module (movie.module.ts) for storing the components.

There are four subfolders for components (i.e. /movies-overview, /movie-creator, /movie-editor, and /movie-viewer).

The /shared subfolder is for stuff that’s used by several components. In this case, it’s the movie.ts file, which is a class that represents a movie, and movie-data.service.ts, which is a service class that handles communication with the REST service.

The Movie Class

The first step in creating the Movie App is to create a Movie class (movie.ts):

export class Movie {
  _id: number;
  title: string;
  releaseYear: string;
  director: string;
  genre: string;
}

This step was not needed in Angular v1, but the benefit of explicitly defining the class (and its properties) is that it gives the IDE the information it needs to provide auto-completion, compile-time checking and other cool stuff.

Calling a REST Service

The next step is to create the MovieDataService class (movie-data.service.ts), which handles communication with the Movie REST Service.

import { Injectable } from '@angular/core';
import { Headers, Response, Http } from '@angular/http';

import 'rxjs/add/operator/toPromise';

import { Movie } from './movie';

@Injectable()
export class MovieDataService {
  private moviesUrl = 'http://movieapp-sitepointdemos.rhcloud.com/api/movies';

  constructor(private http: Http) { }

  getMovies(): Promise<Movie[]> {
    return this.http.get(this.moviesUrl).toPromise().then(response => response.json() as Movie[]).catch(this.handleError);
  }

  getMovie(id: number) {
    return this.getMovies().then(movies => movies.find(movie => movie._id === id));
  }

  private post(movie: Movie): Promise<Movie> {
    let headers = new Headers({'Content-Type': 'application/json'});
    return this.http.post(this.moviesUrl, JSON.stringify(movie), {headers: headers}).toPromise().then(res => res.json().data).catch(this.handleError);
  }

  private put(movie: Movie) {
    let headers = new Headers();
    headers.append('Content-Type', 'application/json');

    let url = `${this.moviesUrl}/${movie._id}`;
    return this.http.put(url, JSON.stringify(movie), {headers: headers}).toPromise().then(() => movie).catch(this.handleError);
  }

  delete(movie: Movie): Promise<any> {
    let url = `${this.moviesUrl}/${movie._id}`;
    return this.http.delete(url).toPromise().catch(this.handleError);
  }

  save(movie: Movie): Promise<Movie> {
    if(movie._id) {
      return this.put(movie);
    } else {
      return this.post(movie);
    }
  }

  private handleError(error: any) {
    console.log('An error occurred: ', error);
    return Promise.reject("error message: " + error);
  }
}

The code itself is pretty straightforward. It provides some CRUD methods to call the REST service, and each method returns a promise, so service calls can be asynchronous.

But I really missed the nice $resource service in earlier versions of Angular where we got the same functionality in just a few lines of code:

angular.module('movieApp.services', []).factory('Movie', function($resource) {
  return $resource('http://movieapp-sitepointdemos.rhcloud.com/api/movies/:id', { id: '@_id' }, {
    update: {
      method: 'PUT'
    }
  });
});

It’s not a big deal to write the service class by hand, it just makes the road towards the code you actually want to write a little bit longer. But hopefully a $resource replacement is on the way for Angular 2.0.
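Until then, here is a rough sketch of what a tiny, generic replacement could look like (my own toy code, not an official Angular API): a resource class that wraps any fetch-like function.

```typescript
// Toy generic resource helper in the spirit of AngularJS's $resource.
// It wraps any fetch-like function, so it is framework-agnostic.
type HttpFn = (url: string, init?: { method?: string; body?: string }) => Promise<any>;

class Resource<T extends { _id?: number }> {
  constructor(private baseUrl: string, private http: HttpFn) {}

  // GET the whole collection.
  query(): Promise<T[]> {
    return this.http(this.baseUrl);
  }

  // POST for new items, PUT for existing ones (mirrors the save() logic above).
  save(item: T): Promise<T> {
    const hasId = item._id !== undefined;
    const url = hasId ? this.baseUrl + "/" + item._id : this.baseUrl;
    return this.http(url, { method: hasId ? "PUT" : "POST", body: JSON.stringify(item) });
  }

  // DELETE a single item.
  remove(item: T): Promise<any> {
    return this.http(this.baseUrl + "/" + item._id, { method: "DELETE" });
  }
}
```

With something like this in place, a data service shrinks to roughly `new Resource<Movie>('/api/movies', http)`.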

Develop a Component

Now that we have the Movie and MovieDataService classes ready, we can start developing components.

Let’s take a closer look at MovieCreatorComponent (movie-creator.component.ts), which is used for adding new movies to the app.

But before we dive into the code, let’s have a quick look to see how the component looks from a UI perspective:

From a technical point of view, the component is just a class with the @Component decorator (kind of like annotations in Java):

import { Component } from '@angular/core';
import { Router } from '@angular/router';

import { Movie } from '../shared/movie';
import { MovieDataService } from '../shared/movie-data.service';

@Component({
  templateUrl: 'app/movie-creator/movie-creator.component.html'
})
export class MovieCreatorComponent {
  movie: Movie = new Movie();

  constructor(private router: Router, private movieDataService: MovieDataService) { }

  saveMovie() {
    this.movieDataService.save(this.movie);
    this.router.navigate(['/movies']);
  }
}

In the constructor, we say that a router (for navigating to other components) and a movieDataService (for calling the REST Service) should be injected into the component, and stored in two private variables, which we don’t need to declare explicitly.

In the saveMovie method, we save the movie using the REST Service and navigate back to the movie overview.

In the @Component decorator, we use the templateUrl property to say what template should be used for the component.

You can see the content of movie-creator.component.html below:

<div class="form-group">
  <label for="title">Title</label>
  <input type="text" [(ngModel)]="movie.title" class="form-control" id="title" placeholder="Movie Title Here"/>
</div>
<div class="form-group">
  <label for="year">Release Year</label>
  <input type="text" [(ngModel)]="movie.releaseYear" class="form-control" id="year" placeholder="When was the movie released?"/>
</div>
<div class="form-group">
  <label for="director">Director</label>
  <input type="text" [(ngModel)]="movie.director" class="form-control" id="director" placeholder="Who directed the movie?"/>
</div>
<div class="form-group">
  <label for="genre">Movie Genre</label>
  <input type="text" [(ngModel)]="movie.genre" class="form-control" id="genre" placeholder="Movie genre here"/>
</div>
<div class="form-group">
  <input (click)="saveMovie()" type="submit" class="btn btn-primary" value="Save Movie"/>
</div>

The template is basically HTML with a couple of Angular extensions:

We use [(ngModel)] to bind an HTML input field to a property in the component class, so that the property in the component class is automatically updated when a user enters something in the input field.

For example, [(ngModel)]="movie.title" will bind the HTML input field to the movie.title property in the component class, so when a user enters a title it is automatically stored in the movie.title property.

In the same way, we use (click) to bind an HTML button to a method in the component class.

For example, (click)="saveMovie()" makes sure that when a user clicks the “Save Movie” button, the component’s saveMovie() method is automatically called.

In my opinion, the component approach feels really nice to work with, but it’s hard to explain why, it just feels “brain-friendly”. Maybe it’s because the code structure follows the UI structure you see on the screen.

Conclusion

My first impression of Angular 2.0 was that I was surprised by the upfront costs (e.g. installing npm packages, setting up configuration files) compared to the earlier versions where I just linked to a CDN and then I was ready to start coding.

On top of that, I also needed to create classes and hand-code the calls to the REST Service, which also felt like a hassle when your fingers are aching to start coding all the fun stuff!

But once you have made this initial investment, it starts to pay off (easier to rename classes, typos in the code are spotted instantly, really nice and fast auto-completion), which was extremely helpful when I started coding the components.

The components were the most positive surprise. It just felt like a much nicer way to structure the code compared to the old MVC approach.

If you want to get started with Angular 2, I can recommend the architecture overview document for a quick overview of the framework, the 5 Min Quickstart to learn how to install the framework from scratch, and the Tour of Heroes tutorial as a great way to learn a lot about the functionality that the framework offers.

Write Beautiful REST Documentation with Swagger

Swagger is the most widely used standard for specifying and documenting REST Services.

The real power of the Swagger standard comes from the ecosystem of powerful tools that surrounds it.

For example, there’s Swagger Editor for writing the Swagger spec, Swagger Codegen for automatically generating code based on your Swagger spec, and Swagger UI for turning your Swagger spec into beautiful documentation that your API users will love to read.

Why use Swagger?

But why not use another standard (like RAML) or simply open your favorite word processor and start hitting the keys?

There are 5 good reasons for using Swagger:

  1. Industry Standard: Swagger is the most widely adopted documentation and specification standard for REST Services. This means that it’s already used in real production APIs, so you don’t have to be the beta tester. It also means that the API user probably already has experience with Swagger, which dramatically reduces the learning curve.
  2. Designed for REST: Swagger is really easy to use, because it’s a single-purpose tool for documenting REST Services. So most of the complicated things, like security or reusing resource definitions across several methods, are already handled gracefully by the standard.
  3. Huge Community: There’s a great community around Swagger, so when you face a problem, you can usually just Google the solution.
  4. Beautiful Documentation: The customer-facing documentation looks really nice. Plus there is a built-in way to actually call the services, so the API user won’t need to use an external tool to play around with the services, but can just do it inside the documentation.
  5. Auto-generate Code: You can auto-generate client and server code (the interface part) based on the Swagger spec, which ensures that the two stay consistent. You could even build your own tools.

How to get started with Swagger?

To start writing a Swagger spec, you simply open the online Swagger Editor and start writing according to the Swagger specification.

You can see a screenshot of the Swagger Editor below. You write your spec in the left-hand side, and you can see the resulting documentation in the right-hand side:

For this post, I’ve created a Swagger specification for the Movie REST Service, which Sandeep Panda developed as part of his post on Angular’s $resource.

If you want to play with the example I use in this section:

  1. Open the Swagger Editor.
  2. Open the “File” menu, and select “Import URL…”
  3. Enter http://www.kennethlange.com/resources/movie_swagger.yaml in the box.

Now let’s walk through the example spec!

Part 1: General Information

The first thing that you will notice is that Swagger is written in YAML, which is a format that is very easy to read — even for non-technical people.

In the top part of the Swagger specification, you write all the general stuff about your API:

swagger: '2.0'

################################################################################
#                              API Information                                 #
################################################################################
info:
  version: "v1"
  title: REST API for 'The Movie App'
  description: |
    This is a demo Swagger spec for the sample REST API used by The Movie App that Sandeep Panda developed as part of his great blog post [Creating a CRUD App in Minutes with Angular's $resource](http://www.sitepoint.com/creating-crud-app-minutes-angulars-resource/).
    
host: movieapp-sitepointdemos.rhcloud.com
basePath: /api

Here is an explanation of some of the properties:

  • swagger: The version of the Swagger specification that the file uses. For Swagger 2.0 it should always be “2.0”.
  • title: The title of your API documentation.
  • description: A description of your API. It is always nice to include examples.
  • version: The version of your API (remember that for APIs a low version number is always more attractive, because a high number suggests an unstable interface and hence an extra burden on the clients using it).
  • host: The server where your REST API is located.
  • basePath: The path on the server where your REST API is located.

Part 2: REST Services

In the middle part, you define the paths and HTTP Methods.

I have only included PUT below, but you can see the rest in my Swagger file.

################################################################################
#                                    Paths                                     #
################################################################################
paths:
  /movies/{id}:
    put:
      summary: Update a movie
      consumes:
        - application/json 
      produces:
        - application/json 
      parameters:
        - in: path
          name: id
          type: number
          description: The id of the movie you want to update.
          required: true
        - in: body
          name: movie
          description: The movie you want to update with.
          required: true
          schema:
            $ref: '#/definitions/Movie'
      responses:
        200:
          description: The movie has been successfully updated.
          schema:
            $ref: '#/definitions/Message'

Below paths you define a path (e.g. /movies/{id}), and then the HTTP methods (e.g. PUT) that can be used with that path.

  • summary: A short description of the service. There is also a description property for a more lengthy description, if necessary.
  • consumes: The content type of the data that the service consumes (you can have multiple types). The most common is application/json.
  • produces: The content type of the data that the service produces (you can have multiple types). The most common is application/json.
  • parameters: The parameters that the service accepts. Parameters can be located in the HTTP header, the URI path, the query string, or the HTTP request body.
    • in: Where is the parameter located? In the path, in the body, in a header, or somewhere else?
    • name: The name of the parameter.
    • type: The data type of the parameter. The common types are number and string.
    • description: A short, user-friendly description of the parameter.
    • required: Is the parameter required or optional?
  • responses: The possible responses that the service can return.
    • (HTTP Status Code): You first specify the HTTP Status Code (e.g. 200).
      • description: A short description of when this response happens.
      • schema: A definition of the response object (see next section for details).
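Error outcomes are documented the same way, by listing additional status codes under responses. As a purely hypothetical extension of the PUT method above (the original Movie API spec only documents 200), a 404 response could be added like this:

      responses:
        200:
          description: The movie has been successfully updated.
          schema:
            $ref: '#/definitions/Message'
        404:
          description: No movie with the given id was found.

Swagger UI will then list each status code with its description, so the API user can see all the possible outcomes of the service at a glance.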

Part 3: Resource Definitions

In the last part of the Swagger spec, you have shared resource definitions.

Given that the movie resource representation is used in almost all methods, it makes sense to write the resource definition in a single place and reuse it across the methods.

################################################################################
#                                 Definitions                                  #
################################################################################
definitions:
  Movie:
    type: object
    properties:
      _id:
        type: number
        description: A unique identifier of the movie. Automatically assigned by the API when the movie is created.
      title:
        type: string
        description: The official title of the movie. 
      releaseYear:
        type: string
        description: The year that the movie was released.
      director:
        type: string
        description: The director of the movie.
      genre:
        type: string
        description: The genre of the movie.
      __v:
        type: number
        description: An internal version stamp. Not to be updated directly.

Below definitions you define a resource type (e.g. Movie) and then you define its properties:

  • type: The data type of the property. The common ones are string and number. The advanced types are objects and arrays.
  • description: A description of the property.
  • properties: If the data type is an object, you specify the object’s properties below.
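To illustrate the array type, here is a hypothetical MovieList definition (it is not part of the original spec) that reuses the Movie definition through $ref:

definitions:
  MovieList:
    type: array
    items:
      $ref: '#/definitions/Movie'

A collection service like GET /movies could then reference #/definitions/MovieList in the schema of its 200 response, instead of repeating the movie properties inline.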

If you need to define complex JSON objects, you can be inspired by the great examples found in Swagger Editor. You can find them by opening the “File” menu, and selecting “Open Example…”

How to turn your Swagger spec into API Documentation

Once your Swagger spec is stable — and your REST API is operational — you can publish your Swagger spec as customer-facing documentation.

For this purpose you can use Swagger UI, which converts your Swagger spec into beautiful, interactive API documentation (you can see an online example here).

You can download Swagger UI from here. It is just a bundle of HTML, CSS, and JS files that doesn’t require a framework or server-side runtime, so it can be installed in a directory on any HTTP server.

Once you have downloaded it, put your swagger.yaml file into the dist directory, then open index.html and change it to point at your Swagger file instead of http://petstore.swagger.io/v2/swagger.json.

Then you can open index.html in your browser, and see your new beautiful, interactive API documentation:

That’s it! Now you have learned all the basic elements of Swagger. Don’t forget to read the Swagger specification if you really want to become a Swagger expert.