GITBOOK-2: No subject

This commit is contained in:
Cesareo
2024-05-10 09:32:22 +00:00
committed by gitbook-bot
parent 2a0df79e63
commit b1e72ce21d
70 changed files with 529 additions and 676 deletions

View File

@@ -25,7 +25,7 @@ The API Guidelines are split into two main parts:
* [REST APIs Guidelines](rest-api-guidelines/rest.md)
* [Asynchronous APIs Guidelines](asynchronous-api-guidelines/index.md)
The general guidelines section discusses the core principles relevant to any kind of API. The API type-specific section further defines the guidelines specific to a given architectural style or API technique (such as REST, Kafka or GraphQL APIs).
### How to read the Guidelines
@@ -33,7 +33,7 @@ These Guidelines are available for online reading at [GitBook](https://adidas.gi
The CAPITALIZED words throughout these guidelines have a special meaning:
> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
> "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
> this document are to be interpreted as described in RFC2119.
@@ -43,7 +43,7 @@ Refer to [RFC2119](https://www.ietf.org/rfc/rfc2119) for details.
### Validating your API Guidelines against OpenAPI Specification
In the `ruleset.md` file you can find a digest of API Guidelines rules against which you can validate your API description documents. If you are using the OpenAPI Specification as the API description format, you can also leverage the `.spectral.yaml` ruleset to automatically verify your specification's compliance using [Spectral](https://github.com/stoplightio/spectral).
To install Spectral you will need Node.js and a package manager (npm or yarn).
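For example, a typical install-and-lint flow might look like this (the spec file name is illustrative, and the `.spectral.yaml` ruleset is assumed to sit next to it):
```
npm install -g @stoplight/spectral-cli
spectral lint your-openapi.yaml --ruleset .spectral.yaml
```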
@@ -73,13 +73,13 @@ adidas is not responsible for the usage of this software for different purposes
## Last Review
May 2024
## License and Software Information
© adidas AG
adidas AG publishes this software and accompanying documentation (if any) subject to the terms of the MIT license with the aim of helping the community with our tools and libraries which we think can also be useful for other people. You will find a copy of the MIT license in the root folder of this package. All rights not explicitly granted to you under the MIT license remain the sole and exclusive property of adidas AG.
NOTICE: The software has been designed solely for the purpose of providing API design and development guidelines. The software is NOT designed, tested or verified for productive use whatsoever, nor for any use related to high-risk environments, such as health care, highly or fully autonomous driving, power plants, or other critical infrastructures or services.
@@ -90,4 +90,3 @@ For further information open the [adidas terms and conditions](https://github.co
### License
[MIT](https://github.com/adidas/api-guidelines/blob/master/LICENSE)

View File

@@ -7,17 +7,20 @@
* [Introduction](general-guidelines/general-guidelines.md)
* [API First](general-guidelines/api-first.md)
* [Contract](general-guidelines/contract.md)
* [Immutability](general-guidelines/c\_immutability.md)
* [Robustness](general-guidelines/robustness.md)
* [Common Data Types](general-guidelines/d\_data\_types.md)
* [Version Control System](general-guidelines/version-control-system.md)
* [Minimal API Surface](general-guidelines/minimal-api-surface.md)
* [Rules for Extending](general-guidelines/rules-for-extending.md)
* [JSON](general-guidelines/json.md)
* [Security](general-guidelines/security.md)
* [Tooling](general-guidelines/n\_tooling.md)
## REST API Guidelines
* [Introduction](rest-api-guidelines/rest.md)
* [Core Principles](rest-api-guidelines/core-principles/README.md)
* [Core REST Principles](rest-api-guidelines/core-principles/README.md)
* [OpenAPI Specification](rest-api-guidelines/core-principles/openapi-specification.md)
* [API Design Platform](rest-api-guidelines/core-principles/design-platform.md)
* [Design Maturity](rest-api-guidelines/core-principles/design-maturity.md)
@@ -66,35 +69,42 @@
## Asynchronous API Guidelines
* [Introduction to guidelines](asynchronous-api-guidelines/01_introduction/a_introduction.md)
* Basic Concepts
* [Event Driven Architectures](asynchronous-api-guidelines/01_introduction/b_basic_concepts_edas.md)
* [Basic terminology](asynchronous-api-guidelines/01_introduction/c_basic_concepts_terminology.md)
* [Events](asynchronous-api-guidelines/01_introduction/d_basic_concepts_events.md)
* Asynchronous API Guidelines
* [Contract](asynchronous-api-guidelines/02_asynchronous_api_guidelines/a_contract.md)
* [API First](asynchronous-api-guidelines/02_asynchronous_api_guidelines/b_api_first.md)
* [Immutability](asynchronous-api-guidelines/02_asynchronous_api_guidelines/c_immutability.md)
* [Common Data Types](asynchronous-api-guidelines/02_asynchronous_api_guidelines/d_data_types.md)
* [Automatic Schema Registration](asynchronous-api-guidelines/02_asynchronous_api_guidelines/e_schema_registration.md)
* [Schema Data Evolution](asynchronous-api-guidelines/02_asynchronous_api_guidelines/f_schema_data_evolution.md)
* [Key/Value format](asynchronous-api-guidelines/02_asynchronous_api_guidelines/g_key_value_format.md)
* [Message Headers](asynchronous-api-guidelines/02_asynchronous_api_guidelines/h_message_headers.md)
* [Naming Conventions](asynchronous-api-guidelines/02_asynchronous_api_guidelines/i_naming_conventions.md)
* [Protocols](asynchronous-api-guidelines/02_asynchronous_api_guidelines/j_protocols.md)
* [Security](asynchronous-api-guidelines/02_asynchronous_api_guidelines/k_security.md)
* AsyncAPI specs for Kafka
* [Introduction](asynchronous-api-guidelines/03_asyncapi_kafka_specs/a_introduction.md)
* [AsyncAPI version](asynchronous-api-guidelines/03_asyncapi_kafka_specs/b_asyncapi_version.md)
* [Internal vs Public specs](asynchronous-api-guidelines/03_asyncapi_kafka_specs/c_internal_public_specs.md)
* [Spec granularity](asynchronous-api-guidelines/03_asyncapi_kafka_specs/d_spec_granularity.md)
* [Meaningful descriptions](asynchronous-api-guidelines/03_asyncapi_kafka_specs/e_meaningful_descriptions.md)
* [Self-contained specs](asynchronous-api-guidelines/03_asyncapi_kafka_specs/f_self_contained_specs.md)
* [Contact Information](asynchronous-api-guidelines/03_asyncapi_kafka_specs/g_contact_information.md)
* [AsyncAPI ID](asynchronous-api-guidelines/03_asyncapi_kafka_specs/h_asyncapi_id.md)
* [Servers](asynchronous-api-guidelines/03_asyncapi_kafka_specs/i_servers.md)
* [Channels](asynchronous-api-guidelines/03_asyncapi_kafka_specs/j_channels.md)
* [Schemas](asynchronous-api-guidelines/03_asyncapi_kafka_specs/k_schemas.md)
* [Security Schemes](asynchronous-api-guidelines/03_asyncapi_kafka_specs/l_security_schemes.md)
* [External Docs](asynchronous-api-guidelines/03_asyncapi_kafka_specs/m_external_docs.md)
* [Tooling](asynchronous-api-guidelines/03_asyncapi_kafka_specs/n_tooling.md)
* [Introduction](asynchronous-api-guidelines/01\_introduction/a\_introduction.md)
* [Core Asynchronous Principles](asynchronous-api-guidelines/core-asynchronous-principles/README.md)
* [Event Driven Architectures](asynchronous-api-guidelines/core-asynchronous-principles/b\_basic\_concepts\_edas.md)
* [Events](asynchronous-api-guidelines/core-asynchronous-principles/d\_basic\_concepts\_events/README.md)
* [Events as Notifications](asynchronous-api-guidelines/core-asynchronous-principles/d\_basic\_concepts\_events/events-as-notifications.md)
* [Events to Replicate Data](asynchronous-api-guidelines/core-asynchronous-principles/d\_basic\_concepts\_events/events-to-replicate-data.md)
* [Protocols](asynchronous-api-guidelines/core-asynchronous-principles/j\_protocols.md)
* [Commands](asynchronous-api-guidelines/core-asynchronous-principles/commands.md)
* [Query](asynchronous-api-guidelines/core-asynchronous-principles/query.md)
* [Coupling](asynchronous-api-guidelines/core-asynchronous-principles/coupling.md)
* [Bounded Context](asynchronous-api-guidelines/core-asynchronous-principles/bounded-context.md)
* [Stream Processing](asynchronous-api-guidelines/core-asynchronous-principles/stream-processing.md)
* [Naming Conventions](asynchronous-api-guidelines/core-asynchronous-principles/i\_naming\_conventions.md)
* [Tooling](asynchronous-api-guidelines/core-asynchronous-principles/tooling/README.md)
* [Editors](asynchronous-api-guidelines/core-asynchronous-principles/tooling/editors.md)
* [Command Line Interface (CLI)](asynchronous-api-guidelines/core-asynchronous-principles/tooling/command-line-interface-cli.md)
* [Generators](asynchronous-api-guidelines/core-asynchronous-principles/tooling/generators.md)
* [Kafka Asynchronous Guidelines](asynchronous-api-guidelines/kafka-asynchronous-guidelines/README.md)
* [Introduction](asynchronous-api-guidelines/kafka-asynchronous-guidelines/a\_introduction/README.md)
* [Why AsyncAPI?](asynchronous-api-guidelines/kafka-asynchronous-guidelines/a\_introduction/why-asyncapi.md)
* [AsyncAPI Version](asynchronous-api-guidelines/kafka-asynchronous-guidelines/b\_asyncapi\_version.md)
* [Internal vs Public Specifications](asynchronous-api-guidelines/kafka-asynchronous-guidelines/c\_internal\_public\_specs.md)
* [Key/Value Format](asynchronous-api-guidelines/kafka-asynchronous-guidelines/g\_key\_value\_format.md)
* [Message Headers](asynchronous-api-guidelines/kafka-asynchronous-guidelines/h\_message\_headers.md)
* [Specification Granularity](asynchronous-api-guidelines/kafka-asynchronous-guidelines/d\_spec\_granularity.md)
* [Meaningful Descriptions](asynchronous-api-guidelines/kafka-asynchronous-guidelines/e\_meaningful\_descriptions.md)
* [Self-Contained Specifications](asynchronous-api-guidelines/kafka-asynchronous-guidelines/f\_self\_contained\_specs.md)
* [Schema Data Evolution](asynchronous-api-guidelines/kafka-asynchronous-guidelines/f\_schema\_data\_evolution/README.md)
* [Backward Compatibility](asynchronous-api-guidelines/kafka-asynchronous-guidelines/f\_schema\_data\_evolution/backward-compatibility.md)
* [Forward Compatibility](asynchronous-api-guidelines/kafka-asynchronous-guidelines/f\_schema\_data\_evolution/forward-compatibility.md)
* [Full Compatibility](asynchronous-api-guidelines/kafka-asynchronous-guidelines/f\_schema\_data\_evolution/full-compatibility.md)
* [Automatic Schema Registration](asynchronous-api-guidelines/kafka-asynchronous-guidelines/e\_schema\_registration.md)
* [Contact Information](asynchronous-api-guidelines/kafka-asynchronous-guidelines/g\_contact\_information.md)
* [AsyncAPI ID](asynchronous-api-guidelines/kafka-asynchronous-guidelines/h\_asyncapi\_id.md)
* [Servers](asynchronous-api-guidelines/kafka-asynchronous-guidelines/i\_servers.md)
* [Channels](asynchronous-api-guidelines/kafka-asynchronous-guidelines/j\_channels.md)
* [Schemas](asynchronous-api-guidelines/kafka-asynchronous-guidelines/k\_schemas.md)
* [Security Schemes](asynchronous-api-guidelines/kafka-asynchronous-guidelines/l\_security\_schemes.md)
* [External Docs](asynchronous-api-guidelines/kafka-asynchronous-guidelines/m\_external\_docs.md)

View File

@@ -1,22 +1,10 @@
# adidas Asynchronous API guidelines
## Introduction
### About guidelines
In the scope of a company, organization or team, a guidelines document is a set of best practices and hints provided by a group of subject matter experts in a particular technology. Its most important aspects:
- Help to create a standardized way of completing specific tasks, making outcomes more predictable and uniform
- Help to identify do's and don'ts with regards to a specific technology or tool
- Help to avoid gotchas or problems related to company specifics
- Gather the knowledge of several Subject Matter Experts and prevent others from falling into frequent caveats or mistakes
**Note** In any case, the content of the guidelines should be taken as recommendations, not as mandatory requirements.
# Introduction
## adidas Asynchronous API Guidelines
The adidas Asynchronous API Guidelines define standards and guidelines for building asynchronous APIs at adidas. **These Guidelines have to be followed in addition to the adidas** [**General API Guidelines.**](../../general-guidelines/general-guidelines.md)
The Asynchronous API Guidelines are further split into the following parts:
* **Core Asynchronous Principles**
* **Kafka Asynchronous Guidelines**

View File

@@ -1,75 +0,0 @@
# adidas Asynchronous API guidelines
## Basic concepts about asynchronous APIs
### Event-driven architectures
#### What is an event-driven architecture
Event-Driven Architectures (EDAs) are a paradigm that promotes the production, consumption and reaction to events.
This architectural pattern may be applied by the design and implementation of applications and systems that transmit events amongst loosely coupled software components and services.
An event-driven system typically consists of event emitters (or agents), event consumers (or sinks), and event channels.
- Producers (or publishers) are responsible for detecting, gathering and transferring events
- Are not aware of consumers
- Are not aware of how the events are consumed
- Consumers (or subscribers) react to the events as soon as they are produced
- The reaction can be self-contained or it can be a composition of processes or components
- Event channels are conduits in which events are transmitted from emitters to consumers
**Note** The Producer and Consumer roles are not exclusive. In other words, the same client or application can be a producer and a consumer at the same time.
In most cases, EDAs are broker-centric, as seen in the diagram below.
![EDA overview](../../assets/eda_overview.png)
*The figure above was taken from AsyncAPI official documentation*
#### Problem statement
Typically, the architectural landscape of a big company grows in complexity, and as a result it is easy to end up with a tangle of direct connections between a myriad of different components or modules.
![Typical architecture diagram](../../assets/eda_problem_statement_1.png)
By using streaming patterns, it is possible to get a much cleaner architecture
![EDA architecture diagram](../../assets/eda_problem_statement_2.png)
It is important to take into account that EDAs are not a silver bullet, and there are situations in which this kind of architecture might not fit very well.
One example is systems that rely heavily on transactional operations: using an EDA there might be possible, but the complexity of the resulting architecture would most probably be too high.
Also, it is important to note that it is possible to mix request-driven and event-driven protocols in the same system. For example,
- Online services that interact directly with a user fit better into synchronous communication, but they can also generate events into Kafka.
- On the other hand, offline services (billing, fulfillment, etc.) are typically built purely with events.
#### Kafka as the heart of EDAs
There are several technologies to implement event-driven architectures, but this section is going to focus on the predominant technology on this subject: Apache Kafka.
**Apache Kafka** can be considered a Streaming Platform which relies on several concepts:
- Super high-performance, scalable, highly-available cluster of brokers
- Availability
- Replication of partitions across different brokers
- Scalability
- Partitions
- Ability to rebalance partitions across consumers automatically when adding/removing them
- Performance
- Partitioned, replayable log (collection of messages appended sequentially to a file)
- Data copied directly from disk buffer to network buffer (zero copy) without even being imported to the JVM
- Extreme throughput by using the concept of consumer group
- Security
- Secure encrypted connections using TLS client certificates
- Multi-tenant management through quotas/acls
- Client APIs in different programming languages: Go, Scala, Python, REST, Java, ...
- Stream processing APIs like Kafka Streams
- Ecosystem of connectors to pull/push data from/to Kafka
- Clean-up processes for storage optimization
- Retention periods
- Compacted topics

View File

@@ -1,72 +0,0 @@
# adidas Asynchronous API guidelines
## Basic concepts about asynchronous APIs
### Basic terminology
#### Events
An event is both a fact and a notification, something that already happened in the real world.
- No expectation on any future action
- Includes information about a status change that just happened
- Travels in one direction and it never expects a response (fire and forget)
- Very useful when...
- Loose coupling is important
- When the same piece of information is used by several services
- When data needs to be replicated across applications
A message in general is any interaction between an emitter and a receiver to exchange information. This implies that any event can be considered a message, but not the other way around.
#### Commands
A command is a special type of message which represents just an action, something that will change the state of a given system.
- Typically synchronous
- There is a clear expectation about a state change that needs to take place in the future
- When they return a response, it indicates completion
- Optionally they can include a result in the response
- Very common to see them in orchestration components
#### Query
It is a special type of message which represents a request to look something up.
- They are always free of side effects (they leave the system unchanged)
- They always require a response (with the requested data)
#### Coupling
The term coupling can be understood as the impact that a change in one component will have on other components. In the end, it is related to the amount of things that a given component shares with others: the more that is shared, the tighter the coupling.
**Note** A tighter coupling is not necessarily a bad thing; it depends on the situation. It will be necessary to assess the trade-off between providing as much information as possible and avoiding having to change several components as a result of a change in another component.
The coupling of a single component is actually a function of these factors:
- Information exposed (Interface surface area)
- Number of users
- Operational stability and performance
- Frequency of change
Messaging helps build loosely coupled services because it moves pure data from a highly coupled location (the source) and puts it into a loosely coupled location (the subscriber).
Any operations that need to be performed on the data are done in each subscriber and never at the source. This way, messaging technologies (like Kafka) take most of the operational issues off the table.
All business systems in larger organizations need a base level of essential data coupling. In other words, functional couplings are optional, but core data couplings are essential.
#### Bounded context
A bounded context is a small group of services that share the same domain model, are usually deployed together and collaborate closely.
It is possible to draw an analogy here with a hierarchical organization inside a company:
- Different departments are loosely coupled
- Inside departments there will be a lot more interactions across services and the coupling will be tighter
One of the big ideas of Domain-Driven Design (DDD) was to create boundaries around areas of a business domain and model them separately. So within the same bounded context the domain model is shared and everything is available for everyone there.
However, different bounded contexts don't share the same model and if they need to interact they will do it through more restricted interfaces.
#### Stream processing
It can be understood as the capability of processing data directly as it is produced or received (hence, in real time or near real time).

View File

@@ -1,31 +0,0 @@
# adidas Asynchronous API guidelines
## Basic concepts about asynchronous APIs
### Using events in an EDA
There are several ways to use events in an EDA:
- Events as notifications
- Events to replicate data
#### Events as notifications
When a system uses events as notifications it becomes a pluggable system. The producers have no knowledge about the consumers and they don't really care about them, instead every consumer can decide if it is interested in the information included in the event.
This way, the number of consumers can be increased (or reduced) without changing anything on the producer side.
This pluggability becomes increasingly important as systems get more complex.
#### Events to replicate data
When events are used to replicate data across services, they include all the necessary information for the target system to keep it locally so that it can be queried with no external interactions.
This is usually called event-carried state transfer which in the end is a form of data integration.
The benefits are similar to the ones implied by the usage of a cache system:
- Better isolation and autonomy, as the data stays under the service's control
- Faster data access, as the data is local (particularly important when combining data from different services in different geographies)
- Offline data availability

View File

@@ -1,9 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Contract
The definition of an asynchronous API **MUST** represent a contract between API owners and the stakeholders.
That contract **MUST** contain enough information to use the API (servers, URIs, credentials, contact information, etc) and to identify which kind of information is being exchanged there.

View File

@@ -1,10 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### API First
Asynchronous APIs **SHOULD** use the API First principle :
- The API designs **SHOULD** involve all relevant stakeholders (developers, consumers, ...) to ensure that the final design fulfils requirements from different perspectives
- The resulting API specification will be the source of truth rather than the API implementation

View File

@@ -1,9 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Immutability
After agreement with the stakeholders, the contract **MUST** be published in order to make it immutable. Changes to the API related to the data model **MUST** be published in a schema registry.
The schema registry acts as a central location for storing and accessing the schemas of all published APIs.

View File

@@ -1,18 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Common data types
The API data types **MUST** adhere to the formats defined below:
| Data type | Standard | Example |
| --------- | -------- | ------- |
| Date and Time | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-06-21T14:07:17Z (Always use UTC) |
| Date | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-06-21 |
| Duration | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | P3Y6M4DT12H30M5S |
| Time interval | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2007-03-01T13:00:00Z/2008-05-11T15:30:00Z |
| Timestamps | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-01-01T12:00:00Z |
| Language Codes | [ISO 639](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) | en <-> English |
| Country Code | [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) | DE <-> Germany |
| Currency | [ISO 4217](https://en.wikipedia.org/wiki/ISO_4217) | EUR <-> Euro |

View File

@@ -1,7 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Automatic schema registration
Applications **MUST NOT** enable automatic registration of schemas because in adidas schemas have a separate lifecycle, intended to be independent from API contract and API implementing code.

View File

@@ -1,101 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Schemas and data evolution
All asynchronous APIs **SHOULD** leverage Schema Registry to ensure consistency across consumers/producers with regards to message structure and to ensure compatibility across different versions.
The compatibility mode **SHOULD** be FULL_TRANSITIVE, which is the default in adidas for Schema Registry. Check the sections below to know more about compatibility modes.
#### Compatibility modes
Once a given schema is defined, it is unavoidable that the schema evolves with time. Every time this happens, downstream consumers need to be able to handle data with both old and new schemas seamlessly.
Each new schema version is validated according to the configuration before being created as a new version. Namely, it is checked against the configured compatibility types (see below).
**Important** The mere fact of enabling Schema Registry is not enough to ensure that there are no compatibility issues in a given integration. The right compatibility mode also needs to be selected and enforced.
As a summary, the available compatibility types are listed below:
| Mode | Description |
|------|-------------|
|BACKWARD|new schema versions are backward compatible with older versions|
|BACKWARD_TRANSITIVE|backward compatibility across all schema versions, not just the latest one.|
|FORWARD|new schema versions are compatible with older consumer versions|
|FORWARD_TRANSITIVE|forward compatibility across all schema versions.|
|FULL|both backward and forward compatibility with the latest schema version|
|FULL_TRANSITIVE|both backward and forward compatibility with all schema versions|
|NONE|schema compatibility checks are disabled|
#### Backward compatibility
There are two variants here:
- BACKWARD - Consumers using a new version (X) of a schema can read data produced by the previous version (X - 1)
- BACKWARD_TRANSITIVE - Consumers using a new version (X) of a schema can read data produced by any previous version (X - 1, X - 2, ....)
The operations that preserve backward compatibility are:
- Delete fields
- Consumers with the newer version will just ignore the non-existing fields
- Add optional fields (with default values)
- Consumers will set the default value for the missing fields in their schema version
![Backward compatibility](../../assets/sr_backward_compatibility.png)
#### Forward compatibility
Also two variants here:
- FORWARD - Consumers with previous version of the schema (X - 1) can read data produced by Producers with a new schema version (X)
- FORWARD_TRANSITIVE - Consumers with any previous version of the schema (X - 1, X - 2, ...) can read data produced by Producers with a new schema version (X)
The operations that preserve forward compatibility are:
- Adding a new field
- Consumers will ignore the fields that are not defined in their schema version
- Delete optional fields (with default values)
- Consumers will use the default value for the missing fields defined in their schema version
![Forward compatibility](../../assets/sr_forward_compat.png)
#### Full compatibility
This is a combination of both compatibility types (backward and forward). It also has 2 variants:
- FULL - Backward and forward compatible between schemas X and X - 1.
- FULL_TRANSITIVE - Backward and forward compatible between schemas X and all previous ones (X - 1, X - 2, ...)
**Important** Once more, FULL_TRANSITIVE is the default compatibility mode in adidas, it is set at cluster level and all new schemas will inherit it
This mode is preserved only if using the following operations
- Adding optional fields (with default values)
- Delete optional fields (with default values)
![Full compatibility](../../assets/sr_full_compat.png)
#### Upgrading process of clients based on compatibility
The process of upgrading producers and consumers differs depending on the compatibility mode enabled.
- NONE
- As there are no compatibility checks, no order will grant a smooth transition
- In most cases this leads to having to create a new topic for the evolution
- BACKWARD / BACKWARD_TRANSITIVE
- Consumers **MUST** be upgraded first before producing new data
- No forward compatibility, meaning that there's no guarantee that the consumers with older schemas are going to be able to read data produced with a new version
- FORWARD / FORWARD_TRANSITIVE
- Producers **MUST** be upgraded first and then after ensuring that no older data is present, upgrade the consumers
- No backward compatibility, meaning that there's no guarantee that the consumers with newer schemas are going to be able to read data produced with an older version
- FULL / FULL_TRANSITIVE
- No restrictions on the order, anything will work
#### How to deal with breaking changes
If for any reason you need to use a less strict compatibility mode in a topic, or you can't avoid breaking changes in a given situation, the compatibility mode **SHOULD NOT** be modified on the same topic.
Instead, a new topic **SHOULD** be used to avoid unexpected behaviors or broken integrations. This allows a smooth transitioning from clients to the definitive topic, and once all clients are migrated the original one can be decommissioned.
Alternatively, instead of modifying existing fields, it **MAY** be considered, as a suboptimal approach, to add the changes in new fields and have both coexist. Take into account that this pollutes your topic and can cause some confusion.

View File

@@ -1,11 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Key/Value message format
Kafka messages **MAY** include a key, which needs to be properly designed to have a good balance of data across partitions.
The message key and the payload (often called value) can be serialized independently and can have different formats. For example, the value of the message can be sent in AVRO format, while the message key can be a primitive type (string). 
Message keys **SHOULD** be kept as simple as possible and use a primitive type when possible.
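As an illustration, a minimal sketch of how this could be expressed in an AsyncAPI 2.6 Kafka message binding (the message and schema names are hypothetical):
```yaml
messages:
  orderCreated:
    bindings:
      kafka:
        key:
          type: string    # keep keys simple and primitive
    payload:
      $ref: '#/components/schemas/OrderCreated'
```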

View File

@@ -1,11 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Naming conventions
As general naming conventions, asynchronous APIs **MUST** adhere to the following conventions:
- Use of English
- Avoid acronyms or explain them when used
- Use camelCase unless stated otherwise

View File

@@ -1,16 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Protocols
Protocols define how clients and servers communicate in an asynchronous architecture.
The accepted protocols for asynchronous APIs are:
- Kafka
- HTTPS
- WebSockets
- MQTT
This version of the guidelines focuses on the Kafka protocol, but it could be extended in the future. In any case, this document will be updated to reflect the state of the art.

View File

@@ -1,7 +0,0 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Security
The [security guidelines](https://github.com/adidas/api-guidelines/blob/feature/asyncapi-guidelines/general-guidelines/security.md) for regular APIs **MUST** be followed strictly when applicable.

View File

@@ -1,60 +0,0 @@
# adidas Asynchronous API guidelines
## Introduction to AsyncAPI spec definitions for Kafka protocol
This section is specific to the definition of API specs with [AsyncAPI](https://www.asyncapi.com/) for Kafka protocol.
Also, take into account that across the section there will be multiple references to this publicly available [AsyncAPI reference spec](https://design.api.3stripes.io/apis/adidas/asyncapi-adoption-initiative/1.0.0).
### Basic concepts about AsyncAPI
#### Why AsyncAPI?
Event-driven architectures are becoming increasingly popular for building scalable, responsive, and efficient applications. AsyncAPI plays a crucial role in this landscape by offering a standardized way to describe asynchronous APIs, similar to how OpenAPI does for REST APIs. AsyncAPI seeks to make the development, maintenance, and testing of asynchronous APIs easier by providing a machine-readable specification.
It supports various messaging protocols, including MQTT, WebSocket, Kafka, AMQP, and more, making it versatile for different use cases. In adidas, AsyncAPI is used mainly to document Kafka resources created across the company in the scope of the Streaming Platform, but nothing prevents you from using it for a different purpose.
The benefits of using AsyncAPI are, amongst others:
- Standardization
- AsyncAPI defines a STANDARD format (YAML or JSON) for describing asynchronous APIs.
- By defining the structure of messages, channels, and events, you can ensure that all components adhere to the same conventions.
- Using a single standard ensures consistency in the design and documentation of all your asynchronous APIs.
- This simplifies integration, maintenance, and troubleshooting across different parts of your system.
- Improved Developer Experience
- AsyncAPI documents the messages being exchanged, their structure, and the events triggered by them.
- It provides developers with a clear picture of how to interact with the API, what data to expect, and how to interpret responses without digging into the implementation details. 
- Code scaffolding
- Using tools like asyncapi-generator allows you to easily generate the skeleton of applications that can work with the resources described in the spec.
- This can be done in different programming languages (Python, Java, Node.js, ...), significantly reducing development time and coding errors.
- Design-first approach: It encourages designing the API first before writing code, leading to better planned and more reliable APIs.
In addition to those benefits, Platform & Engineering is working hard to create a data catalogue built upon AsyncAPI that provides a good level of discoverability, allowing teams to find exactly the data they need for any data object in the company.
Questions like:
- Who is responsible for a specific data object
- Where is that data hosted
- Which kind of information is available
will be easy to answer once this catalogue is in place, provided it offers good discoverability and search & filtering capabilities.
#### Kafka to AsyncAPI concept mapping
|Kafka Concept|AsyncAPI Concept|
|-------------|----------------|
|broker|server|
|topic|channel|
|consumer|subscriber|
|producer|publisher|
#### First level items in AsyncAPI structure
|Element|Meaning|
|-------|-------|
|asyncapi|Specifies the AsyncAPI specification version|
|info|Provides metadata about the API such as the version, title and description|
|servers|Describes servers where the API is available|
|channels|Defines the channels through which messages are received/published|
|components|Reusable elements to be referenced across the spec|

View File

@@ -1,14 +0,0 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### AsyncAPI version
Any version of AsyncAPI **MAY** be used for spec definitions.
However, to be aligned with adidas tooling, spec versions **SHOULD** be *v2.6.0*, because as of this document's creation (April 2024) this is the highest version supported on Swaggerhub, the current API portal to render, discover and publish specs.
```yaml
asyncapi: 2.6.0
...
```

View File

@@ -1,7 +0,0 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Meaningful descriptions
All fields included in the specs **MUST** include a proper description. 

View File

@@ -1,7 +0,0 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Self-contained specs
All AsyncAPI specs **SHOULD** include as much information as needed in order to make the spec self-contained and clearly documented.

View File

@@ -1,14 +0,0 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### External docs
The external docs **SHOULD** be used to refer to the LeanIX factsheet associated with the spec.
```yaml
...
externalDocs:
  description: LeanIX
  url: https://adidas.leanix.net/adidasProduction/factsheet/Application/467ff391-876c-49ad-93bf-facafffc0178
```

View File

@@ -1,42 +0,0 @@
# adidas Asynchronous API guidelines
## AsyncAPI tools
### API Design platform
The current platform available in adidas to design, host, and render AsyncAPI specs is [Swaggerhub](https://design.api.3stripes.io/).
Every AsyncAPI spec **MUST** be hosted in Swaggerhub under the *adidas* organization.
In the future, mechanisms will be provided to generate AsyncAPI specs automatically from the assets created in the Streaming Platform. Until then, those specs will be created manually in the platform, following the API-first approach where possible.
**Important note** Swaggerhub has limited capabilities with regards to discoverability, search and filtering of APIs. Other alternatives are being evaluated. Any upcoming decision impacting this will be reflected in this document in the future.
### Editors
Aside from Swaggerhub editing capabilities, other alternative editor options are available:
- AsyncAPI Studio: A web-based editor designed specifically for creating and validating AsyncAPI documents.
- Visual Studio Code: VS Code can be extended with plugins like "AsyncAPI for VS Code" to provide AsyncAPI-specific features for editing AsyncAPI files.
### Command Line Interface (CLI) tool
Unfortunately, Swaggerhub does not offer a Command Line Interface (CLI) tool that would allow including this capability as part of CI/CD workflows.
For this, there is an official AsyncAPI CLI tool, which can be checked here: https://www.asyncapi.com/tools/cli. It includes a validator against the AsyncAPI spec, templated generators, version conversion, a spec optimizer, a bundler, etc.
For example, to validate a YAML spec file:
```
asyncapi validate your-asyncapi-file.yaml
```
### Generators
These tools are capable of generating a variety of outputs from any valid AsyncAPI spec, including:
- API documentation in various formats like HTML, Markdown, or OpenAPI
- Code samples in various programming languages like Python, Java, and Node.js based on your API definition. 
- Functionally complete applications
There is an official generator tool which can be checked here: https://www.asyncapi.com/docs/tools/generator.

View File

@@ -0,0 +1,3 @@
# Core Asynchronous Principles
This section outlines the foundation upon which the Asynchronous API Guidelines are built.

View File

@@ -0,0 +1,42 @@
# Event Driven Architectures
Event-Driven Architectures (EDAs) are a paradigm that promotes the production, consumption and reaction to events.
This architectural pattern may be applied by the design and implementation of applications and systems that transmit events amongst loosely coupled software components and services.
An event-driven system typically consists of event emitters (or agents), event consumers (or sinks), and event channels.
* Producers (or publishers) are responsible for detecting, gathering and transferring events
* Are not aware of consumers
* Are not aware of how the events are consumed
* Consumers (or subscribers) react to the events as soon as they are produced
* The reaction can be self-contained or it can be a composition of processes or components
* Event channels are conduits in which events are transmitted from emitters to consumers
**Note** The Producer and Consumer roles are not exclusive. In other words, the same client or application can be a producer and a consumer at the same time.
In most cases, EDAs are broker-centric, as seen in the diagram below.
![EDA overview](../../assets/eda\_overview.png)
_The figure above was taken from AsyncAPI official documentation_
#### Problem statement
Typically, the architectural landscape of a big company grows in complexity, and as a result it is easy to end up with a tangle of direct connections between a myriad of different components or modules.
![Typical architecture diagram](../../assets/eda\_problem\_statement\_1.png)
By using streaming patterns, it is possible to get a much cleaner architecture
![EDA architecture diagram](../../assets/eda\_problem\_statement\_2.png)
It is important to take into account that EDAs are not a silver bullet, and there are situations in which this kind of architecture might not fit very well.
One example is systems that rely heavily on transactional operations: using an EDA there might be possible, but the complexity of the resulting architecture would most probably be too high.
Also, it is important to note that it is possible to mix request-driven and event-driven protocols in the same system. For example,
* Online services that interact directly with a user fit better into synchronous communication, but they can also generate events.
* On the other hand, offline services (billing, fulfillment, etc.) are typically built purely with events.

View File

@@ -0,0 +1,12 @@
# Bounded Context
A bounded context is a small group of services that share the same domain model, are usually deployed together and collaborate closely.
It is possible to draw an analogy here with a hierarchical organization inside a company:
* Different departments are loosely coupled
* Inside departments there will be a lot more interactions across services and the coupling will be tighter
One of the big ideas of Domain-Driven Design (DDD) was to create boundaries around areas of a business domain and model them separately. So within the same bounded context the domain model is shared and everything is available for everyone there.
However, different bounded contexts don't share the same model and if they need to interact they will do it through more restricted interfaces.

View File

@@ -0,0 +1,8 @@
# Commands
A command is a special type of message which represents just an action, something that will change the state of a given system.
* There is a clear expectation about a state change that needs to take place in the future
* When they return a response, it indicates completion
* Optionally they can include a result in the response
* Very common to see them in orchestration components

View File

@@ -0,0 +1,18 @@
# Coupling
The term coupling can be understood as the impact that a change in one component will have on other components. In the end, it is related to the amount of things that a given component shares with others: the more that is shared, the tighter the coupling.
**Note:** A tighter coupling is not necessarily a bad thing; it depends on the situation. It will be necessary to assess the trade-off between providing as much information as possible and avoiding having to change several components as a result of a change in another component.
The coupling of a single component is actually a function of these factors:
* Information exposed (Interface surface area)
* Number of users
* Operational stability and performance
* Frequency of change
Messaging helps build loosely coupled services because it moves pure data from a highly coupled location (the source) and puts it into a loosely coupled location (the subscriber).
Any operations that need to be performed on the data are done in each subscriber and never at the source. This way, messaging technologies (like Kafka) take most of the operational issues off the table.
All business systems in larger organizations need a base level of essential data coupling. In other words, functional couplings are optional, but core data couplings are essential.

View File

@@ -0,0 +1,18 @@
# Events
An event is both a fact and a notification, something that already happened in the real world.
* No expectation on any future action
* Includes information about a status change that just happened
* Travels in one direction and it never expects a response (fire and forget)
* Very useful when...
* Loose coupling is important
* When the same piece of information is used by several services
* When data needs to be replicated across applications
A message in general is any interaction between an emitter and a receiver to exchange information. This implies that any event can be considered a message, but not the other way around.
There are several ways to use events in an EDA:
* Events as notifications
* Events to replicate data

View File

@@ -0,0 +1,7 @@
# Events as Notifications
When a system uses events as notifications it becomes a pluggable system. The producers have no knowledge about the consumers and don't really care about them; instead, every consumer can decide whether it is interested in the information included in the event.
This way, the number of consumers can be increased (or reduced) without changing anything on the producer side.
This pluggability becomes increasingly important as systems get more complex.

View File

@@ -0,0 +1,11 @@
# Events to Replicate Data
When events are used to replicate data across services, they include all the necessary information for the target system to keep it locally so that it can be queried with no external interactions.
This is usually called event-carried state transfer which in the end is a form of data integration.
The benefits are similar to the ones implied by the usage of a cache system:
* Better isolation and autonomy, as the data stays under the service's control
* Faster data access, as the data is local (particularly important when combining data from different services in different geographies)
* Offline data availability

View File

@@ -0,0 +1,7 @@
# Naming Conventions
As general naming conventions, asynchronous APIs **MUST** adhere to the following conventions:
* Use of **English**
* **Avoid acronyms** or explain them when used
* Use **camelCase** unless stated otherwise

View File

@@ -0,0 +1,11 @@
# Protocols
Protocols define how clients and servers communicate in an asynchronous architecture.
The accepted protocols for asynchronous APIs are:
* **Kafka**
* [Kafka Asynchronous Guidelines](../kafka-asynchronous-guidelines/)
* **HTTPS**
* **WebSockets**
* **MQTT**

View File

@@ -0,0 +1,6 @@
# Query
It is a special type of message which represents a request to look something up.
* They are always free of side effects (they leave the system unchanged)
* They always require a response (with the requested data)

View File

@@ -0,0 +1,3 @@
# Stream Processing
It can be understood as the capability of processing data directly as it is produced or received (hence, in real time or near real time).

View File

@@ -0,0 +1,2 @@
# Tooling

View File

@@ -0,0 +1,11 @@
# Command Line Interface (CLI)
Unfortunately, Swaggerhub does not offer a Command Line Interface (CLI) tool that would allow including this capability as part of CI/CD workflows.
For this, there is an official AsyncAPI CLI tool, which can be checked here: https://www.asyncapi.com/tools/cli. It includes a validator against the AsyncAPI spec, templated generators, version conversion, a spec optimizer, a bundler, etc.
For example, to validate a YAML spec file:
```
asyncapi validate your-asyncapi-file.yaml
```

View File

@@ -0,0 +1,6 @@
# Editors
Aside from Swaggerhub editing capabilities, other alternative editor options are available:
* AsyncAPI Studio: A web-based editor designed specifically for creating and validating AsyncAPI documents.
* Visual Studio Code: VS Code can be extended with plugins like "AsyncAPI for VS Code" to provide AsyncAPI-specific features for editing AsyncAPI files.

View File

@@ -0,0 +1,9 @@
# Generators
These tools are capable of generating a variety of outputs from any valid AsyncAPI spec, including:
* API documentation in various formats like HTML, Markdown, or OpenAPI
* Code samples in various programming languages like Python, Java, and Node.js, based on your API definition
* Functionally complete applications
There is an official generator tool which can be checked here: [https://www.asyncapi.com/docs/tools/generator](https://www.asyncapi.com/docs/tools/generator).
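For example, a sketch of generating HTML documentation from a spec via the AsyncAPI CLI and the official HTML template (spec name and output path are illustrative):
```
asyncapi generate fromTemplate your-asyncapi-file.yaml @asyncapi/html-template -o ./docs
```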

View File

@@ -0,0 +1,25 @@
# Kafka Asynchronous Guidelines
There are several technologies to implement event-driven architectures, but this section is going to focus on the predominant technology on this subject: Apache Kafka.
**Apache Kafka** can be considered a Streaming Platform which relies on several concepts:
* Super high-performance, scalable, highly-available cluster of brokers
* Availability
* Replication of partitions across different brokers
* Scalability
* Partitions
* Ability to re-balance partitions across consumers automatically when adding/removing them
* Performance
* Partitioned, re-playable log (collection of messages appended sequentially to a file)
* Data copied directly from disk buffer to network buffer (zero copy) without even being imported to the JVM
* Extreme throughput by using the concept of consumer group
* Security
* Secure encrypted connections using TLS client certificates
* Multi-tenant management through quotas/ACLs
* Client APIs in different programming languages: Go, Scala, Python, REST, Java, ...
* Stream processing APIs like Kafka Streams
* Ecosystem of connectors to pull/push data from/to Kafka
* Clean-up processes for storage optimization
* Retention periods
* Compacted topics
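As an illustration of the clean-up processes above, retention and compaction are controlled per topic through standard Kafka topic configuration; a sketch (values are examples only):
```yaml
# illustrative Kafka topic configuration
retention.ms: 604800000      # time-based clean-up: delete records older than 7 days
# cleanup.policy: compact    # alternatively, keep only the latest record per key
```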

View File

@@ -0,0 +1,24 @@
# Introduction
This section is specific to the definition of API specs with [AsyncAPI](https://www.asyncapi.com/) for Kafka protocol.
Also, take into account that across the section there will be multiple references to this publicly available [AsyncAPI reference specification](https://design.api.3stripes.io/apis/adidas/asyncapi-adoption-initiative/1.0.0).
#### Kafka to AsyncAPI concept mapping
| Kafka Concept | AsyncAPI Concept |
| ------------- | ---------------- |
| broker | server |
| topic | channel |
| consumer | subscriber |
| producer | publisher |
#### First level items in AsyncAPI structure
| Element | Meaning |
| ---------- | -------------------------------------------------------------------------- |
| asyncapi | Specifies the AsyncAPI specification version |
| info | Provides metadata about the API such as the version, title and description |
| servers | Describes servers where the API is available |
| channels | Defines the channels through which messages are received/published |
| components | Reusable elements to be referenced across the spec                          |
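Putting these first-level items together, a minimal AsyncAPI 2.6 skeleton for a Kafka API might look like the sketch below (server, channel and schema names are hypothetical):
```yaml
asyncapi: '2.6.0'
info:
  title: Orders API                  # metadata about the API
  version: 1.0.0
  description: Events emitted whenever an order is created.
servers:
  production:
    url: kafka.example.com:9092
    protocol: kafka
channels:
  orders.created:                    # maps to a Kafka topic
    subscribe:
      message:
        payload:
          $ref: '#/components/schemas/OrderCreated'
components:
  schemas:
    OrderCreated:
      type: object
      properties:
        orderId:
          type: string
```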

View File

@@ -0,0 +1,30 @@
# Why AsyncAPI?
Event-driven architectures are becoming increasingly popular for building scalable, responsive, and efficient applications. AsyncAPI plays a crucial role in this landscape by offering a standardized way to describe asynchronous APIs, similar to how OpenAPI does for REST APIs. AsyncAPI seeks to make the development, maintenance, and testing of asynchronous APIs easier by providing a machine-readable specification.
It supports various messaging protocols, including MQTT, WebSocket, Kafka, AMQP, and more, making it versatile for different use cases. In adidas, AsyncAPI is used mainly to document Kafka resources created across the company in the scope of the Streaming Platform, but nothing prevents you from using it for a different purpose.
The benefits of using AsyncAPI are, amongst others:
* Standardization
* AsyncAPI defines a STANDARD format (YAML or JSON) for describing asynchronous APIs.
* By defining the structure of messages, channels, and events, you can ensure that all components adhere to the same conventions.
* Using a single standard ensures consistency in the design and documentation of all your asynchronous APIs.
* This simplifies integration, maintenance, and troubleshooting across different parts of your system.
* Improved Developer Experience
* AsyncAPI documents the messages being exchanged, their structure, and the events triggered by them.
* It provides developers with a clear picture of how to interact with the API, what data to expect, and how to interpret responses without digging into the implementation details.
* Code scaffolding
* Using tools like asyncapi-generator allows you to easily generate the skeleton of applications that can work with the resources described in the spec.
* This can be done in different programming languages (Python, Java, Node.js, ...), significantly reducing development time and coding errors.
* Design-first approach: It encourages designing the API first before writing code, leading to better planned and more reliable APIs.
In addition to those benefits, Platform & Engineering is working hard to create a data catalogue built upon AsyncAPI that provides a good level of discoverability, allowing teams to find exactly the data they need for any data object in the company.
Questions like:
* Who is responsible for a specific data object
* Where is that data hosted
* Which kind of information is available
will be easy to answer once this catalogue is in place, provided it offers good discoverability and search & filtering capabilities.

View File

@@ -0,0 +1,10 @@
# AsyncAPI Version
Any version of AsyncAPI **MAY** be used for spec definitions.
However, to be aligned with adidas tooling, spec versions **SHOULD** be _v2.6.0_, because as of this document's creation (April 2024) this is the highest version supported on Swaggerhub, the current API portal to render, discover and publish specs.
```yaml
asyncapi: 2.6.0
...
```

View File

@@ -1,14 +1,10 @@
# Internal vs Public Specifications
AsyncAPI specs **MAY** be created both for public APIs and for internal APIs.
* Public APIs are those created to be consumed by others
* Internal APIs are only for the development teams of a particular project
There are no differences with regards to the spec definition, but internal APIs **SHOULD** have restricted access, limited to the internal development team for a particular project or product.
This access control is handled through Role-Based Access Control (RBAC) implemented in Swaggerhub.

View File

@@ -1,11 +1,7 @@
# Specification Granularity
In adidas all resources are grouped by namespace.
For that reason, specs **SHOULD** be created with a 1:1 relation to namespaces. In other words, every namespace will have an AsyncAPI spec including all the assets belonging to that namespace.
Different granularities **MAY** be chosen depending on the needs.

View File

@@ -0,0 +1,3 @@
# Meaningful Descriptions
All fields included in the specs **MUST** include a proper description.
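As a sketch, descriptions attached at each level might look like this (all names and wording are illustrative):
```yaml
channels:
  orders.created:
    description: Carries an event for every order placed in the webshop.
    subscribe:
      operationId: onOrderCreated
      description: Consume order-created events.
      message:
        description: Snapshot of the newly created order.
```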

View File

@@ -0,0 +1,3 @@
# Automatic Schema Registration
Applications **MUST NOT** enable automatic registration of schemas because in adidas schemas have a separate lifecycle, intended to be independent from API contract and API implementing code.
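For clients using the Confluent serializers, this typically maps to configuration like the sketch below (property names per the Confluent client documentation; rendered as YAML for readability):
```yaml
# producer/consumer serde configuration
auto.register.schemas: false   # clients never register schemas themselves
use.latest.version: true       # serialize against the latest registered schema version
```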

View File

@@ -0,0 +1,51 @@
# Schema Data Evolution
All asynchronous APIs **SHOULD** leverage Schema Registry to ensure consistency across consumers/producers with regards to message structure and to ensure compatibility across different versions.
The compatibility mode **SHOULD** be FULL\_TRANSITIVE, which is the default in adidas for Schema Registry. Check the sections below to know more about compatibility modes.
#### Compatibility modes
Once a given schema is defined, it is unavoidable that the schema evolves with time. Every time this happens, downstream consumers need to be able to handle data with both old and new schemas seamlessly.
Each new schema version is validated according to the configuration before being created as a new version. Namely, it is checked against the configured compatibility types.
**Important** The mere fact of enabling Schema Registry is not enough to ensure that there are no compatibility issues in a given integration. The right compatibility mode also needs to be selected and enforced.
As a summary, the available compatibility types are listed below:
| Mode                | Description                                                                 |
| ------------------- | --------------------------------------------------------------------------- |
| BACKWARD            | new schema versions are backward compatible with older versions             |
| BACKWARD_TRANSITIVE | backward compatibility across all schema versions, not just the latest one  |
| FORWARD             | new schema versions are compatible with older consumer versions             |
| FORWARD_TRANSITIVE  | forward compatibility across all schema versions                            |
| FULL                | both backward and forward compatibility with the latest schema version      |
| FULL_TRANSITIVE     | both backward and forward compatibility with all schema versions            |
| NONE                | schema compatibility checks are disabled                                    |
#### Upgrading process of clients based on compatibility
The process of upgrading producers/consumers differs depending on the compatibility mode enabled.
* NONE
  * As there are no compatibility checks, no upgrade order will grant a smooth transition
  * In most cases this leads to having to create a new topic for the evolution
* BACKWARD / BACKWARD_TRANSITIVE
  * Consumers **MUST** be upgraded first, before producing new data
  * No forward compatibility, meaning that there is no guarantee that consumers with older schemas will be able to read data produced with a new version
* FORWARD / FORWARD_TRANSITIVE
  * Producers **MUST** be upgraded first; then, after ensuring that no older data is present, upgrade the consumers
  * No backward compatibility, meaning that there is no guarantee that consumers with newer schemas will be able to read data produced with an older version
* FULL / FULL_TRANSITIVE
  * No restrictions on the order; any upgrade order will work
#### How to deal with breaking changes
If for any reason you need to use a less strict compatibility mode in a topic, or you can't avoid breaking changes in a given situation, the compatibility mode **SHOULD NOT** be modified on the same topic.
Instead, a new topic **SHOULD** be used to avoid unexpected behaviors or broken integrations. This allows clients to transition smoothly to the definitive topic; once all clients are migrated, the original one can be decommissioned.
Alternatively, instead of modifying existing fields, it **MAY** be considered, as a sub-optimal approach, to add the changes as new fields and have both coexist. Take into account that this pollutes the topic and can cause some confusion.
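As a hypothetical sketch of the recommended approach, the breaking change lands on a new topic while the original stays untouched until all clients have migrated (topic names are illustrative):
```yaml
channels:
  sales_orders.order_created:       # original topic, schema and compatibility mode unchanged
    description: Deprecated; kept until all clients have migrated.
  sales_orders.order_created.v2:    # new topic carrying the breaking schema change
    description: Replacement topic with the new, incompatible schema.
```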

View File

@@ -0,0 +1,15 @@
# Backward Compatibility
There are two variants here:
* BACKWARD - Consumers using a new version (X) of a schema can read data produced by the previous version (X - 1)
* BACKWARD_TRANSITIVE - Consumers using a new version (X) of a schema can read data produced by any previous version (X - 1, X - 2, ...)
The operations that preserve backward compatibility are:
* Delete fields
  * Consumers with the newer version will simply ignore the removed fields when reading old data
* Add optional fields (with default values)
  * Consumers will set the default value for the fields missing in the data
<figure><img src="../../../.gitbook/assets/spaces_PQHX3w20BF4lnkckLJzC_uploads_git-blob-17547da367c50d28e6996ea1d3fab4d10625765b_sr_backward_compatibility.png" alt=""><figcaption></figcaption></figure>
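For illustration, a hypothetical AVRO schema evolution, rendered here as YAML for readability (actual AVRO schemas are JSON):
```yaml
# Version 2 adds an optional field with a default value, a backward compatible
# change: consumers on version 2 reading version 1 data fill in the default.
name: OrderCreated
type: record
fields:
  - name: orderId
    type: string
  - name: salesChannel        # new in version 2
    type: string
    default: unknown          # the default keeps the change backward compatible
```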

View File

@@ -0,0 +1,15 @@
# Forward Compatibility
There are also two variants here:
* FORWARD - Consumers with the previous version of the schema (X - 1) can read data produced by producers with a new schema version (X)
* FORWARD_TRANSITIVE - Consumers with any previous version of the schema (X - 1, X - 2, ...) can read data produced by producers with a new schema version (X)
The operations that preserve forward compatibility are:
* Adding a new field
  * Consumers will ignore the fields that are not defined in their schema version
* Deleting optional fields (with default values)
  * Consumers will use the default value for the missing fields defined in their schema version
<figure><img src="../../../.gitbook/assets/spaces_PQHX3w20BF4lnkckLJzC_uploads_git-blob-8139077e602e7c36543a4977fcb8655313e8f1b9_sr_forward_compat.png" alt=""><figcaption></figcaption></figure>
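Again a hypothetical AVRO example, rendered as YAML for readability:
```yaml
# Version 2 adds a new field; the change is forward compatible because
# consumers still on version 1 simply ignore fields absent from their schema.
name: OrderCreated
type: record
fields:
  - name: orderId
    type: string
  - name: createdAt           # new in version 2; version 1 consumers ignore it
    type: string
```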

View File

@@ -0,0 +1,15 @@
# Full Compatibility
This is a combination of both compatibility types (backward and forward). It also has 2 variants:
* FULL - Backward and forward compatible between schemas X and X - 1.
* FULL_TRANSITIVE - Backward and forward compatible between schema X and all previous ones (X - 1, X - 2, ...)
**Important** Once more, FULL_TRANSITIVE is the default compatibility mode in adidas; it is set at cluster level, and all new schemas will inherit it.
This mode is preserved only when using the following operations:
* Adding optional fields (with default values)
* Deleting optional fields (with default values)
<figure><img src="../../../.gitbook/assets/spaces_PQHX3w20BF4lnkckLJzC_uploads_git-blob-effb69104100a69f6101daee3cb01a88dded13d4_sr_full_compat.png" alt=""><figcaption></figcaption></figure>

View File

@@ -0,0 +1,3 @@
# Self-Contained Specifications
All AsyncAPI specifications **SHOULD** include as much information as needed to make the spec self-contained and clearly documented.

View File

@@ -1,12 +1,8 @@
# Contact Information
AsyncAPI specs **MUST** include at least one main contact under the info.contact section.
The spec only allows one contact there, but additional contacts **MAY** be included using extension fields. If this is done, the extension field _x-additional-responsibles_ **MUST** be used.
For example:
@@ -21,4 +17,4 @@ info:
- person2@adidas.com
- person3@adidas.com
- person4@adidas.com
```
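For completeness, a hedged sketch of a full info section; the contact name and emails are illustrative:
```yaml
info:
  title: sales_orders namespace spec
  version: 1.0.0
  contact:                            # the single contact allowed by the spec
    name: Main Responsible
    email: person1@adidas.com
  x-additional-responsibles:          # extension field for any further contacts
    - person2@adidas.com
    - person3@adidas.com
```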

View File

@@ -0,0 +1,7 @@
# Key/Value Format
Kafka messages **MAY** include a key, which needs to be properly designed to have a good balance of data across partitions.
The message key and the payload (often called value) can be serialized independently and can have different formats. For example, the value of the message can be sent in AVRO format, while the message key can be a primitive type (string).
Message keys **SHOULD** be kept as simple as possible and use a primitive type when possible.
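A minimal sketch of how this could be expressed in AsyncAPI using the Kafka message binding (message and schema names are hypothetical):
```yaml
components:
  messages:
    orderCreated:
      bindings:
        kafka:
          key:
            type: string               # simple primitive key, e.g. the order id
      payload:
        $ref: '#/components/schemas/orderCreatedValue'   # independently serialized AVRO value
```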

View File

@@ -1,8 +1,4 @@
# AsyncAPI ID
According to [AsyncAPI documentation](https://v2.asyncapi.com/docs/reference/specification/v2.6.0#A2SIdString), every AsyncAPI spec **SHOULD** use a unique identifier for the application being defined, following RFC-3986.
@@ -12,4 +8,4 @@ More concretely, ASyncAPI specs created in adidas should use the following patte
...
id: urn:fdp:adidas:com:namespace:asyncapi_reference_spec
...
```

View File

@@ -1,9 +1,5 @@
# Message Headers
In addition to the key and value, a Kafka message **MAY** include _**headers**_, which allow extending the information sent with metadata as needed (for example, the source of the data, routing or tracing information, or any other relevant information that could be useful without having to parse the message).
Headers are just an ordered collection of key/value pairs, where the key is a String and the value a serialized Object, the same as the message value itself.
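A hedged sketch of declaring headers on an AsyncAPI message; the header names are hypothetical examples of such metadata:
```yaml
components:
  messages:
    orderCreated:
      headers:                         # key/value pairs travelling alongside the payload
        type: object
        properties:
          x-source:
            type: string
            description: System that produced the message.
          x-trace-id:
            type: string
            description: Correlation identifier for tracing.
```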

View File

@@ -1,16 +1,12 @@
# Servers
All AsyncAPI specs **MUST** include a servers section including references to the right Kafka clusters, defined and maintained globally and made available through domains in Swaggerhub.
Those definitions are handled in Swaggerhub as reusable domains publicly available:
https://design.api.3stripes.io/domains/adidas/asyncapi_adoption_commons/1.0.0
which can be referenced from any spec, picking the right Kafka servers as required (see the example below).
```yaml
...
@@ -23,4 +19,5 @@ servers:
$ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalPro
...
```
**Important note** Don't forget to include _/v1/_ in the URL of the domain.

View File

@@ -1,33 +1,29 @@
# Channels
All AsyncAPI specs **MUST** include definitions for the channels (Kafka topics) including:
* Description of the topic
* Servers in which the topic is available
  * This is a reference to one of the server identifiers included in the servers section
* Publish/subscribe operations
  * Operation ID
  * Summary or short description for the operation
  * Description for the operation
  * Security schemes
  * Tags
  * External Docs
  * Message details
In addition to those supported fields, extension attributes (using the x- prefix) **MAY** be used to specify configuration parameters and metadata. If so, the recommended attributes to use are:
* x-metadata
  * To include additional configuration specific to your team or project
* x-configurations
  * To include Kafka configuration parameters and producers/consumers
As the parameters can be different per environment, it is convenient to add an additional level for the environment (see the examples below).
As part of the publish/subscribe operations, the spec **SHOULD** specify the different Kafka clients currently consuming from the different topics for each cluster/environment. For this, the extension attributes x-producers and x-consumers will be used.
```yaml
...
@@ -89,4 +85,4 @@ channels:
pivotal.pro:
- consumer3
...
```
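As a hypothetical sketch of the x-configurations attribute with one level per environment (parameter names and values are illustrative):
```yaml
channels:
  sales_orders.order_created:
    x-configurations:
      dev:                             # one block per environment
        retention.ms: 604800000        # 7 days
        partitions: 6
      pro:
        retention.ms: 1209600000       # 14 days
        partitions: 12
```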

View File

@@ -1,12 +1,8 @@
# Schemas
Kafka messages **SHOULD** use schemas (AVRO, JSON, Protobuf) registered in the Schema Registry to ensure compatibility between producers/consumers.
If so, always refer to the schema definitions directly in the Schema Registry instead of duplicating the schema definitions inline. This avoids double maintenance.
An example taken directly from the reference spec is shown below:
@@ -32,4 +28,4 @@ components:
$ref: '#/components/schemas/topic1SchemaValue'
```
**Important note** The schema used is a very simple one; it is only used to illustrate how to refer to it.
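A hedged sketch of pointing at a registered schema instead of inlining it; the registry URL and subject name are hypothetical:
```yaml
components:
  schemas:
    topic1SchemaValue:
      # single source of truth: the schema lives in the Schema Registry
      # and the spec only references it
      $ref: 'https://schema-registry.example.com/subjects/namespace.topic1-value/versions/latest/schema'
```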

View File

@@ -1,10 +1,6 @@
# Security Schemes
Specs **MAY** use security schemes to reflect the fact that the Kafka servers use mTLS. This is quite static at the moment, so the recommendation is to reuse the schemes specified in the reference spec.
```yaml
channels:
@@ -22,4 +18,4 @@ components:
type: X509
producerAcl:
type: X509
```

View File

@@ -0,0 +1,10 @@
# External Docs
The external docs **SHOULD** be used to refer to the LeanIX fact sheet associated with the spec.
```yaml
...
externalDocs:
description: LeanIX
url: https://adidas.leanix.net/adidasProduction/factsheet/Application/467ff391-876c-49ad-93bf-facafffc0178
```

View File

@@ -2,9 +2,8 @@
Everyone **MUST** follow the **API First** principle.
The API first principle is an extension of design-first principle. Therefore, a development of an API **MUST** always start with API design without any upfront coding activities.
**API design** (e.g., description, schema) **is the master of truth, not the API implementation.**
API implementation **MUST** always be compliant with the particular API design, which represents the [contract](contract.md) between the API and its consumer.

View File

@@ -0,0 +1,5 @@
# Immutability
After agreement with the stakeholders, the contract **MUST** be published in the API registry in order to make it (that version) immutable.
The API registry acts as a central location for storing and accessing all published APIs.

View File

@@ -0,0 +1,14 @@
# Common Data Types
The API types **MUST** adhere to the formats defined below:
| Data type      | Standard                                                                 | Example                                   |
| -------------- | ------------------------------------------------------------------------ | ----------------------------------------- |
| Date and Time  | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)                        | 2017-06-21T14:07:17Z (always use UTC)     |
| Date           | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)                        | 2017-06-21                                |
| Duration       | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)                        | P3Y6M4DT12H30M5S                          |
| Time interval  | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)                        | 2007-03-01T13:00:00Z/2008-05-11T15:30:00Z |
| Timestamps     | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)                        | 2017-01-01T12:00:00Z                      |
| Language Codes | [ISO 639](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)          | en <-> English                            |
| Country Code   | [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2)    | DE <-> Germany                            |
| Currency       | [ISO 4217](https://en.wikipedia.org/wiki/ISO_4217)                        | EUR <-> Euro                              |
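A minimal sketch of how these formats might appear in an API schema; the field names are hypothetical:
```yaml
properties:
  createdAt:
    type: string
    format: date-time        # ISO 8601 date and time, always UTC: 2017-06-21T14:07:17Z
  deliveryDate:
    type: string
    format: date             # ISO 8601 date: 2017-06-21
  languageCode:
    type: string
    example: en              # ISO 639-1
  countryCode:
    type: string
    example: DE              # ISO 3166-1 alpha-2
  currency:
    type: string
    example: EUR             # ISO 4217
```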

View File

@@ -2,5 +2,13 @@
## adidas General API Guidelines
In the scope of a company, organization or team, the document known as guidelines is a set of best practices or hints provided by a group of subject matter experts in a particular technology. Its most important aspects:
* Help to create a standardized way of completing specific tasks, making outcomes more predictable and consistent
* Help to identify dos and don'ts with regards to a specific technology or tool
* Help to avoid gotchas or problems related to company specifics
* Gather the knowledge of several subject matter experts and prevent others from falling into frequent caveats or mistakes
This is a set of general rules and recommendations that have to be followed along the entire API lifecycle of any API regardless of its type.
**Note** In any case, the content of these guidelines should be taken as recommendations, not as something that must be followed in a mandatory way.

View File

@@ -0,0 +1,5 @@
# Tooling
The current platform available in adidas to design, host, and render APIs specifications is [Swaggerhub](https://design.api.3stripes.io/).
Every API specification **MUST** be hosted in Swaggerhub under the _adidas_ organization.

View File

@@ -1,4 +1,3 @@
# Core REST Principles
This section outlines the foundation upon which the REST API Guidelines are built.

View File

@@ -9,9 +9,6 @@
* [SwaggerHub Documentation](https://app.swaggerhub.com/help/index)
## Learning Path (adidas-Udemy)
* [Basic](https://adidas-itlearning.udemy.com/course/onboarding-16-api-development-management/)
* [Deep Dive](https://adidas-itlearning.udemy.com/course/onboarding-21-api-development-management/)
* [OpenAPI](https://adidas-itlearning.udemy.com/course/openapi-beginner-to-guru/learn/)
* [Get Started with Kong Multi-Cluster](https://adidas-itlearning.udemy.com/course/get-started-kong/learn/lecture/43319098#search)