Changed structure, splitting content into several files

This commit is contained in:
Juan Manuel Pérez
2024-05-08 16:57:21 +02:00
parent eac1ded8d1
commit 8a5586b256
30 changed files with 559 additions and 382 deletions

View File

@@ -67,10 +67,34 @@
## Asynchronous API Guidelines
* [Introduction to guidelines](asynchronous-api-guidelines/01_introduction/a_introduction.md)
* [Basic concepts](asynchronous-api-guidelines/01_introduction/b_basic_concepts.md)
* [General guidelines for asynchronous APIs](asynchronous-api-guidelines/02_asynchronous_api_guidelines/main.md)
* [Introduction to AsyncAPI specs for Kafka](asynchronous-api-guidelines/03_asyncapi_kafka_specs/a_introduction.md)
* [Guidelines for AsyncAPI specs for Kafka](asynchronous-api-guidelines/03_asyncapi_kafka_specs/b_guidelines.md)
* [Tooling for AsyncAPI](asynchronous-api-guidelines/03_asyncapi_kafka_specs/c_tooling.md)
* Basic Concepts
* [Event Driven Architectures](asynchronous-api-guidelines/01_introduction/b_basic_concepts_edas.md)
* [Basic terminology](asynchronous-api-guidelines/01_introduction/c_basic_concepts_terminology.md)
* [Events](asynchronous-api-guidelines/01_introduction/d_basic_concepts_events.md)
* Asynchronous API Guidelines
* [Contract](asynchronous-api-guidelines/02_asynchronous_api_guidelines/a_contract.md)
* [API First](asynchronous-api-guidelines/02_asynchronous_api_guidelines/b_api_first.md)
* [Immutability](asynchronous-api-guidelines/02_asynchronous_api_guidelines/c_immutability.md)
* [Common Data Types](asynchronous-api-guidelines/02_asynchronous_api_guidelines/d_data_types.md)
* [Automatic Schema Registration](asynchronous-api-guidelines/02_asynchronous_api_guidelines/e_schema_registration.md)
* [Schema Data Evolution](asynchronous-api-guidelines/02_asynchronous_api_guidelines/f_schema_data_evolution.md)
* [Key/Value format](asynchronous-api-guidelines/02_asynchronous_api_guidelines/g_key_value_format.md)
* [Message Headers](asynchronous-api-guidelines/02_asynchronous_api_guidelines/h_message_headers.md)
* [Naming Conventions](asynchronous-api-guidelines/02_asynchronous_api_guidelines/i_naming_conventions.md)
* [Protocols](asynchronous-api-guidelines/02_asynchronous_api_guidelines/j_protocols.md)
* [Security](asynchronous-api-guidelines/02_asynchronous_api_guidelines/k_security.md)
* AsyncAPI specs for Kafka
* [Introduction](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [AsyncAPI version](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Internal vs Public specs](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Spec granularity](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Meaningful descriptions](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Self-contained specs](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Contact Information](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [AsyncAPI ID](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Servers](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Channels](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Schemas](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Security Schemes](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [External Docs](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
* [Tooling](asynchronous-api-guidelines/03_asyncapi_kafka_specs)

View File

@@ -139,38 +139,4 @@ However, different bounded contexts don't share the same model and if they need
#### Stream processing
It can be understood as the capability of processing data directly as it is produced or received (hence, in real time or near real time).
A message carries information from one application to another, while an event is a message that provides details of something that has already occurred. One important aspect to note is that depending on the type of information a message contains, it can fall under an event, query, or command.
Overall, events are messages but not all messages are events.
### Using events in an EDA
There are several ways to use events in an EDA:
- Events as notifications
- Events to replicate data
#### Events as notifications
When a system uses events as notifications it becomes a pluggable system. The producers have no knowledge about the consumers and they don't really care about them, instead every consumer can decide if it is interested in the information included in the event.
This way, the number of consumers can be increased (or reduced) without changing anything on the producer side.
This pluggability becomes increasingly important as systems get more complex.
#### Events to replicate data
When events are used to replicate data across services, they include all the necessary information for the target system to keep it locally so that it can be queried with no external interactions.
This is usually called event-carried state transfer which in the end is a form of data integration.
The benefits are similar to those of using a cache system:
- Better isolation and autonomy, as the data stays under the service's control
- Faster data access, as the data is local (particularly important when combining data from different services in different geographies)
- Offline data availability

View File

@@ -0,0 +1,72 @@
# adidas Asynchronous API guidelines
## Basic concepts about asynchronous APIs
### Basic terminology
#### Events
An event is both a fact and a notification, something that already happened in the real world.
- No expectation on any future action
- Includes information about a status change that just happened
- Travels in one direction and it never expects a response (fire and forget)
- Very useful when...
- Loose coupling is important
- When the same piece of information is used by several services
 - When data needs to be replicated across applications
A message in general is any interaction between an emitter and a receiver to exchange information. This implies that any event can be considered a message, but not the other way around.
#### Commands
A command is a special type of message which represents just an action, something that will change the state of a given system.
- Typically synchronous
- There is a clear expectation about a state change that needs to take place in the future
- When they return a response, it indicates completion
- Optionally they can include a result in the response
- Very common to see them in orchestration components
#### Query
It is a special type of message which represents a request to look something up.
- They are always free of side effects (they leave the system unchanged)
- They always require a response (with the requested data)
#### Coupling
The term coupling can be understood as the impact that a change in one component will have on other components. In the end, it is related to the amount of things that a given component shares with others. The more that is shared, the tighter the coupling.
**Note** A tighter coupling is not necessarily a bad thing; it depends on the situation. It will be necessary to assess the tradeoff between providing as much information as possible and avoiding having to change several components as a result of something changing in another component.
The coupling of a single component is actually a function of these factors:
- Information exposed (Interface surface area)
- Number of users
- Operational stability and performance
- Frequency of change
Messaging helps building loosely coupled services because it moves pure data from a highly coupled location (the source) and puts it into a loosely coupled location (the subscriber).
Any operations that need to be performed on the data are done in each subscriber and never at the source. This way, messaging technologies (like Kafka) take most of the operational issues off the table.
All business systems in larger organizations need a base level of essential data coupling. In other words, functional couplings are optional, but core data couplings are essential.
#### Bounded context
A bounded context is a small group of services that share the same domain model, are usually deployed together and collaborate closely.
An analogy can be drawn with a hierarchical organization inside a company:
- Different departments are loosely coupled
- Inside departments there will be a lot more interactions across services and the coupling will be tighter
One of the big ideas of Domain-Driven Design (DDD) was to create boundaries around areas of a business domain and model them separately. So within the same bounded context the domain model is shared and everything is available for everyone there.
However, different bounded contexts don't share the same model and if they need to interact they will do it through more restricted interfaces.
#### Stream processing
It can be understood as the capability of processing data directly as it is produced or received (hence, in real time or near real time).

View File

@@ -0,0 +1,31 @@
# adidas Asynchronous API guidelines
## Basic concepts about asynchronous APIs
### Using events in an EDA
There are several ways to use events in an EDA:
- Events as notifications
- Events to replicate data
#### Events as notifications
When a system uses events as notifications it becomes a pluggable system. The producers have no knowledge about the consumers and they don't really care about them, instead every consumer can decide if it is interested in the information included in the event.
This way, the number of consumers can be increased (or reduced) without changing anything on the producer side.
This pluggability becomes increasingly important as systems get more complex.
#### Events to replicate data
When events are used to replicate data across services, they include all the necessary information for the target system to keep it locally so that it can be queried with no external interactions.
This is usually called event-carried state transfer which in the end is a form of data integration.
The benefits are similar to those of using a cache system:
- Better isolation and autonomy, as the data stays under the service's control
- Faster data access, as the data is local (particularly important when combining data from different services in different geographies)
- Offline data availability

View File

@@ -0,0 +1,9 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Contract
The definition of an asynchronous API **MUST** represent a contract between API owners and the stakeholders.
That contract **MUST** contain enough information to use the API (servers, URIs, credentials, contact information, etc) and to identify which kind of information is being exchanged there.
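As an illustrative sketch, such a contract surfaces those elements in the spec's `info` and `servers` sections. The title and description below are hypothetical; the contact and server reference reuse examples from later sections of these guidelines:

```yaml
asyncapi: 2.6.0
info:
  title: Namespace events API        # hypothetical title
  version: 1.0.0
  description: States which kind of information is exchanged and how to use the API
  contact:
    name: "Main point of contact"
    email: "team_dl@adidas.com"
servers:
  pivotalDev:
    $ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalDev
```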

View File

@@ -0,0 +1,10 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### API First
Asynchronous APIs **SHOULD** use the API First principle:
- The API design **SHOULD** involve all relevant stakeholders (developers, consumers, ...) to ensure that the final design fulfils requirements from different perspectives
- The resulting API specification will be the source of truth rather than the API implementation

View File

@@ -0,0 +1,9 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Immutability
After agreement with the stakeholders, the contract **MUST** be published in order to make it immutable. Changes to the API related to the data model **MUST** be published in a schema registry.
The schema registry acts as a central location for storing and accessing the schemas of all published APIs.

View File

@@ -0,0 +1,18 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Common data types
The API types **MUST** adhere to the formats defined below:
| Data type | Standard | Example |
| --------- | -------- | ------- |
| Date and Time | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-06-21T14:07:17Z (Always use UTC) |
| Date | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-06-21 |
| Duration | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | P3Y6M4DT12H30M5S |
| Time interval | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2007-03-01T13:00:00Z/2008-05-11T15:30:00Z |
| Timestamps | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-01-01T12:00:00Z |
| Language Codes | [ISO 639](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) | en <-> English |
| Country Code | [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) | DE <-> Germany |
| Currency | [ISO 4217](https://en.wikipedia.org/wiki/ISO_4217) | EUR <-> Euro |
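As a sketch, these formats can be enforced in a message payload schema. The field names below are hypothetical; `format` and `pattern` are standard JSON Schema keywords:

```yaml
payload:
  type: object
  properties:
    createdAt:
      type: string
      format: date-time      # ISO 8601, always UTC, e.g. 2017-06-21T14:07:17Z
    countryCode:
      type: string
      pattern: "^[A-Z]{2}$"  # ISO 3166-1 alpha-2, e.g. DE
    currency:
      type: string
      pattern: "^[A-Z]{3}$"  # ISO 4217, e.g. EUR
```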

View File

@@ -0,0 +1,7 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Automatic schema registration
Applications **MUST NOT** enable automatic registration of schemas because FDP's operational model for the Schema Registry relies on GitOps (every operation is done through Git PRs + automated pipelines).
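In practice this usually means disabling auto-registration in the client's serializer configuration. A sketch for a Spring Boot Kafka client is shown below; `auto.register.schemas` is the standard Confluent serializer property, and the registry URL is hypothetical:

```yaml
spring:
  kafka:
    properties:
      schema.registry.url: https://schema-registry.example.com  # hypothetical URL
      auto.register.schemas: false  # schemas are registered via GitOps, never by clients
```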

View File

@@ -2,46 +2,6 @@
## Asynchronous API guidelines
This document is biased towards Kafka, which is the technology used in adidas for building Event Driven Architectures.
### Contract
The definition of an asynchronous API **MUST** represent a contract between API owners and the stakeholders.
That contract **MUST** contain enough information to use the API (servers, URIs, credentials, contact information, etc) and to identify which kind of information is being exchanged there.
### API First
Asynchronous APIs **SHOULD** use the API First principle:
- The API design **SHOULD** involve all relevant stakeholders (developers, consumers, ...) to ensure that the final design fulfils requirements from different perspectives
- The resulting API specification will be the source of truth rather than the API implementation
### Immutability
After agreement with the stakeholders, the contract **MUST** be published in order to make it immutable. Changes to the API related to the data model **MUST** be published in a schema registry.
The schema registry acts as a central location for storing and accessing the schemas of all published APIs.
### Common data types
The API types **MUST** adhere to the formats defined below:
| Data type | Standard | Example |
| --------- | -------- | ------- |
| Date and Time | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-06-21T14:07:17Z (Always use UTC) |
| Date | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-06-21 |
| Duration | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | P3Y6M4DT12H30M5S |
| Time interval | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2007-03-01T13:00:00Z/2008-05-11T15:30:00Z |
| Timestamps | [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) | 2017-01-01T12:00:00Z |
| Language Codes | [ISO 639](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) | en <-> English |
| Country Code | [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) | DE <-> Germany |
| Currency | [ISO 4217](https://en.wikipedia.org/wiki/ISO_4217) | EUR <-> Euro |
### Automatic schema registration
Applications **MUST NOT** enable automatic registration of schemas because FDP's operational model for the Schema Registry relies on GitOps (every operation is done through Git PRs + automated pipelines).
### Schemas and data evolution
All asynchronous APIs **SHOULD** leverage Schema Registry to ensure consistency across consumers/producers with regards to message structure, and to ensure compatibility across different versions.
@@ -139,44 +99,4 @@ If for any reason you need to use a less strict compatibility mode in a topic, o
Instead, a new topic **SHOULD** be used to avoid unexpected behaviors or broken integrations. This allows a smooth transition of clients to the definitive topic, and once all clients are migrated the original one can be decommissioned.
Alternatively, instead of modifying existing fields, adding the changes as new fields and having both coexist **MAY** be considered as a suboptimal approach. Take into account that this pollutes your topic and can cause some confusion.
### Key/Value message format
Kafka messages **MAY** include a key, which needs to be properly designed to have a good balance of data across partitions.
The message key and the payload (often called value) can be serialized independently and can have different formats. For example, the value of the message can be sent in AVRO format, while the message key can be a primitive type (string). 
Message keys **SHOULD** be kept as simple as possible and use a primitive type when possible.
### Message headers
In addition to the key and value, a Kafka message **MAY** include ***headers***, which allow to extend the information sent with some metadata as needed (for example, source of the data, routing or tracing information or any relevant information that could be useful without having to parse the message).
Headers are just an ordered collection of key/value pairs, being the key a String and the value a serialized Object, the same as the message value itself.
### Naming conventions
As general naming conventions, asynchronous APIs **MUST** adhere to the following:
- Use of English
- Avoid acronyms, or explain them when used
- Use camelCase unless stated otherwise
### Protocols
Protocols define how clients and servers communicate in an asynchronous architecture.
The accepted protocols for asynchronous APIs are:
- Kafka
- HTTPS
- WebSockets
- MQTT
This version of the guidelines focuses on the Kafka protocol, but it could be extended in the future. In any case, this document will be updated to reflect the state of the art.
### Security
The [security guidelines](https://github.com/adidas/api-guidelines/blob/feature/asyncapi-guidelines/general-guidelines/security.md) for regular APIs **MUST** be followed strictly when applicable.

View File

@@ -0,0 +1,11 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Key/Value message format
Kafka messages **MAY** include a key, which needs to be properly designed to have a good balance of data across partitions.
The message key and the payload (often called value) can be serialized independently and can have different formats. For example, the value of the message can be sent in AVRO format, while the message key can be a primitive type (string). 
Message keys **SHOULD** be kept as simple as possible and use a primitive type when possible.
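As a sketch, the AsyncAPI Kafka message binding can declare a primitive key next to the payload. The message name and schema reference below are hypothetical:

```yaml
components:
  messages:
    topicNameMessage:          # hypothetical message name
      bindings:
        kafka:
          key:
            type: string       # simple primitive key, serialized independently of the value
      payload:
        $ref: '#/components/schemas/topicNameValue'  # hypothetical value schema, e.g. AVRO-backed
```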

View File

@@ -0,0 +1,9 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Message headers
In addition to the key and value, a Kafka message **MAY** include ***headers***, which allow to extend the information sent with some metadata as needed (for example, source of the data, routing or tracing information or any relevant information that could be useful without having to parse the message).
Headers are just an ordered collection of key/value pairs, being the key a String and the value a serialized Object, the same as the message value itself.
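As a sketch, headers can be described in the AsyncAPI message object next to the payload. The message and header names below are hypothetical examples of such metadata:

```yaml
components:
  messages:
    topicNameMessage:        # hypothetical message name
      headers:
        type: object
        properties:
          source:
            type: string     # producing application, readable without parsing the value
          traceId:
            type: string     # tracing/routing information
      payload:
        $ref: '#/components/schemas/topicNameValue'  # hypothetical value schema
```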

View File

@@ -0,0 +1,11 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Naming conventions
As general naming conventions, asynchronous APIs **MUST** adhere to the following:
- Use of English
- Avoid acronyms, or explain them when used
- Use camelCase unless stated otherwise
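A small sketch of how these conventions look in a payload schema (field names are hypothetical):

```yaml
payload:
  type: object
  properties:
    orderId:             # English, camelCase
      type: string
    stockKeepingUnit:    # acronym (SKU) spelled out, or explained where it is used
      type: string
```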

View File

@@ -0,0 +1,16 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Protocols
Protocols define how clients and servers communicate in an asynchronous architecture.
The accepted protocols for asynchronous APIs are:
- Kafka
- HTTPS
- WebSockets
- MQTT
This version of the guidelines focuses on the Kafka protocol, but it could be extended in the future. In any case, this document will be updated to reflect the state of the art.

View File

@@ -0,0 +1,7 @@
# adidas Asynchronous API guidelines
## Asynchronous API guidelines
### Security
The [security guidelines](https://github.com/adidas/api-guidelines/blob/feature/asyncapi-guidelines/general-guidelines/security.md) for regular APIs **MUST** be followed strictly when applicable.

View File

@@ -0,0 +1,14 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### AsyncAPI version
Any version of AsyncAPI **MAY** be used for spec definitions.
However, to be aligned with adidas tooling, spec versions **SHOULD** be *v2.6.0*, because as of this document's creation (April 2024) this is the highest version supported by Swaggerhub, the current API portal used to render, discover and publish specs.
```yaml
asyncapi: 2.6.0
...
```

View File

@@ -1,253 +0,0 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### AsyncAPI version
Any version of AsyncAPI **MAY** be used for spec definitions.
However, to be aligned with adidas tooling, spec versions **SHOULD** be *v2.6.0*, because as of this document's creation (April 2024) this is the highest version supported by Swaggerhub, the current API portal used to render, discover and publish specs.
```yaml
asyncapi: 2.6.0
...
```
### Internal vs public specs
AsyncAPI specs **MAY** be created for both public and internal APIs.
- Public APIs are those created to be consumed by others
- Internal APIs are only for development teams for a particular project
There are no differences with regards to the spec definition, but internal APIs **SHOULD** have restricted access limited only to the internal development team for a particular project or product.
This access control is handled through Role-Based Access Control (RBAC) implemented in Swaggerhub.
### Spec granularity
In Fast Data Platform (FDP) all resources are grouped by namespace.
For that reason, specs **SHOULD** be created with a 1:1 relation to namespaces. In other words, every namespace will have an AsyncAPI spec including all the assets belonging to that namespace.
Different granularities **MAY** be chosen depending on the needs. 
### Meaningful descriptions
All fields included in the specs **MUST** include a proper description. 
### Self-contained specs
All AsyncAPI specs **SHOULD** include as much information as needed in order to make the spec self-contained and clearly documented
### Contact information
AsyncAPI specs **MUST** include at least one main contact under the info.contact section.
The spec only allows one contact there, but additional contacts **MAY** be included using extension fields. In that case, the extension field *x-additional-responsibles* **MUST** be used.
For example:
```yaml
...
info:
...
contact:
name: "Main point of contact"
email: "team_dl@adidas.com"
x-additional-responsibles:
- person2@adidas.com
- person3@adidas.com
- person4@adidas.com
```
### AsyncAPI ID
According to [AsyncAPI documentation](https://v2.asyncapi.com/docs/reference/specification/v2.6.0#A2SIdString), every AsyncAPI spec **SHOULD** use a unique identifier for the application being defined, following RFC-3986.
More concretely, AsyncAPI specs created in adidas should use the following pattern:
```yaml
...
id: urn:fdp:adidas:com:namespace:asyncapi_reference_spec
...
```
### Servers
All AsyncAPI specs **MUST** include a servers section including references to the right Kafka clusters, defined and maintained by FDP team and made available through domains in Swaggerhub.
Those definitions are handled in Swaggerhub as publicly available reusable domains:
https://design.api.3stripes.io/domains/adidas/asyncapi_adoption_commons/1.0.0
which can be referenced from any spec, picking the right Kafka servers as required (see example below).
```yaml
...
servers:
pivotalDev:
$ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalDev
pivotalSit:
$ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalSit
pivotalPro:
$ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalPro
...
```
**Important note** Don't forget to include '*/v1/*' in the URL of the domain
### Channels
All AsyncAPI specs **MUST** include definitions for the channels (kafka topics) including:
- Description of the topic
- Servers in which the topic is available
- This is a reference to one of the server identifiers included in the servers section
- publish/subscribe operations
- Operation ID
- Summary or short description for the operation
- Description for the operation
- Security schemes
- Tags
- External Docs
- Message details
In addition to those supported fields, extension attributes (using the x- prefix) **MAY** be used to specify additional configuration parameters and metadata. If so, the recommended attributes to use are:
- x-metadata
- To include additional configuration specific to your team or project
- x-configurations
- To include Kafka configuration parameters and producers/consumers
As the parameters can differ per environment, it is very convenient to add an additional level for the environment.
As part of the publish/subscribe operations, the spec **SHOULD** specify the Kafka clients currently producing to or consuming from each topic for each cluster/environment. For this, the extension attributes x-producers and x-consumers will be used.
```yaml
...
channels:
namespace.source.event.topic-name:
description: A description of the purpose of the topic and the contained information
servers: ["pivotalDev", "pivotalSit", "pivotalPro"]
x-metadata:
myField1: myValue1
myField2: myValue2
x-configurations:
pivotal.dev:
kafka:
partitions: 12
replicas: 1
topicConfiguration:
min.insync.replicas: "1"
retention.ms: "2592000000"
pivotal.sit:
kafka:
partitions: 12
replicas: 2
topicConfiguration:
min.insync.replicas: "1"
retention.ms: "2592000000"
    publish:
operationId: "producer"
summary: "Description for the operation"
description: "An extensive explanation about the operation"
security:
- producerAcl: []
tags:
- name: tagA
- name: tagB
x-producers:
pivotal.dev:
- producer1
- producer2
pivotal.sit:
- producer1
- producer2
pivotal.pro:
- producer3
- producer4
externalDocs:
description: documentation
url: http://confluence.adidas.fdp/catalogue/myTopic
...
subscribe:
operationId: "consumer"
...
x-consumers:
pivotal.dev:
- consumer1
- consumer2
pivotal.sit:
- consumer1
- consumer2
pivotal.pro:
- consumer3
...
```
### Schemas
Kafka messages **SHOULD** use schemas (AVRO, Json, Protobuf) registered in the Schema Registry to ensure compatibility between producers/consumers.
If so, always refer to the schema definitions directly in the schema registry instead of duplicating the schema definitions inline. This is to avoid double maintenance. 
An example directly taken from reference spec is shown below
```yaml
...
channels:
namespace.source.event.topic-name:
...
publish:
...
message:
$ref: '#/components/messages/topic1Payload'
components:
...
schemas:
...
topic1SchemaValue:
schemaFormat: 'application/vnd.apache.avro;version=1.9.0'
payload:
$ref: https://sit-fdp-pivotal-schema-registry.api.3stripes.io/subjects/pea_fd_fdp.sample.test-value/versions/latest/schema
messages:
topic1Payload:
$ref: '#/components/schemas/topic1SchemaValue'
```
**Important note** The used schema is a very simple one, it is only used to illustrate how to refer to it.
### Security Schemes
Specs **MAY** use security schemes to reflect the fact that the Kafka servers use mTLS. This is quite static at the moment, so the recommendation is to reuse the ones specified in the reference spec.
```yaml
channels:
namespace.source.event.topic-name:
...
publish:
...
security:
- producerAcl: []
...
components:
securitySchemes:
...
consumerAcl:
type: X509
producerAcl:
type: X509
```
### External docs
The external docs **SHOULD** be used to refer to the LeanIX factsheet associated with the spec.
```yaml
...
externalDocs:
description: LeanIX
url: https://adidas.leanix.net/adidasProduction/factsheet/Application/467ff391-876c-49ad-93bf-facafffc0178
```

View File

@@ -0,0 +1,14 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Internal vs public specs
AsyncAPI specs **MAY** be created for both public and internal APIs.
- Public APIs are those created to be consumed by others
- Internal APIs are only for development teams for a particular project
There are no differences with regards to the spec definition, but internal APIs **SHOULD** have restricted access limited only to the internal development team for a particular project or product.
This access control is handled through Role-Based Access Control (RBAC) implemented in Swaggerhub.

View File

@@ -0,0 +1,11 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Spec granularity
In Fast Data Platform (FDP) all resources are grouped by namespace.
For that reason, specs **SHOULD** be created with a 1:1 relation to namespaces. In other words, every namespace will have an AsyncAPI spec including all the assets belonging to that namespace.
Different granularities **MAY** be chosen depending on the needs. 

View File

@@ -0,0 +1,7 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Meaningful descriptions
All fields included in the specs **MUST** include a proper description. 
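A minimal sketch of what this looks like in practice; the channel name follows the pattern used elsewhere in these guidelines, and the description texts are placeholders:

```yaml
channels:
  namespace.source.event.topic-name:
    description: A description of the purpose of the topic and the contained information
    publish:
      description: An extensive explanation about the operation
      message:
        description: What the message carries and when it is produced
```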

View File

@@ -0,0 +1,7 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Self-contained specs
All AsyncAPI specs **SHOULD** include as much information as needed in order to make the spec self-contained and clearly documented

View File

@@ -0,0 +1,24 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Contact information
AsyncAPI specs **MUST** include at least one main contact under the info.contact section.
The spec only allows one contact there, but additional contacts **MAY** be included using extension fields. In that case, the extension field *x-additional-responsibles* **MUST** be used.
For example:
```yaml
...
info:
...
contact:
name: "Main point of contact"
email: "team_dl@adidas.com"
x-additional-responsibles:
- person2@adidas.com
- person3@adidas.com
- person4@adidas.com
```

View File

@@ -0,0 +1,15 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### AsyncAPI ID
According to [AsyncAPI documentation](https://v2.asyncapi.com/docs/reference/specification/v2.6.0#A2SIdString), every AsyncAPI spec **SHOULD** use a unique identifier for the application being defined, following RFC-3986.
More concretely, AsyncAPI specs created in adidas should use the following pattern:
```yaml
...
id: urn:fdp:adidas:com:namespace:asyncapi_reference_spec
...
```

View File

@@ -0,0 +1,26 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Servers
All AsyncAPI specs **MUST** include a servers section including references to the right Kafka clusters, defined and maintained by FDP team and made available through domains in Swaggerhub.
Those definitions are handled in Swaggerhub as publicly available reusable domains:
https://design.api.3stripes.io/domains/adidas/asyncapi_adoption_commons/1.0.0
which can be referenced from any spec, picking the right Kafka servers as required (see example below).
```yaml
...
servers:
  pivotalDev:
    $ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalDev
  pivotalSit:
    $ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalSit
  pivotalPro:
    $ref: https://design.api.3stripes.io/v1/domains/adidas/asyncapi_adoption_commons/1.0.0#/components/servers/pivotalPro
...
```
**Important note:** Don't forget to include '*/v1/*' in the URL of the domain.

View File

@@ -0,0 +1,92 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Channels
All AsyncAPI specs **MUST** include definitions for the channels (Kafka topics), including:
- Description of the topic
- Servers on which the topic is available
  - This is a reference to one of the server identifiers included in the servers section
- Publish/subscribe operations
  - Operation ID
  - Summary or short description of the operation
  - Description of the operation
  - Security schemes
  - Tags
  - External docs
  - Message details

In addition to those supported fields, extension attributes (using the *x-* prefix) **MAY** be used to specify additional configuration parameters and metadata. If so, the recommended attributes are:
- *x-metadata*
  - To include additional configuration specific to your team or project
- *x-configurations*
  - To include Kafka configuration parameters and producers/consumers

As these parameters can differ per environment, it is very convenient to add an additional level for the environment.
As part of the publish/subscribe operations, the spec **SHOULD** specify the different Kafka clients currently producing to and consuming from the topics for each cluster/environment. For this, the extension attributes *x-producers* and *x-consumers* are used.
```yaml
...
channels:
  namespace.source.event.topic-name:
    description: A description of the purpose of the topic and the contained information
    servers: ["pivotalDev", "pivotalSit", "pivotalPro"]
    x-metadata:
      myField1: myValue1
      myField2: myValue2
    x-configurations:
      pivotal.dev:
        kafka:
          partitions: 12
          replicas: 1
          topicConfiguration:
            min.insync.replicas: "1"
            retention.ms: "2592000000"
      pivotal.sit:
        kafka:
          partitions: 12
          replicas: 2
          topicConfiguration:
            min.insync.replicas: "1"
            retention.ms: "2592000000"
    publish:
      operationId: "producer"
      summary: "Description for the operation"
      description: "An extensive explanation about the operation"
      security:
        - producerAcl: []
      tags:
        - name: tagA
        - name: tagB
      x-producers:
        pivotal.dev:
          - producer1
          - producer2
        pivotal.sit:
          - producer1
          - producer2
        pivotal.pro:
          - producer3
          - producer4
      externalDocs:
        description: documentation
        url: http://confluence.adidas.fdp/catalogue/myTopic
      ...
    subscribe:
      operationId: "consumer"
      ...
      x-consumers:
        pivotal.dev:
          - consumer1
          - consumer2
        pivotal.sit:
          - consumer1
          - consumer2
        pivotal.pro:
          - consumer3
...
```

View File

@@ -0,0 +1,35 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Schemas
Kafka messages **SHOULD** use schemas (Avro, JSON, Protobuf) registered in the Schema Registry to ensure compatibility between producers and consumers.
If so, always refer to the schema definitions directly in the Schema Registry instead of duplicating them inline, to avoid double maintenance.
An example taken directly from the reference spec is shown below:
```yaml
...
channels:
  namespace.source.event.topic-name:
    ...
    publish:
      ...
      message:
        $ref: '#/components/messages/topic1Payload'
components:
  ...
  schemas:
    ...
    topic1SchemaValue:
      schemaFormat: 'application/vnd.apache.avro;version=1.9.0'
      payload:
        $ref: https://sit-fdp-pivotal-schema-registry.api.3stripes.io/subjects/pea_fd_fdp.sample.test-value/versions/latest/schema
  messages:
    topic1Payload:
      $ref: '#/components/schemas/topic1SchemaValue'
```
**Important note:** The schema used here is a very simple one; it is only meant to illustrate how to refer to a schema in the registry.

View File

@@ -0,0 +1,25 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### Security Schemes
Specs **MAY** use security schemes to reflect the fact that the Kafka servers use mTLS. This configuration is quite static at the moment, so the recommendation is to reuse the schemes specified in the reference spec.
```yaml
channels:
  namespace.source.event.topic-name:
    ...
    publish:
      ...
      security:
        - producerAcl: []
...
components:
  securitySchemes:
    ...
    consumerAcl:
      type: X509
    producerAcl:
      type: X509
```

View File

@@ -0,0 +1,14 @@
# adidas Asynchronous API guidelines
## AsyncAPI guidelines for Kafka
### External docs
The external docs **SHOULD** be used to refer to the LeanIX factsheet associated with the spec.
```yaml
...
externalDocs:
  description: LeanIX
  url: https://adidas.leanix.net/adidasProduction/factsheet/Application/467ff391-876c-49ad-93bf-facafffc0178
```

View File

@@ -1,8 +1,34 @@
## Asynchronous API Guidelines
* [Introduction to guidelines](asynchronous-api-guidelines/01_introduction/a_introduction.md)
* Basic Concepts
  * [Event Driven Architectures](asynchronous-api-guidelines/01_introduction/b_basic_concepts_edas.md)
  * [Basic terminology](asynchronous-api-guidelines/01_introduction/c_basic_concepts_terminology.md)
  * [Events](asynchronous-api-guidelines/01_introduction/d_basic_concepts_events.md)
* Asynchronous API Guidelines
  * [Contract](asynchronous-api-guidelines/02_asynchronous_api_guidelines/a_contract.md)
  * [API First](asynchronous-api-guidelines/02_asynchronous_api_guidelines/b_api_first.md)
  * [Immutability](asynchronous-api-guidelines/02_asynchronous_api_guidelines/c_immutability.md)
  * [Common Data Types](asynchronous-api-guidelines/02_asynchronous_api_guidelines/d_data_types.md)
  * [Automatic Schema Registration](asynchronous-api-guidelines/02_asynchronous_api_guidelines/e_schema_registration.md)
  * [Schema Data Evolution](asynchronous-api-guidelines/02_asynchronous_api_guidelines/f_schema_data_evolution.md)
  * [Key/Value format](asynchronous-api-guidelines/02_asynchronous_api_guidelines/g_key_value_format.md)
  * [Message Headers](asynchronous-api-guidelines/02_asynchronous_api_guidelines/h_message_headers.md)
  * [Naming Conventions](asynchronous-api-guidelines/02_asynchronous_api_guidelines/i_naming_conventions.md)
  * [Protocols](asynchronous-api-guidelines/02_asynchronous_api_guidelines/j_protocols.md)
  * [Security](asynchronous-api-guidelines/02_asynchronous_api_guidelines/k_security.md)
* AsyncAPI specs for Kafka
  * [Introduction](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [AsyncAPI version](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Internal vs Public specs](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Spec granularity](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Meaningful descriptions](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Self-contained specs](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Contact Information](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [AsyncAPI ID](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Servers](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Channels](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Schemas](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Security Schemes](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [External Docs](asynchronous-api-guidelines/03_asyncapi_kafka_specs)
  * [Tooling](asynchronous-api-guidelines/03_asyncapi_kafka_specs)