Sometimes you need to receive and process messages from a third-party supplier, but:

- the supplier expects an HTTP endpoint to send the messages to
- you want to decouple receiving from processing, because you want asynchronous processing

In this scenario the standard approach is to use a queue, which can be either an Azure Service Bus queue or an Azure Storage Account queue. For this use case I'm using a Storage Account queue.
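As a rough sketch of this pattern, the APIM inbound policy below forwards the incoming request straight to a Storage Account queue using the APIM managed identity, so receiving is decoupled from processing. The storage account name `mystorageaccount` and queue name `incoming-messages` are placeholders, and the client is assumed to POST its message; treat this as an outline under those assumptions, not a definitive implementation.

```xml
<inbound>
    <base />
    <!-- Acquire a bearer token for Azure Storage via the APIM managed identity -->
    <authentication-managed-identity resource="https://storage.azure.com/" />
    <!-- The Storage queue REST API requires a service version header -->
    <set-header name="x-ms-version" exists-action="override">
        <value>2020-10-02</value>
    </set-header>
    <!-- Wrap the payload in the QueueMessage envelope the REST API expects;
         base64-encoding matches what the Storage SDKs do by default -->
    <set-body>@{
        var payload = context.Request.Body.As&lt;string&gt;(preserveContent: true);
        return "&lt;QueueMessage&gt;&lt;MessageText&gt;" +
               Convert.ToBase64String(Encoding.UTF8.GetBytes(payload)) +
               "&lt;/QueueMessage&gt;&lt;/MessageText&gt;".Replace("&lt;/QueueMessage&gt;&lt;/MessageText&gt;", "&lt;/MessageText&gt;&lt;/QueueMessage&gt;");
    }</set-body>
    <!-- Point the backend at the queue's "messages" endpoint -->
    <set-backend-service base-url="https://mystorageaccount.queue.core.windows.net" />
    <rewrite-uri template="/incoming-messages/messages" />
</inbound>
```

The managed identity of the APIM instance would additionally need a role assignment (such as Storage Queue Data Message Sender) on the storage account for the token to be accepted.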
When you work with Azure API Management on a regular basis, you are probably familiar with policies. Policies allow you to perform actions or adjustments on the incoming request before it's sent to the backend API, or to adjust the response before it's returned to the caller. Policies can be applied at various levels, so-called scopes, and each lower level can inherit the policy of a higher level:

- Global level => executed for all APIs
- Product level => executed for all APIs under a product
- API level => executed for all operations under an API
- Operation level => executed for this single operation

Maintenance and reuse issues

The main problems with policies have always been maintenance and reuse.
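Inheritance between these scopes is controlled with the `<base />` element: at each scope it pulls in the policies of the enclosing scope at that exact point. A minimal sketch at operation scope (the header name and value are made up for illustration):

```xml
<policies>
    <inbound>
        <!-- Runs the inherited global/product/API inbound policies first -->
        <base />
        <!-- Operation-specific addition runs after the inherited policies -->
        <set-header name="x-operation" exists-action="override">
            <value>get-orders</value>
        </set-header>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```

Moving `<base />` below the operation-specific statements would reverse the execution order, which is a common source of confusion when policies at multiple scopes touch the same headers.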
Azure Service Bus is one of the services Microsoft has identified as part of its Integration Services and is an important component in messaging solutions. It can be interacted with in a variety of ways, for example via the SDK or a REST endpoint. Another key integration service is Azure API Management (APIM), which is used for centralizing endpoint management. In an integration landscape, Service Bus endpoints should also be exposed via API Management, so we can provide clients with a consistent way of accessing endpoints.
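As a sketch of what that exposure can look like, the inbound policy below sends the incoming request on to a Service Bus queue over its REST endpoint, authenticating with the APIM managed identity. The namespace `didago-ns` and queue `orders` are placeholder names, and the caller is assumed to POST the message body:

```xml
<inbound>
    <base />
    <!-- Authenticate to Service Bus with the APIM managed identity -->
    <authentication-managed-identity resource="https://servicebus.azure.net" />
    <set-backend-service base-url="https://didago-ns.servicebus.windows.net" />
    <!-- Service Bus REST API: POST /{queue-or-topic}/messages sends a message -->
    <rewrite-uri template="/orders/messages" />
</inbound>
```

The APIM identity would also need a Service Bus role assignment (such as Azure Service Bus Data Sender) on the namespace or queue for the send to succeed.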
Azure API Management (APIM) is one of the main integration components in the API-driven world today. It's a platform for abstracting API details from client applications, making them more resilient to change. In its most basic form APIM just passes the request from client to API, but in many cases something needs to be done at the APIM level to validate, handle, adjust or fix something in the flow.
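A small example of such a validation step is the built-in `check-header` policy, which rejects a request before it ever reaches the backend. The header name and expected value here are invented for illustration:

```xml
<inbound>
    <base />
    <!-- Reject requests that don't carry the expected header -->
    <check-header name="x-client-key" failed-check-httpcode="401"
                  failed-check-error-message="Missing or invalid key" ignore-case="true">
        <value>expected-value</value>
    </check-header>
</inbound>
```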
I've been working with Azure Cosmos DB since it was still called DocumentDB. However, as an integration consultant it was never a real focus area for me, but on my current contract it's one of the main components of the backend. Starting from scratch and going all the way to go-live, I've learned a lot and also found out that a few decisions will make or break your solution. In this blog post I'd like to share what I've learned.
There are a couple of ways to implement infrastructure as code and create Azure resources in an automated way. From an Azure point of view, the best-known approach is using ARM templates. Everybody who has worked with ARM templates knows that they are complex to work with and nearly impossible to debug, especially for larger deployments. Microsoft has tried to improve the experience by providing validation tools, but working with complex JSON structures remains, much like working with YAML, really hard.
It's a best practice to store secrets in Azure Key Vault, and when you need them in an Azure API Management policy, you can use managed identities. Accessing Key Vault to read the secret can simply be done with this piece of policy:

```xml
<send-request mode="new" response-variable-name="keyvaultResponse" timeout="20" ignore-error="false">
    <set-url>https://didago-kv.vault.azure.net/secrets/my-secret/d24b7ce4e3a54343b9cf0da3b6bfe156/?api-version=7.0</set-url>
    <set-method>GET</set-method>
    <authentication-managed-identity resource="https://vault.azure.net" />
</send-request>
```

The response contains the secret and can be read and stored in a local variable like this: