Recently, I ran into a nasty production issue with the Azure Tables managed connector in Logic Apps that’s easy to miss and hard to diagnose. Once an Azure Storage Table grows beyond a certain size (in my case around 1.2 GB, i.e., over a million entities), queries that do NOT include a PartitionKey start returning an empty array — without any error — even though the same query returns results in Storage Explorer.
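To make the difference concrete, here is a minimal sketch using the azure-data-tables Python SDK rather than the managed connector itself; the connection string, table name and filter values are placeholders I made up for illustration. The first query has the shape that the connector silently answered with an empty array on my large table, while scoping the same predicate to a PartitionKey kept results coming back.

```python
from azure.data.tables import TableClient

# Placeholder connection string, table name and filter values, purely for illustration.
client = TableClient.from_connection_string(
    conn_str="<storage-connection-string>",
    table_name="Orders",
)

# Cross-partition query: no PartitionKey in the filter. This is the shape of
# query the managed connector started answering with an empty array (and no
# error) once the table grew past roughly a million entities.
cross_partition = client.query_entities("Status eq 'New'")

# Partition-scoped query: the same predicate constrained to a single partition.
# Keeping a PartitionKey in the filter avoided the empty results.
scoped = client.query_entities("PartitionKey eq '2024-05' and Status eq 'New'")

for entity in scoped:
    print(entity["RowKey"], entity["Status"])
```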
If you've ever tried to provision a Logic App (Consumption) end‑to‑end with an API connection for Azure Tables wired to a managed identity, you probably noticed the documentation is – let's say – thin. You quickly end up reverse engineering exported ARM templates or clicking in the portal to see what gets generated. This post documents a repeatable Bicep approach (user‑assigned managed identity + Azure Tables connection) and why (for now) AI is still of limited help for these integration cases.
Introduction

It's undeniable that AI is everywhere, and it feels like everybody is jumping on the AI train right now. Every day a new astonishing example of the capabilities of AI is demonstrated. How can I utilize the power of AI? It's a fact that AI will change the way we work, so I'd better learn how to benefit from it.
To be honest, I really like what AI can do for me.
In a previous blog post I already briefly touched on the validate-content policy. However, that wasn't the main topic at the time; the focus was more on the performance and capacity impact of using this specific policy.
Recently I was tasked with setting up policy fragments to apply content validation on incoming messages in API Management. The policy itself seems quite straightforward, but I did run into something unexpected which I think is worth a blog post.
In every scenario where you store data, you need to make sure your data is as clean and relevant as possible. This is especially true when storing data in the cloud, where every byte stored and transferred costs money. Not only does storing data have its price tag; keeping irrelevant/old/obsolete data around will also impact overall performance, so you might need to buy more capacity to maintain the performance you need.
So for several reasons it's important to keep your data clean and tidy.
Sometimes you need to receive and process messages from a 3rd party supplier, but:
- the supplier expects an HTTP endpoint to send the messages to
- you want to decouple receiving from processing, because you want asynchronous processing

In this scenario the standard approach is to use a queue; that can be either an Azure Service Bus queue or an Azure Storage account queue. For this use case I'm using a Storage account queue.
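As a minimal sketch of the receiving side, here is what the decoupling boils down to in Python with the azure-storage-queue SDK; the queue name and connection string are placeholders, and the actual solution in the post may well wire this up with a Logic App rather than hand-written code.

```python
from azure.storage.queue import QueueClient, TextBase64EncodePolicy

# Placeholder connection string and queue name, purely for illustration.
queue = QueueClient.from_connection_string(
    conn_str="<storage-connection-string>",
    queue_name="incoming-supplier-messages",
    # Base64-encode the message body; several Azure queue consumers
    # (e.g. Functions queue triggers) expect this encoding by default.
    message_encode_policy=TextBase64EncodePolicy(),
)

def receive_from_supplier(payload: str) -> None:
    """Handler behind the HTTP endpoint the supplier posts to.

    Receiving is reduced to persisting the raw message on the queue;
    the actual processing happens asynchronously by whatever reads
    the queue later.
    """
    queue.send_message(payload)
```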
When you work with Azure API Management on a regular basis, you are probably familiar with policies. Policies allow you to perform actions or adjustments on the incoming request before it's sent to the backend API, or to adjust the response before it's returned to the caller.

Policies can be applied at various levels, so-called scopes, and each lower level can inherit the policy of a higher level.
- Global level => executed for all APIs
- Product level => executed for all APIs under a product
- API level => executed for all operations under an API
- Operation level => executed for this single operation

Maintenance and reuse issues

The main problems with policies have always been maintenance and reuse.