Recently, I ran into a nasty production issue with the Azure Tables managed connector in Logic Apps that’s easy to miss and hard to diagnose. Once an Azure Storage Table grows beyond a certain size (in my case around 1.2 GB, i.e., over a million entities), queries that do NOT include a PartitionKey start returning an empty array — without any error — even though the same query returns results in Storage Explorer. Worse, after this threshold is reached in one table, other tables in the same storage account (even with only a handful of rows) also fail to return results for non‑key queries through the connector.
Because Table Storage is commonly used as temporary storage during Logic App processing, or as permanent storage for logging progress or status, this issue can break production, and in my case it did.
This post documents the symptoms, what I think is going on, and practical workarounds until this is addressed.
What breaks exactly
- Using the Azure Tables connector actions (e.g., “List entities” / “Get entities” in Logic Apps) with a filter like `RowKey eq 'abc'` or `CustomField eq 'xyz'` returns `[]` with status `Succeeded` in the run history. No error, no continuation token, just an empty array.
- The exact same filter in Azure Storage Explorer returns the expected rows (the SDK cross‑check sketched after this list shows the same).
- If you include a PartitionKey in the filter, results come back as expected in Logic Apps. For example, `PartitionKey eq 'orders' and RowKey eq 'abc'` works (note that OData operators such as `and` and `eq` are lowercase).
- After the large table crosses the size threshold, other small tables in the same storage account also start exhibiting the same behavior via the connector: non‑key queries return empty results.
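For contrast, here is roughly what that cross‑check looks like with the Python `azure-data-tables` SDK. This is a minimal sketch, not my exact production setup; the connection string, table name, and filter value are placeholders:

```python
# Cross-check a non-key filter outside the managed connector.
# The connection string and table/filter values are placeholders.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.get_table_client("orders")

# The same non-key filter that comes back as [] through the connector.
# query_entities follows continuation tokens internally, so this iterates
# every matching entity across all partitions.
for entity in table.query_entities(query_filter="RowKey eq 'abc'"):
    print(entity["PartitionKey"], entity["RowKey"])
```

If this prints rows while the connector action returns `[]` for the identical filter, you are looking at the behavior described here.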
Hypothesis: REST path vs SDK path
The managed connector issues REST/OData calls to Table Storage and must handle continuation tokens (NextPartitionKey/NextRowKey) and cross‑partition scans when the filter doesn’t include PartitionKey. Storage Explorer and the official SDKs typically iterate continuation tokens for you and apply server‑side and client‑side filtering correctly. My working theory is that once the table is very large, the connector’s REST query path for non‑key filters either:
- Short‑circuits across partitions and returns no page(s) when the first segment doesn’t match; or
- Fails to follow continuation tokens under certain server responses; or
- Applies the filter only to the first segment/partition and stops.
None of this throws an error; the action still returns 200 and an empty array, which makes it look like “no data.”
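To make that concrete, here is a minimal sketch of what correct segment handling looks like when querying the Table service REST API directly. The account, table, and SAS token are placeholders and error handling is omitted; the point is the continuation loop, because a client that gives up after an empty or partial first segment would produce exactly the symptom above:

```python
# Minimal sketch of continuation-token handling for Query Entities.
# Account, table, and SAS token are placeholders.
import requests

BASE = "https://<account>.table.core.windows.net/orders()"
SAS = "<sas-token>"  # assumed to grant query permission on the table

params = {"$filter": "RowKey eq 'abc'"}
headers = {"Accept": "application/json;odata=nometadata"}

while True:
    resp = requests.get(f"{BASE}?{SAS}", params=params, headers=headers)
    resp.raise_for_status()
    for entity in resp.json().get("value", []):
        print(entity)

    # The service signals that more segments exist via these headers.
    # A client that stops as soon as a segment comes back empty, instead
    # of following these tokens, reports "no data" even when matches
    # exist in later partitions.
    next_pk = resp.headers.get("x-ms-continuation-NextPartitionKey")
    next_rk = resp.headers.get("x-ms-continuation-NextRowKey")
    if not next_pk:
        break
    params["NextPartitionKey"] = next_pk
    if next_rk:
        params["NextRowKey"] = next_rk
```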
Why this hurts
- Silent failure: the action succeeds with empty output. Unless you cross‑check with Explorer/SDK, you won’t notice data is missing.
- Account‑wide blast radius: once one table crosses the size threshold, other tables in the same account also stop returning non‑key queries through the connector.
- Hidden performance assumption: Table Storage is designed around PartitionKey‑based access. That’s fine, but the connector should at least return an error or partial results with a warning — not an empty array.
How to reproduce (simplified)
- Grow a table to ~1.2 GB or more, with several million entities across many partitions.
- In a Logic App, add the Azure Tables “List entities” action with a filter that only uses RowKey or another property (no PartitionKey).
- Observe the run: the action completes successfully with `[]`.
- Run the same `$filter` in Storage Explorer or via the Azure.Data.Tables SDK: entities are returned.
- Create a tiny second table in the same account and query it by a non‑key field via the connector: it also returns `[]`. To verify the account‑wide effect, populate the small table first, run the test, then grow the large table past ~1.2 GB of data and run it again (a seeding sketch follows this list).
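If you want to script the setup rather than grow a table organically, a seeding sketch with the Python SDK could look like the following. The table names, entity shapes, and counts are my own illustrative choices; I have not pinned down whether total size, entity count, or both trips the threshold, so tune the loop to your case:

```python
# Hypothetical seeding sketch for the repro; names and sizes are illustrative.
import uuid
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
big = service.create_table_if_not_exists("bigtable")
small = service.create_table_if_not_exists("smalltable")

# Populate the small table first, so you can watch its connector behavior
# flip once the big table crosses the threshold.
small.upsert_entity({"PartitionKey": "p1", "RowKey": "abc", "CustomField": "xyz"})

# Pad the big table past ~1.2 GB. A transaction is capped at 100 operations,
# one PartitionKey, and roughly 4 MB of payload, hence the small batches.
padding = "x" * 16_000  # roughly 32 KB stored per entity (UTF-16)
for partition in range(1_000):
    operations = [
        (
            "upsert",
            {
                "PartitionKey": f"part-{partition:04d}",
                "RowKey": str(uuid.uuid4()),
                "Data": padding,
            },
        )
        for _ in range(50)  # 50 * ~32 KB stays well under the 4 MB cap
    ]
    big.submit_transaction(operations)
```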
Workarounds you can use today
Short‑term, you have a few options. Pick based on your design constraints.
- Always include PartitionKey in filters. If your access pattern needs non‑key lookups, consider storing a “searchable” PartitionKey (e.g., a denormalized index table: PartitionKey = `<CustomerId>`, RowKey = `<OrderId>`). This is the best and fastest approach, but not always possible; a sketch of the pattern follows this list.
- Move the large table to a separate storage account so it won’t impact smaller (processing) tables.
- Fail fast on empty arrays. If business logic expects matches, treat an empty array as a potential fault and branch to an error path with logging.
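To illustrate the first workaround, here is a minimal sketch of the index‑table pattern with the Python SDK. Table and field names are hypothetical; the idea is to write a second, cheaply keyed entity at save time so every later read is a PartitionKey‑based query, which the connector handles correctly:

```python
# Hypothetical index-table pattern; table and field names are examples.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
orders = service.get_table_client("orders")
index = service.create_table_if_not_exists("ordersByCustomer")

def save_order(customer_id: str, order_id: str, payload: dict) -> None:
    # Primary entity, keyed however the main workload needs it.
    orders.upsert_entity({"PartitionKey": "orders", "RowKey": order_id, **payload})
    # Index entity: PartitionKey = <CustomerId>, RowKey = <OrderId>,
    # so customer lookups become key-based queries.
    index.upsert_entity({"PartitionKey": customer_id, "RowKey": order_id})

def order_ids_for_customer(customer_id: str) -> list[str]:
    # Key-based query, which the connector handles correctly.
    return [
        e["RowKey"]
        for e in index.query_entities(f"PartitionKey eq '{customer_id}'")
    ]
```

Note that the two upserts target different tables, so they cannot share a transaction; in production you would add retry or compensation logic so the index cannot drift from the primary table.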
Closing thoughts
It’s understandable that cross‑partition scans on very large tables are expensive and should be discouraged. But returning “no data” with 200 OK is dangerous — it hides data loss behind a “success” status, and it appears to affect other small tables in the same account via the managed connector. If your workloads depend on non‑key lookups, implement one of the workarounds above and consider introducing index tables or moving those specific queries into a Function.
If you’ve hit the same problem or found an official fix or regression note, I’d love to hear it. In the meantime, separate tables into individual storage accounts, design for PartitionKey‑based access wherever possible, and add monitoring for non‑key queries.