This document lists some of the most common Microsoft Azure limits, which are also sometimes called quotas.
To learn more about Azure pricing, see Azure pricing overview. There, you can estimate your costs by using the pricing calculator. You can also go to the pricing details page for a particular service, for example, Windows VMs. For tips to help manage your costs, see Prevent unexpected costs with Azure billing and cost management.
Some services have adjustable limits.
When a service doesn't have adjustable limits, the following tables use the header Limit . In those cases, the default and the maximum limits are the same.
When the limit can be adjusted, the tables include Default limit and Maximum limit headers. The limit can be raised above the default limit but not above the maximum limit.
If you want to raise the limit or quota above the default limit, open an online customer support request at no charge.
The terms soft limit and hard limit often are used informally to describe the current, adjustable limit (soft limit) and the maximum limit (hard limit). If a limit isn't adjustable, there won't be a soft limit, only a hard limit.
Free Trial subscriptions aren't eligible for limit or quota increases. If you have a Free Trial subscription, you can upgrade to a Pay-As-You-Go subscription. For more information, see Upgrade your Azure Free Trial subscription to a Pay-As-You-Go subscription and the Free Trial subscription FAQ.
Some limits are managed at a regional level.
Let's use vCPU quotas as an example. To request a quota increase with support for vCPUs, you must decide how many vCPUs you want to use in which regions. You then request an increase in vCPU quotas for the amounts and regions that you want. If you need to use 30 vCPUs in West Europe to run your application there, you specifically request 30 vCPUs in West Europe. Your vCPU quota isn't increased in any other region; only West Europe has the 30-vCPU quota.
As a result, decide what your quotas must be for your workload in any one region. Then request that amount in each region into which you want to deploy. For help determining your current quotas for specific regions, see Resolve errors for resource quotas.
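The per-region arithmetic above is easy to get wrong when several regions are involved. Here is a minimal sketch (plain Python, no Azure SDK; the region names and numbers are illustrative) of working out which increases to request, region by region:

```python
# Quotas are tracked per region: compare what each region currently allows
# against what the workload needs, and request increases region by region.
def quota_requests(current_quota, desired):
    """current_quota and desired map region -> vCPU count."""
    requests = {}
    for region, needed in desired.items():
        if needed > current_quota.get(region, 0):
            # Ask for the full target in that region; an increase granted
            # in one region never changes the quota anywhere else.
            requests[region] = needed
    return requests

current = {"westeurope": 10, "eastus": 20}
wanted = {"westeurope": 30, "eastus": 20}
print(quota_requests(current, wanted))  # {'westeurope': 30}
```

To see your actual current usage and limits per region, `az vm list-usage --location <region>` reports the current value and limit for each VM family.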
For limits on resource names, see Naming rules and restrictions for Azure resources.
For information about Resource Manager API read and write limits, see Throttling Resource Manager requests.
The following limits apply to management groups.
Resource Limit

1 The 6 levels don't include the subscription level.
2 If you reach the limit of 800 deployments, delete deployments from the history that are no longer needed. To delete management group level deployments, use Remove-AzManagementGroupDeployment or az deployment mg delete .
The following limits apply when you use Azure Resource Manager and Azure resource groups.
Resource Limit

1 You can apply up to 50 tags directly to a subscription. However, the subscription can contain an unlimited number of tags that are applied to resource groups and resources within the subscription. The number of tags per resource or resource group is limited to 50.
2 Resource Manager returns a list of tag name and values in the subscription only when the number of unique tags is 80,000 or less. A unique tag is defined by the combination of resource ID, tag name, and tag value. For example, two resources with the same tag name and value would be calculated as two unique tags. You still can find a resource by tag when the number exceeds 80,000.
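The counting rule in note 2 can be made concrete with a short sketch (plain Python; the resource IDs are made up): a unique tag is the (resource ID, tag name, tag value) triple, so identical name/value pairs on different resources count separately toward the 80,000 threshold.

```python
# A "unique tag" is the combination of resource ID, tag name, and tag value,
# so the same name/value pair on two resources counts twice.
TAG_LISTING_LIMIT = 80_000

def unique_tag_count(resources):
    """resources: dict of resource_id -> dict of tag name -> tag value."""
    unique = {(rid, name, value)
              for rid, tags in resources.items()
              for name, value in tags.items()}
    return len(unique)

resources = {
    "/subscriptions/s1/vm1": {"env": "prod"},
    "/subscriptions/s1/vm2": {"env": "prod"},  # same name/value, new resource
}
count = unique_tag_count(resources)
print(count, count <= TAG_LISTING_LIMIT)  # 2 True
```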
3 Deployments are automatically deleted from the history as you near the limit. For more information, see Automatic deletions from deployment history .
1 Deployments are automatically deleted from the history as you near the limit. Deleting an entry from the deployment history doesn't affect the deployed resources. For more information, see Automatic deletions from deployment history .
You can exceed some template limits by using a nested template. For more information, see Use linked templates when you deploy Azure resources. To reduce the number of parameters, variables, or outputs, you can combine several values into an object. For more information, see Objects as parameters.
You may get an error with a template or parameter file of less than 4 MB if the total size of the request is too large. For more information about how to simplify your template to avoid a large request, see Resolve errors for job size exceeded.
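A rough pre-flight size check is straightforward. This sketch (plain Python; it approximates the service-side measurement by serializing the template and parameter objects, and treats the 4 MB limit as 4 MiB) flags requests that are likely to be rejected:

```python
import json

MAX_REQUEST_BYTES = 4 * 1024 * 1024  # the 4 MB ceiling discussed above

def request_too_large(template, parameters):
    """Approximate pre-flight check: serialize the template and parameter
    objects and compare the combined size with the limit. The service
    measures the full request, so this is an estimate, not the exact value."""
    size = (len(json.dumps(template).encode())
            + len(json.dumps(parameters).encode()))
    return size > MAX_REQUEST_BYTES

template = {"resources": [{"name": f"res{i}"} for i in range(3)]}
print(request_too_large(template, {}))  # False for this tiny template
```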
Here are the usage constraints and other service limits for the Azure AD service.
Category Limit Tenants
1 Scaling limits depend on the pricing tier. For details on the pricing tiers and their scaling limits, see API Management pricing.
2 Per unit cache size depends on the pricing tier. To see the pricing tiers and their scaling limits, see API Management pricing.
3 Connections are pooled and reused unless explicitly closed by the back end.
4 This limit is per unit of the Basic, Standard, and Premium tiers. The Developer tier is limited to 1,024. This limit doesn't apply to the Consumption tier.
5 This limit applies to the Basic, Standard, and Premium tiers. In the Consumption tier, policy document size is limited to 16 KiB.
6 Multiple custom domains are supported in the Developer and Premium tiers only.
7 CA certificates are not supported in the Consumption tier.
8 This limit applies to the Consumption tier only. There are no specific limits in other tiers, but the practical limit depends on service infrastructure, policy configuration, number of concurrent requests, and other factors.
9 Applies to the Consumption tier only. Includes a query string up to 2,048 bytes long.
10 To increase this limit, contact support.
11 Self-hosted gateways are supported in the Developer and Premium tiers only. The limit applies to the number of self-hosted gateway resources. To raise this limit, contact support. Note that the number of nodes (or replicas) associated with a self-hosted gateway resource is unlimited in the Premium tier and capped at a single node in the Developer tier.
12 This limit does not apply to the Developer tier, where the limit is 2,500.
1 Apps and storage quotas are per App Service plan unless noted otherwise.
2 The actual number of apps that you can host on these machines depends on the activity of the apps, the size of the machine instances, and the corresponding resource utilization.
3 Dedicated instances can be of different sizes. For more information, see App Service pricing .
4 More are allowed upon request.
5 The storage limit is the total content size across all apps in the same App Service plan. The total content size of all apps across all App Service plans in a single resource group and region cannot exceed 500 GB. The file system quota for App Service hosted apps is determined by the aggregate of App Service plans created in a region and resource group.
6 These resources are constrained by physical resources on the dedicated instances (the instance size and the number of instances).
7 If you scale an app in the Basic tier to two instances, you have 350 concurrent connections for each of the two instances. For Standard tier and above, there are no theoretical limits to web sockets, but other factors can limit the number of web sockets. For example, the maximum concurrent requests allowed (defined by maxConcurrentRequestsPerCpu) are: 7,500 per small VM, 15,000 per medium VM (7,500 x 2 cores), and 75,000 per large VM (18,750 x 4 cores).
8 The maximum IP connections are per instance and depend on the instance size: 1,920 per B1/S1/P1V3 instance, 3,968 per B2/S2/P2V3 instance, 8,064 per B3/S3/P3V3 instance.
9 App Service Isolated SKUs can be internally load balanced (ILB) with Azure Load Balancer, so there's no public connectivity from the internet. As a result, some features of an ILB Isolated App Service must be used from machines that have direct access to the ILB network endpoint.
10 Run custom executables and/or scripts on demand, on a schedule, or continuously as a background task within your App Service instance. Always On is required for continuous WebJobs execution. There's no predefined limit on the number of WebJobs that can run in an App Service instance. There are practical limits that depend on what the application code is trying to do.
11 Only issuing standard certificates (wildcard certificates aren't available). Limited to only one free certificate per custom domain.
12 Total storage usage across all apps deployed in a single App Service Environment (regardless of how they're allocated across different resource groups).
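The per-instance numbers in notes 7 and 8 scale linearly with instance count, which a tiny calculator makes explicit (plain Python; the tier and size labels are copied from the notes above):

```python
# Per-instance limits quoted in notes 7 and 8; total capacity scales
# with the instance count when you scale out.
WEBSOCKETS_PER_INSTANCE = {"Basic": 350}             # note 7
IP_CONNECTIONS_PER_INSTANCE = {                       # note 8
    "B1/S1/P1V3": 1920,
    "B2/S2/P2V3": 3968,
    "B3/S3/P3V3": 8064,
}

def total_ip_connections(instance_size, instance_count):
    """Maximum IP connections across all instances of one size."""
    return IP_CONNECTIONS_PER_INSTANCE[instance_size] * instance_count

print(total_ip_connections("B2/S2/P2V3", 3))  # 11904
```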
1 A sandbox is a shared environment that can be used by multiple jobs. Jobs that use the same sandbox are bound by the resource limitations of the sandbox.
The following table shows the tracked item limits per machine for change tracking.
Resource Limit Notes

- Configuration store requests for Free tier: 1,000 requests per day. Once the quota is exhausted, HTTP status code 429 is returned for all requests until the end of the day.
- Configuration store requests for Standard tier: 30,000 per hour. Once the quota is exhausted, requests may return HTTP status code 429, indicating Too Many Requests, until the end of the hour.
- Storage for Free tier: 10 MB
- Storage for Standard tier
- Keys and values: 10 KB for a single key-value item, including all metadata

Azure Cache for Redis limits and sizes are different for each pricing tier. To see the pricing tiers and their associated sizes, see Azure Cache for Redis pricing.
For more information on Azure Cache for Redis configuration limits, see Default Redis server configuration .
Because configuration and management of Azure Cache for Redis instances is done by Microsoft, not all Redis commands are supported in Azure Cache for Redis. For more information, see Redis commands not supported in Azure Cache for Redis .
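Several of the limits above, such as the App Configuration request quotas, are enforced by returning HTTP 429 until the quota window resets. A common client-side response is retry with exponential backoff; this is a hedged sketch (plain Python, using a stand-in response object rather than a real HTTP client; production clients usually also honor the Retry-After header):

```python
import time

def with_retry(send, max_attempts=5, base_delay=1.0):
    """Call send() and retry on HTTP 429 with exponential backoff.

    send is any zero-argument callable returning an object with a
    .status_code attribute; real clients typically read Retry-After
    instead of guessing a delay."""
    for attempt in range(max_attempts):
        response = send()
        if response.status_code != 429:
            return response
        time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return response

class FakeResponse:
    """Stand-in for a real HTTP response object."""
    def __init__(self, status_code):
        self.status_code = status_code

# First call is throttled, second succeeds.
attempts = iter([FakeResponse(429), FakeResponse(200)])
print(with_retry(lambda: next(attempts), base_delay=0).status_code)  # 200
```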
1 Each Azure Cloud Service with web or worker roles can have two deployments, one for production and one for staging. This limit refers to the number of distinct roles, that is, configuration. This limit doesn't refer to the number of instances per role, that is, scaling.
Pricing tiers determine the capacity and limits of your search service. Tiers include:
Limits per subscription
You can create multiple services, limited only by the number of services allowed at each tier. For example, you could create up to 16 services at the Basic tier and another 16 services at the S1 tier within the same subscription. For more information about tiers, see Choose an SKU or tier for Azure Cognitive Search .
Maximum service limits can be raised upon request. If you need more services within the same subscription, file a support request .
Resource Free1 Basic S3 HD

1 Free is based on infrastructure that's shared with other customers. Because the hardware isn't dedicated, scale-up isn't supported on the free tier.
2 Search units are billing units, allocated as either a replica or a partition . You need both resources for storage, indexing, and query operations. To learn more about SU computations, see Scale resource levels for query and index workloads .
Limits per search service
A search service is constrained by disk space or by a hard limit on the maximum number of indexes or indexers, whichever comes first. The following table documents storage limits. For maximum object limits, see Limits by resource .
Resource Basic1 S3 HD

1 Basic has one fixed partition. Additional search units can be used to add replicas for larger query volumes.
2 Service level agreements are in effect for billable services on dedicated resources. Free services and preview features have no SLA. For billable services, SLAs take effect when you provision sufficient redundancy for your service. Two or more replicas are required for query (read) SLAs. Three or more replicas are required for query and indexing (read-write) SLAs. The number of partitions isn't an SLA consideration.
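The replica thresholds in note 2 reduce to a simple rule, sketched here (plain Python; the label strings are illustrative):

```python
# Per the note above: two or more replicas are required for the query
# (read) SLA, three or more for the query and indexing (read-write) SLA.
def sla_coverage(replicas):
    """Return which SLA, if any, a given replica count qualifies for."""
    if replicas >= 3:
        return "read-write SLA"
    if replicas >= 2:
        return "read-only SLA"
    return "no SLA"

for n in (1, 2, 3):
    print(n, sla_coverage(n))
```

Note that the partition count plays no role here; only replicas affect SLA eligibility.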
To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and responses, see Service limits in Azure Cognitive Search .
The following limits are for the number of Cognitive Services resources per Azure subscription. Only one 'Free' account is allowed per Cognitive Services type per subscription. Each of the Cognitive Services may have other limitations; for more information, see Azure Cognitive Services.
Limit Example

- A mixture of Cognitive Services resources: Maximum of 200 total Cognitive Services resources per region. Example: 100 Computer Vision resources in West US, 50 Speech Service resources in West US, and 50 Text Analytics resources in West US.
- A single type of Cognitive Services resource: Maximum of 100 resources per region. Example: 100 Computer Vision resources in West US 2, and 100 Computer Vision resources in East US.

Some of the following default limits and quotas can be increased. To request a change, create a change request stating the limit you want to change.
The following restrictions apply to all Azure Communications Gateways:
Azure Communications Gateway also has limits on the SIP signaling.
Resource Limit

Some endpoints might add parameters in the following headers to an in-dialog message when those parameters weren't present in the dialog-creating message. In that case, Azure Communications Gateway will strip them, because this behavior isn't permitted by RFC 3261.
For Azure Container Apps limits, see Quotas in Azure Container Apps .
For Azure Cosmos DB limits, see Limits in Azure Cosmos DB .
The following table describes the maximum limits for Azure Data Explorer clusters.
Resource Limit

- Number of follower clusters (data share consumers) per leader cluster (data share producer)

The following table describes the limits on management operations performed on Azure Data Explorer clusters.
Scope Operation Limit

For Azure Database for MySQL limits, see Limitations in Azure Database for MySQL.
For Azure Database for PostgreSQL limits, see Limitations in Azure Database for PostgreSQL .
To learn more about the limits for Azure Files and File Sync, see Azure Files scalability and performance targets .
1 By default, the timeout for the Functions 1.x runtime in an App Service plan is unbounded.
2 Requires the App Service plan to be set to Always On. Pay at standard rates.
3 These limits are set in the host.
4 The actual number of function apps that you can host depends on the activity of the apps, the size of the machine instances, and the corresponding resource utilization.
5 The storage limit is the total content size in temporary storage across all apps in the same App Service plan. The Consumption plan uses Azure Files for temporary storage.
6 When your function app is hosted in a Consumption plan, only the CNAME option is supported. For function apps in a Premium plan or an App Service plan, you can map a custom domain using either a CNAME or an A record.
7 Guaranteed for up to 60 minutes.
8 Workers are roles that host customer apps. Workers are available in three fixed sizes: one vCPU/3.5 GB RAM, two vCPU/7 GB RAM, and four vCPU/14 GB RAM.
9 See App Service limits for details.
10 Including the production slot.
11 There's currently a limit of 5,000 function apps in a given subscription.
For more information, see Functions Hosting plans comparison .
Health Data Services is a set of managed API services based on open standards and frameworks. Health Data Services enables workflows to improve healthcare and offers scalable and secure healthcare solutions. Health Data Services includes Fast Healthcare Interoperability Resources (FHIR) service, the Digital Imaging and Communications in Medicine (DICOM) service, and MedTech service.
FHIR service is an implementation of the FHIR specification within Health Data Services. It enables you to combine in a single workspace one or more FHIR service instances with optional DICOM and MedTech service instances. Azure API for FHIR is generally available as a stand-alone service offering.
FHIR service in Azure Health Data Services has a limit of 4 TB for structured storage.
Quota Name Default Limit Maximum Limit Notes

- RUs: 100,000 RUs by default; contact support to increase. Maximum available is 1,000,000. You need a minimum of 400 RUs or 40 RUs/GB, whichever is larger.
- Concurrent connections: 15 concurrent connections on two instances (for a total of 30 concurrent requests) by default; contact support to increase.
- Azure API for FHIR Service Instances per Subscription: contact support.
- Maximum clusters per subscription: 5,000

The following limits are for the number of Azure Lab Services resources.
For more information about Azure Lab Services capacity limits, see Capacity limits in Azure Lab Services .
Contact support to request an increase to your limit.
For Azure Load Testing limits, see Service limits in Azure Load Testing .
The latest values for Azure Machine Learning Compute quotas can be found on the Azure Machine Learning quota page.
The following table shows the usage limit for the Azure Maps S0 pricing tier. Usage limit depends on the pricing tier.
Resource S0 pricing tier limit

For more information on the Azure Maps pricing tiers, see Azure Maps pricing.
For Azure Monitor limits, see Azure Monitor service limits .
Azure Data Factory is a multitenant service that has the following default limits in place to make sure customer subscriptions are protected from each other's workloads. To raise the limits up to the maximum for your subscription, contact support.
1 The data integration unit (DIU) is used in a cloud-to-cloud copy operation. Learn more from Data integration units (version 2). For information on billing, see Azure Data Factory pricing.
2 Azure Integration Runtime is globally available to ensure data compliance, efficiency, and reduced network egress costs.
Region group Regions

- Region group 1: Central US, East US, East US 2, North Europe, West Europe, West US, West US 2
- Region group 2: Australia East, Australia Southeast, Brazil South, Central India, Japan East, North Central US, South Central US, Southeast Asia, West Central US
- Region group 3: Other regions

If managed virtual network is enabled, the data integration unit (DIU) limit in all region groups is 2,400.
3 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
4 The payload for each activity run includes the activity configuration, the associated dataset and linked service configurations if any, and a small portion of system properties generated per activity type. The limit for this payload size doesn't relate to the amount of data you can move and process with Azure Data Factory. Learn about the symptoms and recommendations if you hit this limit.
1 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
2 On-demand HDInsight cores are allocated out of the subscription that contains the data factory. As a result, the previous limit is the Data Factory-enforced core limit for on-demand HDInsight cores. It's different from the core limit that's associated with your Azure subscription.
3 The cloud data movement unit (DMU) for version 1 is used in a cloud-to-cloud copy operation, learn more from Cloud data movement units (version 1) . For information on billing, see Azure Data Factory pricing .
Resource Default lower limit Minimum limit

Azure Resource Manager has limits for API calls. You can make API calls at a rate within the Azure Resource Manager API limits.
Azure NetApp Files has a regional limit for capacity. The standard capacity limit for each subscription is 25 TiB, per region, across all service levels. To increase the capacity, use the Service and subscription limits (quotas) support request.
To learn more about the limits for Azure NetApp Files, see Resource limits for Azure NetApp Files .
There's a maximum count for each object type for Azure Policy. For definitions, an entry of Scope means the management group or subscription. For assignments and exemptions, an entry of Scope means the management group, subscription, resource group, or individual resource.
Where Maximum count

Policy rules have additional limits to the number of conditions and their complexity. See Policy rule limits for more details.
The Azure Quantum Service supports both first- and third-party service providers. Third-party providers own their limits and quotas. Users can view offers and limits in the Azure portal when configuring third-party providers.
You can find the published quota limits for Microsoft's first-party Optimization Solutions provider below.
While on the Learn & Develop SKU, you cannot request an increase to your quota limits. Instead, switch to the Performance at Scale SKU.
Reach out to Azure Support to request a limit increase.
For more information, review the Azure Quantum pricing page. Review the relevant provider pricing pages in the Azure portal for details on third-party offerings.
1 Describes the number of jobs that can be queued at the same time.
The following limits apply to Azure role-based access control (Azure RBAC) .
Resource Limit

To request an update to your subscription's default limits, open a support ticket.
For more information about how connections and messages are counted, see Messages and connections in Azure SignalR Service .
If your requirements exceed the limits, switch from Free tier to Standard tier and add units. For more information, see How to scale an Azure SignalR Service instance? .
If your requirements exceed the limits of a single instance, add instances. For more information, see How to scale SignalR Service with multiple instances? .
To learn more about the limits for Azure Spring Apps, see Quotas and service plans for Azure Spring Apps .
This section lists the following limits for Azure Storage:
The following table describes default limits for Azure general-purpose v2 (GPv2), general-purpose v1 (GPv1), and Blob storage accounts. The ingress limit refers to all data that is sent to a storage account. The egress limit refers to all data that is received from a storage account.
Microsoft recommends that you use a GPv2 storage account for most scenarios. You can easily upgrade a GPv1 or a Blob storage account to a GPv2 account with no downtime and without the need to copy data. For more information, see Upgrade to a GPv2 storage account .
You can request higher capacity and ingress limits. To request an increase, contact Azure Support .
- Maximum number of storage accounts with standard endpoints per region per subscription, including standard and premium storage accounts: 250 by default, 500 by request 1
- Maximum number of storage accounts with Azure DNS zone endpoints (preview) per region per subscription, including standard and premium storage accounts: 5,000 (preview)
- Default maximum storage account capacity: 5 PiB 2
- Maximum number of blob containers, blobs, file shares, tables, queues, entities, or messages per storage account: No limit
- Default maximum request rate per storage account: 20,000 requests per second 2
- Default maximum ingress per general-purpose v2 and Blob storage account in the following regions (LRS/GRS):

1 With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see Increase Azure Storage account quotas.
2 Azure Storage standard accounts support higher capacity limits and higher limits for ingress and egress by request. To request an increase in account limits, contact Azure Support.
The following limits apply only when you perform management operations by using Azure Resource Manager with Azure Storage. The limits apply per region of the resource in the request.
Resource Limit

1 Throughput for a single blob depends on several factors. These factors include but aren't limited to: concurrency, request size, performance tier, speed of the source for uploads, and destination for downloads. To take advantage of the performance enhancements of high-throughput block blobs, upload larger blobs or blocks. Specifically, call the Put Blob or Put Block operation with a blob or block size that is greater than 4 MiB for standard storage accounts. For premium block blob or for Data Lake Storage Gen2 storage accounts, use a block or blob size that is greater than 256 KiB.
2 Page blobs aren't yet supported in accounts that have a hierarchical namespace enabled.
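The blob size ceilings in the table that follows are just the block size multiplied by the 50,000-block maximum per blob; a quick arithmetic check (plain Python):

```python
BLOCKS_PER_BLOB = 50_000  # maximum block count via Put Block List

def max_blob_size_mib(max_block_mib):
    """Largest committable blob, in MiB, for a given block size."""
    return max_block_mib * BLOCKS_PER_BLOB

# 100 MiB blocks -> ~4.77 TiB (the docs round this to "approximately 4.75 TiB");
# 4 MiB blocks -> ~195 GiB.
print(max_blob_size_mib(100) / 1024 ** 2)  # TiB, versions 2016-05-31 and later
print(max_blob_size_mib(4) / 1024)         # GiB, versions before 2016-05-31
```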
The following table describes the maximum block and blob sizes permitted by service version.
Service version | Maximum block size (via Put Block) | Maximum blob size (via Put Block List) | Maximum blob size via single write operation (via Put Blob)
- Version 2016-05-31 through version 2019-07-07: 100 MiB | approximately 4.75 TiB (100 MiB x 50,000 blocks) | 256 MiB
- Versions prior to 2016-05-31: 4 MiB | approximately 195 GiB (4 MiB x 50,000 blocks) | 64 MiB

The following table describes capacity, scalability, and performance targets for Table storage.
Resource Target

- Number of tables in an Azure storage account: Limited only by the capacity of the storage account
- Number of partitions in a table: Limited only by the capacity of the storage account
- Number of entities in a partition: Limited only by the capacity of the storage account
- Maximum size of a single table: 500 TiB
- Maximum size of a single entity, including all property values: 1 MiB
- Maximum number of properties in a table entity: 255 (including the three system properties, PartitionKey, RowKey, and Timestamp)
- Maximum total size of an individual property in an entity: Varies by property type. For more information, see Property Types in Understanding the Table Service Data Model.
- Size of the PartitionKey: A string up to 1 KiB in size
- Size of the RowKey: A string up to 1 KiB in size
- Size of an entity group transaction: A transaction can include at most 100 entities and the payload must be less than 4 MiB in size. An entity group transaction can include an update to an entity only once.
- Maximum number of stored access policies per table
- Maximum request rate per storage account: 20,000 transactions per second, which assumes a 1-KiB entity size
- Target throughput for a single table partition (1-KiB entities): Up to 2,000 entities per second

To learn more about the creation limits for Azure subscriptions, see Billing accounts and scopes in the Azure portal.
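The entity group transaction rules above (at most 100 entities, payload under 4 MiB, each entity updated only once, plus the single-partition requirement that entity group transactions always have) can be checked client-side before submitting a batch. A hedged sketch (plain Python; the tuple representation is illustrative, not an SDK type):

```python
MAX_ENTITIES_PER_BATCH = 100
MAX_BATCH_BYTES = 4 * 1024 ** 2  # payload must stay under 4 MiB

def validate_batch(entities):
    """entities: list of (partition_key, row_key, payload_bytes) tuples.
    Returns 'ok' or a string naming the first rule the batch violates."""
    if len(entities) > MAX_ENTITIES_PER_BATCH:
        return "too many entities"
    if sum(size for _, _, size in entities) >= MAX_BATCH_BYTES:
        return "payload too large"
    keys = [(pk, rk) for pk, rk, _ in entities]
    if len({pk for pk, _ in keys}) > 1:
        return "entities span multiple partitions"
    if len(set(keys)) != len(keys):
        return "entity updated more than once"
    return "ok"

print(validate_batch([("p1", "r1", 1024), ("p1", "r2", 1024)]))  # ok
```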
The following table describes the maximum limits for Azure Virtual Desktop.
Azure Virtual Desktop Object Per Parent Container Object Service Limit

1 If you require more than 500 application groups, raise a support ticket via the Azure portal.
All other Azure resources used in Azure Virtual Desktop, such as virtual machines, storage, and networking, are subject to their own resource limitations, documented in the relevant sections of this article. To visualize the relationship between all the Azure Virtual Desktop objects, see Relationships between Azure Virtual Desktop logical components.
To get started with Azure Virtual Desktop, use the getting started guide . For deeper architectural content for Azure Virtual Desktop, use the Azure Virtual Desktop section of the Cloud Adoption Framework . For pricing information for Azure Virtual Desktop, add "Azure Virtual Desktop" within the Compute section of the Azure Pricing Calculator .
The following table describes the maximum limits for Azure VMware Solution.
Resource Limit

- Maximum number of Azure VMware Solution ExpressRoute linked private clouds from a single location to a single Virtual Network Gateway: 4

* For information about a Recovery Point Objective (RPO) lower than 15 minutes, see How the 5 Minute Recovery Point Objective Works in the vSphere Replication Administration guide.
For other VMware-specific limits, use the VMware configuration maximum tool .
For a summary of Azure Backup support settings and limitations, see Azure Backup Support Matrices .
1 For capacity management purposes, the default quotas for new Batch accounts in some regions and for some subscription types have been reduced from the above range of values. In some cases, these limits have been reduced to zero. When you create a new Batch account, check your quotas and request an appropriate core or service quota increase , if necessary. Alternatively, consider reusing Batch accounts that already have sufficient quota or user subscription pool allocation Batch accounts to maintain core and VM family quota across all Batch accounts on the subscription. Service quotas like active jobs or pools apply to each distinct Batch account even for user subscription pool allocation Batch accounts.
2 To request an increase beyond this limit, contact Azure Support.
Default limits vary depending on the type of subscription you use to create a Batch account. Cores quotas shown are for Batch accounts in Batch service mode. View the quotas in your Batch account .
If you use the classic deployment model instead of the Azure Resource Manager deployment model, the following limits apply.
Resource Default limit Maximum limit

1 Extra small instances count as one vCPU toward the vCPU limit despite using a partial CPU core.
2 The storage account limit includes both Standard and Premium storage accounts.
1 To request a limit increase, create an Azure Support request. Free subscriptions, including Azure Free Account and Azure for Students, aren't eligible for limit or quota increases. If you have a free subscription, you can upgrade to a Pay-As-You-Go subscription.
2 Default limit for a Pay-As-You-Go subscription. The limit may differ for other category types.
The following table details the features and limits of the Basic, Standard, and Premium service tiers .
Resource Basic Standard Premium

1 Storage included in the daily rate for each tier. Additional storage may be used, up to the registry storage limit, at an additional daily rate per GiB. For rate information, see Azure Container Registry pricing. If you need storage beyond the registry storage limit, contact Azure Support.
2 ReadOps, WriteOps, and Bandwidth are minimum estimates. Azure Container Registry strives to improve performance as usage requires. Both the registry and the device must be in the same region to achieve a fast download speed.
3 A docker pull translates to multiple read operations based on the number of layers in the image, plus the manifest retrieval.
4 A docker push translates to multiple write operations, based on the number of layers that must be pushed. A docker push includes ReadOps to retrieve a manifest for an existing image.
5 Individual actions of content/delete, content/read, content/write, metadata/read, and metadata/write correspond to the limit of repositories per scope map.
A Content Delivery Network subscription can contain one or more Content Delivery Network profiles. A Content Delivery Network profile can contain one or more Content Delivery Network endpoints. You might want to use multiple profiles to organize your Content Delivery Network endpoints by internet domain, web application, or some other criteria.
Azure Data Lake Analytics makes the complex task of managing distributed infrastructure and complex code easy. It dynamically provisions resources, and you can use it to do analytics on exabytes of data. When the job completes, it winds down resources automatically. You pay only for the processing power that was used. As you increase or decrease the size of data stored or the amount of compute used, you don't have to rewrite code. To raise the default limits for your subscription, contact support.
Resource Limit Comments Maximum number of analytics units (AUs) per account Use any combination of up to a maximum of 250 AUs across 20 jobs. To increase this limit, contact Microsoft Support. Maximum script size for job submission Maximum number of Data Lake Analytics accounts per region per subscription To increase this limit, contact Microsoft Support.
Azure Data Factory is a multitenant service that has the following default limits in place to make sure customer subscriptions are protected from each other's workloads. To raise the limits up to the maximum for your subscription, contact support.
1 The data integration unit (DIU) is used in a cloud-to-cloud copy operation. For more information, see Data integration units (version 2) . For information on billing, see Azure Data Factory pricing .
2 Azure Integration Runtime is globally available to ensure data compliance, efficiency, and reduced network egress costs.
Region group Regions Region group 1 Central US, East US, East US 2, North Europe, West Europe, West US, West US 2 Region group 2 Australia East, Australia Southeast, Brazil South, Central India, Japan East, North Central US, South Central US, Southeast Asia, West Central US Region group 3 Other regions
If managed virtual network is enabled, the data integration unit (DIU) limit in all region groups is 2,400.
3 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
4 The payload for each activity run includes the activity configuration, the associated dataset(s) and linked service(s) configurations if any, and a small portion of system properties generated per activity type. Limit for this payload size doesn't relate to the amount of data you can move and process with Azure Data Factory. Learn about the symptoms and recommendation if you hit this limit.
1 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
2 On-demand HDInsight cores are allocated out of the subscription that contains the data factory. As a result, the previous limit is the Data Factory-enforced core limit for on-demand HDInsight cores. It's different from the core limit that's associated with your Azure subscription.
3 The cloud data movement unit (DMU) for version 1 is used in a cloud-to-cloud copy operation. For more information, see Cloud data movement units (version 1) . For information on billing, see Azure Data Factory pricing .
Resource Default lower limit Minimum limit
Azure Resource Manager has limits for API calls. You can make API calls at a rate within the Azure Resource Manager API limits .
Azure Data Lake Storage Gen2 is not a dedicated service or storage account type. It is the latest release of capabilities that are dedicated to big data analytics. These capabilities are available in a general-purpose v2 or BlockBlobStorage storage account, and you can obtain them by enabling the Hierarchical namespace feature of the account. For scale targets, see these articles.
Azure Data Lake Storage Gen1 is a dedicated service. It's an enterprise-wide hyper-scale repository for big data analytic workloads. You can use Data Lake Storage Gen1 to capture data of any size, type, and ingestion speed in one single place for operational and exploratory analytics. There's no limit to the amount of data you can store in a Data Lake Storage Gen1 account.
Resource Limit Comments Maximum number of Data Lake Storage Gen1 accounts, per subscription, per region To request an increase for this limit, contact support. Maximum number of access ACLs, per file or folder This is a hard limit. Use groups to manage access with fewer entries. Maximum number of default ACLs, per file or folder This is a hard limit. Use groups to manage access with fewer entries.
Azure Data Share enables organizations to simply and securely share data with their customers and partners.
Resource Limit
When a given resource or operation doesn't have adjustable limits, the default and the maximum limits are the same. When the limit can be adjusted, the following table includes both the default limit and maximum limit. The limit can be raised above the default limit but not above the maximum limit. Limits can only be adjusted for the Standard SKU. Limit adjustment requests are not accepted for the Free SKU. Limit adjustment requests are evaluated on a case-by-case basis, and approvals are not guaranteed.
If you want to raise the limit or quota above the default limit, open an online customer support request .
This table provides the limits for the Device Update for IoT Hub resource in Azure Resource Manager:
Resource Standard SKU Limit Free SKU Limit Adjustable? Number of active deployments per instance 50 (includes 1 reserved deployment for Cancels) 5 (includes 1 reserved deployment for Cancels) Number of total deployments per instance Number of update providers per instance Number of update names per provider per instance Number of update versions per update provider and name per instance Total number of updates per instance Maximum single update file size Maximum combined size of all files in a single import action Total data storage included per instance 100 GB
Some areas of this service have adjustable limits, and others do not. This is represented in the tables below with the Adjustable? column. When the limit can be adjusted, the Adjustable? value is Yes .
The following table lists the functional limits of Azure Digital Twins.
Capability Default limit Adjustable? Digital twins Number of digital twins that can be imported in a single Jobs API job 2,000,000 Digital twins Number of incoming relationships to a single twin 50,000 Digital twins Number of outgoing relationships from a single twin 50,000 Digital twins Total number of relationships in an Azure Digital Twins instance 20,000,000 Digital twins Number of relationships that can be imported in a single Jobs API job 10,000,000 Digital twins Maximum size (of JSON body in a PUT or PATCH request) of a single twin 32 KB Digital twins Maximum request payload size 32 KB Digital twins Maximum size of a string property value (UTF-8) Digital twins Maximum size of a property name Routing Number of endpoints for a single Azure Digital Twins instance Routing Number of routes for a single Azure Digital Twins instance Models Number of models within a single Azure Digital Twins instance 10,000 Models Number of models that can be imported in a single API call (not using the Jobs API ) Models Number of models that can be imported in a single Jobs API job 10,000 Models Maximum size (of JSON body in a PUT or PATCH request) of a single model Models Number of items returned in a single page Query Number of items returned in a single page Query Number of AND / OR expressions in a query Query Number of array items in an IN / NOT IN clause Query Number of characters in a query 8,000 Query Number of JOINS in a query
The following table reflects the rate limits of different APIs.
Capability Default limit Adjustable? Digital Twins API Number of create/delete operations per second across all twins and relationships Digital Twins API Number of create/update/delete operations per second on a single twin or its incoming/outgoing relationships Digital Twins API Number of outstanding operations on a single twin or its incoming/outgoing relationships Query API Number of requests per second Query API Query Units per second 4,000 Event Routes API Number of requests per second
Limits on data types and fields within DTDL documents for Azure Digital Twins models can be found within its spec documentation in GitHub: Digital Twins Definition Language (DTDL) - version 2 .
Query latency details are described in Query language . Limitations of particular query language features can be found in the query reference documentation .
Azure Event Grid namespaces are available in public preview and enable MQTT messaging and HTTP pull delivery. The following limits apply to namespace resources in Azure Event Grid.
Limit description Limit MQTT inbound publish requests Up to 1,000 messages per second or 1 MB per second per TU (whichever comes first) MQTT outbound publish requests Up to 1,000 messages per second or 1 MB per second per TU Clients 10,000 clients per TU CA certificates Client groups Topic spaces Topic templates 10 per topic space Permission bindings Max message size 512 KB Topic size 256 B Topic alias 10 per connection New connect requests 200 requests per second per TU Subscribe and unsubscribe operations 200 requests per second per TU Total number of subscriptions per MQTT client session Maximum number of topic filters per MQTT SUBSCRIBE packet Maximum number of segments per topic filter Maximum number of concurrent connections allowed per namespace 10,000 per TU
The following limits apply to events in an Azure Event Grid namespace resource.
Limit description Limit Event ingress Up to 1,000 events per second or 1 MB per second per TU (whichever comes first) Event egress Up to 2,000 events per second or 2 MB per second per TU Event duration period in topic 1 day Subscriptions per topic Connected clients per namespace (queue subscriptions) 1,000 Maximum event size Batch size Events per request 1,000
The following limits apply to Azure Event Grid custom topic, system topic, and partner topic resources.
Limit description Limit Custom topics per Azure subscription 100
The following limits apply to the Azure Event Grid domain resource.
Limit description Limit Publish rate for a domain (ingress) 5,000 events per second or 5 MB per second (whichever comes first) Private endpoint connections per domain IP Firewall rules per topic
The following tables provide quotas and limits specific to Azure Event Hubs . For information about Event Hubs pricing, see Event Hubs pricing .
The following limits are common across all tiers.
Limit Notes Value Size of a consumer group name Kafka protocol doesn't require the creation of a consumer group. Kafka: 256 characters; AMQP: 50 characters
Number of non-epoch receivers per consumer group Number of authorization rules per namespace Subsequent requests for authorization rule creation are rejected. Number of calls to the GetRuntimeInformation method 50 per second Number of virtual networks (VNet) Number of IP Config rules Maximum length of a schema group name Maximum length of a schema name Size in bytes per schema Number of properties per schema group Size in bytes per schema group property key Size in bytes per schema group property value
The following table shows limits that may be different for basic, standard, premium, and dedicated tiers.
* Depends on various factors such as resource allocation, number of partitions, storage, and so on.
You can publish events individually or in batches. The publication limit (according to SKU) applies regardless of whether it's a single event or a batch. Events larger than the maximum threshold are rejected.
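A minimal sketch of client-side batching against a size threshold (the 1 MB figure is an assumed example; the actual limit depends on your tier and SKU):

```python
def build_batches(events, max_bytes=1_000_000):
    """Group byte payloads into batches under the publication size limit.
    Payloads over the limit are rejected up front, mirroring the service,
    which rejects events larger than the maximum threshold."""
    batches, current, size = [], [], 0
    for payload in events:
        if len(payload) > max_bytes:
            raise ValueError(f"{len(payload)}-byte event exceeds the {max_bytes}-byte limit")
        if current and size + len(payload) > max_bytes:
            batches.append(current)
            current, size = [], 0
        current.append(payload)
        size += len(payload)
    if current:
        batches.append(current)
    return batches
```

The Azure SDKs provide equivalent guard rails (for example, batch objects that refuse to add events that would overflow the batch), so client code rarely needs to do this by hand.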
IoT Central limits the number of applications you can deploy in a subscription to 100. If you need to increase this limit, contact Microsoft support . To learn more, see Azure IoT Central quota and limits .
The following table lists the limits associated with the different service tiers S1, S2, S3, and F1. For information about the cost of each unit in each tier, see Azure IoT Hub pricing .
Resource S1 Standard S2 Standard S3 Standard F1 Free
If you anticipate using more than 200 units with an S1 or S2 tier hub or 10 units with an S3 tier hub, contact Microsoft Support.
The following table lists the limits that apply to IoT Hub resources.
Resource Limit Maximum size of device-to-cloud batch AMQP and HTTP: 256 KB for the entire batch
If you need more than 50 paid IoT hubs in an Azure subscription, contact Microsoft Support.
Currently, the total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. If you want to increase this limit, contact Microsoft Support .
IoT Hub throttles requests when the following quotas are exceeded.
Throttle Per-hub value Identity registry operations
Some areas of this service have adjustable limits. This is represented in the tables below with the Adjustable? column. When the limit can be adjusted, the Adjustable? value is Yes .
The actual value to which a limit can be adjusted may vary based on each customer’s deployment. Multiple instances of DPS may be required for very large deployments.
If your business requires raising an adjustable limit or quota above the default limit, you can submit a request for additional resources by opening a support ticket . Requesting an increase does not guarantee that it will be granted, as each request is reviewed on a case-by-case basis. Contact Microsoft support as early as possible during your implementation to determine whether your request can be approved, and plan accordingly.
The following table lists the limits that apply to Azure IoT Hub Device Provisioning Service resources.
Resource Limit Adjustable?
If the hard limit on symmetric key enrollment groups is a blocking issue, we recommend using individual enrollments as a workaround.
The Device Provisioning Service has the following rate limits.
Per-unit value Adjustable?
The Azure Key Vault service supports two resource types: Vaults and Managed HSMs. The following two sections describe the service limits for each of them.
This section describes service limits for the vaults resource type.
In the previous table, we see that for RSA 2,048-bit software keys, 4,000 GET transactions per 10 seconds are allowed. For RSA 2,048-bit HSM-keys, 2,000 GET transactions per 10 seconds are allowed.
The throttling thresholds are weighted, and enforcement is on their sum. For example, as shown in the previous table, when you perform GET operations on RSA HSM-keys, it's eight times more expensive to use 4,096-bit keys compared to 2,048-bit keys. That's because 2,000/250 = 8.
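The weighting can be sketched as a simple budget check (the weight table and budget below are illustrative, derived from the 2,000/250 example above; actual enforcement spans more key types and operations):

```python
# Illustrative weights: a 4,096-bit RSA HSM GET costs 8x a 2,048-bit GET (2,000 / 250 = 8).
WEIGHTS = {"rsa-hsm-2048-get": 1, "rsa-hsm-4096-get": 8}
BUDGET_PER_10S = 2000  # budget expressed in 2,048-bit-equivalent GET operations

def weighted_cost(operations):
    """operations maps an operation kind to how many were issued in the window."""
    return sum(WEIGHTS[kind] * count for kind, count in operations.items())

def exceeds_budget(operations):
    # Enforcement is on the weighted sum, not on each operation kind separately.
    return weighted_cost(operations) > BUDGET_PER_10S
```

For example, 250 GETs on 4,096-bit HSM keys consume the entire 10-second budget; one more 2,048-bit GET in the same window would then be throttled.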
In a given 10-second interval, an Azure Key Vault client can do only one of the following operations before it encounters a 429 throttling HTTP status code:
For information on how to handle throttling when these limits are exceeded, see Azure Key Vault throttling guidance .
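A common way to handle the 429 response is retry with backoff, honoring any Retry-After header the service returns. A minimal sketch (the `request` callable and its `(status, headers, body)` shape are illustrative, not a real SDK signature):

```python
import random
import time

def call_with_backoff(request, max_retries=5):
    """Retry on HTTP 429, waiting per Retry-After when the service provides it,
    otherwise backing off exponentially with a little jitter."""
    for attempt in range(max_retries + 1):
        status, headers, body = request()
        if status != 429:
            return body
        retry_after = headers.get("Retry-After")
        delay = float(retry_after) if retry_after is not None else min(2 ** attempt, 32)
        time.sleep(delay + random.uniform(0, 0.1))  # jitter avoids synchronized retries
    raise RuntimeError(f"still throttled after {max_retries} retries")
```

The official Azure SDK clients implement comparable retry policies out of the box; this sketch only shows the shape of the approach.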
1 The subscription-wide limit for all transaction types is five times the per-key-vault limit.
When you back up a key vault object, such as a secret, key, or certificate, the backup operation downloads the object as an encrypted blob. This blob cannot be decrypted outside of Azure. To get usable data from this blob, you must restore the blob into a key vault within the same Azure subscription and Azure geography.
Transaction type Maximum key vault object versions allowed
Attempting to back up a key, secret, or certificate object with more versions than the above limit results in an error. It is not possible to delete previous versions of a key, secret, or certificate.
Key Vault does not restrict the number of keys, secrets or certificates that can be stored in a vault. The transaction limits on the vault should be taken into account to ensure that operations are not throttled.
Key Vault does not restrict the number of versions on a secret, key or certificate, but storing a large number of versions (500+) can impact the performance of backup operations. See Azure Key Vault Backup .
This section describes service limits for the managed HSM resource type.
Each managed identity counts towards the object quota limit in an Azure AD tenant as described in Azure AD service limits and restrictions .
The rate at which managed identities can be created has the following limits:
The rate at which a user-assigned managed identity can be assigned to an Azure resource:
For resources that aren't fixed, open a support ticket to ask for an increase in the quotas. Don't create additional Azure Media Services accounts in an attempt to obtain higher limits.
1 The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260 GB, your Job will likely fail.
2 The storage accounts must be from the same Azure subscription.
3 This number includes queued, finished, active, and canceled Jobs. It does not include deleted Jobs.
Any Job record in your account older than 90 days will be automatically deleted, even if the total number of records is below the maximum quota.
4 For detailed information about Live Event limitations, see Live Event types comparison and limitations .
5 Live Outputs start on creation and stop when deleted.
6 When using a custom Streaming Policy , you should design a limited set of such policies for your Media Service account, and re-use them for your StreamingLocators whenever the same encryption options and protocols are needed. You should not be creating a new Streaming Policy for each Streaming Locator.
7 Streaming Locators are not designed for managing per-user access control. To give different access rights to individual users, use Digital Rights Management (DRM) solutions.
For resources that are not fixed, you may ask for the quotas to be raised by opening a support ticket . Include detailed information in the request on the desired quota changes, use-case scenarios, and regions required. Do not create additional Azure Media Services accounts in an attempt to obtain higher limits.
For limits specific to Media Services v2 (legacy), see Media Services v2 (legacy) .
For more information on limits and pricing, see Azure Mobile Services pricing .
The following limits apply only for networking resources managed through Azure Resource Manager per region per subscription. Learn how to view your current resource usage against your subscription limits .
We have increased all default limits to their maximum limits. If there's no maximum limit column, the resource doesn't have adjustable limits. If you had these limits manually increased by support in the past and are currently seeing limits lower than what is listed in the following tables, open an online customer support request at no charge.
Application security groups that can be specified within all security rules of a network security group User-defined route tables User-defined routes per route table Point-to-site root certificates per Azure VPN Gateway Point-to-site revoked client certificates per Azure VPN Gateway Virtual network TAPs Network interface TAP configurations per virtual network TAP
1 Default limits for Public IP addresses vary by offer category type, such as Free Trial, Pay-As-You-Go, CSP. For example, the default for Enterprise Agreement subscriptions is 1000.
2 Public IP addresses limit refers to the total amount of Public IP addresses, including Basic and Standard.
The following limits apply only for networking resources managed through Azure Resource Manager per region per subscription. Learn how to view your current resource usage against your subscription limits .
Standard Load Balancer
Resource Limit
1 Backend IP configurations are aggregated across all load balancer rules including load balancing, inbound NAT, and outbound rules. Each rule that a backend pool instance is configured to use counts as one configuration.
Load Balancer doesn't apply any throughput limits. However, throughput limits for virtual machines and virtual networks still apply. For more information, see Virtual machine network bandwidth .
Gateway Load Balancer
Resource Limit Resources chained per Load Balancer (LB frontend configurations or VM NIC IP configurations combined)
All limits for Standard Load Balancer also apply to Gateway Load Balancer.
Basic Load Balancer
Resource Limit
3 A single discrete resource in a backend pool (standalone virtual machine, availability set, or virtual machine scale-set placement group) can have up to 250 frontend IP configurations across a single Basic Public Load Balancer and Basic Internal Load Balancer.
The following limits apply only for networking resources managed through the classic deployment model per subscription. Learn how to view your current resource usage against your subscription limits .
Resource Default limit Maximum limit Concurrent TCP or UDP flows per NIC of a virtual machine or role instance 500,000, up to 1,000,000 for two or more NICs. 500,000, up to 1,000,000 for two or more NICs. Network Security Groups (NSGs) NSG rules per NSG 1,000 User-defined route tables User-defined routes per route table Public IP addresses (dynamic) Reserved public IP addresses Public IP per deployment Contact support Private IP (internal load balancing) per deployment Endpoint access control lists (ACLs)
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise stated.
Resource Limit HTTP listeners 200 1 Limited to 100 active listeners that are routing traffic. Active listeners = total number of listeners - listeners not active.
1 The number of resources listed in the table applies to standard Application Gateway SKUs and WAF-enabled SKUs running CRS 3.2 or higher. For WAF-enabled SKUs running CRS 3.1 or lower, the supported number is 40. For more information, see WAF engine .
2 Limit is per Application Gateway instance not per Application Gateway resource.
3 Must define the value via WAF Policy for Application Gateway.
An instance is an optimized Azure VM that is created when you configure Azure Bastion. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify a number of instances between 2 and 50.
Workload Type* Session Limit per Instance**
*These workload types are defined here: Remote Desktop workloads
**These limits are based on RDP performance tests for Azure Bastion. The numbers may vary due to other ongoing RDP or SSH sessions.
Public DNS zones
Resource Limit Number of private DNS zones a virtual network can get linked to with autoregistration enabled Number of private DNS zones a virtual network can get linked
Azure-provided DNS resolver
Resource Limit
1 These limits are applied to every individual virtual machine and not at the virtual network level. DNS queries exceeding these limits are dropped.
DNS private resolver 1
Resource Limit
1 Different limits might be enforced by the Azure portal until the portal is updated. Use PowerShell to provision elements up to the most current limits.
Resources with * are new limits for Azure Front Door Standard and Premium.
After the HTTP request gets forwarded to the back end, Azure Front Door waits for 60 seconds (Standard and Premium) or 30 seconds (classic) for the first packet from the back end. Then it returns a 503 error to the client, or 504 for a cached request. You can configure this value using the originResponseTimeoutSeconds field in Azure Front Door Standard and Premium API, or the sendRecvTimeoutSeconds field in the Azure Front Door (classic) API.
After the back end receives the first packet, if the origin pauses for any reason in the middle of the response body beyond the originResponseTimeoutSeconds or sendRecvTimeoutSeconds, the response will be canceled.
Front Door takes advantage of HTTP keep-alive to keep connections open for reuse from previous requests. These connections have an idle timeout of 90 seconds. Azure Front Door would disconnect idle connections after reaching the 90-second idle timeout. This timeout value can't be configured.
For more information about limits that apply to Rules Engine configurations, see rules engine terminology .
1 If your NVA advertises more routes than the limit, the BGP session gets dropped.
2 The number of VMs that Azure Route Server can support isn’t a hard limit and it depends on the availability and performance of the underlying infrastructure.
The total number of routes advertised from the VNet address space and Route Server toward the ExpressRoute circuit, when branch-to-branch is enabled, must not exceed 1,000. For more information, see Route advertisement limits of ExpressRoute.
Global Reach connections count against the limit of virtual network connections per ExpressRoute Circuit. For example, a 10 Gbps Premium Circuit would allow for 5 Global Reach connections and 95 connections to the ExpressRoute Gateways or 95 Global Reach connections and 5 connections to the ExpressRoute Gateways or any other combination up to the limit of 100 connections for the circuit.
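The arithmetic in that example can be sketched as a small budget helper (the function name and the 100-connection constant are illustrative; the actual limit depends on circuit size and SKU):

```python
def gateway_connections_remaining(global_reach_connections, circuit_limit=100):
    """Global Reach connections draw from the same per-circuit budget as
    connections to ExpressRoute gateways (100 for a 10 Gbps Premium circuit)."""
    if not 0 <= global_reach_connections <= circuit_limit:
        raise ValueError("Global Reach connections must fit within the circuit limit")
    return circuit_limit - global_reach_connections
```

With 5 Global Reach connections, 95 gateway connections remain; with 95, only 5 remain, matching the combinations described above.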
The following table shows the gateway types and the estimated performance scale numbers. These numbers are derived from the following testing conditions and represent the max support limits. Actual performance may vary, depending on how closely traffic replicates these testing conditions.
Important
The following limits apply to NAT gateway resources managed through Azure Resource Manager per region per subscription. Learn how to view your current resource usage against your subscription limits .
Resource Limit Connections to same destination endpoint 50,000 connections to the same destination per public IP Connections total 2M connections per NAT gateway
The following limits apply to Azure Private Link:
Resource Limit Number of IP Configurations on a private link service 8 (This number is for the NAT IP addresses used per PLS) Number of private endpoints on the same private link service Number of subscriptions allowed in visibility setting on private link service Number of subscriptions allowed in auto-approval setting on private link service Number of private endpoints per key vault Number of key vaults with private endpoints per subscription Number of private DNS zone groups that can be linked to a private endpoint Number of DNS zones in each group
The latest values for Microsoft Purview quotas can be found in the Microsoft Purview quota page .
The following limit applies to analytics rules in Microsoft Sentinel.
Description Limit Dependency Alerts per rule
The following limits apply to incidents in Microsoft Sentinel.
Description Limit Dependency Total count of these assets per machine learning workspace: datasets, runs, models, and artifacts 10 million assets Azure Machine Learning Default limit for total compute clusters per region. Limit is shared between a training cluster and a compute instance. A compute instance is considered a single-node cluster for quota purposes. 200 compute clusters per region Azure Machine Learning Storage accounts per region per subscription 250 storage accounts Azure Storage Maximum size of a file share by default Azure Storage Maximum size of a file share with large file share feature enabled 100 TB Azure Storage Maximum throughput (ingress + egress) for a single file share by default 60 MB/sec Azure Storage Maximum throughput (ingress + egress) for a single file share with large file share feature enabled 300 MB/sec Azure Storage
The following limits apply to repositories in Microsoft Sentinel.
Description Limit Dependency Indicators per call that use Graph security API 100 indicators Microsoft Graph security API CSV indicator file import size JSON indicator file import size 250 MB
The following limit applies to the threat intelligence upload indicators API in Microsoft Sentinel.
Description Limit Dependency Lowest retention configuration in days for the IdentityInfo table. All data stored in the IdentityInfo table in Log Analytics is refreshed every 14 days. 14 days Log Analytics
The following limits apply to watchlists in Microsoft Sentinel. The limits are related to the dependencies on other services used by watchlists.
Description Limit Dependency Total number of active watchlist items per workspace. When the max count is reached, delete some existing items to add a new watchlist. 10 million active watchlist items Log Analytics Total rate of change of all watchlist items per workspace 1% rate of change per month Log Analytics Number of large watchlist uploads per workspace at a time One large watchlist Azure Cosmos DB Number of large watchlist deletions per workspace at a time One large watchlist Azure Cosmos DB
Workbook limits for Sentinel are the same result limits found in Azure Monitor. For more information, see Workbooks result limits .
The following table lists quota information specific to Azure Service Bus messaging. For information about pricing and other quotas for Service Bus, see Service Bus pricing .
Quota name Scope Value Notes Namespace 1000 (default and maximum) Subsequent requests for additional namespaces are rejected. Queue or topic size Entity1, 2, 3, 4 GB or 5 GB
In the Premium SKU, and the Standard SKU with partitioning enabled, the maximum queue or topic size is 80 GB.
Total size limit for a premium namespace is 1 TB per messaging unit . Total size of all entities in a namespace can't exceed this limit.
Defined when the queue or topic is created or updated.
Currently, a large message (size > 1 MB) sent to a queue is counted twice. And, a large message (size > 1 MB) sent to a topic is counted X + 1 times, where X is the number of subscriptions to the topic.
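The counting rule can be sketched as follows (the function name is illustrative):

```python
ONE_MB = 1024 * 1024

def quota_count(message_bytes, topic_subscriptions=None):
    """How many messages one send counts as toward the entity size quota.
    A message over 1 MB counts twice for a queue, and X + 1 times for a
    topic with X subscriptions. Pass topic_subscriptions=None for a queue."""
    if message_bytes <= ONE_MB:
        return 1
    if topic_subscriptions is None:
        return 2  # queue: large message counted twice
    return topic_subscriptions + 1  # topic: X subscriptions + 1
```

For example, a 2 MB message sent to a topic with three subscriptions is counted four times against the quota.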
Number of concurrent connections on a namespace Namespace Net Messaging: 1,000.
If you want to have more partitioned entities in a basic or a standard tier namespace, create additional namespaces.
Maximum size of any messaging entity path: queue or topic Entity 260 characters. Maximum size of any messaging entity name: namespace, subscription, or subscription rule Entity 50 characters. Maximum size of a message ID Entity Maximum size of a message session ID Entity Message size for a queue, topic, or subscription entity Entity 256 KB for Standard tier
Maximum message property size for each property is 32 KB.
Cumulative size of all properties can't exceed 64 KB. This limit applies to the entire header of the brokered message, which has both user properties and system properties, such as sequence number, label, and message ID.
Maximum number of header properties in property bag: byte/int.MaxValue. The SerializationException exception is generated when this limit is exceeded.
Number of subscriptions per topic Entity 2,000 per topic for the Standard and Premium tiers. Subsequent requests for creating additional subscriptions for the topic are rejected. As a result, if configured through the portal, an error message is shown. If called from the management API, an exception is received by the calling code.
Number of SQL filters per topic Entity 2,000 Subsequent requests for creation of additional filters on the topic are rejected, and an exception is received by the calling code.
Number of correlation filters per topic Entity 100,000 Subsequent requests for creation of additional filters on the topic are rejected, and an exception is received by the calling code.
Size of SQL filters or actions Namespace Maximum length of filter condition string: 1,024 (1 K).
For SQL Database limits, see SQL Database resource limits for single databases , SQL Database resource limits for elastic pools and pooled databases , and SQL Database resource limits for SQL Managed Instance .
The maximum number of private endpoints per Azure SQL Database logical server is 250.
Azure Synapse Analytics has the following default limits to ensure customers' subscriptions are protected from each other's workloads. To raise the limits to the maximum for your subscription, contact support.
For Pay-As-You-Go, Free Trial, Azure Pass, and Azure for Students subscription offer types:
Resource Default limit Maximum limit
For additional limits for Spark pools, see Concurrency and API rate limits for Apache Spark pools in Azure Synapse Analytics .
1 The data integration unit (DIU) is used in a cloud-to-cloud copy operation. For more information, see Data integration units (version 2) . For information on billing, see Azure Synapse Analytics Pricing .
2 Azure Integration Runtime is globally available to ensure data compliance, efficiency, and reduced network egress costs.
| Region group | Regions |
| --- | --- |
| Region group 1 | Central US, East US, East US 2, North Europe, West Europe, West US, West US 2 |
| Region group 2 | Australia East, Australia Southeast, Brazil South, Central India, Japan East, North Central US, South Central US, Southeast Asia, West Central US |
| Region group 3 | Other regions |

If managed virtual network is enabled, the data integration unit (DIU) limit in all region groups is 2,400.
3 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Synapse Analytics. Synapse Analytics is designed to scale to handle petabytes of data.
4 The payload for each activity run includes the activity configuration, the configurations of any associated datasets and linked services, and a small portion of system properties generated per activity type. The limit on this payload size doesn't relate to the amount of data you can move and process with Azure Synapse Analytics. Learn about the symptoms and recommendations if you hit this limit.
For details of capacity limits for dedicated SQL pools in Azure Synapse Analytics, see dedicated SQL pool resource limits .
Azure Resource Manager has limits for API calls. You can make API calls at a rate within the Azure Resource Manager API limits .
You can attach a number of data disks to an Azure virtual machine (VM). Based on the scalability and performance targets for a VM's data disks, you can determine the number and type of disk that you need to meet your performance and capacity requirements.
Important
For optimal performance, limit the number of highly utilized disks attached to the virtual machine to avoid possible throttling. If all attached disks aren't highly utilized at the same time, the virtual machine can support a larger number of disks. Additionally, when creating a managed disk from an existing managed disk, only 49 disks can be created concurrently. More disks can be created after some of the initial 49 have been created.
For Azure managed disks:
The following table illustrates the default and maximum limits on the number of resources per region per subscription. The limits are the same whether disks are encrypted with platform-managed keys or customer-managed keys. There's no limit on the number of managed disks, snapshots, and images per resource group.
1 An individual disk can have 500 incremental snapshots.
2 This is the default maximum, but higher capacities are supported by request. To request an increase in capacity, request a quota increase or contact Azure Support.
For standard storage accounts:
A Standard storage account has a maximum total request rate of 20,000 IOPS. The total IOPS across all of your virtual machine disks in a Standard storage account should not exceed this limit.
For unmanaged disks, you can roughly calculate the number of highly utilized disks supported by a single standard storage account based on the request rate limit. For example, for a Basic tier VM, the maximum number of highly utilized disks is about 66, which is 20,000/300 IOPS per disk. The maximum number of highly utilized disks for a Standard tier VM is about 40, which is 20,000/500 IOPS per disk.
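The arithmetic above can be sketched as a simple integer division of the storage account's IOPS cap by the per-disk IOPS figure:

```python
# Rough estimate of how many highly utilized unmanaged disks a single
# Standard storage account can serve, based on its 20,000-IOPS cap.
STANDARD_ACCOUNT_IOPS = 20_000

def max_highly_utilized_disks(iops_per_disk: int) -> int:
    return STANDARD_ACCOUNT_IOPS // iops_per_disk

print(max_highly_utilized_disks(300))  # Basic tier VM disks: 66
print(max_highly_utilized_disks(500))  # Standard tier VM disks: 40
```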
For premium storage accounts:
A premium storage account has a maximum total throughput rate of 50 Gbps. The total throughput across all of your VM disks should not exceed this limit.
For more information, see Virtual machine sizes .
There's a limit of 1,000 disk encryption sets per region, per subscription. For more information, see the encryption documentation for Linux or Windows virtual machines. If you need to increase the quota, contact Azure support.
*Applies only to disks with on-demand bursting enabled.
** Only applies to disks with performance plus (preview) enabled.
1 Ingress refers to all data from requests that are sent to a storage account. Egress refers to all data from responses that are received from a storage account.
Premium unmanaged virtual machine disks: Per-disk limits by premium storage disk type.

StorSimple limits:

| Limit | Value | Comments |
| --- | --- | --- |
| Maximum number of schedules per bandwidth template | A schedule for every hour, every day of the week | |
| Maximum size of a tiered volume on physical devices | 64 TB for StorSimple 8100 and StorSimple 8600 | StorSimple 8100 and StorSimple 8600 are physical devices. |
| Maximum size of a tiered volume on virtual devices in Azure | 30 TB for StorSimple 8010 | |

*Maximum throughput per I/O type was measured with 100 percent read and 100 percent write scenarios. Actual throughput might be lower and depends on I/O mix and network conditions.
1 Virtual machines created by using the classic deployment model instead of Azure Resource Manager are automatically stored in a cloud service. You can add more virtual machines to that cloud service for load balancing and availability.
2 Input endpoints allow communications to a virtual machine from outside the virtual machine's cloud service. Virtual machines in the same cloud service or virtual network can automatically communicate with each other.
The following limits apply when you use Azure Resource Manager and Azure resource groups.
| Resource | Limit |
| --- | --- |
| Azure Spot VM total cores per subscription | 20 1 per region. Contact support to increase limit. |
| VM per series (such as Dv2 and F) cores per subscription | 20 1 per region. Contact support to increase limit. |
| Availability sets per subscription | 2,500 per region |
| Virtual machines per availability set | |
| Proximity placement groups per resource group | |
| Certificates per availability set | 199 2 |
| Certificates per subscription | Unlimited 3 |

1 Default limits vary by offer category type, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350. For security, subscriptions default to 20 cores to prevent large core deployments. If you need more cores, submit a support ticket.
2 Properties such as SSH public keys are also pushed as certificates and count towards this limit. To bypass this limit, use the Azure Key Vault extension for Windows or the Azure Key Vault extension for Linux to install certificates.
3 With Azure Resource Manager, certificates are stored in the Azure Key Vault. The number of certificates is unlimited for a subscription. There's a 1-MB limit of certificates per deployment, which consists of either a single VM or an availability set.
Virtual machine cores have a regional total limit. They also have a limit for regional per-size series, such as Dv2 and F. These limits are separately enforced. For example, consider a subscription with a US East total VM core limit of 30, an A series core limit of 30, and a D series core limit of 30. This subscription can deploy 30 A1 VMs, or 30 D1 VMs, or a combination of the two not to exceed a total of 30 cores. An example of a combination is 10 A1 VMs and 20 D1 VMs.
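The separately enforced limits described above can be sketched as follows. The limits, series names, and per-VM core counts are the illustrative values from the example, not real subscription defaults:

```python
# Illustrative check of a VM deployment against separately enforced
# quotas: a regional total-core limit plus per-series core limits.
# All numbers below come from the worked example in the text, not
# from actual Azure subscription defaults.
REGIONAL_TOTAL_LIMIT = 30
SERIES_LIMITS = {"A": 30, "D": 30}
CORES_PER_SIZE = {"A1": 1, "D1": 1}

def deployment_allowed(vm_counts: dict) -> bool:
    """vm_counts maps a VM size like 'A1' to the number of VMs requested."""
    per_series: dict = {}
    total = 0
    for size, count in vm_counts.items():
        cores = CORES_PER_SIZE[size] * count
        series = size[0]  # first letter names the series in this sketch
        per_series[series] = per_series.get(series, 0) + cores
        total += cores
    return total <= REGIONAL_TOTAL_LIMIT and all(
        used <= SERIES_LIMITS[s] for s, used in per_series.items()
    )

print(deployment_allowed({"A1": 10, "D1": 20}))  # True: 30 cores total
print(deployment_allowed({"A1": 20, "D1": 20}))  # False: 40 cores exceed the regional total
```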
There are limits, per subscription, for deploying resources using Compute Galleries:
To request higher usage limits for dev tunnels, open an issue in our GitHub repo . In the issue, include which limit you'd like increased and why.