tasks.register('processTemplatesAdHoc') {
    inputs.property('engine', TemplateEngineType.FREEMARKER)
    inputs.files(fileTree('src/templates'))
        .withPropertyName('sourceFiles')
        .withPathSensitivity(PathSensitivity.RELATIVE)
    inputs.property('templateData.name', 'docs')
    inputs.property('templateData.variables', [year: '2013'])
    outputs.dir(layout.buildDirectory.dir('genOutput2'))
        .withPropertyName('outputDir')
    doLast {
        // Process the templates here
    }
}
For more information about incremental builds, check out the
incremental build documentation.
Look at the build scan timeline view to identify tasks that could benefit from incremental builds.
This can also help you understand why tasks execute when you expect Gradle to skip them.
As you can see in the build scan above, the task was not up-to-date because one of its inputs
("timestamp") changed, forcing the task to re-run.
Sort tasks by duration to find the slowest tasks in your project.
The build cache is a Gradle optimization that stores task outputs for specific inputs.
When you later run that same task with the same input, Gradle retrieves the output from the build cache instead of running the task again.
By default, Gradle does not use the build cache.
To enable the build cache at build time, use the --build-cache flag:
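For example, the following command enables the cache for a single build (the task name is illustrative):
$ gradle --build-cache assemble
Alternatively, set org.gradle.caching=true in gradle.properties to enable the cache for every build.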
You can use a local build cache to speed up repeated builds on a single machine.
You can also use a shared build cache to speed up repeated builds across multiple machines.
Gradle Enterprise provides one.
Shared build caches can decrease build times for both CI and developer builds.
For more information about the build cache, check out the
build cache documentation.
Build scans can help you investigate build cache effectiveness.
In the performance screen, the "Build cache" tab shows you statistics about:
Sort by task duration on the timeline screen to highlight tasks with great time saving potential.
The build scan above shows that :task1 and :task3 could be improved and made cacheable
and shows why Gradle didn’t cache them.
The fastest task is one that doesn’t execute.
If you can find ways to skip tasks you don’t need to run, you’ll end up with a faster build overall.
If your build includes multiple subprojects, create tasks to build those subprojects
independently. This helps you get the most out of caching, since a change to one
subproject won’t force a rebuild for unrelated subprojects. And this helps reduce
build times for teams that work on unrelated subprojects: there’s no need for
front-end developers to build the back-end subprojects every time they change the
front-end. Documentation writers don’t need to build front-end or back-end code
even if the documentation lives in the same project as that code.
Instead, create tasks that match the needs of developers. You’ll still have a single
task graph for the whole project. Each group of users suggests a restricted view of
the task graph: turn that view into a Gradle workflow that excludes unnecessary tasks.
Gradle provides several features to create these workflows:
Create aggregate tasks: tasks with no action that only depend on other tasks, such as assemble (see the sketch after this list)
Defer configuration via gradle.taskGraph.whenReady() and others, so you can perform verification only when it’s necessary
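As an example of an aggregate task, here is a minimal sketch (the task and subproject names are hypothetical):
tasks.register('buildFrontEnd') {
    group = 'build'
    description = 'Builds and tests only the front-end subprojects.'
    // No action of its own: it only wires together other tasks.
    dependsOn(':front-end:assemble', ':front-end:test')
}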
By default, Gradle reserves 512MB of heap space for your build. This is plenty for most projects.
However, some very large builds might need more memory to hold Gradle’s model and caches.
If this is the case for you, you can specify a larger memory requirement.
Specify the following property in the gradle.properties file in your project root or your Gradle home:
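For example, the following entry raises the maximum heap size to 2 GB (the value is illustrative; choose one that fits your build):
org.gradle.jvmargs=-Xmx2g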
As described in the build lifecycle chapter, a
Gradle build goes through 3 phases: initialization, configuration, and execution.
Configuration code always executes regardless of the tasks that run.
As a result, any expensive work performed during configuration slows down every invocation, even simple commands like gradle help and gradle tasks.
The next few subsections introduce techniques that can reduce time spent in the configuration phase.
You should avoid time-intensive work in the configuration phase.
But sometimes it can sneak into your build in non-obvious places.
It’s usually clear when you’re encrypting data or calling remote services during configuration if that code is in a build file.
But logic like this is more often found in plugins and occasionally custom task classes.
Any expensive work in a plugin’s apply() method or a task’s constructor is a red flag.
Every plugin and script that you apply to a project adds to the overall configuration time.
Some plugins have a greater impact than others.
That doesn’t mean you should avoid using plugins, but you should take care to only apply them where they’re needed.
For example, it’s easy to apply plugins to all subprojects via allprojects {} or subprojects {} even if not every project needs them.
In the above build scan example, you can see that the root build script applies the script-a.gradle
script to 3 subprojects inside the build:
This script takes 1 second to run. Since it applies to 3 subprojects,
this script cumulatively delays the configuration phase by 3 seconds.
In this situation, there are several ways to reduce the delay:
If only one subproject uses the script, you could remove the script
application from the other subprojects. This reduces the configuration delay
by two seconds in each Gradle invocation.
If multiple subprojects, but not all, use the script, you could refactor the script and
all surrounding logic into a custom plugin located in buildSrc.
Apply the custom plugin to only the relevant subprojects, reducing configuration delay and avoiding code duplication.
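A minimal sketch of such a plugin, assuming the logic from script-a.gradle moves into a class under buildSrc (the package and class names are hypothetical):
// buildSrc/src/main/groovy/com/example/SharedConventionsPlugin.groovy
package com.example

import org.gradle.api.Plugin
import org.gradle.api.Project

class SharedConventionsPlugin implements Plugin<Project> {
    @Override
    void apply(Project project) {
        // Move the logic from script-a.gradle here, then apply this plugin
        // only in the subprojects that actually need it.
    }
}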
Plugin and task authors often write Groovy for its concise syntax, API extensions to the JDK, and functional methods using closures.
But Groovy syntax comes with the cost of dynamic interpretation. As a result, method calls in Groovy take more time and use
more CPU than method calls in Java or Kotlin.
You can reduce this cost with static Groovy compilation: add the @CompileStatic annotation to your Groovy classes when you don’t
explicitly require dynamic features. If you need dynamic Groovy in a method, add the @CompileDynamic annotation to that method.
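As a rough sketch, assuming a hypothetical helper class used by a plugin:
import groovy.transform.CompileDynamic
import groovy.transform.CompileStatic

@CompileStatic
class TemplateSupport {
    static String describe(int count) {
        return 'Processed ' + count + ' files'
    }

    // Dynamic dispatch is confined to this one method.
    @CompileDynamic
    static Object readProperty(Object target, String name) {
        return target."$name"
    }
}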
Alternatively, you can write plugins and tasks in a statically compiled language such as Java or Kotlin.
Warning: Gradle’s Groovy DSL relies heavily on Groovy’s dynamic features. To use static compilation in your plugins, switch to Java-like syntax.
The following example defines a task that copies files without dynamic features:
project.tasks.register('copyFiles', Copy) { Task t ->
    t.into(project.layout.buildDirectory.dir('output'))
    t.from(project.configurations.getByName('compile'))
}
This example uses the register() and getByName() methods available on all Gradle “domain object containers”.
Domain object containers include tasks, configurations, dependencies, extensions, and more.
Some collections, such as TaskContainer, have dedicated types with extra methods like create,
which accepts a task type.
When you use static compilation, an IDE can:
Dependency resolution simplifies integrating third-party libraries and other dependencies into your projects.
Gradle contacts remote servers to discover and download dependencies. You can optimize the way you reference
dependencies to cut down on these remote server calls.
Managing third-party libraries and their transitive dependencies adds a significant
cost to project maintenance and build times.
Watch out for unused dependencies: third-party libraries that are no longer
used but haven’t been removed from the dependency list. This happens frequently during refactors.
You can use the Gradle Lint plugin
to identify unused dependencies.
If you only use a small number of methods or classes in a third-party library, consider:
When Gradle resolves dependencies, it searches through each repository in the declared order.
To reduce the time spent searching for dependencies, declare the repository hosting
the largest number of your dependencies first. This minimizes the number of network requests
required to resolve all dependencies.
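For example, if most of your dependencies come from Maven Central and only a few from an internal server, list Maven Central first (the internal URL is hypothetical):
repositories {
    mavenCentral()
    maven {
        url 'https://repo.example.com/releases'
    }
}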
Limit the number of declared repositories to the minimum possible for your build to work.
If you’re using a custom repository server, create a virtual repository that aggregates
several repositories together. Then, add only that repository to your build file.
Dynamic versions (e.g. “2.+”) and changing versions (snapshots) force Gradle to contact remote
repositories to find new releases. By default, Gradle only checks once every 24 hours.
But you can change this programmatically with the following settings:
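A sketch of both settings, with deliberately short intervals to illustrate the effect (tune the values to your needs):
configurations.all {
    resolutionStrategy {
        // Check for new releases of dynamic versions (e.g. "2.+") every 10 minutes.
        cacheDynamicVersionsFor 10, 'minutes'
        // Check for new snapshots of changing modules every 4 hours.
        cacheChangingModulesFor 4, 'hours'
    }
}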
If a build file or initialization script lowers these values, Gradle queries repositories more often.
When you don’t need the absolute latest release of a dependency every time you build, consider
removing the custom values for these settings.
You can find all dependencies with dynamic versions via build scans:
You may be able to use fixed versions like "1.2" and "3.0.3.GA" that allow Gradle to cache versions.
If you must use dynamic and changing versions, tune the cache settings to best meet your needs.
Dependency resolution is an expensive process, both in terms of I/O and computation.
Gradle reduces the required network traffic through caching. But there is still a cost.
Gradle runs the configuration phase on every build. If you trigger dependency resolution
during the configuration phase, every build pays that cost.
If you evaluate the files of a configuration, your project pays the cost of dependency resolution during configuration.
Normally tasks evaluate these files, since you don’t need the files until you’re ready to do something with them in a task action.
Imagine you’re doing some debugging and want to display the files that make up a configuration.
To implement this, you might inject a print statement:
tasks.register<Copy>("copyFiles") {
println(">> Compilation deps: ${configurations.compileClasspath.get().files.map { it.name }}")
into(layout.buildDirectory.dir("output"))
from(configurations.compileClasspath)
build.gradle
tasks.register('copyFiles', Copy) {
    println ">> Compilation deps: ${configurations.compileClasspath.files.name}"
    into(layout.buildDirectory.dir('output'))
    from(configurations.compileClasspath)
}
The files property forces Gradle to resolve the dependencies. In this example, that happens during the configuration phase.
Because the configuration phase runs on every build, all builds now pay the performance cost of dependency resolution.
You can avoid this cost with a doFirst() action:
tasks.register<Copy>("copyFiles") {
into(layout.buildDirectory.dir("output"))
// Store the configuration into a variable because referencing the project from the task action
// is not compatible with the configuration cache.
val compileClasspath: FileCollection = configurations.compileClasspath.get()
from(compileClasspath)
doFirst {
println(">> Compilation deps: ${compileClasspath.files.map { it.name }}")
build.gradle
tasks.register('copyFiles', Copy) {
    into(layout.buildDirectory.dir('output'))
    // Store the configuration into a variable because referencing the project from the task action
    // is not compatible with the configuration cache.
    FileCollection compileClasspath = configurations.compileClasspath
    from(compileClasspath)
    doFirst {
        println ">> Compilation deps: ${compileClasspath.files.name}"
    }
}
Note that the from() declaration doesn’t resolve the dependencies because you’re using the dependency configuration itself as an argument, not the files.
The Copy task resolves the configuration itself during task execution.
The "Dependency resolution" tab on the performance page of a build scan shows dependency
resolution time during the configuration and execution phases:
Build scans provide another means of identifying this issue.
Your build should spend 0 seconds resolving dependencies during "project configuration".
This example shows a build that resolves dependencies too early in the lifecycle.
You can also find a "Settings and suggestions" tab on the "Performance" page.
This shows dependencies resolved during the configuration phase.
Gradle allows users to model dependency resolution in the way that best suits them.
Simple customizations, such as forcing specific versions of a dependency or substituting
one dependency for another, don’t have a big impact on dependency resolution times.
More complex customizations, such as custom logic that downloads and parses POMs,
can slow down dependency resolution significantly.
Use build scans or profile reports to check that custom dependency resolution logic
doesn’t adversely affect dependency resolution times.
This could be custom logic you have written yourself, or it could be part of a plugin.
Slow dependency downloads can impact your overall build performance.
Several things could cause this, including a slow internet connection or an overloaded repository server.
On the "Performance" page of a build scan, you’ll find a "Network Activity" tab.
This tab lists information including:
In the following example, two slow dependency downloads took 20 and 40 seconds and slowed down the overall
performance of a build:
Check the download list for unexpected dependency downloads.
For example, you might see a download caused by a dependency using a dynamic version.
Eliminate these slow or unexpected downloads by switching to a different repository or dependency.
The following sections apply only to projects that use the java plugin or another JVM language.
Projects often spend much of their build time testing.
These could be a mixture of unit and integration tests. Integration tests usually take longer.
Build scans can help you identify the slowest tests. You can then focus on speeding up those tests.
The above build scan shows an interactive test report for all projects in which tests ran.
Gradle has several ways to speed up tests:
Gradle can run multiple test cases in parallel.
To enable this feature, override the value of maxParallelForks on the relevant Test task.
For the best performance, use some number less than or equal to the number of available CPU cores:
build.gradle.kts
tasks.withType<Test>().configureEach {
    maxParallelForks = (Runtime.getRuntime().availableProcessors() / 2).coerceAtLeast(1)
}
build.gradle
tasks.withType(Test).configureEach {
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
Tests in parallel must be independent. They should not share resources such as files or databases.
If your tests do share resources, they could interfere with each other in random and unpredictable ways.
By default, Gradle runs all tests in a single forked VM.
If there are a lot of tests, or some tests that consume lots of memory,
your tests may take longer than you expect to run. You can increase the
heap size, but garbage collection may slow down your tests.
Alternatively, you can fork a new test VM after a certain number of tests have run with the forkEvery setting:
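For example, the following restarts the test JVM after a fixed number of test classes (the value is illustrative):
tasks.withType(Test).configureEach {
    forkEvery = 100
}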
Gradle automatically creates test reports regardless of whether you want to look at them.
That report generation slows down the overall build. You may not need reports if:
build.gradle.kts
tasks.withType<Test>().configureEach {
    reports.html.required = false
    reports.junitXml.required = false
}
build.gradle
tasks.withType(Test).configureEach {
    reports.html.required = false
    reports.junitXml.required = false
}
You might want to conditionally enable reports so you don’t have to edit the build file to see them.
To enable the reports based on a project property, check for the presence of a property before disabling reports:
build.gradle.kts
tasks.withType<Test>().configureEach {
    if (!project.hasProperty("createReports")) {
        reports.html.required = false
        reports.junitXml.required = false
    }
}
build.gradle
tasks.withType(Test).configureEach {
    if (!project.hasProperty("createReports")) {
        reports.html.required = false
        reports.junitXml.required = false
    }
}
The Java compiler is fast. But if you’re compiling hundreds of Java classes, even a short compilation time adds up.
Gradle offers several optimizations for Java compilation:
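One such optimization is to run compilation in a separate compiler process by enabling the standard fork option on JavaCompile tasks, for example:
tasks.withType(JavaCompile).configureEach {
    options.fork = true
}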
Gradle reuses this process for the duration of the build, so the forking overhead is minimal.
By forking memory-intensive compilation into a separate process, we minimize garbage collection in the main Gradle process.
Less garbage collection means that Gradle’s infrastructure can run faster, especially when you also use parallel builds.
Forking compilation rarely impacts the performance of small projects.
But you should consider it if a single task compiles more than a thousand source files together.
Before Gradle 3.4, projects declared dependencies using the compile configuration.
This exposed all of those dependencies to downstream projects. In Gradle 3.4 and above,
you can separate downstream-facing api dependencies from internal-only implementation details.
Implementation dependencies don’t leak into the compile classpath of downstream projects.
When implementation details change, Gradle only recompiles api dependencies.
This can significantly reduce the "ripple" of recompilations caused by a single change in
large multi-project builds.
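A sketch of the split, assuming the java-library plugin and hypothetical coordinates:
plugins {
    id 'java-library'
}

dependencies {
    // Part of this project's public API: stays on downstream compile classpaths.
    api 'com.example:exposed-lib:1.0'
    // Internal detail: hidden from downstream compile classpaths.
    implementation 'com.example:internal-lib:1.0'
}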
Some projects cannot easily upgrade to a current Gradle version. While you should
always upgrade Gradle to a recent version when possible, we recognize that it isn’t always
feasible for certain niche situations. In those select cases, check out these recommendations
to optimize older versions of Gradle.
Gradle 3.0 and above enable the Daemon by default. If you are using an older version,
you should update to the latest version of Gradle.
If you cannot update your Gradle version, you can enable the Daemon manually.
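One way to enable it for all builds of a project is an entry in gradle.properties:
org.gradle.daemon=true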
Gradle can analyze dependencies down to the individual class level
to recompile only the classes affected by a change.
Gradle 4.10 and above enable incremental compilation by default.
To enable incremental compilation by default in older Gradle versions, add the following setting to your
build.gradle file:
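A minimal sketch, assuming the standard incremental flag on JavaCompile tasks:
tasks.withType(JavaCompile) {
    options.incremental = true
}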
Often, updates only change internal implementation details of your code, like the body of a method.
These updates are known as ABI-compatible changes: they have no impact on the binary interface of your project.
In Gradle 3.4 and above, ABI-compatible changes no longer trigger recompiles of downstream projects.
This especially improves build times in large multi-project builds with deep dependency chains.
Upgrade to a Gradle version above 3.4 to benefit from compile avoidance.
If you use annotation processors, you need to explicitly declare them in order for compilation avoidance to work.
To learn more, check out the compile avoidance documentation.
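For example, declare processors on the annotationProcessor configuration rather than the compile classpath (the coordinates are hypothetical):
dependencies {
    annotationProcessor 'com.example:my-processor:1.0'
}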
Everything on this page applies to Android builds, since Android builds use Gradle.
Yet Android introduces unique opportunities for optimization.
For more information, check out the
Android team performance guide.
You can also watch the accompanying talk
from Google IO 2017.