Spring Data JPA provides repository support for the Jakarta Persistence API (JPA). It eases development of applications that need to access JPA data sources.
Version control: https://github.com/spring-projects/spring-data-jpa
Bugtracker: https://github.com/spring-projects/spring-data-jpa/issues
Milestone repository: https://repo.spring.io/milestone
Snapshot repository: https://repo.spring.io/snapshot
Instructions for how to upgrade from earlier versions of Spring Data are provided on the project wiki. Follow the links in the release notes section to find the version that you want to upgrade to.
Upgrading instructions are always the first item in the release notes. If you are more than one release behind, please make sure that you also review the release notes of the versions that you skipped.
Due to the different inception dates of individual Spring Data modules, most of them carry different major and minor version numbers. The easiest way to find compatible ones is to rely on the Spring Data Release Train BOM that we ship with the compatible versions defined. In a Maven project, you would declare this dependency in the
<dependencyManagement />
section of your POM as follows:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-bom</artifactId>
<version>2023.0.7</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>
The current release train version is 2023.0.7. The train version uses calver with the pattern YYYY.MINOR.MICRO. The version name follows ${calver} for GA releases and service releases and the following pattern for all other versions: ${calver}-${modifier}, where modifier can be one of the following:
You can find a working example of using the BOMs in our Spring Data examples repository. With that in place, you can declare the Spring Data modules you would like to use without a version in the <dependencies /> block, as follows:
<dependencies>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-jpa</artifactId>
</dependency>
</dependencies>
Spring Boot selects a recent version of the Spring Data modules for you. If you still want to upgrade to a newer version, set the spring-data-bom.version property to the train version and iteration you would like to use. See Spring Boot's documentation (search for "Spring Data Bom") for more details.
The current version of Spring Data modules requires Spring Framework 6.0.15 or better. The modules might also work with an older bugfix version of that minor version. However, using the most recent version within that generation is highly recommended.
This chapter explains the core concepts and interfaces of Spring Data repositories. The information in this chapter is pulled from the Spring Data Commons module. It uses the configuration and code samples for the Jakarta Persistence API (JPA) module. If you want to use XML configuration, you should adapt the XML namespace declaration and the types to be extended to the equivalents of the particular module that you use. “Namespace reference” covers XML configuration, which is supported across all Spring Data modules that support the repository API. “Repository query keywords” covers the query method keywords supported by the repository abstraction in general. For detailed information on the specific features of your module, see the chapter on that module of this document.
The central interface in the Spring Data repository abstraction is Repository. It takes the domain class to manage as well as the identifier type of the domain class as type arguments. This interface acts primarily as a marker interface to capture the types to work with and to help you to discover interfaces that extend this one. The CrudRepository and ListCrudRepository interfaces provide sophisticated CRUD functionality for the entity class that is being managed.
CrudRepository Interface
public interface CrudRepository<T, ID> extends Repository<T, ID> {

  <S extends T> S save(S entity); (1)

  Optional<T> findById(ID primaryKey); (2)

  Iterable<T> findAll(); (3)

  long count(); (4)

  void delete(T entity); (5)

  boolean existsById(ID primaryKey); (6)

  // … more functionality omitted.
}
We also provide persistence technology-specific abstractions, such as JpaRepository or MongoRepository. Those interfaces extend CrudRepository and expose the capabilities of the underlying persistence technology in addition to the rather generic persistence technology-agnostic interfaces such as CrudRepository.
In addition to the CrudRepository, there is a PagingAndSortingRepository abstraction that adds additional methods to ease paginated access to entities:
Example 4. PagingAndSortingRepository interface
public interface PagingAndSortingRepository<T, ID> {

  Iterable<T> findAll(Sort sort);

  Page<T> findAll(Pageable pageable);
}
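For example, to access the second page of User entities with a page size of 20, you could do something like the following sketch (assuming a repository that extends PagingAndSortingRepository<User, Long>; PageRequest is the standard Pageable implementation):

Page<User> users = repository.findAll(PageRequest.of(1, 20)); // page index is zero-based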
In addition to pagination, scrolling provides a more fine-grained access to iterate through chunks of larger result sets.
In addition to query methods, query derivation for both count and delete queries is available.
The following list shows the interface definition for a derived count query:
Example 5. Derived Count Query
interface UserRepository extends CrudRepository<User, Long> {
  long countByLastname(String lastname);
}

Example 6. Derived Delete Query
interface UserRepository extends CrudRepository<User, Long> {
  long deleteByLastname(String lastname);
  List<User> removeByLastname(String lastname);
}
4.2. Query Methods
Standard CRUD functionality repositories usually have queries on the underlying datastore. With Spring Data, declaring those queries becomes a four-step process: declare an interface extending Repository (or one of its subinterfaces) typed to the domain class and ID type, declare query methods on the interface, set up Spring to create proxy instances for those interfaces, and inject the repository instance and use it.
First, declare the interface and its query methods:

interface PersonRepository extends Repository<Person, Long> {
  List<Person> findByLastname(String lastname);
}

Then set up Spring to create proxy instances for the interface, either with JavaConfig:

import org.springframework.data.….repository.config.EnableJpaRepositories;

@EnableJpaRepositories
class Config { … }

or with XML configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jpa="http://www.springframework.org/schema/data/jpa"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/jpa
https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
<repositories base-package="com.acme.repositories"/>
</beans>
The JPA namespace is used in this example. If you use the repository abstraction for any other store, you need to change this to the appropriate namespace declaration of your store module. In other words, you should exchange jpa in favor of, for example, mongodb.
Note that the JavaConfig variant does not configure a package explicitly, because the package of the annotated class is used by default. To customize the package to scan, use one of the basePackage… attributes of the data-store-specific repository's @EnableJpaRepositories annotation.
Inject the repository instance and use it, as shown in the following example:
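A minimal client sketch (the class name SomeClient is illustrative; constructor injection is assumed):

class SomeClient {

  private final PersonRepository repository;

  SomeClient(PersonRepository repository) {
    this.repository = repository;
  }

  void doSomething() {
    List<Person> persons = repository.findByLastname("Matthews");
  }
}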
4.3. Defining Repository Interfaces
To define a repository interface, you first need to define a domain class-specific repository interface. The interface must extend Repository and be typed to the domain class and an ID type. If you want to expose CRUD methods for that domain type, you may extend CrudRepository, or one of its variants, instead of Repository.
4.3.1. Fine-tuning Repository Definition
There are a few variants of how you can get started with your repository interface.
The typical approach is to extend CrudRepository, which gives you methods for CRUD functionality. CRUD stands for Create, Read, Update, Delete. With version 3.0 we also introduced ListCrudRepository, which is very similar to CrudRepository but, for those methods that return multiple entities, returns a List instead of an Iterable, which you might find easier to use.
If you are using a reactive store, you might choose ReactiveCrudRepository or RxJava3CrudRepository, depending on which reactive framework you are using. If you are using Kotlin, you might pick CoroutineCrudRepository, which utilizes Kotlin's coroutines.
Additionally, you can extend PagingAndSortingRepository, ReactiveSortingRepository, RxJava3SortingRepository, or CoroutineSortingRepository if you need methods that allow you to specify a Sort abstraction or, in the first case, a Pageable abstraction.
Note that the various sorting repositories no longer extend their respective CRUD repository as they did in Spring Data versions before 3.0. Therefore, you need to extend both interfaces if you want functionality of both.
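For example, a repository that needs both CRUD methods and paginated access could be declared as follows (a sketch; the User entity and its Long ID are assumed from the earlier examples):

interface UserRepository extends CrudRepository<User, Long>, PagingAndSortingRepository<User, Long> {
}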
If you do not want to extend Spring Data interfaces, you can also annotate your repository interface with @RepositoryDefinition. Extending one of the CRUD repository interfaces exposes a complete set of methods to manipulate your entities. If you prefer to be selective about the methods being exposed, copy the methods you want to expose from the CRUD repository into your domain repository. When doing so, you may change the return type of methods. Spring Data will honor the return type if possible. For example, for methods returning multiple entities you may choose Iterable<T>, List<T>, Collection<T>, or a VAVR list.
If many repositories in your application should have the same set of methods, you can define your own base interface to inherit from. Such an interface must be annotated with @NoRepositoryBean. This prevents Spring Data from trying to create an instance of it directly and failing because it cannot determine the entity for that repository, since it still contains a generic type variable.
The following example shows how to selectively expose CRUD methods (findById and save, in this case):
Example 7. Selectively exposing CRUD methods
@NoRepositoryBean
interface MyBaseRepository<T, ID> extends Repository<T, ID> {

  Optional<T> findById(ID id);

  <S extends T> S save(S entity);
}

interface UserRepository extends MyBaseRepository<User, Long> {
  User findByEmailAddress(EmailAddress emailAddress);
}
In the prior example, you defined a common base interface for all your domain repositories and exposed findById(…) as well as save(…). These methods are routed into the base repository implementation of the store of your choice provided by Spring Data (for example, if you use JPA, the implementation is SimpleJpaRepository), because they match the method signatures in CrudRepository. So the UserRepository can now save users, find individual users by ID, and trigger a query to find users by email address.
4.3.2. Using Repositories with Multiple Spring Data Modules
Using a unique Spring Data module in your application makes things simple, because all repository interfaces in the defined scope are bound to the Spring Data module.
Sometimes, applications require using more than one Spring Data module.
In such cases, a repository definition must distinguish between persistence technologies.
When it detects multiple repository factories on the class path, Spring Data enters strict repository configuration mode.
Strict configuration uses details on the repository or the domain class to decide about Spring Data module binding for a repository definition:
If the repository definition extends the module-specific repository, it is a valid candidate for the particular Spring Data module.
If the domain class is annotated with the module-specific type annotation, it is a valid candidate for the particular Spring Data module.
Spring Data modules accept either third-party annotations (such as JPA's @Entity) or provide their own annotations (such as @Document for Spring Data MongoDB and Spring Data Elasticsearch).
The following example shows a repository that uses module-specific interfaces (JPA in this case):
Example 8. Repository definitions using module-specific interfaces
interface MyRepository extends JpaRepository<User, Long> { }
@NoRepositoryBean
interface MyBaseRepository<T, ID> extends JpaRepository<T, ID> { … }
interface UserRepository extends MyBaseRepository<User, Long> { … }
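For contrast, a sketch of the generic-only definitions that the next paragraph refers to as AmbiguousRepository and AmbiguousUserRepository might look like this (the exact original example is not reproduced here; the names are taken from the prose below):

interface AmbiguousRepository extends Repository<User, Long> { … }

@NoRepositoryBean
interface MyBaseRepository<T, ID> extends CrudRepository<T, ID> { … }

interface AmbiguousUserRepository extends MyBaseRepository<User, Long> { … }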
AmbiguousRepository and AmbiguousUserRepository extend only Repository and CrudRepository in their type hierarchy. While this is fine when using a unique Spring Data module, multiple modules cannot distinguish to which particular Spring Data module these repositories should be bound.
The following bad example shows a repository that uses domain classes with mixed annotations:
Example 11. Repository definitions using domain classes with mixed annotations
interface JpaPersonRepository extends Repository<Person, Long> { … }
interface MongoDBPersonRepository extends Repository<Person, Long> { … }
@Entity
@Document
class Person { … }
This example shows a domain class using both JPA and Spring Data MongoDB annotations. It defines two repositories, JpaPersonRepository and MongoDBPersonRepository. One is intended for JPA and the other for MongoDB usage. Spring Data is no longer able to tell the repositories apart, which leads to undefined behavior.
Repository type details and distinguishing domain class annotations are used for strict repository configuration to identify repository candidates for a particular Spring Data module.
Using multiple persistence technology-specific annotations on the same domain type is possible and enables reuse of domain types across multiple persistence technologies.
However, Spring Data can then no longer determine a unique module with which to bind the repository.
The last way to distinguish repositories is by scoping repository base packages.
Base packages define the starting points for scanning for repository interface definitions, which implies having repository definitions located in the appropriate packages.
By default, annotation-driven configuration uses the package of the configuration class.
The base package in XML-based configuration is mandatory.
The following example shows annotation-driven configuration of base packages:
Example 12. Annotation-driven configuration of base packages
@EnableJpaRepositories(basePackages = "com.acme.repositories.jpa")
@EnableMongoRepositories(basePackages = "com.acme.repositories.mongo")
class Configuration { … }
4.4. Defining Query Methods
The repository proxy has two ways to derive a store-specific query from the method name: by deriving the query from the method name directly, or by using a manually defined query. Available options depend on the actual store. However, there must be a strategy that decides what actual query is created. The next section describes the available options.
4.4.1. Query Lookup Strategies
The following strategies are available for the repository infrastructure to resolve the query. With XML configuration, you can configure the strategy at the namespace through the query-lookup-strategy attribute. For Java configuration, you can use the queryLookupStrategy attribute of the @EnableJpaRepositories annotation. Some strategies may not be supported for particular datastores.
CREATE attempts to construct a store-specific query from the query method name. The general approach is to remove a given set of well-known prefixes from the method name and parse the rest of the method. You can read more about query construction in “Query Creation”.
USE_DECLARED_QUERY tries to find a declared query and throws an exception if it cannot find one. The query can be defined by an annotation somewhere or declared by other means. See the documentation of the specific store to find available options for that store. If the repository infrastructure does not find a declared query for the method at bootstrap time, it fails.
CREATE_IF_NOT_FOUND (the default) combines CREATE and USE_DECLARED_QUERY. It looks up a declared query first, and, if no declared query is found, it creates a custom method name-based query. This is the default lookup strategy and, thus, is used if you do not configure anything explicitly. It allows quick query definition by method names but also custom-tuning of these queries by introducing declared queries as needed.
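A Java configuration sketch for selecting a lookup strategy (QueryLookupStrategy.Key is the enum backing the queryLookupStrategy attribute):

import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.data.repository.query.QueryLookupStrategy.Key;

@Configuration
@EnableJpaRepositories(queryLookupStrategy = Key.CREATE_IF_NOT_FOUND)
class Config { … }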
4.4.2. Query Creation
The query builder mechanism built into the Spring Data repository infrastructure is useful for building constraining queries over entities of the repository.
The following example shows how to create a number of queries:
Example 13. Query creation from method names
interface PersonRepository extends Repository<Person, Long> {

  List<Person> findByEmailAddressAndLastname(EmailAddress emailAddress, String lastname);

  // Enables the distinct flag for the query
  List<Person> findDistinctPeopleByLastnameOrFirstname(String lastname, String firstname);
  List<Person> findPeopleDistinctByLastnameOrFirstname(String lastname, String firstname);

  // Enabling ignoring case for an individual property
  List<Person> findByLastnameIgnoreCase(String lastname);
  // Enabling ignoring case for all suitable properties
  List<Person> findByLastnameAndFirstnameAllIgnoreCase(String lastname, String firstname);

  // Enabling static ORDER BY for a query
  List<Person> findByLastnameOrderByFirstnameAsc(String lastname);
  List<Person> findByLastnameOrderByFirstnameDesc(String lastname);
}
Parsing query method names is divided into subject and predicate. The first part (find…By, exists…By) defines the subject of the query, and the second part forms the predicate. The introducing clause (subject) can contain further expressions. Any text between find (or other introducing keywords) and By is considered to be descriptive unless it uses one of the result-limiting keywords, such as Distinct to set a distinct flag on the query to be created, or Top/First to limit query results.
The appendix contains the full list of query method subject keywords and query method predicate keywords, including sorting and letter-casing modifiers. However, the first By acts as a delimiter to indicate the start of the actual criteria predicate. At a very basic level, you can define conditions on entity properties and concatenate them with And and Or.
The actual result of parsing the method depends on the persistence store for which you create the query. However, there are some general things to notice:
The expressions are usually property traversals combined with operators that can be concatenated. You can combine property expressions with AND and OR. You also get support for operators such as Between, LessThan, GreaterThan, and Like for the property expressions. The supported operators can vary by datastore, so consult the appropriate part of your reference documentation.
The method parser supports setting an IgnoreCase flag for individual properties (for example, findByLastnameIgnoreCase(…)) or for all properties of a type that supports ignoring case (usually String instances; for example, findByLastnameAndFirstnameAllIgnoreCase(…)). Whether ignoring case is supported may vary by store, so consult the relevant sections in the reference documentation for the store-specific query method.
You can apply static ordering by appending an OrderBy clause to the query method that references a property and by providing a sorting direction (Asc or Desc). To create a query method that supports dynamic sorting, see “Paging, Iterating Large Results, Sorting & Limiting”.
4.4.3. Property Expressions
Property expressions can refer only to a direct property of the managed entity, as shown in the preceding example.
At query creation time, you already make sure that the parsed property is a property of the managed domain class.
However, you can also define constraints by traversing nested properties.
Consider the following method signature:
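A sketch of such a signature (the nested-property example discussed next):

List<Person> findByAddressZipCode(ZipCode zipCode);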
Assume a Person has an Address with a ZipCode. In that case, the method creates the x.address.zipCode property traversal. The resolution algorithm starts by interpreting the entire part (AddressZipCode) as the property and checks the domain class for a property with that name (uncapitalized). If the algorithm succeeds, it uses that property. If not, the algorithm splits up the source at the camel-case parts from the right side into a head and a tail and tries to find the corresponding property (in our example, AddressZip and Code). If the algorithm finds a property with that head, it takes the tail and continues building the tree down from there, splitting the tail up in the way just described. If the first split does not match, the algorithm moves the split point to the left (Address, ZipCode) and continues.
Although this should work for most cases, it is possible for the algorithm to select the wrong property. Suppose the Person class has an addressZip property as well. The algorithm would match in the first split round already, choose the wrong property, and fail (as the type of addressZip probably has no code property). To resolve this ambiguity, you can use _ inside your method name to manually define traversal points. So our method name would be as follows:
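A sketch of the disambiguated signature:

List<Person> findByAddress_ZipCode(ZipCode zipCode);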
4.4.4. Paging, Iterating Large Results, Sorting & Limiting
To handle parameters in your query, define method parameters as already seen in the preceding examples. Besides that, the infrastructure recognizes certain specific types, like Pageable, Sort, and Limit, to apply pagination, sorting, and limiting to your queries dynamically. The following example demonstrates these features:
Example 14. Using Pageable, Slice, ScrollPosition, Sort, and Limit in query methods
Page<User> findByLastname(String lastname, Pageable pageable);
Slice<User> findByLastname(String lastname, Pageable pageable);
Window<User> findTop10ByLastname(String lastname, ScrollPosition position, Sort sort);
List<User> findByLastname(String lastname, Sort sort);
List<User> findByLastname(String lastname, Sort sort, Limit limit);
List<User> findByLastname(String lastname, Pageable pageable);
The first method lets you pass an org.springframework.data.domain.Pageable instance to the query method to dynamically add paging to your statically defined query. A Page knows about the total number of elements and pages available. It does so by the infrastructure triggering a count query to calculate the overall number. As this might be expensive (depending on the store used), you can instead return a Slice. A Slice knows only about whether a next Slice is available, which might be sufficient when walking through a larger result set.
Sorting options are handled through the Pageable instance, too. If you need only sorting, add an org.springframework.data.domain.Sort parameter to your method. As you can see, returning a List is also possible. In this case, the additional metadata required to build the actual Page instance is not created (which, in turn, means that the additional count query that would have been necessary is not issued). Rather, it restricts the query to look up only the given range of entities.
To find out how many pages you get for an entire query, you have to trigger an additional count query. By default, this query is derived from the query you actually trigger.
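A usage sketch for the signatures above (assuming they are declared on a UserRepository; PageRequest, Sort, and Limit are the standard Spring Data types):

Page<User> page = repository.findByLastname("Matthews", PageRequest.of(0, 20));
List<User> sorted = repository.findByLastname("Matthews", Sort.by("firstname").ascending());
List<User> limited = repository.findByLastname("Matthews", Sort.by("firstname"), Limit.of(10));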
Special parameters may only be used once within a query method, and some of the special parameters described above are mutually exclusive. For example, a Pageable already carries both a sort order and a page size, so it cannot be combined with a separate Sort or Limit parameter.
Which Method is Appropriate?
The value provided by the Spring Data abstractions is perhaps best shown by the possible query method return types outlined in the following table. The table shows which types you can return from a query method.
Table 1. Consuming Large Query Results
List<T>: Fetches all results in a single query. Query results can exhaust all memory; fetching all data can be time-intensive.
Streamable<T>: Fetches all results in a single query. Query results can exhaust all memory; fetching all data can be time-intensive.
Stream<T>: Fetches results chunked (one-by-one or in batches), depending on Stream consumption, in a single query, typically using cursors. Streams must be closed after usage to avoid resource leaks.
Flux<T>: Fetches results chunked (one-by-one or in batches), depending on Flux consumption, in a single query, typically using cursors. The store module must provide reactive infrastructure.
Slice<T>: Fetches Pageable.getPageSize() + 1 elements at Pageable.getOffset(), using one to many queries starting at Pageable.getOffset() and applying limiting. A Slice can only navigate to the next Slice.
Offset-based Window<T>: Fetches limit + 1 elements at OffsetScrollPosition.getOffset(), using one to many queries starting at OffsetScrollPosition.getOffset() and applying limiting. A Window can only navigate to the next Window.
Page<T>: Fetches Pageable.getPageSize() elements at Pageable.getOffset(), using one to many queries starting at Pageable.getOffset() and applying limiting. Additionally, a COUNT(…) query to determine the total number of elements can be required; these often-required COUNT(…) queries are costly.
Keyset-based Window<T>: Fetches limit + 1 elements using a rewritten WHERE condition, with one to many queries starting at KeysetScrollPosition.getKeys() and applying limiting. A Window can only navigate to the next Window.
You can define simple sorting expressions by using property names.
You can concatenate expressions to collect multiple criteria into one expression.
Example 15. Defining sort expressions
Sort sort = Sort.by("firstname").ascending()
.and(Sort.by("lastname").descending());
For a more type-safe way to define sort expressions, start with the type for which to define the sort expression and use method references to define the properties on which to sort.
Example 16. Defining sort expressions by using the type-safe API
TypedSort<Person> person = Sort.sort(Person.class);
Sort sort = person.by(Person::getFirstname).ascending()
.and(person.by(Person::getLastname).descending());
If your store implementation supports Querydsl, you can also use the generated metamodel types to define sort expressions:
Example 17. Defining sort expressions by using the Querydsl API
QSort sort = QSort.by(QPerson.firstname.asc())
.and(QSort.by(QPerson.lastname.desc()));
Scrolling
Scrolling is a more fine-grained approach to iterate through larger result set chunks. Scrolling consists of a stable sort, a scroll type (offset- or keyset-based scrolling), and result limiting. You can define simple sorting expressions by using property names and define static result limiting using the Top or First keyword through query derivation. You can concatenate expressions to collect multiple criteria into one expression.
Scroll queries return a Window<T> that allows obtaining the scroll position to resume to obtain the next Window<T> until your application has consumed the entire query result. Similar to consuming a Java Iterator<List<…>> by obtaining the next batch of results, query result scrolling lets you access a ScrollPosition through Window.positionAt(…).
Window<User> users = repository.findFirst10ByLastnameOrderByFirstname("Doe", ScrollPosition.offset());

do {
  for (User u : users) {
    // consume the user
  }

  // obtain the next Scroll
  users = repository.findFirst10ByLastnameOrderByFirstname("Doe", users.positionAt(users.size() - 1));
} while (!users.isEmpty() && users.hasNext());
WindowIterator<User> users = WindowIterator.of(position -> repository.findFirst10ByLastnameOrderByFirstname("Doe", position))
  .startingAt(ScrollPosition.offset());

while (users.hasNext()) {
  User u = users.next();
  // consume the user
}
Scrolling using Offset
Similar to pagination, offset scrolling uses an offset counter to skip a number of results and lets the data source return only results beginning at the given offset. This simple mechanism avoids large results being sent to the client application. However, most databases require materializing the full query result before your server can return the results.
Example 18. Using OffsetScrollPosition with Repository Query Methods
interface UserRepository extends Repository<User, Long> {
  Window<User> findFirst10ByLastnameOrderByFirstname(String lastname, OffsetScrollPosition position);
}

WindowIterator<User> users = WindowIterator.of(position -> repository.findFirst10ByLastnameOrderByFirstname("Doe", position))
  .startingAt(ScrollPosition.offset()); (1)
Scrolling using Keyset-Filtering
Offset-based scrolling requires most databases to materialize the entire result before your server can return it. So while the client only sees the portion of the requested results, your server needs to build the full result, which causes additional load. Keyset-filtering approaches result subset retrieval by leveraging built-in capabilities of your database, aiming to reduce the computation and I/O requirements for individual queries. This approach maintains a set of keys to resume scrolling by passing keys into the query, effectively amending your filter criteria.
The core idea of keyset-filtering is to start retrieving results using a stable sorting order. Once you want to scroll to the next chunk, you obtain a ScrollPosition that is used to reconstruct the position within the sorted result. The ScrollPosition captures the keyset of the last entity within the current Window. To run the query, reconstruction rewrites the criteria clause to include all sort fields and the primary key so that the database can leverage potential indexes to run the query. The database only needs to construct a much smaller result from the given keyset position, without the need to fully materialize a large result and then skip results until reaching a particular offset.
Keyset-filtering requires the keyset properties (those used for sorting) to be non-nullable. This limitation applies due to the store-specific null value handling of comparison operators as well as the need to run queries against an indexed source. Keyset-filtering on nullable properties will lead to unexpected results.
interface UserRepository extends Repository<User, Long> {
  Window<User> findFirst10ByLastnameOrderByFirstname(String lastname, KeysetScrollPosition position);
}

WindowIterator<User> users = WindowIterator.of(position -> repository.findFirst10ByLastnameOrderByFirstname("Doe", position))
  .startingAt(ScrollPosition.keyset()); (1)
Keyset-filtering works best when your database contains an index that matches the sort fields, hence a static sort works well. Scroll queries applying keyset-filtering require the properties used in the sort order to be returned by the query, and these must be mapped in the returned entity. You can use interface and DTO projections; however, make sure to include all properties that you have sorted by to avoid keyset extraction failures.
When specifying your Sort order, it is sufficient to include sort properties relevant to your query; you do not need to ensure unique query results if you do not want to. The keyset query mechanism amends your sort order by including the primary key (or any remainder of composite primary keys) to ensure each query result is unique.
4.4.5. Limiting Query Results
You can limit the results of query methods by using the first or top keywords, which you can use interchangeably. You can append an optional numeric value to top or first to specify the maximum result size to be returned. If the number is left out, a result size of 1 is assumed. The following example shows how to limit the query size:
Example 20. Limiting the result size of a query with Top and First
User findFirstByOrderByLastnameAsc();
User findTopByOrderByAgeDesc();
Page<User> queryFirst10ByLastname(String lastname, Pageable pageable);
Slice<User> findTop3ByLastname(String lastname, Pageable pageable);
List<User> findFirst10ByLastname(String lastname, Sort sort);
List<User> findTop10ByLastname(String lastname, Pageable pageable);
The limiting expressions also support the Distinct keyword for datastores that support distinct queries. Also, for queries that limit the result set to one instance, wrapping the result in an Optional is supported. If pagination or slicing is applied to a limiting query, the pagination (and the calculation of the number of available pages) is applied within the limited result.
4.4.6. Repository Methods Returning Collections or Iterables
Query methods that return multiple results can use standard Java Iterable, List, and Set. Beyond that, we support returning Spring Data's Streamable, a custom extension of Iterable, as well as collection types provided by Vavr. Refer to the appendix explaining all possible query method return types.
Using Streamable as Query Method Return Type
You can use Streamable as an alternative to Iterable or any collection type. It provides convenience methods to access a non-parallel Stream (missing from Iterable) and the ability to directly ….filter(…) and ….map(…) over the elements and concatenate the Streamable to others:
Example 21. Using Streamable to combine query method results
interface PersonRepository extends Repository<Person, Long> {
  Streamable<Person> findByFirstnameContaining(String firstname);
  Streamable<Person> findByLastnameContaining(String lastname);
}

Streamable<Person> result = repository.findByFirstnameContaining("av")
  .and(repository.findByLastnameContaining("ea"));
Returning Custom Streamable Wrapper Types
Providing dedicated wrapper types for collections is a commonly used pattern to provide an API for a query result that returns multiple elements. Usually, these types are used by invoking a repository method returning a collection-like type and creating an instance of the wrapper type manually. You can avoid that additional step, as Spring Data lets you use these wrapper types as query method return types if they meet the following criteria: the type implements Streamable, and the type exposes either a constructor or a static factory method named of(…) or valueOf(…) that takes Streamable as an argument.
@RequiredArgsConstructor(staticName = "of")
class Products implements Streamable<Product> { (2)

  private final Streamable<Product> streamable;

  public MonetaryAmount getTotal() { (3)
    return streamable.stream()
      .map(Priced::getPrice)
      .reduce(Money.of(0), MonetaryAmount::add);
  }

  @Override
  public Iterator<Product> iterator() { (4)
    return streamable.iterator();
  }
}

interface ProductRepository extends Repository<Product, Long> {
  Products findAllByDescriptionContaining(String text); (5)
}
A wrapper type for a Streamable<Product> that can be constructed by using Products.of(…) (the factory method created with the Lombok annotation). A standard constructor taking the Streamable<Product> will do as well.
The wrapper type exposes an additional API, calculating new values on the Streamable<Product>.
Implement the Streamable interface and delegate to the actual result.
That wrapper type Products can be used directly as a query method return type. You do not need to return Streamable<Product> and manually wrap it after the query in the repository client.
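A usage sketch (assuming the ProductRepository above is injected as productRepository):

Products products = productRepository.findAllByDescriptionContaining("book");
MonetaryAmount total = products.getTotal();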
Support for Vavr Collections
Vavr is a library that embraces functional programming concepts in Java. It ships with a custom set of collection types that you can use as query method return types. You can declare those types (or subtypes thereof) as query method return types, and the corresponding Vavr implementation type is used, depending on the Java type of the actual query result. Alternatively, you can declare Traversable (the Vavr Iterable equivalent), and we then derive the implementation class from the actual return value. That is, a java.util.List is turned into a Vavr List or Seq, a java.util.Set becomes a Vavr LinkedHashSet Set, and so on.
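A declaration sketch (assuming Vavr is on the classpath):

interface UserRepository extends Repository<User, Long> {
  io.vavr.collection.Seq<User> findByLastname(String lastname);
}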
4.4.7. Streaming Query Results
You can process the results of query methods incrementally by using a Java 8 Stream<T> as the return type. Instead of wrapping the query results in a Stream, data store-specific methods are used to perform the streaming, as shown in the following example:
Example 22. Stream the result of a query with Java 8 Stream<T>
@Query("select u from User u")
Stream<User> findAllByCustomQueryAndStream();
Stream<User> readAllByFirstnameNotNull();
@Query("select u from User u")
Stream<User> streamAllPaged(Pageable pageable);
A Stream potentially wraps underlying data store-specific resources and must, therefore, be closed after usage. You can either manually close the Stream by using the close() method or use a Java 7 try-with-resources block, as shown in the following example:

try (Stream<User> stream = repository.findAllByCustomQueryAndStream()) {
  stream.forEach(…);
}
4.4.8. Null Handling of Repository Methods
As of Spring Data 2.0, repository CRUD methods that return an individual aggregate instance use Java 8's Optional to indicate the potential absence of a value. Besides that, Spring Data supports returning the following wrapper types on query methods: com.google.common.base.Optional, scala.Option, and io.vavr.control.Option.
Alternatively, query methods can choose not to use a wrapper type at all. The absence of a query result is then indicated by returning null. Repository methods returning collections, collection alternatives, wrappers, and streams are guaranteed never to return null but rather the corresponding empty representation. See “Repository query return types” for details.
Nullability Annotations
You can express nullability constraints for repository methods by using Spring Framework's nullability annotations. They provide a tooling-friendly approach and opt-in null checks during runtime, as follows:
@NonNullApi: Used on the package level to declare that the default behavior for parameters and return values is, respectively, neither to accept nor to produce null values.
@NonNull: Used on a parameter or return value that must not be null (not needed on a parameter and return value where @NonNullApi applies).
@Nullable: Used on a parameter or return value that can be null.
Spring annotations are meta-annotated with JSR 305 annotations (a dormant but widely used JSR). JSR 305 meta-annotations let tooling vendors (such as IDEA, Eclipse, and Kotlin) provide null-safety support in a generic way, without having to hard-code support for Spring annotations.
To enable runtime checking of nullability constraints for query methods, you need to activate non-nullability on the package level by using Spring's @NonNullApi in package-info.java, as shown in the following example:
Example 24. Declaring Non-nullability in package-info.java
@org.springframework.lang.NonNullApi
package com.acme;
Once non-null defaulting is in place, repository query method invocations get validated at runtime for nullability constraints. If a query result violates the defined constraint, an exception is thrown. This happens when the method would return null but is declared as non-nullable (the default with the annotation defined on the package in which the repository resides). If you want to opt in to nullable results again, selectively use @Nullable on individual methods. Using the result wrapper types mentioned at the start of this section continues to work as expected: an empty result is translated into the value that represents absence.
The following example shows a number of the techniques just described:
Example 25. Using different nullability constraints
package com.acme; (1)
import org.springframework.lang.Nullable;
interface UserRepository extends Repository<User, Long> {

  User getByEmailAddress(EmailAddress emailAddress); (2)

  @Nullable
  User findByEmailAddress(@Nullable EmailAddress emailAddress); (3)

  Optional<User> findOptionalByEmailAddress(EmailAddress emailAddress); (4)
}
The repository resides in a package (or sub-package) for which we have defined non-null behavior.
Throws an EmptyResultDataAccessException when the query does not produce a result. Throws an IllegalArgumentException when the emailAddress handed to the method is null.
Returns null when the query does not produce a result. Also accepts null as the value for emailAddress.
Returns Optional.empty() when the query does not produce a result. Throws an IllegalArgumentException when the emailAddress handed to the method is null.
Nullability in Kotlin-based Repositories
Kotlin has the definition of nullability constraints baked into the language. Kotlin code compiles to bytecode, which does not express nullability constraints through method signatures but rather through compiled-in metadata. Make sure to include the kotlin-reflect JAR in your project to enable introspection of Kotlin's nullability constraints. Spring Data repositories use the language mechanism to define those constraints to apply the same runtime checks, as follows:
Example 26. Using nullability constraints on Kotlin repositories
interface UserRepository : Repository<User, String> {

  fun findByUsername(username: String): User (1)

  fun findByFirstname(firstname: String?): User? (2)
}
The method defines both the parameter and the result as non-nullable (the Kotlin default). The Kotlin compiler rejects method invocations that pass null to the method. If the query yields an empty result, an EmptyResultDataAccessException is thrown.
This method accepts null for the firstname parameter and returns null if the query does not produce a result.
4.4.9. Asynchronous Query Results
You can run repository queries asynchronously by using Spring's asynchronous method running capability. This means the method returns immediately upon invocation, while the actual query occurs in a task that has been submitted to a Spring TaskExecutor. Asynchronous queries differ from reactive queries and should not be mixed. See the store-specific documentation for more details on reactive support. The following example shows a number of asynchronous queries:
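A sketch of such declarations, using @Async with the standard java.util.concurrent future types:

@Async
Future<User> findByFirstname(String firstname); // returns a java.util.concurrent.Future

@Async
CompletableFuture<User> findOneByFirstname(String firstname); // returns a Java 8 CompletableFuture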
4.5. Creating Repository Instances
This section covers how to create instances and bean definitions for the defined repository interfaces.
4.5.1. Java Configuration
Use the store-specific @EnableJpaRepositories annotation on a Java configuration class to define a configuration for repository activation. For an introduction to Java-based configuration of the Spring container, see JavaConfig in the Spring reference documentation. A sample configuration to enable Spring Data repositories resembles the following:
Example 27. Sample annotation-based repository configuration
@Configuration
@EnableJpaRepositories("com.acme.repositories")
class ApplicationConfiguration {

  @Bean
  EntityManagerFactory entityManagerFactory() {
    // …
  }
}
4.5.2. XML Configuration
Each Spring Data module includes a repositories element that lets you define a base package that Spring scans for you, as shown in the following example:
Example 28. Enabling Spring Data repositories via XML
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.springframework.org/schema/data/jpa"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/jpa
https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
<repositories base-package="com.acme.repositories" />
</beans:beans>
In the preceding example, Spring is instructed to scan com.acme.repositories and all its sub-packages for interfaces extending Repository or one of its sub-interfaces. For each interface found, the infrastructure registers the persistence technology-specific FactoryBean to create the appropriate proxies that handle invocations of the query methods. Each bean is registered under a bean name that is derived from the interface name, so an interface of UserRepository would be registered under userRepository. Bean names for nested repository interfaces are prefixed with their enclosing type name. The base package attribute allows wildcards so that you can define a pattern of scanned packages.
4.5.3. Using Filters
By default, the infrastructure picks up every interface that extends the persistence technology-specific Repository sub-interface located under the configured base package and creates a bean instance for it. However, you might want more fine-grained control over which interfaces have bean instances created for them. To do so, use filter elements inside the repository declaration. The semantics are exactly equivalent to the elements in Spring's component filters. For details, see the Spring reference documentation for these elements. For example, to exclude certain interfaces from instantiation as repository beans, you could use the following configuration:
Example 29. Using filters
@Configuration
@EnableJpaRepositories(basePackages = "com.acme.repositories",
    includeFilters = { @Filter(type = FilterType.REGEX, pattern = ".*SomeRepository") },
    excludeFilters = { @Filter(type = FilterType.REGEX, pattern = ".*SomeOtherRepository") })
class ApplicationConfiguration {

  @Bean
  EntityManagerFactory entityManagerFactory() {
    // …
  }
}

<repositories base-package="com.acme.repositories">
  <context:exclude-filter type="regex" expression=".*SomeRepository" />
  <context:include-filter type="regex" expression=".*SomeOtherRepository" />
</repositories>
4.5.4. Standalone Usage
You can also use the repository infrastructure outside of a Spring container (for example, in CDI environments). You still need some Spring libraries in your classpath, but, generally, you can set up repositories programmatically as well. The Spring Data modules that provide repository support ship with a persistence technology-specific RepositoryFactory that you can use, as follows:
Example 30. Standalone usage of the repository factory
RepositoryFactorySupport factory = … // Instantiate factory here
UserRepository repository = factory.getRepository(UserRepository.class);
4.6. Custom Implementations for Spring Data Repositories
Spring Data provides various options to create query methods with little coding.
But when those options don't fit your needs, you can also provide your own custom implementation for repository methods.
This section describes how to do that.
4.6.1. Customizing Individual Repositories
To enrich a repository with custom functionality, you must first define a fragment interface and an implementation for the custom functionality, as follows:
Example 31. Interface for custom repository functionality
interface CustomizedUserRepository {
  void someCustomMethod(User user);
}

Example 32. Implementation of custom repository functionality
class CustomizedUserRepositoryImpl implements CustomizedUserRepository {

  public void someCustomMethod(User user) {
    // Your custom implementation
  }
}
The implementation itself does not depend on Spring Data and can be a regular Spring bean. Consequently, you can use standard dependency injection behavior to inject references to other beans (such as a JdbcTemplate), take part in aspects, and so on.
Then you can let your repository interface extend the fragment interface, as follows:
Example 33. Changes to your repository interface
interface UserRepository extends CrudRepository<User, Long>, CustomizedUserRepository {
  // Declare query methods here
}
Extending the fragment interface with your repository interface combines the CRUD and custom functionality and makes it available to clients.
Spring Data repositories are implemented by using fragments that form a repository composition.
Fragments are the base repository, functional aspects (such as QueryDsl), and custom interfaces along with their implementations.
Each time you add an interface to your repository interface, you enhance the composition by adding a fragment.
The base repository and repository aspect implementations are provided by each Spring Data module.
The following example shows custom interfaces and their implementations:
Example 34. Fragments with their implementations
interface HumanRepository {
  void someHumanMethod(User user);
}

class HumanRepositoryImpl implements HumanRepository {
  public void someHumanMethod(User user) {
    // Your custom implementation
  }
}

interface ContactRepository {
  void someContactMethod(User user);
  User anotherContactMethod(User user);
}

class ContactRepositoryImpl implements ContactRepository {
  public void someContactMethod(User user) {
    // Your custom implementation
  }
  public User anotherContactMethod(User user) {
    // Your custom implementation
  }
}
The following example shows the interface for a custom repository that extends CrudRepository:
Example 35. Changes to your repository interface
interface UserRepository extends CrudRepository<User, Long>, HumanRepository, ContactRepository {
  // Declare query methods here
}
Repositories may be composed of multiple custom implementations that are imported in the order of their declaration.
Custom implementations have a higher priority than the base implementation and repository aspects.
This ordering lets you override base repository and aspect methods and resolves ambiguity if two fragments contribute the same method signature.
Repository fragments are not limited to use in a single repository interface.
Multiple repositories may use a fragment interface, letting you reuse customizations across different repositories.
The following example shows a repository fragment and its implementation:
Example 36. Fragments overriding save(…)
interface CustomizedSave<T> {
  <S extends T> S save(S entity);
}

class CustomizedSaveImpl<T> implements CustomizedSave<T> {

  public <S extends T> S save(S entity) {
    // Your custom implementation
  }
}

interface PersonRepository extends CrudRepository<Person, Long>, CustomizedSave<Person> {
}
Configuration
The repository infrastructure tries to autodetect custom implementation fragments by scanning for classes below the package in which it found a repository. These classes need to follow the naming convention of appending a postfix defaulting to Impl.
The following example shows a configuration that sets a custom value for the postfix:
Example 38. Configuration example
@EnableJpaRepositories(repositoryImplementationPostfix = "MyPostfix")
class Configuration { … }

A configuration that uses the default postfix would try to look up a class called com.acme.repository.CustomizedUserRepositoryImpl to act as a custom repository implementation. The configuration shown above instead tries to look up com.acme.repository.CustomizedUserRepositoryMyPostfix.
Resolution of Ambiguity
If multiple implementations with matching class names are found in different packages, Spring Data uses the bean names to identify which one to use. Given the following two custom implementations for the CustomizedUserRepository shown earlier, the first implementation is used. Its bean name is customizedUserRepositoryImpl, which matches that of the fragment interface (CustomizedUserRepository) plus the postfix Impl.
Example 39. Resolution of ambiguous implementations
package com.acme.impl.one;

class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
  // Your custom implementation
}

package com.acme.impl.two;

@Component("specialCustomImpl")
class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
  // Your custom implementation
}
Manual Wiring
If your custom implementation uses annotation-based configuration and autowiring only, the preceding approach works well, because it is treated as any other Spring bean. If your implementation fragment bean needs special wiring, you can declare the bean and name it according to the conventions described in the preceding section. The infrastructure then refers to the manually defined bean definition by name instead of creating one itself. The following example shows how to manually wire a custom implementation:
Example 40. Manual wiring of custom implementations
class MyClass {
  MyClass(@Qualifier("userRepositoryImpl") UserRepository userRepository) {
    // …
  }
}

<repositories base-package="com.acme.repository" />

<beans:bean id="userRepositoryImpl" class="…">
  <!-- further configuration -->
</beans:bean>
4.6.2. Customize the Base Repository
The approach described in the preceding section requires customization of each repository interface when you want to customize the base repository behavior so that all repositories are affected.
To instead change behavior for all repositories, you can create an implementation that extends the persistence technology-specific repository base class.
This class then acts as a custom base class for the repository proxies, as shown in the following example:
Example 41. Custom repository base class
class MyRepositoryImpl<T, ID>
    extends SimpleJpaRepository<T, ID> {

  private final EntityManager entityManager;

  MyRepositoryImpl(JpaEntityInformation entityInformation,
      EntityManager entityManager) {
    super(entityInformation, entityManager);

    // Keep the EntityManager around to use from the newly introduced methods.
    this.entityManager = entityManager;
  }

  @Transactional
  public <S extends T> S save(S entity) {
    // implementation goes here
  }
}
The class needs to have a constructor of the super class which the store-specific repository factory implementation uses. If the repository base class has multiple constructors, override the one taking an EntityInformation plus a store-specific infrastructure object (such as an EntityManager or a template class).
The final step is to make the Spring Data infrastructure aware of the customized repository base class. In configuration, you can do so by using the repositoryBaseClass attribute, as shown in the following example:
Example 42. Configuring a custom repository base class
@Configuration
@EnableJpaRepositories(repositoryBaseClass = MyRepositoryImpl.class)
class ApplicationConfiguration { … }
4.7. Publishing Events from Aggregate Roots
Entities managed by repositories are aggregate roots. In a Domain-Driven Design application, these aggregate roots usually publish domain events. Spring Data provides an annotation called @DomainEvents that you can use on a method of your aggregate root to make that publication as easy as possible, as shown in the following example:
Example 43. Exposing domain events from an aggregate root
class AnAggregateRoot {

  @DomainEvents (1)
  Collection<Object> domainEvents() {
    // … return events you want to get published here
  }

  @AfterDomainEventPublication (2)
  void callbackMethod() {
    // … potentially clean up domain events list
  }
}
The method that uses @DomainEvents can return either a single event instance or a collection of events. It must not take any arguments. After all events have been published, we have a method annotated with @AfterDomainEventPublication. You can use it to potentially clean the list of events to be published (among other uses).
4.8. Spring Data Extensions
This section documents a set of Spring Data extensions that enable Spring Data usage in a variety of contexts.
Currently, most of the integration is targeted towards Spring MVC.
4.8.1. Querydsl Extension
Querydsl is a framework that enables the construction of statically typed SQL-like queries through its fluent API. Several Spring Data modules offer integration with Querydsl through QuerydslPredicateExecutor, as the following example shows:
Example 44. QuerydslPredicateExecutor interface
public interface QuerydslPredicateExecutor<T> {
  Optional<T> findById(Predicate predicate); (1)
  Iterable<T> findAll(Predicate predicate); (2)
  long count(Predicate predicate); (3)
  boolean exists(Predicate predicate); (4)
  // … more functionality omitted.
}
To use the Querydsl support, extend QuerydslPredicateExecutor on your repository interface, as the following example shows:
Example 45. Querydsl integration on repositories
interface UserRepository extends CrudRepository<User, Long>, QuerydslPredicateExecutor<User> {
}

The preceding declaration lets you write type-safe queries by using Querydsl Predicate instances, as follows:

Predicate predicate = user.firstname.equalsIgnoreCase("dave")
  .and(user.lastname.startsWithIgnoreCase("mathews"));

userRepository.findAll(predicate);
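In the snippet above, user refers to the generated Querydsl metamodel instance for the User entity, typically obtained as follows (assuming Querydsl code generation is set up for the entity):

QUser user = QUser.user;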
4.8.2. Web support
Spring Data modules that support the repository programming model ship with a variety of web support. The web-related components require Spring MVC JARs to be on the classpath. Some of them even provide integration with Spring HATEOAS. In general, the integration support is enabled by using the @EnableSpringDataWebSupport annotation in your JavaConfig configuration class, as the following example shows:
Example 46. Enabling Spring Data web support
@Configuration
@EnableWebMvc
@EnableSpringDataWebSupport
class WebConfiguration {}
<bean class="org.springframework.data.web.config.SpringDataWebConfiguration" />
<!-- If you use Spring HATEOAS, register this one *instead* of the former -->
<bean class="org.springframework.data.web.config.HateoasAwareSpringDataWebConfiguration" />
The @EnableSpringDataWebSupport annotation registers a few components. We discuss those later in this section. It also detects Spring HATEOAS on the classpath and registers integration components (if present) for it as well.
Basic Web Support
The configuration shown in the previous section registers a few basic components:
A DomainClassConverter instance to let Spring MVC resolve instances of repository-managed domain classes from request parameters or path variables.
HandlerMethodArgumentResolver implementations to let Spring MVC resolve Pageable and Sort instances from request parameters.
Jackson modules to de-/serialize types like Point and Distance, or store-specific ones, depending on the Spring Data module used.
Using the DomainClassConverter
Class
The DomainClassConverter
class lets you use domain types in your Spring MVC controller method signatures directly so that you need not manually lookup the instances through the repository, as the following example shows:
Example 47. A Spring MVC controller using domain types in method signatures
@Controller
@RequestMapping("/users")
class UserController {

  @RequestMapping("/{id}")
  String showUserForm(@PathVariable("id") User user, Model model) {

    model.addAttribute("user", user);
    return "userForm";
  }
}
HandlerMethodArgumentResolvers for Pageable and Sort
The configuration snippet shown in the previous section also registers a PageableHandlerMethodArgumentResolver
as well as an instance of SortHandlerMethodArgumentResolver
.
The registration enables Pageable
and Sort
as valid controller method arguments, as the following example shows:
Example 48. Using Pageable as a controller method argument
@Controller
@RequestMapping("/users")
class UserController {

  private final UserRepository repository;

  UserController(UserRepository repository) {
    this.repository = repository;
  }

  @RequestMapping
  String showUsers(Model model, Pageable pageable) {

    model.addAttribute("users", repository.findAll(pageable));
    return "users";
  }
}
The preceding method signature causes Spring MVC to try to derive a Pageable instance from the request parameters by using the following default configuration:
Table 2. Request parameters evaluated for Pageable instances

page: Page you want to retrieve. 0-indexed and defaults to 0.

size: Size of the page you want to retrieve. Defaults to 20.

sort: Properties that should be sorted by in the format property,property(,ASC|DESC)(,IgnoreCase). The default sort direction is case-sensitive ascending. Use multiple sort parameters if you want to switch direction or case sensitivity, for example ?sort=firstname&sort=lastname,asc&sort=city,ignorecase.
To customize this behavior, register a bean that implements the PageableHandlerMethodArgumentResolverCustomizer
interface or the SortHandlerMethodArgumentResolverCustomizer
interface, respectively.
Its customize()
method gets called, letting you change settings, as the following example shows:
@Bean SortHandlerMethodArgumentResolverCustomizer sortCustomizer() {
    return s -> s.setPropertyDelimiter("<-->");
}
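A customizer for the Pageable resolver looks analogous. The following is a minimal sketch; the setOneIndexedParameters setting is just one example of what can be changed:

@Bean PageableHandlerMethodArgumentResolverCustomizer pageableCustomizer() {
    // expose page numbers starting at 1 instead of the default 0
    return p -> p.setOneIndexedParameters(true);
}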
If setting the properties of an existing MethodArgumentResolver
is not sufficient for your purpose, extend either SpringDataWebConfiguration
or the HATEOAS-enabled equivalent, override the pageableResolver()
or sortResolver()
methods, and import your customized configuration file instead of using the @Enable
annotation.
If you need multiple Pageable
or Sort
instances to be resolved from the request (for multiple tables, for example), you can use Spring’s @Qualifier
annotation to distinguish one from another.
The request parameters then have to be prefixed with ${qualifier}_
.
The following example shows the resulting method signature:
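As a minimal sketch (the qualifier names thing1 and thing2 are purely illustrative):

String showUsers(Model model,
      @Qualifier("thing1") Pageable first,
      @Qualifier("thing2") Pageable second) { … }

With this signature, the request parameters then need to be prefixed accordingly, for example thing1_page and thing2_page.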
Hypermedia Support for Page and Slice
Spring HATEOAS ships with a representation model class (PagedModel
/SlicedModel
) that allows enriching the content of a Page
or Slice
instance with the necessary Page
/Slice
metadata as well as links to let the clients easily navigate the pages.
The conversion of a Page
to a PagedModel
is done by an implementation of the Spring HATEOAS RepresentationModelAssembler
interface, called the PagedResourcesAssembler
.
Similarly Slice
instances can be converted to a SlicedModel
using a SlicedResourcesAssembler
.
The following example shows how to use a PagedResourcesAssembler
as a controller method argument, as the SlicedResourcesAssembler
works exactly the same:
Example 49. Using a PagedResourcesAssembler as controller method argument
@Controller
class PersonController {

  private final PersonRepository repository;

  // Constructor omitted

  @GetMapping("/people")
  HttpEntity<PagedModel<Person>> people(Pageable pageable,
    PagedResourcesAssembler assembler) {

    Page<Person> people = repository.findAll(pageable);
    return ResponseEntity.ok(assembler.toModel(people));
  }
}
Enabling the configuration, as shown in the preceding example, lets the PagedResourcesAssembler
be used as a controller method argument.
Calling toModel(…)
on it has the following effects:
The PagedModel
object gets a PageMetadata
instance attached, and it is populated with information from the Page
and the underlying Pageable
.
The PagedModel
may get prev
and next
links attached, depending on the page’s state.
The links point to the URI to which the method maps.
The pagination parameters added to the method match the setup of the PageableHandlerMethodArgumentResolver
to make sure the links can be resolved later.
{ "links" : [
{ "rel" : "next", "href" : "http://localhost:8080/persons?page=1&size=20" }
"content" : [
… // 20 Person instances rendered here
"pageMetadata" : {
"size" : 20,
"totalElements" : 30,
"totalPages" : 2,
"number" : 0
The JSON envelope format shown here does not follow any formally specified structure, is not guaranteed to be stable, and may change at any time.
It is highly recommended to enable rendering as a hypermedia-enabled, official media type supported by Spring HATEOAS, such as HAL.
That can be activated by using its @EnableHypermediaSupport annotation.
Find more information in the Spring HATEOAS reference documentation.
The assembler produced the correct URI and also picked up the default configuration to resolve the parameters into a Pageable
for an upcoming request.
This means that, if you change that configuration, the links automatically adhere to the change.
By default, the assembler points to the controller method it was invoked in, but you can customize that by passing a custom Link to be used as the base for building the pagination links to the overloads of the PagedResourcesAssembler.toModel(…) method.
Spring Data Jackson Modules
The core module, and some of the store-specific ones, ship with a set of Jackson Modules for types, such as org.springframework.data.geo.Distance and org.springframework.data.geo.Point, used by the Spring Data domain.
Those Modules are imported once web support is enabled and com.fasterxml.jackson.databind.ObjectMapper
is available.
During initialization SpringDataJacksonModules
, like the SpringDataJacksonConfiguration
, get picked up by the infrastructure, so that the declared com.fasterxml.jackson.databind.Module
s are made available to the Jackson ObjectMapper
.
Data binding mixins for the following domain types are registered by the common infrastructure.
org.springframework.data.geo.Distance
org.springframework.data.geo.Point
org.springframework.data.geo.Box
org.springframework.data.geo.Circle
org.springframework.data.geo.Polygon
Web Databinding Support
You can use Spring Data projections (described in Projections) to bind incoming request payloads by using either JSONPath expressions (requires Jayway JsonPath) or XPath expressions (requires XmlBeam), as the following example shows:
Example 50. HTTP payload binding using JSONPath or XPath expressions
@ProjectedPayload
public interface UserPayload {

  @XBRead("//firstname")
  @JsonPath("$..firstname")
  String getFirstname();

  @XBRead("/lastname")
  @JsonPath({ "$.lastname", "$.user.lastname" })
  String getLastname();
}
You can use the type shown in the preceding example as a Spring MVC handler method argument or by using ParameterizedTypeReference on one of the methods of RestTemplate.
The preceding method declarations would try to find firstname
anywhere in the given document.
The lastname
XML lookup is performed on the top-level of the incoming document.
The JSON variant of that tries a top-level lastname
first but also tries lastname
nested in a user
sub-document if the former does not return a value.
That way, changes in the structure of the source document can be mitigated easily without having clients calling the exposed methods (usually a drawback of class-based payload binding).
Nested projections are supported as described in Projections.
If the method returns a complex, non-interface type, a Jackson ObjectMapper
is used to map the final value.
For Spring MVC, the necessary converters are registered automatically as soon as @EnableSpringDataWebSupport
is active and the required dependencies are available on the classpath.
For usage with RestTemplate
, register a ProjectingJackson2HttpMessageConverter
(JSON) or XmlBeamHttpMessageConverter
manually.
For more information, see the web projection example in the canonical Spring Data Examples repository.
Querydsl Web Support
For those stores that have Querydsl integration, you can derive queries from the attributes contained in a request query string.
Consider the following query string:
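?firstname=Dave&lastname=Matthews

Given the User type from the earlier examples, this query string can be resolved to a predicate by the QuerydslPredicateArgumentResolver, roughly as follows (a sketch assuming a generated QUser metamodel class):

QUser.user.firstname.eq("Dave").and(QUser.user.lastname.eq("Matthews"))

That predicate can then be bound to a controller method argument and passed straight to the repository, as the following example shows: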
@Autowired UserRepository repository;

@RequestMapping(value = "/", method = RequestMethod.GET)
String index(Model model, @QuerydslPredicate(root = User.class) Predicate predicate, (1)
          Pageable pageable, @RequestParam MultiValueMap<String, String> parameters) {

  model.addAttribute("users", repository.findAll(predicate, pageable));
  return "index";
}
interface UserRepository extends CrudRepository<User, String>,
                                 QuerydslPredicateExecutor<User>,                (1)
                                 QuerydslBinderCustomizer<QUser> {               (2)

  @Override
  default void customize(QuerydslBindings bindings, QUser user) {

    bindings.bind(user.username).first((path, value) -> path.contains(value));   (3)
    bindings.bind(String.class)
      .first((StringPath path, String value) -> path.containsIgnoreCase(value)); (4)
    bindings.excluding(user.password);                                           (5)
  }
}
QuerydslPredicateExecutor provides access to specific finder methods for Predicate.
QuerydslBinderCustomizer defined on the repository interface is automatically picked up and shortcuts @QuerydslPredicate(bindings=…).
Define the binding for the username
property to be a simple contains
binding.
Define the default binding for String
properties to be a case-insensitive contains
match.
Exclude the password
property from Predicate
resolution.
4.8.3. Repository Populators
If you work with the Spring JDBC module, you are probably familiar with the support for populating a DataSource
with SQL scripts.
A similar abstraction is available at the repository level, although it does not use SQL as the data definition language, because it must be store-independent.
Thus, the populators support XML (through Spring’s OXM abstraction) and JSON (through Jackson) to define data with which to populate the repositories.
Assume you have a file called data.json
with the following content:
Example 51. Data defined in JSON
[ { "_class" : "com.acme.Person",
"firstname" : "Dave",
"lastname" : "Matthews" },
{ "_class" : "com.acme.Person",
"firstname" : "Carter",
"lastname" : "Beauford" } ]
You can populate your repositories by using the populator elements of the repository namespace provided in Spring Data Commons.
To populate the preceding data to your PersonRepository
, declare a populator similar to the following:
Example 52. Declaring a Jackson repository populator
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:repository="http://www.springframework.org/schema/data/repository"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/repository
https://www.springframework.org/schema/data/repository/spring-repository.xsd">
<repository:jackson2-populator locations="classpath:data.json" />
</beans>
The preceding declaration causes the data.json
file to be read and deserialized by a Jackson ObjectMapper
.
The type to which the JSON object is unmarshalled is determined by inspecting the _class
attribute of the JSON document.
The infrastructure eventually selects the appropriate repository to handle the object that was deserialized.
To instead use XML to define the data the repositories should be populated with, you can use the unmarshaller-populator
element.
You configure it to use one of the XML marshaller options available in Spring OXM. See the Spring reference documentation for details.
The following example shows how to unmarshall a repository populator with JAXB:
Example 53. Declaring an unmarshalling repository populator (using JAXB)
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:repository="http://www.springframework.org/schema/data/repository"
xmlns:oxm="http://www.springframework.org/schema/oxm"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/repository
https://www.springframework.org/schema/data/repository/spring-repository.xsd
http://www.springframework.org/schema/oxm
https://www.springframework.org/schema/oxm/spring-oxm.xsd">
<repository:unmarshaller-populator locations="classpath:data.json"
unmarshaller-ref="unmarshaller" />
<oxm:jaxb2-marshaller contextPath="com.acme" />
</beans>
5.1. JPA Repositories
This chapter points out the specialties for repository support for JPA. This builds on the core repository support explained in “Working with Spring Data Repositories”. Make sure you have a sound understanding of the basic concepts explained there.
5.1.1. Introduction
This section describes the basics of configuring Spring Data JPA through either annotation-based (Java) configuration or the Spring XML namespace.

Annotation-based Configuration
The Spring Data JPA repositories support can be activated through both JavaConfig as well as a custom XML namespace, as shown in the following example:
Example 54. Spring Data JPA repositories using JavaConfig
@Configuration
@EnableJpaRepositories
@EnableTransactionManagement
class ApplicationConfig {

  @Bean
  public DataSource dataSource() {

    EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
    return builder.setType(EmbeddedDatabaseType.HSQL).build();
  }

  @Bean
  public LocalContainerEntityManagerFactoryBean entityManagerFactory() {

    HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    vendorAdapter.setGenerateDdl(true);

    LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
    factory.setJpaVendorAdapter(vendorAdapter);
    factory.setPackagesToScan("com.acme.domain");
    factory.setDataSource(dataSource());
    return factory;
  }

  @Bean
  public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {

    JpaTransactionManager txManager = new JpaTransactionManager();
    txManager.setEntityManagerFactory(entityManagerFactory);
    return txManager;
  }
}
The preceding configuration class sets up an embedded HSQL database by using the EmbeddedDatabaseBuilder
API of spring-jdbc
. Spring Data then sets up an EntityManagerFactory
and uses Hibernate as the sample persistence provider. The last infrastructure component declared here is the JpaTransactionManager
. Finally, the example activates Spring Data JPA repositories by using the @EnableJpaRepositories
annotation, which essentially carries the same attributes as the XML namespace. If no base package is configured, it uses the one in which the configuration class resides.
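If your repositories live in a package other than the one the configuration class resides in, you can point the scan there explicitly. A minimal sketch (the package name is illustrative):

@Configuration
@EnableJpaRepositories(basePackages = "com.acme.repositories")
@EnableTransactionManagement
class ApplicationConfig {
  // infrastructure beans (DataSource, EntityManagerFactory, transaction manager) as shown above
}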
Spring Namespace
The JPA module of Spring Data contains a custom namespace that allows defining repository beans. It also contains certain features and element attributes that are special to JPA. Generally, the JPA repositories can be set up by using the repositories
element, as shown in the following example:
Example 55. Setting up JPA repositories by using the namespace
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jpa="http://www.springframework.org/schema/data/jpa"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/jpa
https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
<jpa:repositories base-package="com.acme.repositories" />
</beans>
Which is better, JavaConfig or XML? XML is how Spring was configured long ago. In today’s era of fast-growing Java, record types, annotations, and more, new projects typically use as much pure Java as possible. While there is no immediate plan to remove XML support, some of the newest features MAY not be available through XML.
Using the repositories
element looks up Spring Data repositories as described in “Creating Repository Instances”. Beyond that, it activates persistence exception translation for all beans annotated with @Repository, to let exceptions thrown by the JPA persistence providers be converted into Spring’s DataAccessException hierarchy.
Custom Namespace Attributes
Beyond the default attributes of the repositories
element, the JPA namespace offers additional attributes to let you gain more detailed control over the setup of the repositories:
Table 3. Custom JPA-specific attributes of the repositories element

entity-manager-factory-ref: Explicitly wire the EntityManagerFactory to be used with the repositories being detected by the repositories element. Usually used if multiple EntityManagerFactory beans are used within the application. If not configured, Spring Data automatically looks up the EntityManagerFactory bean with the name entityManagerFactory in the ApplicationContext.

transaction-manager-ref: Explicitly wire the PlatformTransactionManager to be used with the repositories being detected by the repositories element. Usually only necessary if multiple transaction managers or EntityManagerFactory beans have been configured. Defaults to a single defined PlatformTransactionManager inside the current ApplicationContext.
By default, Spring Data JPA repositories are default Spring beans.
They are singleton scoped and eagerly initialized.
During startup, they already interact with the JPA EntityManager
for verification and metadata analysis purposes.
Spring Framework supports the initialization of the JPA EntityManagerFactory
in a background thread because that process usually takes up a significant amount of startup time in a Spring application.
To make use of that background initialization effectively, we need to make sure that JPA repositories are initialized as late as possible.
As of Spring Data JPA 2.1 you can now configure a BootstrapMode
(either via the @EnableJpaRepositories
annotation or the XML namespace) that takes the following values:
DEFAULT
(default) — Repositories are instantiated eagerly unless explicitly annotated with @Lazy
.
The lazy initialization only has an effect if no client bean needs an instance of the repository, as that would require the initialization of the repository bean.
LAZY
— Implicitly declares all repository beans lazy and also causes lazy initialization proxies to be created to be injected into client beans.
That means that repositories do not get instantiated if the client bean simply stores the instance in a field and does not make use of the repository during initialization.
Repository instances will be initialized and verified upon first interaction with the repository.
DEFERRED
— Fundamentally the same mode of operation as LAZY
, but triggering repository initialization in response to a ContextRefreshedEvent
so that repositories are verified before the application has completely started.
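As a minimal configuration sketch, the bootstrap mode can be set directly on the enabling annotation (the configuration class name is illustrative):

@Configuration
@EnableJpaRepositories(bootstrapMode = BootstrapMode.DEFERRED)
class DeferredJpaConfig {
}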
Recommendations
If you are not using asynchronous JPA bootstrap, stick with the default bootstrap mode.
In case you bootstrap JPA asynchronously, DEFERRED
is a reasonable default as it will make sure the Spring Data JPA bootstrap only waits for the EntityManagerFactory
setup if that itself takes longer than initializing all other application components.
Still, it makes sure that repositories are properly initialized and validated before the application signals it’s up.
LAZY
is a decent choice for testing scenarios and local development.
Once you are pretty sure that repositories can properly bootstrap, or in cases where you are testing other parts of the application, running verification for all repositories might unnecessarily increase the startup time.
The same applies to local development in which you only access parts of the application that might need to have a single repository initialized.
5.1.2. Persisting Entities
This section describes how to persist (save) entities with Spring Data JPA.
Saving Entities
Saving an entity can be performed with the CrudRepository.save(…)
method. It persists or merges the given entity by using the underlying JPA EntityManager
. If the entity has not yet been persisted, Spring Data JPA saves the entity with a call to the entityManager.persist(…)
method. Otherwise, it calls the entityManager.merge(…)
method.
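The following is a minimal sketch of both paths, assuming the User entity used in this chapter has a generated identifier and a setter for its emailAddress property:

User user = new User();                    // identifier is null, so the entity is considered new
user.setEmailAddress("dave@example.com");
userRepository.save(user);                 // delegates to entityManager.persist(…)

user.setEmailAddress("dave@dmband.com");
userRepository.save(user);                 // the identifier is populated now, so this delegates to entityManager.merge(…)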
Entity State-detection Strategies
Spring Data JPA offers the following strategies to detect whether an entity is new or not:
Version-Property and Id-Property inspection (default):
By default, Spring Data JPA first inspects whether there is a version property of a non-primitive type.
If there is, the entity is considered new if the value of that property is null.
Without such a version property, Spring Data JPA inspects the identifier property of the given entity.
If the identifier property is null, then the entity is assumed to be new.
Otherwise, it is assumed to be not new.
Implementing Persistable
: If an entity implements Persistable
, Spring Data JPA delegates the new detection to the isNew(…)
method of the entity. See the JavaDoc for details.
Implementing EntityInformation
: You can customize the EntityInformation
abstraction used in the SimpleJpaRepository
implementation by creating a subclass of JpaRepositoryFactory
and overriding the getEntityInformation(…)
method accordingly. You then have to register the custom implementation of JpaRepositoryFactory
as a Spring bean. Note that this should be rarely necessary. See the JavaDoc for details.
Option 1 is not available for entities that use manually assigned identifiers and no version attribute, as with those the identifier is always non-null.
A common pattern in that scenario is to use a common base class with a transient flag that defaults to indicating a new instance and to use JPA lifecycle callbacks to flip that flag on persistence operations:
Example 56. A base class for entities with manually assigned identifiers
@MappedSuperclass
public abstract class AbstractEntity<ID> implements Persistable<ID> {

  @Transient
  private boolean isNew = true; (1)

  @Override
  public boolean isNew() {
    return isNew; (2)
  }

  @PrePersist (3)
  @PostLoad
  void markNotNew() {
    this.isNew = false;
  }

  // More code…
}
Declare a flag to hold the new state. Transient so that it’s not persisted to the database.
Return the flag in the implementation of Persistable.isNew()
so that Spring Data repositories know whether to call EntityManager.persist()
or ….merge()
.
Declare a method using JPA entity callbacks so that the flag is switched to indicate an existing entity after a repository call to save(…)
or an instance creation by the persistence provider.
5.1.3. Query Methods
This section describes the various ways to create a query with Spring Data JPA.
Query Lookup Strategies
The JPA module supports defining a query manually as a String or having it derived from the method name.
For derived queries with the predicates IsStartingWith, StartingWith, StartsWith, IsEndingWith, EndingWith, EndsWith, IsNotContaining, NotContaining, NotContains, IsContaining, Containing, and Contains, the respective arguments for these queries get sanitized.
This means that, if the arguments actually contain characters recognized by LIKE as wildcards, these get escaped so that they match only as literals (compare with Using SpEL Expressions).
The escape character used can be configured by setting the escapeCharacter of the @EnableJpaRepositories annotation.
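A minimal configuration sketch (the configuration class name is illustrative; the backslash shown is also the default escape character):

@Configuration
@EnableJpaRepositories(escapeCharacter = '\\')
class JpaConfig {
}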
Declared Queries
Although getting a query derived from the method name is quite convenient, one might face the situation in which either the method name parser does not support the keyword one wants to use or the method name would get unnecessarily ugly. So you can either use JPA named queries through a naming convention (see Using JPA Named Queries for more information) or rather annotate your query method with @Query
(see Using @Query
for details).
Query Creation
Generally, the query creation mechanism for JPA works as described in “Query Methods”. The following example shows what a JPA query method translates into:
Example 57. Query creation from method names
public interface UserRepository extends Repository<User, Long> {

  List<User> findByEmailAddressAndLastname(String emailAddress, String lastname);
}
. Spring Data JPA does a property check and traverses nested properties, as described in “Property Expressions”.
The following table describes the keywords supported for JPA and what a method containing that keyword translates to:
Table 4. Supported keywords inside method names

Keyword | Sample | JPQL snippet
Distinct | findDistinctByLastnameAndFirstname | select distinct … where x.lastname = ?1 and x.firstname = ?2
And | findByLastnameAndFirstname | … where x.lastname = ?1 and x.firstname = ?2
Or | findByLastnameOrFirstname | … where x.lastname = ?1 or x.firstname = ?2
Is, Equals | findByFirstname, findByFirstnameIs, findByFirstnameEquals | … where x.firstname = ?1
Between | findByStartDateBetween | … where x.startDate between ?1 and ?2
LessThan | findByAgeLessThan | … where x.age < ?1
LessThanEqual | findByAgeLessThanEqual | … where x.age <= ?1
GreaterThan | findByAgeGreaterThan | … where x.age > ?1
GreaterThanEqual | findByAgeGreaterThanEqual | … where x.age >= ?1
After | findByStartDateAfter | … where x.startDate > ?1
Before | findByStartDateBefore | … where x.startDate < ?1
IsNull, Null | findByAge(Is)Null | … where x.age is null
IsNotNull, NotNull | findByAge(Is)NotNull | … where x.age not null
Like | findByFirstnameLike | … where x.firstname like ?1
NotLike | findByFirstnameNotLike | … where x.firstname not like ?1
StartingWith | findByFirstnameStartingWith | … where x.firstname like ?1 (parameter bound with appended %)
EndingWith | findByFirstnameEndingWith | … where x.firstname like ?1 (parameter bound with prepended %)
Containing | findByFirstnameContaining | … where x.firstname like ?1 (parameter bound wrapped in %)
OrderBy | findByAgeOrderByLastnameDesc | … where x.age = ?1 order by x.lastname desc
Not | findByLastnameNot | … where x.lastname <> ?1
In | findByAgeIn(Collection<Age> ages) | … where x.age in ?1
NotIn | findByAgeNotIn(Collection<Age> ages) | … where x.age not in ?1
True | findByActiveTrue() | … where x.active = true
False | findByActiveFalse() | … where x.active = false
IgnoreCase | findByFirstnameIgnoreCase | … where UPPER(x.firstname) = UPPER(?1)
DISTINCT can be tricky and does not always produce the results you expect.
For example, select distinct u from User u produces a completely different result than select distinct u.lastname from User u.
In the first case, since you are including User.id, nothing is duplicated; hence you get the whole table, and it contains User objects.
However, the latter query narrows the focus to just User.lastname and finds all unique last names for that table.
This also yields a List<String> result set instead of a List<User> result set.
countDistinctByLastname(String lastname) can also produce unexpected results.
Spring Data JPA derives select count(distinct u.id) from User u where u.lastname = ?1.
Again, since u.id does not hit any duplicates, this query counts all the users that have the given last name, which is the same as countByLastname(String lastname)!
What is the point of this query anyway? To find the number of people with a given last name? To find the number of distinct people with that last name?
To find the number of distinct last names? (That last one is an entirely different query!)
Using distinct sometimes requires writing the query by hand and using @Query to best capture the information you seek, since you may also need a projection to capture the result set.
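For example, hand-written queries for the “distinct last names” cases could look like the following sketch (the method names are illustrative):

public interface UserRepository extends JpaRepository<User, Long> {

  @Query("select distinct u.lastname from User u")
  List<String> findDistinctLastnames();

  @Query("select count(distinct u.lastname) from User u")
  long countDistinctLastnames();
}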
Annotation-based Configuration
Annotation-based configuration has the advantage of not needing another configuration file to be edited, lowering maintenance effort. You pay for that benefit by the need to recompile your domain class for every new query declaration.
Example 58. Annotation-based named query configuration
@Entity
@NamedQuery(name = "User.findByEmailAddress",
query = "select u from User u where u.emailAddress = ?1")
public class User {
}
The examples use the <named-query /> element and @NamedQuery annotation. The queries for these configuration elements have to be defined in the JPA query language. Of course, you can use <named-native-query /> or @NamedNativeQuery too. These elements let you define the query in native SQL at the cost of losing the database platform independence.
XML Named Query Definition
To use XML configuration, add the necessary <named-query />
element to the orm.xml
JPA configuration file located in the META-INF
folder of your classpath. Automatic invocation of named queries is enabled by using some defined naming convention. For more details, see below.
Example 59. XML named query configuration
<named-query name="User.findByLastname">
<query>select u from User u where u.lastname = ?1</query>
</named-query>
Declaring Interfaces
To allow execution of these named queries, specify the UserRepository as follows:
Example 60. Query method declaration in UserRepository
public interface UserRepository extends JpaRepository<User, Long> {

  List<User> findByLastname(String lastname);

  User findByEmailAddress(String emailAddress);
}
Spring Data tries to resolve a call to these methods to a named query, starting with the simple name of the configured domain class, followed by the method name separated by a dot.
So the preceding example would use the named queries defined earlier instead of trying to create a query from the method name.
Using @Query
Using named queries to declare queries for entities is a valid approach and works fine for a small number of queries. As the queries themselves are tied to the Java method that runs them, you can actually bind them directly by using the Spring Data JPA @Query
annotation rather than annotating them to the domain class. This frees the domain class from persistence specific information and co-locates the query to the repository interface.
Queries annotated to the query method take precedence over queries defined using @NamedQuery
or named queries declared in orm.xml
.
The following example shows a query created with the @Query
annotation:
Example 61. Declare query at the query method using @Query
public interface UserRepository extends JpaRepository<User, Long> {

  @Query("select u from User u where u.emailAddress = ?1")
  User findByEmailAddress(String emailAddress);
}
Applying a QueryRewriter
Sometimes, no matter how many features you try to apply, it seems impossible to get Spring Data JPA to apply everything you would like to a query before it is sent to the EntityManager.
You have the ability to get your hands on the query, right before it is sent to the EntityManager, and “rewrite” it. That is, you can make any alterations at the last moment.
Example 62. Declare a QueryRewriter using @Query
public interface MyRepository extends JpaRepository<User, Long> {

  @Query(value = "select original_user_alias.* from SD_USER original_user_alias",
         nativeQuery = true,
         queryRewriter = MyQueryRewriter.class)
  List<User> findByNativeQuery(String param);

  @Query(value = "select original_user_alias from User original_user_alias",
         queryRewriter = MyQueryRewriter.class)
  List<User> findByNonNativeQuery(String param);
}
This example shows both a native (pure SQL) rewriter as well as a JPQL query, both leveraging the same QueryRewriter
.
In this scenario, Spring Data JPA will look for a bean registered in the application context of the corresponding type.
You can write a query rewriter like this:
Example 63. Example QueryRewriter
public class MyQueryRewriter implements QueryRewriter {

  @Override
  public String rewrite(String query, Sort sort) {
    return query.replaceAll("original_user_alias", "rewritten_user_alias");
  }
}
You have to ensure that your QueryRewriter is registered in the application context, whether by applying one of Spring Framework’s @Component-based annotations or by declaring it as a @Bean method inside an @Configuration class.
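A minimal sketch of the @Bean variant, using the MyQueryRewriter shown above (the configuration class name is illustrative):

@Configuration
class QueryRewriterConfiguration {

  @Bean
  QueryRewriter myQueryRewriter() {
    return new MyQueryRewriter();
  }
}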
Another option is to have the repository itself implement the interface.
Example 64. Repository that provides the QueryRewriter
public interface MyRepository extends JpaRepository<User, Long>, QueryRewriter {

  @Query(value = "select original_user_alias.* from SD_USER original_user_alias",
         nativeQuery = true,
         queryRewriter = MyRepository.class)
  List<User> findByNativeQuery(String param);

  @Query(value = "select original_user_alias from User original_user_alias",
         queryRewriter = MyRepository.class)
  List<User> findByNonNativeQuery(String param);

  @Override
  default String rewrite(String query, Sort sort) {
    return query.replaceAll("original_user_alias", "rewritten_user_alias");
  }
}
Depending on what you’re doing with your QueryRewriter
, it may be advisable to have more than one, each registered with the
application context.
In a CDI-based environment, Spring Data JPA will search the BeanManager
for instances of your implementation of
QueryRewriter
.
Using Advanced LIKE Expressions
The query running mechanism for manually defined queries created with @Query
allows the definition of advanced LIKE
expressions inside the query definition, as shown in the following example:
Example 65. Advanced like
expressions in @Query
public interface UserRepository extends JpaRepository<User, Long> {

  @Query("select u from User u where u.firstname like %?1")
  List<User> findByFirstnameEndsWith(String firstname);
}
Native Queries
The @Query
annotation allows for running native queries by setting the nativeQuery
flag to true, as shown in the following example:
Example 66. Declare a native query at the query method using @Query
public interface UserRepository extends JpaRepository<User, Long> {

  @Query(value = "SELECT * FROM USERS WHERE EMAIL_ADDRESS = ?1", nativeQuery = true)
  User findByEmailAddress(String emailAddress);
}
Spring Data JPA does not currently support dynamic sorting for native queries, because it would have to manipulate the actual query declared, which it cannot do reliably for native SQL. You can, however, use native queries for pagination by specifying the count query yourself, as shown in the following example:
public interface UserRepository extends JpaRepository<User, Long> {

  @Query(value = "SELECT * FROM USERS WHERE LASTNAME = ?1",
    countQuery = "SELECT count(*) FROM USERS WHERE LASTNAME = ?1",
    nativeQuery = true)
  Page<User> findByLastname(String lastname, Pageable pageable);
}
Using Sort
Sorting can be done by either providing a PageRequest
or by using Sort
directly. The properties actually used within the Order
instances of Sort
need to match your domain model, which means they need to resolve to either a property or an alias used within the query. The JPQL defines this as a state field path expression.
However, using Sort
together with @Query
lets you sneak in non-path-checked Order
instances containing functions within the ORDER BY
clause. This is possible because the Order
is appended to the given query string. By default, Spring Data JPA rejects any Order
instance containing function calls, but you can use JpaSort.unsafe
to add potentially unsafe ordering.
The following example uses Sort
and JpaSort
, including an unsafe option on JpaSort
:
Example 68. Using Sort and JpaSort

public interface UserRepository extends JpaRepository<User, Long> {

  @Query("select u from User u where u.lastname like ?1%")
  List<User> findByAndSort(String lastname, Sort sort);

  @Query("select u.id, LENGTH(u.firstname) as fn_len from User u where u.lastname like ?1%")
  List<Object[]> findByAsArrayAndSort(String lastname, Sort sort);
}

repo.findByAndSort("lannister", Sort.by("firstname"));                (1)
repo.findByAndSort("stark", Sort.by("LENGTH(firstname)"));            (2)
repo.findByAndSort("targaryen", JpaSort.unsafe("LENGTH(firstname)")); (3)
repo.findByAsArrayAndSort("bolton", Sort.by("fn_len"));               (4)

Valid Sort expression pointing to a property in the domain model.
Invalid Sort containing a function call; throws an exception.
Valid Sort containing an explicitly unsafe Order.
Valid Sort expression pointing to an aliased function.
Scrolling Large Query Results
When working with large data sets, scrolling can help to process those results efficiently without loading all results into memory.
You have multiple options to consume large query results:
Offset-based scrolling.
This is a lighter variant than paging because it does not require the total result count.
Keyset-based scrolling.
This method avoids the shortcomings of offset-based result retrieval by leveraging database indexes.
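A minimal sketch of consuming a scrollable result, assuming the Window, WindowIterator, and ScrollPosition types from Spring Data Commons (the repository method and the value "Doe" are illustrative):

interface UserRepository extends Repository<User, Long> {

  Window<User> findFirst10ByLastnameOrderByFirstname(String lastname, ScrollPosition position);
}

WindowIterator<User> users = WindowIterator.of(
        position -> repository.findFirst10ByLastnameOrderByFirstname("Doe", position))
    .startingAt(ScrollPosition.offset());

while (users.hasNext()) {
  User user = users.next();
  // process the user
}

Starting at ScrollPosition.keyset() instead switches to keyset-based scrolling; make sure the sort order is backed by a suitable index in that case.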
Using Named Parameters
By default, Spring Data JPA uses position-based parameter binding, as described in all the preceding examples.
This makes query methods a little error-prone when refactoring with regard to the parameter position.
To solve this issue, you can use the @Param annotation to give a method parameter a concrete name and bind the name in the query, as shown in the following example:
Example 69. Using named parameters
public interface UserRepository extends JpaRepository<User, Long> {

  @Query("select u from User u where u.firstname = :firstname or u.lastname = :lastname")
  User findByLastnameOrFirstname(@Param("lastname") String lastname,
                                 @Param("firstname") String firstname);
}
Using SpEL Expressions
As of Spring Data JPA release 1.4, we support the usage of restricted SpEL template expressions in manually defined queries that are defined with @Query
. Upon the query being run, these expressions are evaluated against a predefined set of variables. Spring Data JPA supports a variable called entityName
. Its usage is select x from #{#entityName} x
. It inserts the entityName
of the domain type associated with the given repository. The entityName
is resolved as follows: If the domain type has set the name property on the @Entity
annotation, it is used. Otherwise, the simple class-name of the domain type is used.
The following example demonstrates one use case for the #{#entityName}
expression in a query string where you want to define a repository interface with a query method and a manually defined query:
Example 70. Using SpEL expressions in repository query methods - entityName
@Entity
public class User {

  @Id
  @GeneratedValue
  Long id;

  String lastname;
}

public interface UserRepository extends JpaRepository<User, Long> {

  @Query("select u from #{#entityName} u where u.lastname = ?1")
  List<User> findByLastname(String lastname);
}
Of course, you could have just used User in the query declaration directly, but that would require you to change the query as well. The reference to #entityName picks up potential future remappings of the User class to a different entity name (for example, by using @Entity(name = "MyUser")).
Another use case for the #{#entityName}
expression in a query string is if you want to define a generic repository interface with specialized repository interfaces for a concrete domain type. To not repeat the definition of custom query methods on the concrete interfaces, you can use the entity name expression in the query string of the @Query
annotation in the generic repository interface, as shown in the following example:
Example 71. Using SpEL expressions in repository query methods - entityName with inheritance
@MappedSuperclass
public abstract class AbstractMappedType {
  …
  String attribute;
}

@Entity
public class ConcreteType extends AbstractMappedType { … }

@NoRepositoryBean
public interface MappedTypeRepository<T extends AbstractMappedType>
  extends Repository<T, Long> {

  @Query("select t from #{#entityName} t where t.attribute = ?1")
  List<T> findAllByAttribute(String attribute);
}

public interface ConcreteRepository
  extends MappedTypeRepository<ConcreteType> { … }
In the preceding example, the MappedTypeRepository
interface is the common parent interface for a few domain types extending AbstractMappedType
. It also defines the generic findAllByAttribute(…)
method, which can be used on instances of the specialized repository interfaces. If you now invoke findAllByAttribute(…)
on ConcreteRepository
, the query becomes select t from ConcreteType t where t.attribute = ?1
.
SpEL expressions may also be used to manipulate method arguments.
In these SpEL expressions the entity name is not available, but the arguments are.
They can be accessed by name or index as demonstrated in the following example.
Example 72. Using SpEL expressions in repository query methods - accessing arguments.
@Query("select u from User u where u.firstname = ?1 and u.firstname=?#{[0]} and u.emailAddress = ?#{principal.emailAddress}")
List<User> findByFirstnameAndCurrentUserWithCustomQuery(String firstname);
For like-conditions, one often wants to append % to the beginning or the end of a String-valued parameter.
This can be done by prefixing or suffixing a bind parameter marker or a SpEL expression with %.
The following example demonstrates this.
Example 73. Using SpEL expressions in repository query methods - wildcard shortcut.
@Query("select u from User u where u.lastname like %:#{[0]}% and u.lastname like %:lastname%")
List<User> findByLastnameWithSpelExpression(@Param("lastname") String lastname);
When using like-conditions with values coming from an insecure source, the values should be sanitized so they cannot contain any wildcards and thereby allow attackers to select more data than they should be able to.
For this purpose the escape(String)
method is made available in the SpEL context.
It prefixes all instances of _
and %
in the first argument with the single character from the second argument.
In combination with the escape
clause of the like
expression available in JPQL and standard SQL this allows easy cleaning of bind parameters.
Example 74. Using SpEL expressions in repository query methods - sanitizing input values.
@Query("select u from User u where u.firstname like %?#{escape([0])}% escape ?#{escapeCharacter()}")
List<User> findContainingEscaped(String namePart);
Given this method declaration in a repository interface, findContainingEscaped("Peter_") finds Peter_Parker but not Peter Parker.
The escape character used can be configured by setting the escapeCharacter
of the @EnableJpaRepositories
annotation.
Note that the method escape(String)
available in the SpEL context will only escape the SQL and JPQL standard wildcards _
and %
.
If the underlying database or the JPA implementation supports additional wildcards these will not get escaped.
Other Methods
Spring Data JPA offers many ways to build queries.
But sometimes, your query may simply be too complicated for the techniques offered.
In that situation, consider:
Talk directly to the EntityManager
(writing pure HQL/JPQL/EQL/native SQL or using the Criteria API)
Leverage Spring Framework’s JdbcTemplate
(native SQL)
Use another 3rd-party database toolkit.
Modifying Queries
All the previous sections describe how to declare queries to access a given entity or collection of entities.
You can add custom modifying behavior by using the custom method facilities described in “Custom Implementations for Spring Data Repositories”.
As this approach is feasible for comprehensive custom functionality, you can modify queries that only need parameter binding by annotating the query method with @Modifying
, as shown in the following example:
Example 75. Declaring manipulating queries
@Modifying
@Query("update User u set u.firstname = ?1 where u.lastname = ?2")
int setFixedFirstnameFor(String firstname, String lastname);
Doing so triggers the query annotated to the method as an updating query instead of a selecting one. As the EntityManager
might contain outdated entities after the execution of the modifying query, we do not automatically clear it (see the JavaDoc of EntityManager.clear()
for details), since this effectively drops all non-flushed changes still pending in the EntityManager
.
If you wish the EntityManager
to be cleared automatically, you can set the @Modifying
annotation’s clearAutomatically
attribute to true
.
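A minimal sketch, reusing the query from the previous example:

@Modifying(clearAutomatically = true)
@Query("update User u set u.firstname = ?1 where u.lastname = ?2")
int setFixedFirstnameFor(String firstname, String lastname);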
The @Modifying
annotation is only relevant in combination with the @Query
annotation.
Derived query methods or custom methods do not require this annotation.
Derived Delete Queries
Spring Data JPA also supports derived delete queries that let you avoid having to declare the JPQL query explicitly, as shown in the following example:
Example 76. Using a derived delete query
interface UserRepository extends Repository<User, Long> {

  void deleteByRoleId(long roleId);

  @Modifying
  @Query("delete from User u where u.role.id = ?1")
  void deleteInBulkByRoleId(long roleId);
}
Although the deleteByRoleId(…)
method looks like it basically produces the same result as the deleteInBulkByRoleId(…)
, there is an important difference between the two method declarations in terms of the way they are run.
As the name suggests, the latter method issues a single JPQL query (the one defined in the annotation) against the database.
This means even currently loaded instances of User
do not see lifecycle callbacks invoked.
To make sure lifecycle queries are actually invoked, an invocation of deleteByRoleId(…)
runs a query and then deletes the returned instances one by one, so that the persistence provider can actually invoke @PreRemove
callbacks on those entities.
In fact, a derived delete query is a shortcut for running the query and then calling CrudRepository.delete(Iterable<User> users)
on the result and keeping behavior in sync with the implementations of other delete(…)
methods in CrudRepository
.
Applying Query Hints
To apply JPA query hints to the queries declared in your repository interface, you can use the @QueryHints
annotation. It takes an array of JPA @QueryHint
annotations plus a boolean flag to potentially disable the hints applied to the additional count query triggered when applying pagination, as shown in the following example:
Example 77. Using QueryHints with a repository method
public interface UserRepository extends Repository<User, Long> {

  @QueryHints(value = { @QueryHint(name = "name", value = "value")},
              forCounting = false)
  Page<User> findByLastname(String lastname, Pageable pageable);
}
The preceding declaration would apply the configured @QueryHint for that actual query but omit applying it to the count query triggered to calculate the total number of pages.
Adding Comments to Queries
Sometimes, you need to debug a query based upon database performance.
The query your database administrator shows you may look very different from what you wrote using @Query, or it may look nothing like what you presume Spring Data JPA has generated for a custom finder or a query-by-example.
To make this process easier, you can insert custom comments into almost any JPA operation, whether it is a query or another operation, by applying the @Meta annotation.
Example 78. Apply @Meta
annotation to repository operations
public interface RoleRepository extends JpaRepository<Role, Integer> {

  @Meta(comment = "find roles by name")
  List<Role> findByName(String name);

  @Override
  @Meta(comment = "find roles using QBE")
  <S extends Role> List<S> findAll(Example<S> example);

  @Meta(comment = "count roles for a given name")
  long countByName(String name);

  @Override
  @Meta(comment = "exists based on QBE")
  <S extends Role> boolean exists(Example<S> example);
}
This sample repository has a mixture of custom finders as well as overriding the inherited operations from JpaRepository
.
Either way, the @Meta
annotation lets you add a comment
that will be inserted into queries before they are sent to the database.
It’s also important to note that this feature isn’t confined solely to queries. It extends to the count
and exists
operations.
And while not shown, it also extends to certain delete
operations.
Neither JPQL logging nor SQL logging is a standard in JPA, so each provider requires custom configuration, as shown in the sections below.
Activating Hibernate comments
To activate query comments in Hibernate, you must set hibernate.use_sql_comments
to true
.
If you are using Java-based configuration settings, this can be done like this:
Example 79. Java-based JPA configuration
@Bean
public Properties jpaProperties() {

  Properties properties = new Properties();
  properties.setProperty("hibernate.use_sql_comments", "true");
  return properties;
}
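These properties still need to be applied to the persistence unit. A minimal sketch, assuming the entityManagerFactory() bean shown earlier in this chapter:

LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
// … vendor adapter, packages to scan, and data source configured as before
factory.setJpaProperties(jpaProperties());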
Finally, if you are using Spring Boot, then you can set it up inside your application.properties
file:
Example 81. Spring Boot property-based configuration
spring.jpa.properties.hibernate.use_sql_comments=true
Activating EclipseLink comments
To activate query comments in EclipseLink, you must set eclipselink.logging.level.sql
to FINE
.
If you are using Java-based configuration settings, this can be done like this:
Example 82. Java-based JPA configuration
@Bean
public Properties jpaProperties() {

  Properties properties = new Properties();
  properties.setProperty("eclipselink.logging.level.sql", "FINE");
  return properties;
}
Finally, if you are using Spring Boot, then you can set it up inside your application.properties
file:
Example 84. Spring Boot property-based configuration
spring.jpa.properties.eclipselink.logging.level.sql=FINE
Configuring Fetch- and LoadGraphs
The JPA 2.1 specification introduced support for specifying Fetch- and LoadGraphs that we also support with the @EntityGraph
annotation, which lets you reference a @NamedEntityGraph
definition. You can use that annotation on an entity to configure the fetch plan of the resulting query. The type (Fetch
or Load
) of the fetching can be configured by using the type
attribute on the @EntityGraph
annotation. See the JPA 2.1 Spec 3.7.4 for further reference.
The following example shows how to define a named entity graph on an entity:
Example 85. Defining a named entity graph on an entity.
@Entity
@NamedEntityGraph(name = "GroupInfo.detail",
  attributeNodes = @NamedAttributeNode("members"))
public class GroupInfo {

  // default fetch mode is lazy.
  @ManyToMany
  List<GroupMember> members = new ArrayList<GroupMember>();

  …
}
The following example shows how to reference a named entity graph on a repository query method:
Example 86. Referencing a named entity graph definition on a repository query method.
public interface GroupRepository extends CrudRepository<GroupInfo, String> {

  @EntityGraph(value = "GroupInfo.detail", type = EntityGraphType.LOAD)
  GroupInfo getByGroupName(String name);
}
It is also possible to define ad hoc entity graphs by using @EntityGraph
. The provided attributePaths
are translated into the according EntityGraph
without needing to explicitly add @NamedEntityGraph
to your domain types, as shown in the following example:
Example 87. Using an ad hoc entity graph definition on a repository query method.
public interface GroupRepository extends CrudRepository<GroupInfo, String> {

  @EntityGraph(attributePaths = { "members" })
  GroupInfo getByGroupName(String name);
}
Projections
Spring Data query methods usually return one or multiple instances of the aggregate root managed by the repository.
However, it might sometimes be desirable to create projections based on certain attributes of those types.
Spring Data allows modeling dedicated return types, to more selectively retrieve partial views of the managed aggregates.
Imagine a repository and aggregate root type such as the following example:
Example 88. A sample aggregate and repository
class Person {

  @Id UUID id;
  String firstname, lastname;

  Address address;

  static class Address {
    String zipCode, city, street;
  }
}

interface PersonRepository extends Repository<Person, UUID> {

  Collection<Person> findByLastname(String lastname);
}
Now imagine that we want to retrieve the person’s name attributes only.
What means does Spring Data offer to achieve this? The rest of this chapter answers that question.
Interface-based Projections
The easiest way to limit the result of the queries to only the name attributes is by declaring an interface that exposes accessor methods for the properties to be read, as shown in the following example:
Example 89. A projection interface to retrieve a subset of attributes
interface NamesOnly {

  String getFirstname();
  String getLastname();
}
The important bit here is that the properties defined here exactly match properties in the aggregate root.
Doing so lets a query method be added as follows:
Example 90. A repository using an interface based projection with a query method
interface PersonRepository extends Repository<Person, UUID> {

  Collection<NamesOnly> findByLastname(String lastname);
}
Declaring a method in your Repository
that overrides a base method (e.g. declared in CrudRepository
, a store-specific repository interface, or the Simple…Repository
) results in a call to the base method regardless of the declared return type. Make sure to use a compatible return type as base methods cannot be used for projections. Some store modules support @Query
annotations to turn an overridden base method into a query method that then can be used to return projections.
Projections can be used recursively. If you want to include some of the Address
information as well, create a projection interface for that and return that interface from the declaration of getAddress()
, as shown in the following example:
Example 91. A projection interface to retrieve a subset of attributes
interface PersonSummary {

  String getFirstname();
  String getLastname();
  AddressSummary getAddress();

  interface AddressSummary {
    String getCity();
  }
}
On method invocation, the address
property of the target instance is obtained and wrapped into a projecting proxy in turn.
Closed Projections
A projection interface whose accessor methods all match properties of the target aggregate is considered to be a closed projection. The following example (which we used earlier in this chapter, too) is a closed projection:
Example 92. A closed projection
interface NamesOnly {

  String getFirstname();
  String getLastname();
}
If you use a closed projection, Spring Data can optimize the query execution, because we know about all the attributes that are needed to back the projection proxy.
For more details on that, see the module-specific part of the reference documentation.
Open Projections
Accessor methods in projection interfaces can also be used to compute new values by using the @Value
annotation, as shown in the following example:
Example 93. An Open Projection
interface NamesOnly {

  @Value("#{target.firstname + ' ' + target.lastname}")
  String getFullName();
  …
}
The aggregate root backing the projection is available in the target
variable.
A projection interface using @Value
is an open projection.
Spring Data cannot apply query execution optimizations in this case, because the SpEL expression could use any attribute of the aggregate root.
The expressions used in @Value
should not be too complex — you want to avoid programming in String
variables.
For very simple expressions, one option might be to resort to default methods (introduced in Java 8), as shown in the following example:
Example 94. A projection interface using a default method for custom logic
interface NamesOnly {

  String getFirstname();
  String getLastname();

  default String getFullName() {
    return getFirstname().concat(" ").concat(getLastname());
  }
}
This approach requires you to be able to implement logic purely based on the other accessor methods exposed on the projection interface.
A second, more flexible, option is to implement the custom logic in a Spring bean and then invoke that from the SpEL expression, as shown in the following example:
Example 95. Sample Person object
@Component
class MyBean {

  String getFullName(Person person) {
    …
  }
}

interface NamesOnly {

  @Value("#{@myBean.getFullName(target)}")
  String getFullName();
}
Notice how the SpEL expression refers to myBean
and invokes the getFullName(…)
method and forwards the projection target as a method parameter.
Methods backed by SpEL expression evaluation can also use method parameters, which can then be referred to from the expression.
The method parameters are available through an Object
array named args
. The following example shows how to get a method parameter from the args
array:
Example 96. Sample Person object
interface NamesOnly {

  @Value("#{args[0] + ' ' + target.firstname + '!'}")
  String getSalutation(String prefix);
}
Getters in projection interfaces can also use nullable wrapper types for improved null-safety. If the underlying projection value is not null, values are returned using the present-representation of the wrapper type. In case the backing value is null, the getter method returns the empty representation of the used wrapper type.
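A minimal sketch using java.util.Optional as the wrapper type:

interface NamesOnly {

  Optional<String> getFirstname();
}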
Class-based Projections (DTOs)
Another way of defining projections is by using value type DTOs (Data Transfer Objects) that hold properties for the fields that are supposed to be retrieved.
These DTO types can be used in exactly the same way projection interfaces are used, except that no proxying happens and no nested projections can be applied.
If the store optimizes the query execution by limiting the fields to be loaded, the fields to be loaded are determined from the parameter names of the constructor that is exposed.
The following example shows a projecting DTO:
Example 98. A projecting DTO
record NamesOnly(String firstname, String lastname) {
Java Records are ideal to define DTO types since they adhere to value semantics:
All fields are private final
and equals(…)
/hashCode()
/toString()
methods are created automatically.
Alternatively, you can use any class that defines the properties you want to project.
Class-based projections with JPQL are limited to constructor expressions in your JPQL expression, for example SELECT new com.example.NamesOnly(u.firstname, u.lastname) from User u. (Note the use of the fully qualified name for the DTO type.) Such a JPQL expression can also be used in @Query annotations and in named queries. Note that class-based projections do not work with native queries. As a workaround, you can use named queries with ResultSetMapping or the Hibernate-specific ResultTransformer.
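A minimal sketch of such a declaration, assuming a User entity and the NamesOnly record shown above (repository and method names are illustrative):
interface UserRepository extends Repository<User, Long> {

  // JPQL constructor expression mapping the selected columns onto the DTO
  @Query("select new com.example.NamesOnly(u.firstname, u.lastname) from User u where u.lastname = :lastname")
  List<NamesOnly> findNamesByLastname(@Param("lastname") String lastname);
}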
Dynamic Projections
So far, we have used the projection type as the return type or element type of a collection.
However, you might want to select the type to be used at invocation time (which makes it dynamic).
To apply dynamic projections, use a query method such as the one shown in the following example:
Example 99. A repository using a dynamic projection parameter
interface PersonRepository extends Repository<Person, UUID> {

  <T> Collection<T> findByLastname(String lastname, Class<T> type);
}
This way, the method can be used to obtain the aggregates as is or with a projection applied, as shown in the following example:
Example 100. Using a repository with dynamic projections
void someMethod(PersonRepository people) {

  Collection<Person> aggregates =
    people.findByLastname("Matthews", Person.class);

  Collection<NamesOnly> namesOnly =
    people.findByLastname("Matthews", NamesOnly.class);
}
Query parameters of type Class are inspected to determine whether they qualify as a dynamic projection parameter.
If the actual return type of the query equals the generic parameter type of the Class parameter, then the matching Class parameter is not available for use within the query or SpEL expressions.
If you want to use a Class parameter as an actual query argument, make sure to use a different generic parameter, for example Class<?>.
5.1.4. Stored Procedures
The JPA 2.1 specification introduced support for calling stored procedures by using the JPA criteria query API.
We introduced the @Procedure annotation for declaring stored procedure metadata on a repository method.
The examples to follow use the following stored procedure:
Example 101. The definition of the plus1inout
procedure in HSQL DB.
DROP procedure IF EXISTS plus1inout
CREATE procedure plus1inout (IN arg int, OUT res int)
BEGIN ATOMIC
set res = arg + 1;
END
Metadata for stored procedures can be configured by using the NamedStoredProcedureQuery
annotation on an entity type.
Example 102. StoredProcedure metadata definitions on an entity.
@Entity
@NamedStoredProcedureQuery(name = "User.plus1", procedureName = "plus1inout", parameters = {
@StoredProcedureParameter(mode = ParameterMode.IN, name = "arg", type = Integer.class),
@StoredProcedureParameter(mode = ParameterMode.OUT, name = "res", type = Integer.class) })
public class User {}
Note that @NamedStoredProcedureQuery
has two different names for the stored procedure.
name
is the name JPA uses. procedureName
is the name the stored procedure has in the database.
You can reference stored procedures from a repository method in multiple ways.
The stored procedure to be called can either be defined directly by using the value
or procedureName
attribute of the @Procedure
annotation.
This refers directly to the stored procedure in the database and ignores any configuration via @NamedStoredProcedureQuery
.
Alternatively you may specify the @NamedStoredProcedureQuery.name
attribute as the @Procedure.name
attribute.
If neither value
, procedureName
nor name
is configured, the name of the repository method is used as the name
attribute.
The following example shows how to reference an explicitly mapped procedure:
Example 103. Referencing explicitly mapped procedure with name "plus1inout" in database.
@Procedure("plus1inout")
Integer explicitlyNamedPlus1inout(Integer arg);
The following example is equivalent to the previous one but uses the procedureName
alias:
Example 104. Referencing implicitly mapped procedure with name "plus1inout" in database via procedureName
alias.
@Procedure(procedureName = "plus1inout")
Integer callPlus1InOut(Integer arg);
The following is again equivalent to the previous two but uses the method name instead of an explicit annotation attribute.
Example 105. Referencing implicitly mapped named stored procedure "User.plus1" in EntityManager
by using the method name.
@Procedure
Integer plus1inout(@Param("arg") Integer arg);
The following example shows how to reference a stored procedure by referencing the @NamedStoredProcedureQuery.name
attribute.
Example 106. Referencing explicitly mapped named stored procedure "User.plus1IO" in EntityManager
.
@Procedure(name = "User.plus1IO")
Integer entityAnnotatedCustomNamedProcedurePlus1IO(@Param("arg") Integer arg);
If the stored procedure being called has a single out parameter, that parameter may be returned as the return value of the method.
If multiple out parameters are specified in a @NamedStoredProcedureQuery annotation, those can be returned as a Map with the keys being the parameter names given in the @NamedStoredProcedureQuery annotation.
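A minimal sketch of such a method, assuming a hypothetical named stored procedure "User.multiOut" that declares several OUT parameters:
// Hypothetical example: the keys of the returned Map are the OUT parameter
// names declared in the corresponding @NamedStoredProcedureQuery.
@Procedure(name = "User.multiOut")
Map<String, Object> callMultiOut(@Param("arg") Integer arg);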
5.1.5. Specifications
JPA 2 introduces a criteria API that you can use to build queries programmatically. By writing a criteria
, you define the where clause of a query for a domain class. Taking another step back, these criteria can be regarded as a predicate over the entity that is described by the JPA criteria API constraints.
Spring Data JPA takes the concept of a specification from Eric Evans' book, “Domain Driven Design”, following the same semantics and providing an API to define such specifications with the JPA criteria API. To support specifications, you can extend your repository interface with the JpaSpecificationExecutor
interface, as follows:
public interface CustomerRepository extends CrudRepository<Customer, Long>, JpaSpecificationExecutor<Customer> {
}
Additionally, the Specification interface is defined as follows:
public interface Specification<T> {

  Predicate toPredicate(Root<T> root, CriteriaQuery<?> query,
      CriteriaBuilder builder);
}
Specifications can easily be used to build an extensible set of predicates on top of an entity that then can be combined and used with JpaRepository
without the need to declare a query (method) for every needed combination, as shown in the following example:
Example 107. Specifications for a Customer
public class CustomerSpecs {

  public static Specification<Customer> isLongTermCustomer() {
    return (root, query, builder) -> {
      LocalDate date = LocalDate.now().minusYears(2);
      return builder.lessThan(root.get(Customer_.createdAt), date);
    };
  }

  public static Specification<Customer> hasSalesOfMoreThan(MonetaryAmount value) {
    return (root, query, builder) -> {
      // build query here
    };
  }
}
The Customer_
type is a metamodel type generated using the JPA Metamodel generator (see the Hibernate implementation’s documentation for an example).
So the expression, Customer_.createdAt
, assumes the Customer
has a createdAt
attribute of type Date
.
Besides that, we have expressed some criteria on a business requirement abstraction level and created executable Specifications
.
So a client might use a Specification
as follows:
Example 108. Using a simple Specification
List<Customer> customers = customerRepository.findAll(isLongTermCustomer());
Why not create a query for this kind of data access? Using a single Specification
does not gain a lot of benefit over a plain query declaration. The power of specifications really shines when you combine them to create new Specification
objects. You can achieve this through the default methods of Specification
we provide to build expressions similar to the following:
Example 109. Combined Specifications
MonetaryAmount amount = new MonetaryAmount(200.0, Currencies.DOLLAR);
List<Customer> customers = customerRepository.findAll(
isLongTermCustomer().or(hasSalesOfMoreThan(amount)));
With JPA 2.1, the CriteriaBuilder API introduced CriteriaDelete. This is provided through the JpaSpecificationExecutor delete(Specification) API.
Example 110. Using a Specification
to delete entries.
Specification<User> ageLessThan18 = (root, query, cb) -> cb.lessThan(root.get("age").as(Integer.class), 18);
userRepository.delete(ageLessThan18);
The Specification
builds up a criteria where the age
field (cast as an integer) is less than 18
.
Passed on to the userRepository
, it will use JPA’s CriteriaDelete
feature to generate the right DELETE
operation.
It then returns the number of entities deleted.
Introduction
This chapter provides an introduction to Query by Example and explains how to use it.
Query by Example (QBE) is a user-friendly querying technique with a simple interface.
It allows dynamic query creation and does not require you to write queries that contain field names.
In fact, Query by Example does not require you to write queries by using store-specific query languages at all.
Usage
The Query by Example API consists of four parts:
Probe: The actual example of a domain object with populated fields.
ExampleMatcher: The ExampleMatcher carries details on how to match particular fields. It can be reused across multiple Examples.
Example: An Example consists of the probe and the ExampleMatcher. It is used to create the query.
FetchableFluentQuery: A FetchableFluentQuery offers a fluent API that allows further customization of a query derived from an Example. Using the fluent API lets you specify ordering, projection, and result processing for your query.
Query by Example is well suited for several use cases:
Querying your data store with a set of static or dynamic constraints.
Frequent refactoring of the domain objects without worrying about breaking existing queries.
Working independently of the underlying data store API.
Query by Example also has several limitations:
No support for nested or grouped property constraints, such as firstname = ?0 or (firstname = ?1 and lastname = ?2).
Only supports starts/contains/ends/regex matching for strings and exact matching for other property types.
Before getting started with Query by Example, you need to have a domain object, such as the Person class shown in the following example:
Example 111. Sample Person object
public class Person {

  private String id;
  private String firstname;
  private String lastname;
  private Address address;

  // … getters and setters omitted
}
Examples can be built by either using the of
factory method or by using ExampleMatcher
. Example
is immutable.
The following listing shows a simple Example:
Example 112. Simple Example
Person person = new Person(); (1)
person.setFirstname("Dave"); (2)

Example<Person> example = Example.of(person); (3)
Create a new instance of the domain object.
Set the properties to query.
Create the Example.
You can run the example queries by using repositories.
To do so, let your repository interface extend QueryByExampleExecutor<T>
.
The following listing shows an excerpt from the QueryByExampleExecutor
interface:
Example 113. The QueryByExampleExecutor
public interface QueryByExampleExecutor<T> {

  <S extends T> S findOne(Example<S> example);
  <S extends T> Iterable<S> findAll(Example<S> example);

  // … more functionality omitted.
}
Examples are not limited to default settings.
You can specify your own defaults for string matching, null handling, and property-specific settings by using the ExampleMatcher
, as shown in the following example:
Example 114. Example matcher with customized matching
Person person = new Person(); (1)
person.setFirstname("Dave"); (2)
ExampleMatcher matcher = ExampleMatcher.matching() (3)
.withIgnorePaths("lastname") (4)
.withIncludeNullValues() (5)
.withStringMatcher(StringMatcher.ENDING); (6)
Example<Person> example = Example.of(person, matcher); (7)
Create a new instance of the domain object.
Set properties.
Create an ExampleMatcher to expect all values to match. It is usable at this stage even without further configuration.
Construct a new ExampleMatcher
to ignore the lastname
property path.
Construct a new ExampleMatcher
to ignore the lastname
property path and to include null values.
Construct a new ExampleMatcher
to ignore the lastname
property path, to include null values, and to perform suffix string matching.
Create a new Example
based on the domain object and the configured ExampleMatcher
.
By default, the ExampleMatcher
expects all values set on the probe to match.
If you want to get results matching any of the predicates defined implicitly, use ExampleMatcher.matchingAny()
.
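A minimal sketch of matching any predicate instead of all of them (the probe values are illustrative):
Person probe = new Person();
probe.setFirstname("Dave");
probe.setLastname("Matthews");

// matches persons whose firstname OR lastname matches the probe
Example<Person> example = Example.of(probe, ExampleMatcher.matchingAny());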
You can specify behavior for individual properties (such as "firstname" and "lastname" or, for nested properties, "address.city").
You can tune it with matching options and case sensitivity, as shown in the following example:
Example 115. Configuring matcher options
ExampleMatcher matcher = ExampleMatcher.matching()
.withMatcher("firstname", endsWith())
.withMatcher("lastname", startsWith().ignoreCase());
Another way to configure matcher options is to use lambdas (introduced in Java 8).
This approach creates a callback that asks the implementor to modify the matcher.
You need not return the matcher, because configuration options are held within the matcher instance.
The following example shows a matcher that uses lambdas:
Example 116. Configuring matcher options with lambdas
ExampleMatcher matcher = ExampleMatcher.matching()
.withMatcher("firstname", match -> match.endsWith())
.withMatcher("firstname", match -> match.startsWith());
Queries created by Example
use a merged view of the configuration.
Default matching settings can be set at the ExampleMatcher
level, while individual settings can be applied to particular property paths.
Settings that are set on ExampleMatcher
are inherited by property path settings unless they are defined explicitly.
Settings on a property path have higher precedence than default settings.
The following table describes the scope of the various ExampleMatcher
settings:
Table 5. Scope of ExampleMatcher
settings
Fluent API
QueryByExampleExecutor
offers one more method, which we did not mention so far: <S extends T, R> R findBy(Example<S> example, Function<FluentQuery.FetchableFluentQuery<S>, R> queryFunction)
.
As with other methods, it executes a query derived from an Example
.
However, with the second argument, you can control aspects of that execution that you cannot dynamically control otherwise.
You do so by invoking the various methods of the FetchableFluentQuery
in the second argument.
sortBy lets you specify an ordering for your result.
as lets you specify the type to which you want the result to be transformed.
project limits the queried attributes.
first, firstValue, one, oneValue, all, page, stream, count, and exists define what kind of result you get and how the query behaves when more than the expected number of results are available.
Example 117. Use the fluent API to get the last of potentially many results, ordered by lastname.
Optional<Person> match = repository.findBy(example,
    q -> q
        .sortBy(Sort.by("lastname").descending())
        .first()
);
Running an Example
In Spring Data JPA, you can use Query by Example with Repositories, as shown in the following example:
Example 118. Query by Example using a Repository
public interface PersonRepository extends JpaRepository<Person, String> { … }
public class PersonService {

  @Autowired PersonRepository personRepository;

  public List<Person> findPeople(Person probe) {
    return personRepository.findAll(Example.of(probe));
  }
}
The property specifier accepts property names (such as firstname
and lastname
). You can navigate by chaining properties together with dots (address.city
). You can also tune it with matching options and case sensitivity.
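A minimal sketch of tuning a nested property, assuming the Address type shown earlier exposes a city property:
// case-insensitive "contains" matching on the nested address.city path
ExampleMatcher matcher = ExampleMatcher.matching()
    .withMatcher("address.city", match -> match.contains().ignoreCase());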
The following table shows the various StringMatcher
options that you can use and the result of using them on a field named firstname
:
Table 6. StringMatcher
options
5.1.7. Transactionality
By default, methods inherited from CrudRepository
inherit the transactional configuration from SimpleJpaRepository
.
For read operations, the transaction configuration readOnly
flag is set to true
.
All others are configured with a plain @Transactional
so that default transaction configuration applies.
Repository methods that are backed by transactional repository fragments inherit the transactional attributes from the actual fragment method.
If you need to tweak transaction configuration for one of the methods declared in a repository, redeclare the method in your repository interface, as follows:
Example 119. Custom transaction configuration for CRUD
public interface UserRepository extends CrudRepository<User, Long> {

  @Override
  @Transactional(timeout = 10)
  public List<User> findAll();

  // Further query method declarations
}
Another way to alter transactional behaviour is to use a facade or service implementation that (typically) covers more than one repository. Its purpose is to define transactional boundaries for non-CRUD operations. The following example shows how to use such a facade for more than one repository:
Example 120. Using a facade to define transactions for multiple repository calls
@Service
public class UserManagementImpl implements UserManagement {
private final UserRepository userRepository;
private final RoleRepository roleRepository;
public UserManagementImpl(UserRepository userRepository,
    RoleRepository roleRepository) {
  this.userRepository = userRepository;
  this.roleRepository = roleRepository;
}

@Transactional
public void addRoleToAllUsers(String roleName) {

  Role role = roleRepository.findByName(roleName);

  for (User user : userRepository.findAll()) {
    user.addRole(role);
    userRepository.save(user);
  }
}
}
This example causes the call to addRoleToAllUsers(…)
to run inside a transaction (participating in an existing one or creating a new one if none is already running). The transaction configuration at the repositories is then neglected, as the outer transaction configuration determines the actual one used. Note that you must activate <tx:annotation-driven />
or use @EnableTransactionManagement
explicitly to get annotation-based configuration of facades to work.
This example assumes you use component scanning.
Note that the call to save
is not strictly necessary from a JPA point of view, but it should still be there in order to stay consistent with the repository abstraction offered by Spring Data.
Transactional query methods
Declared query methods (including default methods) do not get any transaction configuration applied by default.
To run those methods transactionally, use @Transactional
at the repository interface you define, as shown in the following example:
Example 121. Using @Transactional at query methods
@Transactional(readOnly = true)
interface UserRepository extends JpaRepository<User, Long> {
List<User> findByLastname(String lastname);
@Modifying
@Transactional
@Query("delete from User u where u.active = false")
void deleteInactiveUsers();
}
You can use transactions for read-only queries and mark them as such by setting the readOnly
flag. Doing so does not, however, act as a check that you do not trigger a manipulating query (although some databases reject INSERT
and UPDATE
statements inside a read-only transaction). The readOnly
flag is instead propagated as a hint to the underlying JDBC driver for performance optimizations. Furthermore, Spring performs some optimizations on the underlying JPA provider. For example, when used with Hibernate, the flush mode is set to NEVER
when you configure a transaction as readOnly
, which causes Hibernate to skip dirty checks (a noticeable improvement on large object trees).
5.1.8. Locking
To specify the lock mode to be used, you can use the @Lock
annotation on query methods, as shown in the following example:
Example 122. Defining lock metadata on query methods
interface UserRepository extends Repository<User, Long> {

  // Plain query method
  @Lock(LockModeType.READ)
  List<User> findByLastname(String lastname);
}
This method declaration causes the query being triggered to be equipped with a LockModeType
of READ
. You can also define locking for CRUD methods by redeclaring them in your repository interface and adding the @Lock
annotation, as shown in the following example:
Example 123. Defining lock metadata on CRUD methods
interface UserRepository extends Repository<User, Long> {

  // Redeclaration of a CRUD method
  @Lock(LockModeType.READ)
  List<User> findAll();
}
Basics
Spring Data provides sophisticated support to transparently keep track of who created or changed an entity and when the change happened. To benefit from that functionality, you have to equip your entity classes with auditing metadata that can be defined either using annotations or by implementing an interface.
Additionally, auditing has to be enabled either through Annotation configuration or XML configuration to register the required infrastructure components.
Please refer to the store-specific section for configuration samples.
Annotation-based Auditing Metadata
We provide @CreatedBy
and @LastModifiedBy
to capture the user who created or modified the entity as well as @CreatedDate
and @LastModifiedDate
to capture when the change happened.
Example 124. An audited entity
class Customer {

  @CreatedBy
  private User user;

  @CreatedDate
  private Instant createdDate;

  // … further properties omitted
}
As you can see, the annotations can be applied selectively, depending on which information you want to capture.
The annotations that capture when changes were made can be used on properties of the JDK 8 date and time types, long, Long, and the legacy Java Date and Calendar types.
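A minimal sketch showing all four auditing annotations on one entity, assuming the auditor is captured as a String (the class and field names are illustrative):
class Order {

  @CreatedBy
  private String createdBy;

  @CreatedDate
  private Instant createdDate;

  @LastModifiedBy
  private String lastModifiedBy;

  @LastModifiedDate
  private Instant lastModifiedDate;

  // … further properties omitted
}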
Auditing metadata does not necessarily need to live in the root level entity but can be added to an embedded one (depending on the actual store in use), as shown in the snippet below.
Example 125. Audit metadata in embedded entity
class Customer {

  private AuditMetadata auditingMetadata;

  // … further properties omitted
}

class AuditMetadata {

  @CreatedBy
  private User user;

  @CreatedDate
  private Instant createdDate;
}
AuditorAware
In case you use either @CreatedBy
or @LastModifiedBy
, the auditing infrastructure somehow needs to become aware of the current principal. To do so, we provide an AuditorAware<T>
SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type T
defines what type the properties annotated with @CreatedBy
or @LastModifiedBy
have to be.
The following example shows an implementation of the interface that uses Spring Security’s Authentication
object:
Example 126. Implementation of AuditorAware
based on Spring Security
class SpringSecurityAuditorAware implements AuditorAware<User> {
@Override
public Optional<User> getCurrentAuditor() {
return Optional.ofNullable(SecurityContextHolder.getContext())
.map(SecurityContext::getAuthentication)
.filter(Authentication::isAuthenticated)
.map(Authentication::getPrincipal)
.map(User.class::cast);
  }
}
The implementation accesses the Authentication
object provided by Spring Security and looks up the custom UserDetails
instance that you have created in your UserDetailsService
implementation. We assume here that you are exposing the domain user through the UserDetails
implementation but that, based on the Authentication
found, you could also look it up from anywhere.
ReactiveAuditorAware
When using reactive infrastructure you might want to make use of contextual information to provide @CreatedBy
or @LastModifiedBy
information.
We provide a ReactiveAuditorAware<T>
SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type T
defines what type the properties annotated with @CreatedBy
or @LastModifiedBy
have to be.
The following example shows an implementation of the interface that uses reactive Spring Security’s Authentication
object:
Example 127. Implementation of ReactiveAuditorAware
based on Spring Security
class SpringSecurityAuditorAware implements ReactiveAuditorAware<User> {
@Override
public Mono<User> getCurrentAuditor() {
return ReactiveSecurityContextHolder.getContext()
.map(SecurityContext::getAuthentication)
.filter(Authentication::isAuthenticated)
.map(Authentication::getPrincipal)
.map(User.class::cast);
  }
}
The implementation accesses the Authentication
object provided by Spring Security and looks up the custom UserDetails
instance that you have created in your UserDetailsService
implementation. We assume here that you are exposing the domain user through the UserDetails
implementation but that, based on the Authentication
found, you could also look it up from anywhere.
If you prefer not to use annotations, your domain class can instead implement the Auditable interface, which exposes setter methods for all of the auditing properties. There is also a convenience base class, AbstractAuditable, which you can extend to avoid the need to manually implement the interface methods. Doing so increases the coupling of your domain classes to Spring Data, which might be something you want to avoid. Usually, the annotation-based way of defining auditing metadata is preferred, as it is less invasive and more flexible.
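A minimal sketch of extending the base class, assuming a User auditor type and a Long identifier:
// The inherited members provide createdBy, createdDate, lastModifiedBy, and lastModifiedDate.
class Customer extends AbstractAuditable<User, Long> {

  // … domain properties omitted
}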
General Auditing Configuration
Spring Data JPA ships with an entity listener that can be used to trigger the capturing of auditing information. First, you must register the AuditingEntityListener
to be used for all entities in your persistence contexts inside your orm.xml
file, as shown in the following example:
Example 128. Auditing configuration orm.xml
<persistence-unit-metadata>
<persistence-unit-defaults>
<entity-listeners>
<entity-listener class="….data.jpa.domain.support.AuditingEntityListener" />
</entity-listeners>
</persistence-unit-defaults>
</persistence-unit-metadata>
With orm.xml
suitably modified and spring-aspects.jar
on the classpath, activating auditing functionality is a matter of adding the Spring Data JPA auditing
namespace element to your configuration, as follows:
Example 129. Activating auditing using XML configuration
<jpa:auditing auditor-aware-ref="yourAuditorAwareBean" />
As of Spring Data JPA 1.5, you can enable auditing by annotating a configuration class with the @EnableJpaAuditing
annotation. You must still modify the orm.xml
file and have spring-aspects.jar
on the classpath. The following example shows how to use the @EnableJpaAuditing
annotation:
Example 130. Activating auditing with Java configuration
@Configuration
@EnableJpaAuditing
class Config {
@Bean
public AuditorAware<AuditableUser> auditorProvider() {
return new AuditorAwareImpl();
  }
}
If you expose a bean of type AuditorAware
to the ApplicationContext
, the auditing infrastructure automatically picks it up and uses it to determine the current user to be set on domain types. If you have multiple implementations registered in the ApplicationContext
, you can select the one to be used by explicitly setting the auditorAwareRef
attribute of @EnableJpaAuditing
.
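A minimal sketch of selecting a specific implementation by bean name (the bean and class names are illustrative):
@Configuration
@EnableJpaAuditing(auditorAwareRef = "securityAuditorAware")
class AuditConfig {

  // the bean name referenced by auditorAwareRef above
  @Bean
  AuditorAware<User> securityAuditorAware() {
    return new SpringSecurityAuditorAware();
  }
}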
5.2.1. Using JpaContext
in Custom Implementations
When working with multiple EntityManager
instances and custom repository implementations, you need to wire the correct EntityManager
into the repository implementation class. You can do so by explicitly naming the EntityManager
in the @PersistenceContext
annotation or, if the EntityManager
is @Autowired
, by using @Qualifier
.
As of Spring Data JPA 1.9, Spring Data JPA includes a class called JpaContext
that lets you obtain the EntityManager
by managed domain class, assuming it is managed by only one of the EntityManager
instances in the application. The following example shows how to use JpaContext
in a custom repository:
Example 131. Using JpaContext
in a custom repository implementation
class UserRepositoryImpl implements UserRepositoryCustom {
private final EntityManager em;
@Autowired
public UserRepositoryImpl(JpaContext context) {
    this.em = context.getEntityManagerByManagedType(User.class);
  }
}
5.2.2. Merging persistence units
Spring supports having multiple persistence units. Sometimes, however, you might want to modularize your application but still make sure that all these modules run inside a single persistence unit. To enable that behavior, Spring Data JPA offers a PersistenceUnitManager
implementation that automatically merges persistence units based on their name, as shown in the following example:
Example 132. Using MergingPersistenceUnitmanager
<bean class="….LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitManager">
<bean class="….MergingPersistenceUnitManager" />
</property>
</bean>
Classpath Scanning for @Entity Classes and JPA Mapping Files
A plain JPA setup requires all annotation-mapped entity classes to be listed in orm.xml
. The same applies to XML mapping files. Spring Data JPA provides a ClasspathScanningPersistenceUnitPostProcessor
that gets a base package configured and optionally takes a mapping filename pattern. It then scans the given package for classes annotated with @Entity
or @MappedSuperclass
, loads the configuration files that match the filename pattern, and hands them to the JPA configuration. The post-processor must be configured as follows:
Example 133. Using ClasspathScanningPersistenceUnitPostProcessor
<bean class="….LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitPostProcessors">
<bean class="org.springframework.data.jpa.support.ClasspathScanningPersistenceUnitPostProcessor">
<constructor-arg value="com.acme.domain" />
<property name="mappingFileNamePattern" value="**/*Mapping.xml" />
</bean>
</list>
</property>
</bean>
5.2.3. CDI Integration
Instances of the repository interfaces are usually created by a container, for which Spring is the most natural choice when working with Spring Data. Spring offers sophisticated support for creating bean instances, as documented in Creating Repository Instances. As of version 1.1.0, Spring Data JPA ships with a custom CDI extension that allows using the repository abstraction in CDI environments. The extension is part of the JAR. To activate it, include the Spring Data JPA JAR on your classpath.
You can now set up the infrastructure by implementing a CDI Producer for the EntityManagerFactory
and EntityManager
, as shown in the following example:
class EntityManagerFactoryProducer {

  @Produces
  @ApplicationScoped
  public EntityManagerFactory createEntityManagerFactory() {
    return Persistence.createEntityManagerFactory("my-persistence-unit");
  }

  public void close(@Disposes EntityManagerFactory entityManagerFactory) {
    entityManagerFactory.close();
  }

  @Produces
  @RequestScoped
  public EntityManager createEntityManager(EntityManagerFactory entityManagerFactory) {
    return entityManagerFactory.createEntityManager();
  }

  public void close(@Disposes EntityManager entityManager) {
    entityManager.close();
  }
}
In the preceding example, the container has to be capable of creating JPA EntityManager instances itself; the configuration merely re-exports the EntityManagerFactory and EntityManager as CDI beans.
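The CDI extension then creates repository proxies whenever a bean of a repository type is requested. A minimal sketch of a client, assuming a PersonRepository interface that extends CrudRepository<Person, Long> exists on the classpath:
class RepositoryClient {

  // the Spring Data CDI extension creates and injects the repository instance
  @Inject
  PersonRepository repository;

  void businessMethod() {
    Iterable<Person> people = repository.findAll();
  }
}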