Deep Pagination Optimization
Pagination queries have a problem: when the data volume is large and the requested page is far into the result set (i.e. the offset is very large), performance suffers, because the database still has to produce and then discard all of the skipped rows.
Therefore, Jimmer supports deep pagination optimization. You set a threshold, offset-optimizing-threshold, for the pagination offset; once the offset reaches this threshold, the implementation of the pagination query is changed.
Set Threshold
- Using the Spring Boot Starter:

  Modify application.yml (or application.properties; a properties equivalent is shown after this list) and add the config item offset-optimizing-threshold:

  jimmer:
    offset-optimizing-threshold: 1000

- Using the low-level API:
  - Java

    JSqlClient sqlClient = JSqlClient
        .newBuilder()
        .setOffsetOptimizingThreshold(1000)
        // ...Other config omitted...
        .build();

  - Kotlin

    val sqlClient = newKSqlClient {
        setOffsetOptimizingThreshold(1000)
        // ...Other config omitted...
    }
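For reference, if application.properties is used instead of application.yml, the same setting is written as a single flattened key:

jimmer.offset-optimizing-threshold=1000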
If this threshold is not set, the default is Integer.MAX_VALUE, meaning this feature is effectively disabled.
Effectiveness
To discuss this feature, it is enough to focus on the data query and ignore the count query. Consider this simple pagination query:
- Java

  public List<Book> findBooks(int limit, int offset) {
      BookTable table = Tables.BOOK_TABLE;
      return sqlClient
          .createQuery(table)
          .select(table)
          .limit(limit, offset)
          .execute();
  }

- Kotlin

  fun findBooks(limit: Int, offset: Int): List<Book> =
      sqlClient.createQuery(Book::class) {
          select(table)
      }
      .limit(limit, offset)
      .execute()
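As a side note, the offset passed to findBooks is typically derived from a page index and page size. A hypothetical caller (the names pageSize and pageIndex are illustrative, not part of the example above):

// Hypothetical caller; pageIndex is assumed to be zero-based.
int pageSize = 10;
int pageIndex = 100;
// Same call as findBooks(10, 1000): the offset reaches the threshold configured earlier.
List<Book> page = findBooks(pageSize, pageIndex * pageSize);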
Pagination query support differs between dialects; Oracle, for example, is more complex. See Pagination/Usage#Dialects.
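For reference, here is a minimal sketch of pinning the dialect when building the client with the low-level API. It assumes the builder's setDialect method; with the Spring Boot starter the dialect is normally set in application.yml instead:

// A minimal sketch, assuming the builder exposes setDialect;
// the dialect class used here lives in org.babyfish.jimmer.sql.dialect.
JSqlClient sqlClient = JSqlClient
    .newBuilder()
    .setDialect(new org.babyfish.jimmer.sql.dialect.H2Dialect())
    .setOffsetOptimizingThreshold(1000)
    .build();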
To simplify the discussion, assume the dialect is org.babyfish.jimmer.sql.dialect.H2Dialect; under it, pagination queries generate simple SQL:
- When the offset is less than 1000, e.g. calling findBooks(10, 90), the generated SQL is:

  select
      tb_1_.ID,
      tb_1_.NAME,
      tb_1_.EDITION,
      tb_1_.PRICE,
      tb_1_.STORE_ID
  from BOOK tb_1_
  limit ? /* 10 */ offset ? /* 90 */
- When the offset is greater than or equal to 1000, e.g. calling findBooks(10, 1000), the generated SQL is:

  select
      optimize_.ID,
      optimize_.NAME,
      optimize_.EDITION,
      optimize_.PRICE,
      optimize_.STORE_ID
  from (
      select
          tb_1_.ID optimize_core_id_
      from BOOK tb_1_
      limit ? /* 10 */ offset ? /* 1000 */
  ) optimize_core_ inner join BOOK optimize_
      on optimize_.ID = optimize_core_.optimize_core_id_
As you can see, when the offset is too large, Jimmer first queries only the ids of the requested page and then joins back to the table to convert those ids into objects. Because the inner query selects nothing but the ID column, the database can usually satisfy it from an index alone, so the many rows skipped by the large offset never have to be read as full records; this is what makes the technique worthwhile when the data volume is large.
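Conceptually, this is the same as doing the two steps by hand: page over the ids first, then load the objects for those ids. Below is a minimal Java sketch of the manual form, purely for illustration (Jimmer performs the optimization automatically, in a single SQL statement with a self-join, once the threshold is reached); it assumes the Book id is a long and reuses sqlClient and Tables.BOOK_TABLE from the examples above:

// Illustrative only: the same two-step idea written by hand.
BookTable table = Tables.BOOK_TABLE;

// Step 1: page over ids only (cheap even for large offsets).
List<Long> ids = sqlClient
    .createQuery(table)
    .select(table.id())
    .limit(10, 1000)
    .execute();

// Step 2: turn the ids into full objects.
List<Book> books = sqlClient.getEntities().findByIds(Book.class, ids);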
Set to 0
For some databases (e.g. MySQL), when the data volume is large, performance can be poor even when querying the first page (offset 0).
In this case, simply set offset-optimizing-threshold to 0, so that every pagination query uses the optimized form.
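With the Spring Boot starter, that just means lowering the threshold in the same config item shown earlier:

jimmer:
  offset-optimizing-threshold: 0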