I'm working on an app that extracts records from an Oracle database and exports them as a single tab-delimited file.
However, when I attempt to read from the DB using JdbcPagingItemReader and write to a file, I only get the number of records specified in pageSize. So if the pageSize is 10, I get a file with 10 lines and the rest of the records seem to be ignored. So far I haven't been able to find out what is really going on, and any help would be most welcome.
Here is the JdbcPagingItemReader config:
<bean id="databaseItemReader"
      class="org.springframework.batch.item.database.JdbcPagingItemReader">
    <property name="dataSource" ref="dataSourceTest" />
    <property name="queryProvider">
        <bean class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
            <property name="dataSource" ref="dataSourceTest" />
            <property name="selectClause" value="SELECT *" />
            <property name="fromClause" value="FROM *****" />
            <property name="whereClause" value="where snapshot_id=:name" />
            <property name="sortKey" value="snapshot_id" />
        </bean>
    </property>
    <property name="parameterValues">
        <map>
            <entry key="name" value="18596" />
        </map>
    </property>
    <property name="pageSize" value="100000" />
    <property name="rowMapper">
        <bean class="com.mkyong.ViewRowMapper" />
    </property>
</bean>
<bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
    <!-- write to this csv file -->
    <property name="resource" value="file:cvs/report.csv" />
    <property name="shouldDeleteIfExists" value="true" />
    <property name="lineAggregator">
        <bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
            <property name="delimiter" value=";" />
            <property name="fieldExtractor">
                <bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
                    <property name="names" value="ID" />
                </bean>
            </property>
        </bean>
    </property>
</bean>
<job id="testJob" xmlns="http://www.springframework.org/schema/batch">
    <step id="step1">
        <tasklet>
            <chunk reader="databaseItemReader" writer="itemWriter" commit-interval="1" />
        </tasklet>
    </step>
</job>
Thanks.
Your settings look incorrect: the whereClause column and the sortKey cannot be the same column, because paging works hand in hand with your sort column.
Check what the data in the corresponding table looks like.
With your configuration, Spring Batch will build and execute queries like the ones below (shown in generic TOP-n form).
The first query, with pageSize = 10, looks like:

SELECT TOP 10 * FROM tableName WHERE snapshot_id = 18596 ORDER BY snapshot_id ASC

The second and remaining queries depend on your sort key: after each page, the reader remembers the last sort-key value it saw, which here is always 18596, so the next query becomes

SELECT TOP 10 * FROM tableName WHERE snapshot_id = 18596 AND snapshot_id > 18596 ORDER BY snapshot_id ASC

and so on. Try running that query against the database — doesn't it look weird? :-) No row can satisfy snapshot_id = 18596 AND snapshot_id > 18596, so every page after the first comes back empty.
If you don't need the where clause, remove it.
And if possible, keep pageSize and commit-interval the same, because that is how you decide how much to read and how much to persist at a time. But of course that depends on your design, so you decide.
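To see why sorting on the same column you filter on stops after one page, here is a small self-contained sketch (plain Java collections, no Spring or database — the table and values are made up to mirror the question) simulating the keyset paging that JdbcPagingItemReader performs. Every row shares snapshot_id = 18596, so once the first page is read, the "next page" predicate snapshot_id > 18596 matches nothing:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class KeysetPagingDemo {
    // All rows share the same sort-key value, like the OP's table filtered by snapshot_id.
    record Row(int snapshotId, int id) {}

    // Keyset paging: fetch up to pageSize rows with sortKey > lastKey, ordered by sortKey.
    static List<Row> page(List<Row> table, int lastKey, int pageSize) {
        return table.stream()
                .filter(r -> r.snapshotId() > lastKey)
                .sorted(Comparator.comparingInt(Row::snapshotId))
                .limit(pageSize)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> table = IntStream.range(0, 25)
                .mapToObj(i -> new Row(18596, i))
                .collect(Collectors.toList());

        int pageSize = 10;
        List<Row> first = page(table, Integer.MIN_VALUE, pageSize);
        // The reader remembers the last sort-key value of the page: 18596.
        int lastKey = first.get(first.size() - 1).snapshotId();
        List<Row> second = page(table, lastKey, pageSize);

        System.out.println(first.size());   // 10
        System.out.println(second.size());  // 0 -- no row satisfies snapshot_id > 18596
    }
}
```

With a unique sort key (e.g. a primary key), lastKey advances on every page and the remaining rows are eventually read.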
@Bean
@StepScope
ItemReader<Account> itemReader(@Value("#{jobParameters[id]}") String id) {
    JdbcPagingItemReader<Account> databaseReader = new JdbcPagingItemReader<>();
    databaseReader.setDataSource(dataSource);
    databaseReader.setPageSize(100);
    databaseReader.setFetchSize(100);
    PagingQueryProvider queryProvider = createQueryProvider(id);
    databaseReader.setQueryProvider(queryProvider);
    databaseReader.setRowMapper(new BeanPropertyRowMapper<>(Account.class));
    return databaseReader;
}
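The snippet above calls a createQueryProvider(id) helper that isn't shown. A plausible sketch using Spring Batch's SqlPagingQueryProviderFactoryBean might look like the following — the ACCOUNTS table and ACCOUNT_ID column are assumed names, and the sort key is deliberately a unique column rather than the filtered one:

```java
import java.util.Collections;
import org.springframework.batch.item.database.Order;
import org.springframework.batch.item.database.PagingQueryProvider;
import org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean;

// Hypothetical helper -- table and column names are assumptions, not from the original post.
private PagingQueryProvider createQueryProvider(String id) throws Exception {
    SqlPagingQueryProviderFactoryBean factory = new SqlPagingQueryProviderFactoryBean();
    factory.setDataSource(dataSource);
    factory.setSelectClause("SELECT *");
    factory.setFromClause("FROM ACCOUNTS");
    // The :id placeholder is bound via the reader's parameterValues map, e.g.
    // databaseReader.setParameterValues(Collections.singletonMap("id", id));
    factory.setWhereClause("WHERE SNAPSHOT_ID = :id");
    // Sort on a unique key, NOT on the column already fixed by the where clause.
    factory.setSortKeys(Collections.singletonMap("ACCOUNT_ID", Order.ASCENDING));
    return factory.getObject();
}
```

The factory bean picks the right paging SQL dialect (ROWNUM for Oracle) from the DataSource automatically.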