Many times I have come across this with PostgreSQL: if an aggregate function like sum() or count() is used on a single column of a table, the LIMIT clause makes the query much faster.
Just try this:
select count(*) from my_table limit 1;
This should be faster than just:
select count(*) from my_table;
The reason, I believe, is that LIMIT cuts short the scan of data pages and reduces the I/O for the client program (in my case, psql). What do you people say?
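A simple way to check the claim is psql's \timing meta-command, or comparing the plans with EXPLAIN ANALYZE. This is only a sketch of how one might measure it, assuming a populated table named my_table as in the example above; no timings are claimed here:

```sql
-- In psql, enable client-side timing of each statement
\timing

-- Run both forms and compare the reported times
SELECT count(*) FROM my_table;
SELECT count(*) FROM my_table LIMIT 1;

-- Or inspect the execution plans directly;
-- if LIMIT really changed the scan, the plans would differ
EXPLAIN ANALYZE SELECT count(*) FROM my_table;
EXPLAIN ANALYZE SELECT count(*) FROM my_table LIMIT 1;
```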
2 responses to “PostgreSQL and limit”
I tried this out on PostgreSQL 7.4.6 with a table that has more than 1 million rows.
W/O limit: 7,324.890 ms
With limit: 7,366.951 ms
With a difference of less than 50 ms I’d say that there is virtually no difference between these two. Either you are running this on a table with a lot of columns or perhaps a newer version of PostgreSQL (8.0?). I’m not sure that either of those would make a difference though.
Joseph Scott
http://joseph.randomnetworks.com/
Joseph, you are right. As you will have seen, the SELECT query in my case uses *, i.e. all columns are targeted;
this way I am already increasing PostgreSQL's work by making it select all columns of the table.
But if I use a query something like
select col_1,col_2 from col_table limit 1;
then by targeting only those columns I am already optimizing the query and reducing its time, and in this case LIMIT does not show any improvement.
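To make the distinction concrete: LIMIT can genuinely cut work only when the query produces many rows, because the executor can stop early. With an aggregate there is a single output row regardless, so every input row must still be read. A hedged sketch, reusing the col_table example above:

```sql
-- No aggregate: the scan can stop after one row, so LIMIT helps
SELECT col_1, col_2 FROM col_table LIMIT 1;

-- With an aggregate: count(*) already returns exactly one row,
-- so every row is still scanned and LIMIT trims nothing
SELECT count(*) FROM col_table LIMIT 1;
```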