The C++ application calls the database to look up property data. One
troublesome query is a function that returns a table: it assembles data
from four or five tables through a view that has a join, and then updates
the resulting @table from some other tables. There are several queries
inside the function, which are selected according to which parameters are
supplied (house #, street, zip, or perhaps parcel number, or house #,
street, town, city, etc.). If a lot of parameters are provided and the
property is not in the database, then several queries may be attempted --
it keeps going until it runs out of queries or finds something. Usually
it takes ~1-2 sec for a hit, but maybe a minute in some failure cases,
depending on the distribution of data. (There are ~100 million properties
in the DB.) Some queries operate on the assumption that the input data is
slightly faulty, and take a relatively long time -- e.g., if WHERE
ZIP = @Zip fails, we try WHERE ZIP LIKE SUBSTRING(@Zip, 1, 3) + '%'.
While all this is going on, the application may decide the DB is never
going to return, and time out; it also seems more likely to throw an
exception the longer it has to wait. Is there a way to cause the DB
function to fail if it takes more than a certain amount of time? I could
also recast it as a procedure, check the time consumed after every query,
and abandon the search if a certain amount of time has elapsed.
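The procedure idea described above -- try each query in order, checking the elapsed time before each attempt -- can be sketched as follows. This is a minimal illustration of the control flow only; the query functions are placeholder stubs, not the real T-SQL lookups, and the deadline value is an assumption.

```python
import time

def find_property(queries, deadline_seconds=10.0):
    """Run each lookup in order; return the first hit, or None if the
    queries are exhausted or the time budget runs out."""
    start = time.monotonic()
    for query in queries:
        if time.monotonic() - start > deadline_seconds:
            return None  # out of time: abandon the search
        result = query()
        if result is not None:
            return result  # first hit wins
    return None  # ran out of queries without a hit

# Example with stub queries (the real ones would hit the database):
exact_zip = lambda: None            # e.g. WHERE ZIP = @Zip -- no hit
fuzzy_zip = lambda: {"parcel": 42}  # e.g. WHERE ZIP LIKE ... -- hit
print(find_property([exact_zip, fuzzy_zip]))  # → {'parcel': 42}
```

Checking the clock between queries caps the total work without interrupting a query that is already running, which matches the "abandon the search" behaviour described above.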
Thanks in advance,
Jim Geissman

jim_geissman@countrywide.com (Jim Geissman) wrote in message news:<b84bf9dc.0403031505.2838a043@posting.google.com>...
> The C++ application calls the database to look up property data. One
> troublesome query is a function that returns a table...
> [rest of original question snipped]
You don't give any information about your version of MSSQL, or the
client library you're using, but you may be able to set a suitable
timeout period on the client side. Alternatively, look at the "query
governor cost limit option" in Books Online -- it stops queries from
running when their estimated cost exceeds a given number of seconds.
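For reference, the server-wide setting mentioned above is an advanced option set through sp_configure. A minimal sketch (the limit of 30 seconds is an illustrative value, not a recommendation):

```sql
-- Enable advanced options, then set the query governor cost limit.
-- The limit is compared against the optimizer's *estimated* cost of a
-- query, expressed in seconds, before the query is allowed to run.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 30;
RECONFIGURE;
```

Because this is server-wide, it affects every query on the instance, not just the property lookup function.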
Simon

jim_geissman@countrywide.com (Jim Geissman) wrote in message news:<b84bf9dc.0403031505.2838a043@posting.google.com>...
> The C++ application calls the database to look up property data. One
> troublesome query is a function that returns a table...
> [rest of original question snipped]
See "remote query timeout option" in the help text. However, relying
on this may cause inconsistent behaviour. This design pattern can also
put heavy load on your database. As a suggestion, keep two separate
sets of queries: one that assumes good data (much quicker, and
presumably what you want most of the time) and one that tolerates
incorrect data (slower, but not used very often). On your screen, have
a checkbox to indicate which search option to use. Alternatively,
perform better validation on the data before submitting the form.

That sounds interesting. I'll look into it.
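The up-front validation idea suggested above can be sketched as follows. This is a minimal illustration assuming US-format ZIP codes; a malformed input could be routed straight to the slower fuzzy-match queries instead of wasting an exact-match attempt.

```python
import re

# Matches a 5-digit ZIP, optionally with a +4 extension.
_ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def looks_like_valid_zip(zip_code):
    """True if the input matches a 5-digit (or ZIP+4) pattern."""
    return bool(_ZIP_RE.match(zip_code.strip()))

print(looks_like_valid_zip("91302"))  # well-formed: try the fast path
print(looks_like_valid_zip("9130"))   # too short: route to fuzzy search
```

Validating before the form is submitted keeps obviously bad input from triggering the expensive fallback cascade at all.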
> You don't give any information about your version of MSSQL...
> [rest of Simon's reply snipped]