In a lot of cases for the sake of performance I use UPDATE, INSERT, and
DELETE triggers for maintaining a denormalized column in another table that
stores summary information (such as an inventory transaction table and a
total on-hand balance column in an item master table).
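The shape I keep rewriting is roughly this (a sketch only; the table and column names are illustrative, not from a real schema):

```sql
-- Keep ItemMaster.OnHandQty in sync with the sum of
-- InventoryTransaction.Qty per item. Handles multi-row DML
-- via the inserted/deleted pseudo-tables.
CREATE TRIGGER trg_InventoryTransaction_Summary
ON InventoryTransaction
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Add the effect of newly inserted (or post-update) rows.
    UPDATE im
    SET im.OnHandQty = im.OnHandQty + i.TotalQty
    FROM ItemMaster im
    JOIN (SELECT ItemID, SUM(Qty) AS TotalQty
          FROM inserted GROUP BY ItemID) i
      ON i.ItemID = im.ItemID;

    -- Subtract the effect of deleted (or pre-update) rows.
    UPDATE im
    SET im.OnHandQty = im.OnHandQty - d.TotalQty
    FROM ItemMaster im
    JOIN (SELECT ItemID, SUM(Qty) AS TotalQty
          FROM deleted GROUP BY ItemID) d
      ON d.ItemID = im.ItemID;
END
```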
I was thinking, since I've made lots of variations of this, all of which are
basically the same in form, it would be convenient for MS to supply a
special 'summary' column type that automatically monitors the other table's
column being summarized and stays up to date so I don't have to create
triggers every time. I know you can get summaries just by writing an SP or
view to retrieve them, but that's really inefficient when the table being
summarized gets large.
I'm posting this idea on the off chance I've missed a feature in SS2K that
does something like this already... also, does anyone know if this is a
feature known to be coming in Yukon?
TIA,
Bob|||Hi
Have you looked at Computed Columns?
BOL has info on it.
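Something like this (an illustrative sketch; note a computed column derives its value from other columns of the same row, not from rows in another table):

```sql
-- LineTotal is computed from columns in the same row; SQL Server
-- evaluates it on read (SQL 2000 computed columns are not persisted).
CREATE TABLE OrderDetail (
    OrderID   int   NOT NULL,
    ProductID int   NOT NULL,
    Qty       int   NOT NULL,
    UnitPrice money NOT NULL,
    LineTotal AS (Qty * UnitPrice)
)
```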
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Bob" <noone@.nowhere.com> wrote in message
news:OHRHI$CIFHA.2136@.TK2MSFTNGP14.phx.gbl...
> <...>
>
>|||Have you considered using indexed views?
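For the inventory example, a sketch might look like this (illustrative names; indexed views require SCHEMABINDING, two-part object names, and a COUNT_BIG(*) when grouping, and on Standard Edition queries need the NOEXPAND hint to use the index):

```sql
-- SQL Server maintains the aggregate automatically once the
-- unique clustered index exists -- no hand-written triggers.
CREATE VIEW dbo.ItemOnHand
WITH SCHEMABINDING
AS
SELECT ItemID,
       SUM(Qty)     AS OnHandQty,
       COUNT_BIG(*) AS RowCnt   -- required for GROUP BY indexed views
FROM dbo.InventoryTransaction
GROUP BY ItemID
GO
CREATE UNIQUE CLUSTERED INDEX IX_ItemOnHand ON dbo.ItemOnHand (ItemID)
```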
Always maintain a healthy degree of skepticism in the face of arguments
that favour denormalization "for the sake of performance".
Denormalization is a trade off. One query's performance is improved but
elsewhere performance and integrity suffers. Denormalization can also
be a slippery slope toward more denormalization. Certainly if you
denormalize "in a lot of cases" then I suggest you take a long hard
look at whether you have the correct design and implementation. There
are usually better solutions.
David Portas
SQL Server MVP
--|||Second that. You should only denormalize AFTER a performance problem has
surfaced in a normalized data structure, and then only after examining all
the other options. There's almost always a way to improve performance in a
normalized database schema, using properly designed and optimized indices.
And even if that approach isn't sufficient to deal with the problem,
denormalization applied to a fully normalized schema will always be more
effective and successful than denormalization done out of the gate.
"David Portas" wrote:
> <...>
>
>|||>> I use UPDATE, INSERT, and DELETE triggers for maintaining a
denormalized column in another table that stores summary information
(such as an inventory transaction table and a
total on-hand balance column in an item master table). <<
The only reason to store summary information is that this is a data
warehouse, that is so big that the recomputation would be too
expensive. But the data is static in a DW.
I would stick with nice, portable and always correct VIEWs instead of
proprietary triggers that fire every time the table is touched.|||The funny thing is that for the cost of the coding you could probably
upgrade hardware enough to avoid the whole denormalization thing :)
----
Louis Davidson - drsql@.hotmail.com
SQL Server MVP
Compass Technology Management - www.compass.net
Pro SQL Server 2000 Database Design -
http://www.apress.com/book/bookDisplay.html?bID=266
Blog - http://spaces.msn.com/members/drsql/
Note: Please reply to the newsgroups only unless you are interested in
consulting services. All other replies may be ignored :)
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1109888063.340480.52940@.g14g2000cwa.googlegroups.com...
> <...>
>
>|||All of my practical experience indicates to me that denormalization is a
good thing for storing aggregate information; I've seen it to be simple,
nothing but reliable, and dramatically increase read performance without
significantly impacting write performance.
As such, I don't find your argument compelling (partly because I'm not
concerned with portability). I'm clearly not one of such distinction and
experience as yourself; I would ask you to elaborate your point of view so I
can better understand it.
Bob
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1109898958.939621.87460@.o13g2000cwo.googlegroups.com...
> <...>
>
>|||>> All of my practical experience indicates to me that denormalization
is a good thing for storing aggregate information; I've seen it to be
simple, nothing but reliable, and dramatically increase read
performance without significantly impacting write performance. <<
I have seen the opposite. Triggers constantly firing slow things down.
The time to write a value is orders of magnitude greater than the time
to compute it. The extra storage starts to add up. Etc. But more
than that, data integrity gets shot in the foot. Example: Orders carry
the total in a column that is supposed to equal the sum of the order
details. I put a trigger on OrderDetails to modify Orders. But I have
no trigger on Orders, so someone can change that total directly -- and
they will. As I start to use only triggers for data integrity, I find
that more and more business rules need more than one trigger apiece.
That is a little hard in SQL Server and a serious problem in more
powerful SQL products that have BEFORE and AFTER, as well as multiple
trigger options.
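A minimal sketch of that loophole (illustrative names):

```sql
-- The trigger keeps Orders.OrderTotal in sync when OrderDetails changes...
CREATE TRIGGER trg_OrderDetails_Total
ON OrderDetails
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE o
    SET o.OrderTotal = (SELECT COALESCE(SUM(od.Qty * od.UnitPrice), 0)
                        FROM OrderDetails od
                        WHERE od.OrderID = o.OrderID)
    FROM Orders o
    WHERE o.OrderID IN (SELECT OrderID FROM inserted
                        UNION
                        SELECT OrderID FROM deleted);
END
-- ...but nothing stops a direct write from silently breaking the rule:
--   UPDATE Orders SET OrderTotal = 0 WHERE OrderID = 1
-- Closing that gap needs a second trigger, this time on Orders itself.
```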
>> not concerned with portability). I'm clearly not one of such
distinction and experience as yourself; I would ask you to elaborate
your point of view so I can better understand it. <<
Portability and standard code are always issues. You port from one
release of the same software to another. You hire programmers who do
not know your local dialect. Unless the company business plan is to
stagnate and die, you will port and maintain code -- this is 80% of the
total cost of a system over its lifetime. Pros write code for other
people and amateurs write code to amuse themselves.|||"--CELKO--" <jcelko212@.earthlink.net> wrote in message
> <...>
Points taken, thank you.
> Portability and standard code are always issues. You port from one
> release of the same software to another. You hire programmers who do
> not know your local dialect. Unless the company business plan is to
> stagnate and die, you will port and maintain code -- this is 80% of the
> total cost of a system over its lifetime. Pros write code for other
> people and amateurs write code to amuse themselves.
I have no problem telling my customers, 'Microsoft only'. So far I've
received nothing but nods of approval. I will port and maintain code, but
only to other MS products.
I know, I'm going to hell...
Bob|||I have never found anything I couldn't do using Microsoft's triggers. I
agree completely that keeping summary data is usually wrong, and I have only
had one case where it was necessary. We had a manufacturing application
that calculated stuff that took the last fifty readings and the last fifty
calculated values into consideration (one SQL statement was 200+ lines).
Needless to say that it took way too long to recalculate these values on
demand. So we had a trigger call the summary procedure when values were
entered.
Either way, it is always my advice to never denormalize your data for
performance until you have exhausted all of the usual tips first: indexing,
views (indexed too), correct hardware, well-built apps, etc. If it is
needed, it is needed, but seldom is that true.
----
Louis Davidson - drsql@.hotmail.com
SQL Server MVP
Compass Technology Management - www.compass.net
Pro SQL Server 2000 Database Design -
http://www.apress.com/book/bookDisplay.html?bID=266
Blog - http://spaces.msn.com/members/drsql/
Note: Please reply to the newsgroups only unless you are interested in
consulting services. All other replies may be ignored :)
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1110035642.897731.285480@.z14g2000cwz.googlegroups.com...
> <...>
>
Sunday, February 12, 2012
auto-summary columns