Hi,
I hope someone can shed some light on a project that I've
been given, and on how I can go about managing the structure
of the tables, because it's driving me mad trying to find a
good solution.
In short, I'm in the middle of creating a messaging system
with approximately 20,000 users. One user will post a
message (text only) onto the server, and they can decide
who it goes to. This 'distribution' can be one, or many,
recipients. It isn't email, I hasten to add, just a simple
messaging system.
The trouble is that the sender needs to know if the
recipient(s) have read the message.
It sounds simple, but there are three possibilities, two of
which could prove a huge overhead on the server.
The first is to create a table called 'Messages'; each
message has a unique identifier and is replicated
to each and every recipient. So if we had 20,000
users, half of whom had received the same message, the
table could look like...
messageID  recipientID  read
1          1            false
1          2            false
1          3            true
1          4            false
[...]
1          20000        true
As you can appreciate, that could be up to 20,000 records
per message! There could be over a hundred messages a day,
and the user list is growing day by day!
The other option would be an individual table per user.
This would reduce the size of any one table dramatically, but
we'd end up with 20,000+ tables, for example
Table 1
messageID  read
1          true
4          false
19         false
Table 2
messageID  read
2          false
91         false
109        true
etc...
I'm sure there has to be an easier way of doing this, any
ideas would be GREATLY appreciated.
TIA|||Jason
I would think your message table would be the best way.
You are estimating up to 2,000,000 rows a day; that is not
a lot for SQL Server to handle. With a clustered index on
the recipientID (possibly a composite with messageID as
well), it should run just fine. The real question is how
long you keep the messages. While 2,000,000 rows is not
a lot for SQL Server, after a few months of that kind of
volume, the table would become quite large.
What are the plans for keeping or deleting or archiving
old messages?
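For what it's worth, a first cut at that index and a rolling purge might look something like this (a sketch only; the table and column names, and the 90-day window, are my assumptions, not anything Jason has posted):

```sql
-- Sketch only: MessageRecipients / sentDate are assumed names.
-- Composite clustered index as suggested above.
CREATE CLUSTERED INDEX IX_MessageRecipients
    ON MessageRecipients (recipientID, messageID)

-- Batched purge of messages past the retention window, so the
-- transaction log stays manageable (SQL Server 2000 idiom).
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM MessageRecipients
    WHERE messageID IN (SELECT messageID FROM Messages
                        WHERE sentDate < DATEADD(day, -90, GETDATE()))
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0
```

The SET ROWCOUNT loop deletes in chunks of 10,000 so one giant delete doesn't blow up the log or block everyone.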
Regards
John|||Jason
/*
The other option would be an individual table per user,
This would reduce the size of one table dramatically, but
we'd end up with 20000+ tables, for example
*/
Don't do it. It is a very bad design decision.
If I understand you correctly, one user may have many messages, and one message
may have many users. Am I right?
I'd go with creating three tables: Users, Messages, and Message_Users.

create table Users
(
userid int not null primary key,
username varchar(50) not null
.....
......
)

create table Messages
(
messageid int not null primary key,
messagename varchar(8000)
.....
.....
)

-- to establish a many-to-many relationship between user and message
-- (created last, since it references the two tables above)
create table Message_Users
(
id int not null primary key,
userid int not null references Users (userid),
messageid int not null references Messages (messageid),
[read] bit not null default 0
)
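With a design like that, the sender's question "who has read it?" is a single join; a sketch (bracketing [read] since it is too close to a reserved word for comfort):

```sql
-- Who has (and hasn't) read message 1, using the schema above.
SELECT u.username, mu.[read]
FROM Message_Users mu
JOIN Users u ON u.userid = mu.userid
WHERE mu.messageid = 1

-- Or a per-message summary for the sender: read vs. unread counts.
SELECT mu.messageid,
       SUM(CASE WHEN mu.[read] = 1 THEN 1 ELSE 0 END) AS read_count,
       SUM(CASE WHEN mu.[read] = 0 THEN 1 ELSE 0 END) AS unread_count
FROM Message_Users mu
GROUP BY mu.messageid
```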
"Jason Hirst" <jasonmhirst@.hotmail.com> wrote in message
news:0a9901c35b28$240ae8a0$a101280a@.phx.gbl...
|||> The trouble is, that the sender needs to know if the
> recipient(s) have read the message.
For all 20,000 recipients? I suggest you have a good look at your business
requirements to see if this is really necessary.
Anyway, you should go with having one table for all the messages. The amount
of data you store per row is very small, only 9 bytes, and 2 million rows
per day is not something SQL Server 2000 can't handle. You also have to look
at how long you want to keep the information and when you can start cleaning
out the table or moving it to an archive table. I assume nobody is really
interested in messages more than a few months old.
Imagine you would use 20,000 different tables, one for each user to keep
track of each message s/he has read; how would you get the information for the
sender about who has read the message?
SELECT read FROM table1 WHERE message_id = 1
UNION ALL
SELECT read FROM table2 WHERE message_id = 1
UNION ALL
SELECT read FROM table3 WHERE message_id = 1
etc.. ?
--
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
"Jason Hirst" <jasonmhirst@.hotmail.com> wrote in message
news:0a9901c35b28$240ae8a0$a101280a@.phx.gbl...
|||Thanks for the reply, John.
I'd estimate that messages would be kept for 90 days. We
would try and enforce some housekeeping, but of course you
know users... they don't always listen, and it's a growing
concern (no pun intended) that the size of the table will
reach a stupid size. I agree 2,000,000 rows a day may not
be much for SQL to handle, but with 20,000 users, each
posting to one another, or to 'distribution lists', the
constant access on this table alone would be incredible.
I didn't know if there was some kind of algorithm to work
it out. I've had scraps of paper on my floor all night,
trying to work it out.
I did think about having one huge hexadecimal string to
indicate those who had received the message. For example,
34DC9AC145ED, at 4 hex digits (2 bytes) per record, would break
down to user IDs 13532, 39617 and 17901. And I could say
that the aforementioned code, because it contained all
three user IDs, would mean that all three had NOT read
the message, but 34DC45ED would mean that one user (39617)
had read the message because they weren't listed. Thus,
the size of this field would reduce as more and more
people read the message that it was intended for!! (I
hope I explained that right!!)
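In T-SQL terms, the idea would be something like this (just a sketch to illustrate, not a finished design):

```sql
-- Sketch of the hex-string idea: each unread user is 4 hex digits
-- (2 bytes), so it only covers user IDs up to 65535, and REPLACE is
-- naive here: it could match across the boundary of two adjacent IDs.
DECLARE @unread varchar(8000)
SET @unread = '34DC9AC145ED'   -- users 13532, 39617 and 17901 unread

-- User 39617 (hex 9AC1) reads the message: strip their ID out.
SET @unread = REPLACE(@unread, '9AC1', '')
SELECT @unread                 -- '34DC45ED'
```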
Any ideas on how Outlook handles things like receipts in
its tracking system?
TIA
|||It's true to say that all 20,000 recipients could receive
at LEAST one message a day from someone else.
The 'message' table would be structured similar to...
messageID    title     description  sender
(autoident)  nvarchar  nvarchar     int
Where the sender field is joined to the userID in the
users table.
A 'similar' table to track messages would be...
messageID recipient
(int) (int)
I agree that there should be some good housekeeping to
delete messages, but this has to be done by BOTH parties,
because the sender would need to clean out their 'sent'
items, and the recipients clean out their 'received'
items.
|||> but with 20000 users, each
> posting to oneanother, or 'distribution lists', the
> constant access on this table alone would be incredible.
Yes, but they would all be accessing different rows; the only two people who
would access the same row are the sender and the recipient. If you, as John
suggested, have indexes on this table, access would be very quick, and it
would hardly ever occur that users have to wait for each other to access a row.
Most of the access would be SELECTs, which can be concurrent anyway, and the
only changes that happen to the table are the inserts of new messages and the
update of an existing message when a recipient reads it, but these never
conflict.
--
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
"Jason Hirst" <jasonmhirst@.hotmail.com> wrote in message
news:09c801c35b2e$c99452b0$a401280a@.phx.gbl...
|||200,000,000!!!
Looks like I'm going to have to have a serious rethink on
this (as is my client!!)
Thanks for the help.
>--Original Message--
>Jason
>90 days, that's under 200,000,000 if the system is
>used 7 days a week, under 150,000,000 if only 5 days a
>week. It will have very small rows, just a lot of them.
>With the correct index and regular rebuilds, weekly at
>least, it should run all right as long as you get a
>powerful enough system.
>As well as a good design I think you need to do some
>serious sizing and modeling, to make sure you have a
>suitable server.
>You will need a good maintenance window to perform the
>deletions and the index rebuilds.
>Good luck
>John
>.
>|||Jason,
I'm sorry to sound condescending, but you probably need a serious read-up on
the capabilities and architecture of SQL Server (Inside SQL Server by Kalen
Delaney is a very good book) instead of a serious rethink about your
application. As John and I have been arguing for quite a few posts, 200
million small rows is not something to worry very much about on SQL Server;
we are not talking about Access here :) I am willing to bet you that the
alternative designs you have been thinking about will perform far worse
than the design we have been supporting, even though they might have fewer
rows. Don't be "scared" of the large number of rows; SQL Server is designed
to handle them.
The largest table in the systems I am responsible for has about 25 million
rows and 60 columns (compared to 200 million rows and 3 columns in your
scenario), and although performance could probably still be improved
somewhat, it is acceptable. It is running on a 4-processor box with 4 GB of
memory, a reasonably heavy box, but not out of the ordinary, and it is
definitely not being pushed to its limits.
The key to good performance in this case is having proper indexes and time
to do index rebuilds.
--
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
"Jason Hirst" <jasonmhirst@.hotmail.com> wrote in message
news:0b7b01c35b36$15289a90$a101280a@.phx.gbl...
|||Jason,
I would assume that the users do not randomly select 1,000 recipients among
the 20,000. They may send a message to a group that has a certain logical
meaning, like all in dept. x, all with last names starting with D, etc.
Beyond that, they may also send a message to everyone, or to a handful (one or
several) of interactively selected recipients.
If this assumption holds true, I suggest the following:
A recipients table, holding all the recipients;
A recipient-groups table, holding recipient groups and their members (one record
for each member);
A messageSent table, holding messageId, recipient (one record for each
recipient group, and for each recipient specified outside a group);
A messageRead table, holding messageId, recipient (one record for each
individual recipient who has read the message).
This way you significantly reduce the original sent entries, and keep a read
record for those who did read. You could consider putting the first two tables
into one (by now you should realize that this recipients-and-groups structure is very
similar to SQL Server's own users and groups).
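A rough sketch of these tables (the column names are just illustrative, not a finished design):

```sql
-- Sketch of the group-based layout described above; names are examples.
-- Sending to a group writes ONE MessageSent row instead of one per member.
CREATE TABLE Recipients (
    recipientID int NOT NULL PRIMARY KEY,
    recipientName varchar(50) NOT NULL,
    isGroup bit NOT NULL DEFAULT 0      -- a recipient can be a group
)

CREATE TABLE GroupMembers (
    groupID int NOT NULL REFERENCES Recipients (recipientID),
    memberID int NOT NULL REFERENCES Recipients (recipientID),
    PRIMARY KEY (groupID, memberID)
)

-- One row per group (or per individually picked user) a message went to.
CREATE TABLE MessageSent (
    messageID int NOT NULL,
    recipientID int NOT NULL REFERENCES Recipients (recipientID),
    PRIMARY KEY (messageID, recipientID)
)

-- One row per individual user who has actually read the message.
CREATE TABLE MessageRead (
    messageID int NOT NULL,
    userID int NOT NULL,
    PRIMARY KEY (messageID, userID)
)
```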
Quentin
"Jason Hirst" <jasonmhirst@.hotmail.com> wrote in message
news:0a9901c35b28$240ae8a0$a101280a@.phx.gbl...
|||Well, I've had a jiggle about with the design and have
come up with a compromise regarding it all.
I 'think' that the best way to approach this is to have
half SQL and half XML. Let me explain why and how...
The system, I know 100%, will get abused by several members,
namely by 'messaging' all 20,000 contacts on a very regular
basis. By regular, I mean 2 to 3 times a day. In one day
alone, there could be over 200,000 rows added to the table.
Therefore, I thought about having messages in a 'holding
pen', which, when read, are saved on the server as an XML
file, with the option to download later. This would allow
the workload of the SQL server to stay at normal levels, and the
size of the table would not reach horrific proportions.
When the user logs in to the system, it checks to see if
there are any new messages on the server (easy enough); if
there are, then it does NOT transfer them to the XML file
until they have read the message. This would also save a
field for "read message", because if the message was on the
server, then obviously it hasn't been read. I appreciate
I'm not saving a great deal of space; 200,000 fields of
boolean datatype wouldn't be a great threat to the server,
but every penny counts!
Thanks for the help, it has been appreciated.
|||So if I understand you correctly, you are going to save the cost of a 1-byte
column in a database table by storing 200,000 XML files?
I am quite disappointed that you come here on this newsgroup for expert
advice (which is totally free as well), you get good advice from some
people, and basically you just ignore it and go on believing what you
believed before. You 'believe' things because you have never tested whether the
solution that has been proposed here would work properly. It would only take
about half a day to set up a test with a million-row table, so you could
have solid results to base your design decisions on.
Let me repeat it one more time:
200,000 rows per day is NOT A PROBLEM for SQL Server.
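Setting up such a test takes only a few statements, e.g. (illustrative names; this assumes a 20,000-row Users table to draw IDs from):

```sql
-- Illustrative million-row test: 50 messages x 20,000 users.
CREATE TABLE MessageRecipients (
    messageID int NOT NULL,
    recipientID int NOT NULL,
    [read] bit NOT NULL DEFAULT 0,
    PRIMARY KEY (messageID, recipientID)
)

DECLARE @msg int
SET @msg = 1
WHILE @msg <= 50
BEGIN
    INSERT INTO MessageRecipients (messageID, recipientID)
    SELECT @msg, userid FROM Users   -- assumed 20,000-row Users table
    SET @msg = @msg + 1
END

-- Then time the queries you actually care about, for example:
SELECT COUNT(*)
FROM MessageRecipients
WHERE messageID = 25 AND [read] = 0
```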
--
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
"Jason Hirst" <jasonmhirst@.hotmail.com> wrote in message
news:081801c35bed$63dc8900$a401280a@.phx.gbl...
|||Hi,
Why not just take note of those who have not read your message?
There will be fewer records, maybe even zero.
Remove those ID records once they have been read.
Won't that be better?
Your table size will always stay very small.
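In other words, something like this (table and column names are just examples):

```sql
-- Only UNREAD rows are stored; reading a message deletes the row,
-- so the table stays small. Names here are just examples.
CREATE TABLE UnreadMessages (
    messageID int NOT NULL,
    recipientID int NOT NULL,
    PRIMARY KEY (messageID, recipientID)
)

-- Recipient 42 reads message 1:
DELETE FROM UnreadMessages
WHERE messageID = 1 AND recipientID = 42

-- The sender checks who still has NOT read message 1:
SELECT recipientID FROM UnreadMessages WHERE messageID = 1
```

The trade-off is that once rows are deleted, you no longer know who the message was originally sent to unless that is kept elsewhere.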
Best Regards,
Wind
*** Sent via Developersdex http://www.developersdex.com ***