Thursday, February 16, 2012

Capacity planning question

I am working on an RFI for a SQL Server 2000 database application, and I am
looking for some general answers to the questions below:
Capacity and resource planning for SQL Server 2000
1) resource requirements for 500, 1000, 2000 concurrent users (database,
memory, CPU, etc.)
2) deployment requirements for 500, 1000, 2000 concurrent users (server
configuration, architecture model, etc.)|||You will probably get very little, except that it depends on the transactions:
are they reads or writes? Do they use transaction control or not? How long
are the transactions? Etc.
Other than that:
SQL Server loves memory.
More processors are generally better than fewer, faster processors (even if
each one is slower).
Multi-core processors are good.
More on-board cache is good.
Keep your transaction logs mirrored on different drives than your data.
Configure disk not only for space but for throughput - you might need more
disk heads to carry the volume, even if you have enough space with fewer
drives.
Just some general guidelines.
--
Wayne Snyder MCDBA, SQL Server MVP
Mariner, Charlotte, NC
I support the Professional Association for SQL Server (PASS) and its
community of SQL Professionals.
"George Kwong" wrote:
> <snip>
|||I will add that, for an installation with those projected sizes and issues,
if you do not bring in someone with adequate experience to assist in the
design, planning, and deployment, you will be making a major mistake.
--
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another Certification Exam
"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:O15VKVplGHA.3816@.TK2MSFTNGP02.phx.gbl...
> <snip>
|||We developed the application in VB, and we are trying to bid on a customer's
job. Is there a way to do some tests to find out the resource usage?
No, we use very minimal transaction control. Transactions are relatively
small, and we do both reads and writes.
Thanks.
"Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
news:10F4EC48-C9AD-45E3-938B-460A95708CB2@.microsoft.com...
> <snip>
|||Hi George,
Are you able to benchmark other customers' installations of your application
& project the performance characteristics from those installations against
the one you're bidding on?
I'd be tracking various perfmon counters & SQL diagnostics for this,
including at least:
Perfmon:
SQL Server Buffer Manager counter object, especially Page Life Expectancy, to
determine memory characteristics
CPU Utilisation - collect the system-wide counter & also the sqlservr process'
CPU utilisation counter
Physical & Logical disk counters - especially disk read/write bytes per second
& disk queues
There are other useful counters, but these are fundamental to pulling
together an informative picture on how your existing installations are
operating under specific hardware specs.
I'd also be taking a close look at how SQL Server is using memory
internally, using dbcc memorystatus to ensure you understand how your
system's using memory.
Performing some SQL Traces might also help you to ensure your application is
well tuned, which is important when drawing benchmark conclusions.
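As a rough illustration (not from the original posts), the collected samples could be sanity-checked with a small script. The sample keys and threshold values below are assumptions - common rules of thumb from this era, not official guidance - so tune them against your own baselines:

```python
# Illustrative sketch: rough health checks over collected perfmon samples.
# Thresholds (PLE < 300 s, CPU > 80%, disk queue > 2) are rules of thumb,
# not official Microsoft guidance - adjust to your own baselines.

def summarize_counters(samples):
    """samples: list of dicts with keys 'ple' (Page Life Expectancy, seconds),
    'cpu_pct' (total CPU %), and 'disk_queue' (average disk queue length)."""
    n = len(samples)
    avg = lambda key: sum(s[key] for s in samples) / n
    findings = []
    if avg('ple') < 300:        # low PLE suggests buffer pool churn
        findings.append('possible memory pressure (low Page Life Expectancy)')
    if avg('cpu_pct') > 80:     # sustained high CPU across the interval
        findings.append('sustained high CPU - consider more/faster processors')
    if avg('disk_queue') > 2:   # per-spindle queuing guideline
        findings.append('disk queuing - consider more spindles for throughput')
    return findings or ['no obvious bottleneck in these samples']

# Example with synthetic samples:
print(summarize_counters([
    {'ple': 120, 'cpu_pct': 45, 'disk_queue': 1.0},
    {'ple': 150, 'cpu_pct': 50, 'disk_queue': 1.5},
]))
```

The point is only to turn raw counter logs into a repeatable check you can run against each benchmarked installation before projecting to 500/1000/2000 users.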
HTH
Regards,
Greg Linwood
SQL Server MVP
"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
> <snip>
|||"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
> We developed the applcation under VB, we are trying to bid on a customer's
> job. is there a way to do some test to find out the resource usage?
>
Yes. MS Press had a book on this for SQL 2000 and I assume there is one for
SQL 2005.
> No, we use very minimum transaction controls. transaction are relative
> small, we do both read and writes.
>
Well, first pass, figure out "how many bytes will be read and written" for each
transaction.
How many transactions/sec do you need to cover?
Things like indices may greatly impact that. As will caching.
But as a first pass, it can give you a sense of things like disk I/O, which is
generally the slowest part of a system.
If you're reading/writing, say, 100 bytes/transaction and doing 100/sec, well,
you need 10,000 bytes/sec of throughput on your disks.
That ain't much.
If you're doing 1,000 bytes/transaction at 1,000/sec, well, that's
another kettle of fish.
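The arithmetic above can be sketched as a quick back-of-envelope calculator (an illustrative sketch, not from the original posts; the inputs are the thread's own example figures):

```python
# Back-of-envelope disk throughput estimate, as described above:
# bytes per transaction times transactions per second.

def required_throughput(bytes_per_txn, txns_per_sec):
    """Rough required disk throughput in bytes/sec."""
    return bytes_per_txn * txns_per_sec

print(required_throughput(100, 100))      # 10000 bytes/sec - "ain't much"
print(required_throughput(1000, 1000))    # 1000000 bytes/sec - another kettle of fish
```

Indices, caching, and log writes will shift the real number, but this gives an order-of-magnitude floor to size disks against.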
> <snip>
|||It is actually the nature of my program that worries me, because my program
does not have a complex transaction requirement, but the most significant part
of my program is writing and reading binary data - namely, a photograph. Each
binary file is typically about 35-50 KB (it is a JPEG image). This in turn
makes all my other data types insignificant by comparison.
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:uw1ZHaylGHA.3752@.TK2MSFTNGP02.phx.gbl...
> <snip>
|||"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:%23oAEynOmGHA.492@.TK2MSFTNGP05.phx.gbl...
> <snip>
Well, it's still basically the same. Figure out how often you'll read/write
those images and calculate from there.
BTW, many people prefer to store images in the file system, not the DB.
There are arguments either way.
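Applying the same back-of-envelope arithmetic to the image workload (an illustrative sketch; the 50 KB figure comes from this thread, but the per-user access rate is a hypothetical placeholder):

```python
# Rough image I/O sizing using the thread's 35-50 KB JPEG figure.
# The access rate per concurrent user is a made-up placeholder -
# substitute measurements from a real installation.

IMAGE_BYTES = 50 * 1024  # worst case from the thread: 50 KB per image

def image_throughput(users, images_per_user_per_sec):
    """Required disk throughput in bytes/sec for image reads/writes."""
    return users * images_per_user_per_sec * IMAGE_BYTES

# e.g. 2000 concurrent users, each touching one image every 10 seconds:
print(image_throughput(2000, 0.1))  # 10240000.0 bytes/sec, i.e. roughly 10 MB/sec
```

At that scale the image traffic does dwarf the row data, which supports the point above: measure the image read/write rate first, and weigh DB storage against the file system accordingly.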
> "Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
> news:uw1ZHaylGHA.3752@.TK2MSFTNGP02.phx.gbl...
> > <snip>
