
Thursday, February 16, 2012

capacity planning.

Hi,
I want to start a new project and need to do database sizing, capacity
planning, server requirements, etc. for SQL 2000.
Can anyone help me in getting documents/material for this?
Regards,
sunny|||You can start with the capacity planning chapter in the SQL operations
guide:
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/operate/opsguide/sqlops6.asp
--
Jacco Schalkwijk
SQL Server MVP
"sunny" <anonymous@.discussions.microsoft.com> wrote in message
news:BD7035EB-33DB-4302-8EEA-9274F6E0FAAD@.microsoft.com...
> Hi,
> I want to start a new project and need to do database
> sizing, capacity planning, server requirements, etc. for SQL 2000.
> Can anyone help me in getting documents/material for this?
> Regards,
> sunny


Capacity planning question

I am working on an RFI for a SQL Server 2000 database application, and I am
looking for some general answers to the questions below:
Capacity and resource planning for SQL Server 2000
1) resource requirements for 500, 1000, 2000 concurrent users (database,
memory, CPU, etc.)
2) deployment requirements for 500, 1000, 2000 concurrent users (server
configuration, architecture model, etc.)|||You will probably get very little,
because it depends on the transactions: are they reads or writes? Do they use
transaction control or not? How long are the transactions? Etc.
Other than that:
SQL Server loves memory.
More processors are generally better (even if they are slower) than fewer,
faster processors.
Multi-core processors are good.
More on-board cache is good.
Keep your transaction logs mirrored on different drives from your data.
Configure disks not only for space but for throughput - you might need more
disk heads to carry the volume, even if you have enough space with fewer
drives.
Just some general guidelines.
--
Wayne Snyder MCDBA, SQL Server MVP
Mariner, Charlotte, NC
I support the Professional Association for SQL Server (PASS) and its
community of SQL Professionals.
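[Editor's note: to make the log-placement advice concrete, here is a minimal
sketch of creating a database with data and log on separate drives; the
database name, paths, and sizes are assumed for illustration.]

CREATE DATABASE CapPlanDemo
ON ( NAME = CapPlanDemo_data,
     FILENAME = 'D:\SQLData\CapPlanDemo_data.mdf',  -- data files on one drive set
     SIZE = 500MB )
LOG ON ( NAME = CapPlanDemo_log,
     FILENAME = 'E:\SQLLogs\CapPlanDemo_log.ldf',   -- log on a different drive
     SIZE = 100MB )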
"George Kwong" wrote:
> I am working on an RFI for a SQL Server 2000 database application, and I am
> looking for some general answers to the questions below:
> Capacity and resource planning for SQL Server 2000
> 1) resource requirements for 500, 1000, 2000 concurrent users (database,
> memory, CPU, etc.)
> 2) deployment requirements for 500, 1000, 2000 concurrent users (server
> configuration, architecture model, etc.)
>
|||I will add that, for an installation with those projected sizes and issues,
if you do not bring in someone with adequate experience to assist in the
design, planning, and deployment, you will be making a major mistake.
--
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another Certification Exam
"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:O15VKVplGHA.3816@.TK2MSFTNGP02.phx.gbl...
>I am working on an RFI for a SQL Server 2000 database application, and I am
> looking for some general answers to the questions below:
> Capacity and resource planning for SQL Server 2000
> 1) resource requirements for 500, 1000, 2000 concurrent users (database,
> memory, CPU, etc.)
> 2) deployment requirements for 500, 1000, 2000 concurrent users (server
> configuration, architecture model, etc.)
>
|||We developed the application in VB, and we are trying to bid on a customer's
job. Is there a way to run some tests to find out the resource usage?
No, we use very minimal transaction control. Transactions are relatively
small, and we do both reads and writes.
Thanks.
"Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
news:10F4EC48-C9AD-45E3-938B-460A95708CB2@.microsoft.com...
> You will probably get very little, except it depends on the transaction,
> [rest of quoted text snipped]
|||Hi George,
Are you able to benchmark other customers' installations of your application
& project the performance characteristics from those installations against
the one you're bidding on?
I'd be tracking various perfmon counters & SQL diagnostics for this,
including at least:
Perfmon:
SQLServer:Buffer Manager counter object, especially Page Life Expectancy, to
determine memory characteristics
CPU Utilisation - collect the system-wide counter & also the sqlservr process'
CPU utilisation counter
Physical & Logical disk counters - especially Disk Read Bytes/sec, Disk Write
Bytes/sec & the disk queues
There are other useful counters, but these are fundamental to pulling
together an informative picture of how your existing installations are
operating under specific hardware specs.
I'd also be taking a close look at how SQL Server is using memory
internally, using dbcc memorystatus to ensure you understand how your
system's using memory.
Performing some SQL Traces might also help you to ensure your application is
well tuned, which is important when drawing benchmark conclusions.
HTH
Regards,
Greg Linwood
SQL Server MVP
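[Editor's note: as a rough sketch of pulling two of those numbers from T-SQL
instead of the perfmon UI - SQL 2000 exposes the same counters through
master.dbo.sysperfinfo; counter names are as the instance reports them, so
verify against your build.]

SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Buffer cache hit ratio')
-- and, as suggested above, the internal memory breakdown:
DBCC MEMORYSTATUS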
"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
> We developed the application in VB, and we are trying to bid on a customer's
> job. Is there a way to run some tests to find out the resource usage?
> [rest of quoted text snipped]
|||"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
> We developed the application in VB, and we are trying to bid on a customer's
> job. Is there a way to run some tests to find out the resource usage?
>
Yes. MS Press had a book on this for SQL 2000 and I assume there is one for
SQL 2005.
> No, we use very minimal transaction control. Transactions are relatively
> small, and we do both reads and writes.
>
Well, first pass, figure "how many bytes will be read and written" for each
transaction.
How many transactions/sec do you need to cover?
Things like indices may greatly impact that. As will caching.
But as a first pass, it can give you a sense of things like disk I/O, which is
generally the slowest part of a system.
If you're reading/writing, say, 100 bytes/transaction and doing 100/sec, well,
you need 10,000 bytes/sec of throughput on your disks.
This ain't much.
If you're doing 1,000 bytes/transaction at 1,000/sec, well, that's
another kettle of fish.
> [quoted text snipped]
|||It is actually the nature of my program that worries me, because my program
does not have complex transaction requirements; the most significant part of
my program is writing and reading binary data, namely photographs. Each binary
file (a JPEG image) is typically about 35-50 KB, which makes all my other data
types insignificant by comparison.
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:uw1ZHaylGHA.3752@.TK2MSFTNGP02.phx.gbl...
> "George Kwong" <geokwo@.Lexingtontech.com> wrote in message
> news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
>> We developed the applcation under VB, we are trying to bid on a
>> customer's
>> job. is there a way to do some test to find out the resource usage?
> Yes. MS Press had a book on this for SQL 2000 and I assume there is one
> for
> SQL 2005.
>> No, we use very minimum transaction controls. transaction are relative
>> small, we do both read and writes.
> Well, fisrt pass, figure, "how many bytes will be read and written" for
> each
> transaction.
> How many transactions/sec do you need to cover?
> Things like indices may greatly impact that. As will caching.
> But first pass, it can give you a sense of stuff like disk I/o which is
> generally the slowest part of a system.
> If you're reading/writing say 100 bytes/transaction and doing 100/sec,
> well
> you need 10,000 byte throughput on your disks.
> This ain't much.
> If you're diong 1,000 bytes/transaction and doing 1,000sec, well that's
> another kettle of fish.
>
>> thanks.
>>
>> "Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
>> news:10F4EC48-C9AD-45E3-938B-460A95708CB2@.microsoft.com...
>> > You will probably get very little, except it depends on the
>> > transaction,
>> > are
>> > they reads, or writes? .. Do they use transaction control or not, how
> long
>> > are the transactions, etc.
>> >
>> > Other than that.
>> >
>> > SQL loves memory.
>> > More processors are better (Generally even if they are slower) than
> fewer
>> > faster processors.
>> > Multi-core processors are good
>> > More on-board cache is good.
>> > Keep your transaction logs mirrored on different drives than your data
>> > Configure disk not only for space but for throughput - you might need
> more
>> > disk heads to carry the volume, even if you have enough space with
>> > fewer
>> > drives.
>> >
>> > Just some general guidelines.
>> > --
>> > Wayne Snyder MCDBA, SQL Server MVP
>> > Mariner, Charlotte, NC
>> >
>> > I support the Professional Association for SQL Server ( PASS) and it''s
>> > community of SQL Professionals.
>> >
>> >
>> > "George Kwong" wrote:
>> >
>> >> I am working on a RFI for a SQL Server 2000 database apllication, I am
>> >> looking for some general answer the question below:
>> >>
>> >> Capacity and resource planning for SQL Server 2000
>> >>
>> >> 1) resource requirements for 500, 1000, 2000 concurrent users
> (database,
>> >> memory, CPU, etc.)
>> >> 2) deployment requirements for 500, 1000, 2000 concurrent users
>> >> (server
>> >> configuration, architecture model, etc.)
>> >>
>> >>
>> >>
>> >>
>>
>|||"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:%23oAEynOmGHA.492@.TK2MSFTNGP05.phx.gbl...
> It is actually the nature of my program that worries me, because my program
> does not have complex transaction requirements; the most significant part of
> my program is writing and reading binary data, namely photographs. Each
> binary file (a JPEG image) is typically about 35-50 KB, which makes all my
> other data types insignificant by comparison.
Well, still basically the same. Figure out how often you'll read/write
those images and calculate from there.
BTW, many people prefer to store images in the file system, not the DB.
There are arguments either way.
> "Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in
message
> news:uw1ZHaylGHA.3752@.TK2MSFTNGP02.phx.gbl...
> >
> > "George Kwong" <geokwo@.Lexingtontech.com> wrote in message
> > news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
> >> We developed the applcation under VB, we are trying to bid on a
> >> customer's
> >> job. is there a way to do some test to find out the resource usage?
> >>
> >
> > Yes. MS Press had a book on this for SQL 2000 and I assume there is one
> > for
> > SQL 2005.
> >
> >> No, we use very minimum transaction controls. transaction are relative
> >> small, we do both read and writes.
> >>
> >
> > Well, fisrt pass, figure, "how many bytes will be read and written" for
> > each
> > transaction.
> >
> > How many transactions/sec do you need to cover?
> >
> > Things like indices may greatly impact that. As will caching.
> >
> > But first pass, it can give you a sense of stuff like disk I/o which is
> > generally the slowest part of a system.
> >
> > If you're reading/writing say 100 bytes/transaction and doing 100/sec,
> > well
> > you need 10,000 byte throughput on your disks.
> >
> > This ain't much.
> >
> > If you're diong 1,000 bytes/transaction and doing 1,000sec, well that's
> > another kettle of fish.
> >
> >
> >> thanks.
> >>
> >>
> >> "Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
> >> news:10F4EC48-C9AD-45E3-938B-460A95708CB2@.microsoft.com...
> >> > You will probably get very little, except it depends on the
> >> > transaction,
> >> > are
> >> > they reads, or writes? .. Do they use transaction control or not, how
> > long
> >> > are the transactions, etc.
> >> >
> >> > Other than that.
> >> >
> >> > SQL loves memory.
> >> > More processors are better (Generally even if they are slower) than
> > fewer
> >> > faster processors.
> >> > Multi-core processors are good
> >> > More on-board cache is good.
> >> > Keep your transaction logs mirrored on different drives than your
data
> >> > Configure disk not only for space but for throughput - you might need
> > more
> >> > disk heads to carry the volume, even if you have enough space with
> >> > fewer
> >> > drives.
> >> >
> >> > Just some general guidelines.
> >> > --
> >> > Wayne Snyder MCDBA, SQL Server MVP
> >> > Mariner, Charlotte, NC
> >> >
> >> > I support the Professional Association for SQL Server ( PASS) and
it''s
> >> > community of SQL Professionals.
> >> >
> >> >
> >> > "George Kwong" wrote:
> >> >
> >> >> I am working on a RFI for a SQL Server 2000 database apllication, I
am
> >> >> looking for some general answer the question below:
> >> >>
> >> >> Capacity and resource planning for SQL Server 2000
> >> >>
> >> >> 1) resource requirements for 500, 1000, 2000 concurrent users
> > (database,
> >> >> memory, CPU, etc.)
> >> >> 2) deployment requirements for 500, 1000, 2000 concurrent users
> >> >> (server
> >> >> configuration, architecture model, etc.)
> >> >>
> >> >>
> >> >>
> >> >>
> >>
> >>
> >
> >
>

Capacity Planning for OLTP Database Server

Is there any decent documentation for estimating storage, memory, and CPU requirements for new SQL Server databases?|||IMHO... nothing that is too great.
However, Compaq and Dell both have SQL sizing tools on their web sites that
you might want to review...
--
Brian Moran
Principal Mentor
Solid Quality Learning
SQL Server MVP
http://www.solidqualitylearning.com
"Charlie Duffy" <charles.t.duffy@.verizon.com> wrote in message
news:90A54F85-2D52-457A-9E8A-95A3AD5B509A@.microsoft.com...
> Is there any decent documentation for estimating storage, memory, and CPU
requirements for new SQL Server databases?
>

Capacity Planning

Hello
I am very new to SQL Server; my experience is as a Progress DBA. I want to
start monitoring the database growth of all tables and database files
etc. within SQL.
In a Progress environment I could run a table analysis job to display each
table, number of records, size/blocks used, min record size, max record size,
and fragmentation etc.
How can I get similar stats from within SQL?
Many thanks for your help.
Jason|||Hi,
SQL Server Profiler is your friend:
http://www.sql-server-performance.com/sql_server_profiler_tips.asp
"new_sql_dba" <newsqldba@.discussions.microsoft.com> wrote in message
news:12E4B5A8-A3E6-49C9-9164-6061109E37C4@.microsoft.com...
> [quoted text snipped]

Capacity Planning

I would like to know what capacity planning for SQL
Server 2000 involves. Is there any reference site?
Thanks|||Check out the following for SQL Server 2000:
http://www.microsoft.com/sql/techinfo/administration/2000/scalability.asp
http://www.microsoft.com/sql/techinfo/planning/SQLReskChooseEd.asp
Chris Skorlinski
Microsoft SQL Server Support
Please reply directly to the thread with any updates.
This posting is provided "as is" with no warranties and confers no rights.

Capacity Planning

We are going to install a call centre application.
According to the end user, there will be around 500 requests
to be input into the system.
We will use SQL Server 2000 as the DB. We would like to
know what factors we have to consider - like recovery
model, database maintenance plan, fill factors? Is it
necessary for us to archive some old data to an archive
database?
Thanks|||"Peter" <anonymous@.discussions.microsoft.com> wrote in message
news:1cb701c4b569$d40000a0$a401280a@.phx.gbl...
> We are going to install a call centre application.
> According to the end user, there will be around 500 requests
> to be input into the system.
>
500 requests over what time period?
> We will use SQL Server 2000 as the DB. We would like to
> know what factors we have to consider - like recovery
> model, database maintenance plan, fill factors? Is it
> necessary for us to archive some old data to an archive
> database?
>
Those are really business decisions.
I.e. if you need to run 24x7 vs 9-5x5, your decisions will be different.
If you can tolerate downtime, you may make different architecture decisions.
As for archiving, again, that's a business decision. Do you want to archive
data or not?
> Thanks|||Dear Greg,
It should be 500 requests between 9:00am and 5:00pm, from
Monday to Friday.
We make a full database backup daily. What is the
difference between a full database backup and an archive, then?
Thanks|||"Peter" <anonymous@.discussions.microsoft.com> wrote in message
news:0fba01c4b574$0e2f4830$a501280a@.phx.gbl...
> Dear Greg,
> It should be 500 requests between 9:00am and 5:00pm, from
> Monday to Friday.
500 a day? That's about one a minute. You can run this thing on a desktop
machine.
> We make a full database backup daily. What is the
> difference between a full database backup and an archive, then?
Generally a backup is for disaster recovery. An archive is for storing data
for later analysis or for other reasons.
For example, I keep backups of only a few days (my databases generally have
enough churn that in a few days the bulk of the data has changed anyway.)
But there's certain data I archive to tape (in a different schema etc) that
I may keep for much longer.
Now, as for once-a-day backups, that may or may not work. Your database
sounds like it will probably be fairly small to start, so recovery time will
be about the same as backup time, i.e. if it takes 10 minutes to back up, it'll
take about 10 minutes to restore, plus any time to fix up minor issues.
However, let's say you start a backup at 5:01 PM each day.
What happens if your DB crashes at 5:00 PM the next day? Can you afford to
lose a day's worth of data?
> Thanks
>
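[Editor's note: to illustrate the exposure window described above, a minimal
sketch: a nightly full backup plus log backups during the working day. The
database name and paths are assumed, and log backups require the Full or
Bulk-Logged recovery model.]

BACKUP DATABASE CallCentre TO DISK = 'E:\Backup\CallCentre_full.bak'
-- then, say, hourly between 9:00am and 5:00pm:
BACKUP LOG CallCentre TO DISK = 'E:\Backup\CallCentre_log.trn'

With hourly log backups, a crash at 5:00 PM costs at most the last hour of
data rather than the whole day.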

capacity planning

Hi all,
How can we know how much I/O per second the server is using?
Second, what is the allocated cache memory, and how can we configure the cache memory (buffer cache) of SQL Server?
I have to do capacity planning for SQL Server. Can anyone help me out in this regard?
Regards|||"sanjay" <anonymous@.discussions.microsoft.com> wrote in message
news:02D6A931-8D1F-4A59-B0FD-8E5C1D0C540A@.microsoft.com...
> Hi all,
> How can we know how much I/O per second the server is using?
Perfmon will give you IO/sec.
> Second, what is the allocated cache memory, and how can we configure the
> cache memory (buffer cache) of SQL Server?
Perfmon again has counters for the memory used by SQL Server. If you mean
the buffer cache, then pretty much the only thing you can do configuration-wise
is limit its size to a fixed range - the default is to keep consuming memory
until all RAM is utilized by SQL Server.
> I have to do capacity planning for SQL Server. Can anyone help me out in
> this regard?
> Regards
>
Niall Litchfield
Oracle DBA
Audit Commission UK
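[Editor's note: a minimal sketch of fixing the buffer cache to a range, as
described above; the 2048 MB value is assumed for illustration, and there is a
matching 'min server memory (MB)' option for the lower bound.]

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 2048  -- upper bound on SQL Server's memory
RECONFIGURE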

Capacity Planning

Dear All
My company has asked me to come up with the amount of
space a new database will use based upon X number of
records in tables.
Is there some sort of recognised method I can follow, or
will I have to wing it based upon my own interpretation of
the tables and relationships?
Thanks
Peter|||This information is in SQL Server 2000 Books Online. Look up the chapter:
"Estimating the size of a database"
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
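[Editor's note: the Books Online chapter boils down to a little arithmetic per
table; a worked sketch assuming a 100-byte row and 1,000,000 rows.]

Rows per page  = 8096 / (row size + 2) = 8096 / 102 = 79  (round down)
Pages needed   = 1,000,000 / 79        = 12,659           (round up)
Estimated size = 12,659 x 8 KB         = roughly 99 MB

Indexes add their own pages, so repeat the exercise for each index and sum
the results.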
"Peter" <anonymous@.discussions.microsoft.com> wrote in message
news:f97601c3f222$4697f700$a001280a@.phx.gbl...
Dear All
My company has asked me to come up with the amount of
space a new database will use based upon X number of
records in tables.
Is there some sort of recognised matrix I can follow, or
will I have to wing it based upon my own interpretation of
the tables and relationships ?
Thanks
Peter|||Thank you
Peter
>--Original Message--
> [quoted text snipped]
