
Sunday, February 19, 2012

Capture Execution Time Then Rollback Transaction

I would like to execute the stored procedure listed below, then capture or
print out the execution time. Lastly, roll back the transaction so that the data
does not change in the database.
dbo.usp_Manual_toTraint 65823,'2004-08-01','TA_BB','2004-09-01'
Please help me with this procedure.
Thanks, Joe
See if this helps you
declare @dt datetime
set @dt = getdate()
begin tran
--do something here
rollback
select datediff(ss, @dt, getdate())
"Joe K." <Joe K.@.discussions.microsoft.com> wrote in message
news:F6B145E9-8520-4DF3-95ED-1E4BC20E5DFE@.microsoft.com...
> I would like to execute a stored procedure listed below then capture or
> print out the execution time. Lastly rollback the transaction so that
> data
> does not change in the database.
> dbo.usp_Manual_toTraint 65823,'2004-08-01','TA_BB','2004-09-01'
> Please help me with this procedure.
> Thanks, Joe
|||Uri Dimant (urid@.iscar.co.il) writes:
> Joe
> See if this helps you
> declare @dt datetime
> set @dt = getdate()
> begin tran
> --do something here
> rollback
> select datediff(ss, @dt, getdate())
That's not good. You need to do:
declare @dt datetime
set @dt = getdate()
begin tran
--do something here
select datediff(ss, @dt, getdate())
rollback
ROLLBACK can take considerable time and should not be measured.
Also, in many situations, ms (milliseconds) is better than ss (seconds).
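Putting this advice together for the procedure from the original question - a sketch only; the procedure name and parameters are taken from the post, and ms precision is used as suggested:

```sql
-- Time the stored procedure at millisecond precision, capturing the
-- elapsed time BEFORE the rollback so the rollback itself is not measured.
declare @dt datetime
set @dt = getdate()

begin tran
exec dbo.usp_Manual_toTraint 65823, '2004-08-01', 'TA_BB', '2004-09-01'
select datediff(ms, @dt, getdate()) as elapsed_ms
rollback tran  -- undo all changes so the data is left untouched
```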
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Thursday, February 16, 2012

Capacity planning question

I am working on a RFI for a SQL Server 2000 database application. I am
looking for some general answers to the questions below:
Capacity and resource planning for SQL Server 2000
1) resource requirements for 500, 1000, 2000 concurrent users (database,
memory, CPU, etc.)
2) deployment requirements for 500, 1000, 2000 concurrent users (server
configuration, architecture model, etc.)|||You will probably get very little, except that it depends on the
transactions: are they reads or writes? Do they use transaction control or
not? How long are the transactions? Etc.
Other than that:
SQL Server loves memory.
More processors are generally better than fewer, faster ones, even if the
individual processors are slower.
Multi-core processors are good.
More on-board cache is good.
Keep your transaction logs mirrored on different drives than your data.
Configure disks not only for space but for throughput - you might need more
disk heads to carry the volume, even if you have enough space with fewer
drives.
Just some general guidelines.
--
Wayne Snyder MCDBA, SQL Server MVP
Mariner, Charlotte, NC
I support the Professional Association for SQL Server (PASS) and its
community of SQL Professionals.
"George Kwong" wrote:

> I am working on a RFI for a SQL Server 2000 database application, I am
> looking for some general answers to the questions below:
> Capacity and resource planning for SQL Server 2000
> 1) resource requirements for 500, 1000, 2000 concurrent users (database,
> memory, CPU, etc.)
> 2) deployment requirements for 500, 1000, 2000 concurrent users (server
> configuration, architecture model, etc.)
>
|||I will add that, for an installation with those projected sizes and issues,
if you do not bring in someone with adequate experience to assist in the
design, planning, and deployment, you will be making a major mistake.
Arnie Rowland, YACE*
"To be successful, your heart must accompany your knowledge."
*Yet Another certification Exam
"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:O15VKVplGHA.3816@.TK2MSFTNGP02.phx.gbl...
> I am working on a RFI for a SQL Server 2000 database application, I am
> looking for some general answers to the questions below:
> Capacity and resource planning for SQL Server 2000
> 1) resource requirements for 500, 1000, 2000 concurrent users (database,
> memory, CPU, etc.)
> 2) deployment requirements for 500, 1000, 2000 concurrent users (server
> configuration, architecture model, etc.)
>
>|||We developed the applcation under VB, we are trying to bid on a customer's
job. is there a way to do some test to find out the resource usage?
No, we use very minimum transaction controls. transaction are relative
small, we do both read and writes.
thanks.
"Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
news:10F4EC48-C9AD-45E3-938B-460A95708CB2@.microsoft.com...
> You will probably get very little, except it depends on the transaction,
> are
> they reads, or writes? .. Do they use transaction control or not, how long
> are the transactions, etc.
> Other than that.
> SQL loves memory.
> More processors are better (Generally even if they are slower) than fewer
> faster processors.
> Multi-core processors are good
> More on-board cache is good.
> Keep your transaction logs mirrored on different drives than your data
> Configure disk not only for space but for throughput - you might need more
> disk heads to carry the volume, even if you have enough space with fewer
> drives.
> Just some general guidelines.
> --
> Wayne Snyder MCDBA, SQL Server MVP
> Mariner, Charlotte, NC
> I support the Professional Association for SQL Server (PASS) and its
> community of SQL Professionals.
>
> "George Kwong" wrote:
|||Hi George
Are you able to benchmark other customers' installations of your application
& project the performance characteristics from those installations against
the one you're bidding on?
I'd be tracking various perfmon counters & SQL diagnostics for this,
including at least:
Perfmon:
SQLServer:Buffer Manager counter object, especially Page Life Expectancy, to
determine memory characteristics
CPU Utilisation - collect the system-wide counter & also the sqlservr
process' CPU utilisation counter
Physical & Logical disk counters - especially disk bytes read / written per sec
& disk queues
There are other useful counters, but these are fundamental to pulling
together an informative picture of how your existing installations are
operating under specific hardware specs.
I'd also be taking a close look at how SQL Server is using memory
internally, using dbcc memorystatus to ensure you understand how your
system's using memory.
Performing some SQL Traces might also help you to ensure your application is
well tuned, which is important when drawing benchmark conclusions.
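In SQL Server 2000, the same counters Greg lists can also be sampled from inside the server via master.dbo.sysperfinfo - a sketch, assuming a default instance (counter names carry the instance name on named instances):

```sql
-- Sample buffer-manager counters from inside SQL Server 2000.
select object_name, counter_name, cntr_value
from master.dbo.sysperfinfo
where counter_name in ('Page life expectancy', 'Buffer cache hit ratio')

-- Detailed breakdown of how SQL Server is using memory internally:
dbcc memorystatus
```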
HTH
Regards,
Greg Linwood
SQL Server MVP
"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
> We developed the application under VB; we are trying to bid on a customer's
> job. Is there a way to do some tests to find out the resource usage?
> No, we use very minimal transaction control. Transactions are relatively
> small; we do both reads and writes.
> thanks.
>
> "Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
> news:10F4EC48-C9AD-45E3-938B-460A95708CB2@.microsoft.com...
|||"George Kwong" <geokwo@.Lexingtontech.com> wrote in message
news:uagkjdtlGHA.4512@.TK2MSFTNGP04.phx.gbl...
> We developed the application under VB; we are trying to bid on a customer's
> job. Is there a way to do some tests to find out the resource usage?
>
Yes. MS Press had a book on this for SQL 2000 and I assume there is one for
SQL 2005.

> No, we use very minimal transaction control. Transactions are relatively
> small; we do both reads and writes.
>
Well, first pass, figure "how many bytes will be read and written" for each
transaction.
How many transactions/sec do you need to cover?
Things like indices may greatly impact that. As will caching.
But as a first pass, it can give you a sense of things like disk I/O, which is
generally the slowest part of a system.
If you're reading/writing, say, 100 bytes/transaction and doing 100/sec, well,
you need 10,000 bytes/sec of throughput on your disks.
This ain't much.
If you're doing 1,000 bytes/transaction at 1,000/sec, well, that's
another kettle of fish.
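The back-of-envelope arithmetic above can be written out directly - the workload numbers are the hypothetical ones from the post:

```sql
-- Rough disk-throughput estimate: bytes per transaction * transactions/sec.
declare @bytes_per_tran int, @tran_per_sec int
set @bytes_per_tran = 100
set @tran_per_sec = 100
-- 100 * 100 = 10,000 bytes/sec - not much for any disk subsystem.
select @bytes_per_tran * @tran_per_sec as required_bytes_per_sec
```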

> thanks.
>
> "Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
> news:10F4EC48-C9AD-45E3-938B-460A95708CB2@.microsoft.com...

Friday, February 10, 2012

Can't truncate a 99 GB transaction log, badly need help

Hi,
I am having a hard time trying to truncate a 99 GB transaction log for a
database.
I already tried:
CHECKPOINT
DBCC SHRINKFILE (emptydb_Log, 15000)
DBCC SHRINKFILE (emptydb_Log, TRUNCATEONLY)
And I get these results:
DbId | FileId | CurrentSize | MinimumSize | UsedPages | EstimatedPages
8 | 2 | 11601696 | 63 | 11601696 | 56
From the 99 GB allocated to this log, in Taskpad I can see that the used space
is 13642.14 MB; however, none of the above works, not even a full backup with
the truncate log option selected.
A maintenance plan can't do the job either.
Can someone explain to me how I can safely get rid of this free 76996 MB
in the log file?
I really appreciate your support; it is imperative to find a solution ASAP
as I am running out of space.
Thank you,
Zorba
Hi,
try this:
BACKUP LOG <database name> WITH TRUNCATE_ONLY
DBCC SHRINKFILE (<logical name>, <size>)
This will take a lot of time as it is very big.
hope this helps
from
doller
|||Hi,
try this:
BACKUP LOG <database name> WITH TRUNCATE_ONLY
DBCC SHRINKFILE (<logical name>, <size>); the size is the one given at the
time of database creation
This will take a lot of time as it is very big.
hope this helps
from
doller
|||Do you have an open transaction? Try running DBCC OPENTRAN on that DB.
http://www.nigelrivett.net/TransactionLogFileGrows_1.html Log File issues
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow
and AutoShrink
http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL
Server 7.0 Tran Log
http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL
Server 2000 with DBCC SHRINKFILE
http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
considerations
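A sketch of the checks Andrew suggests - 'emptydb' is an assumption inferred from the logical log file name emptydb_Log in the question:

```sql
-- Report the oldest active transaction (if any) holding up log truncation.
dbcc opentran ('emptydb')

-- Show log size and percentage used for every database on the server.
dbcc sqlperf (logspace)
```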
Andrew J. Kelly SQL MVP
"Zorba" <nospam@.nonexistent> wrote in message
news:OL0AXtctFHA.460@.TK2MSFTNGP15.phx.gbl...
> Hi,
> I am having hard time trying to truncate a 99 GB transaction log for a
> database.
> I already tried
> CHECKPOINT
> DBCC SHRINKFILE (emptydb_Log, 15000)
> DBCC SHRINKFILE (emptydb_Log, TRUNCATEONLY )
> And I get this results:
> DbId|FieldId|CurrentSize|MinimumSize|UsedPages|Est imatedPages
> 8 | 2 | 11601696| 63| 11601696| 56
> From allocated 99 GB to this log in taskpad I can see that used space is
> 13642.14 MB however none of the above work, even a full backup with
> truncate log option selected.
> Maintenance plan also can't do the job also.
> Can someone explain to me how I can safely get rid of these free 76996 GB
> from log file.
>
> I really appreciate your support, it is imperative to find a solution ASAP
> as I run out of space.
> Thank you,
> Zorba
>
>
|||Thank you all for helping me.
Finally, after 5 hours, the only method that gave results was to back up
the transaction log to a file and after that run:
CHECKPOINT
DBCC SHRINKFILE (emptydb_Log, NOTRUNCATE)
DBCC SHRINKFILE (emptydb_Log, TRUNCATEONLY)
The first attempt reduced the log size to 10.6 GB, and after a second
transaction log backup with the same process it went down to 60 MB.
Now I have learned that I should keep my eyes on these automatically
scheduled maintenance plans more often :-).
Note: a full DB backup with the truncate log option selected was not effective.
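Written out as T-SQL, the sequence that finally worked looks like this - a sketch; the database name 'emptydb' is inferred from the logical file name, the backup path is hypothetical, and WITH TRUNCATE_ONLY era syntax applies to SQL Server 2000/2005 only:

```sql
-- 1) Back up the transaction log to a file, freeing inactive log space.
backup log emptydb to disk = 'D:\backup\emptydb_log.trn'
-- 2) Force dirty pages to disk.
checkpoint
-- 3) Move active log records toward the front of the file...
dbcc shrinkfile (emptydb_Log, notruncate)
-- 4) ...then release the unused space at the end of the file to the OS.
dbcc shrinkfile (emptydb_Log, truncateonly)
```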
Thank you,
Zorba
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eo$HJlhtFHA.1028@.TK2MSFTNGP12.phx.gbl...
> Do you have an open transaction? Try running DBCC OPENTRAN on that DB.
>
> http://www.nigelrivett.net/TransactionLogFileGrows_1.html Log File issues
> http://www.support.microsoft.com/?id=317375 Log File Grows too big
> http://www.support.microsoft.com/?id=110139 Log file filling up
> http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
> http://www.support.microsoft.com/?id=315512 Considerations for
> Autogrow and AutoShrink
> http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL
> Server 7.0 Tran Log
> http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL
> Server 2000 with DBCC SHRINKFILE
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp Shrinking
> considerations
>
> --
> Andrew J. Kelly SQL MVP
>
> "Zorba" <nospam@.nonexistent> wrote in message
> news:OL0AXtctFHA.460@.TK2MSFTNGP15.phx.gbl...
>
