12/16/2008
Front line dude: Acme has a drive space issue...can I shrink the log file?
Kevin3NF: Give it a shot (this is SOP for Acme on this drive)
FLD: Didn't work
Kevin3NF: Give me the ticket, I'll take a look.
Kevin3NF: Hey ACME developer...you have a 2-day-old transaction taking up all your T-log space...
Acme: ok...let me truncate the log
Kevin3NF: WAIT A SECOND!
Acme: Did that help?
Kevin3NF: No. You just invalidated the T-log backup stream and started causing the backup to fail
Acme: Why?
Kevin3NF: Cuz that big open transaction is still there, and SQL Server thinks there is no full backup now.
Kevin3NF: Hello?
Kevin3NF: (Sees pictures of crickets in his Inbox)
Acme: (hours later): go ahead and run a full backup tomorrow afternoon
Kevin3NF: You know you don't have a valid backup to recover to, right?
Acme: Yes, but we can't backup now...
(tomorrow afternoon):
Acme: Hey...who told you to run a full backup?!?!?
Kevin3NF: You did...I have the email.
Acme: (complaint to Kevin3NF's bosses)
Time passes...repeat scenario.
Big transaction, full log file, fill drive, lather, rinse, repeat.
Customer: FAIL. >:(
Acme owes ME some fries...but I get paid to deal with stuff like this every day :)
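For the record, here's roughly what that truncate did, in T-SQL (a sketch only; MyDB and the paths are stand-ins, and the syntax is the SQL 2000/2005 form):
BACKUP LOG MyDB WITH TRUNCATE_ONLY                        -- throws away log records and breaks the log backup chain
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'       -- subsequent log backups now fail...
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak' -- ...until a new full backup restarts the chain
Until that full backup runs, there is no valid point to recover to beyond the last good log backup.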
Kevin3NF
11/25/2008
SQL PASS Community Summit 2008
Wow. What a load of information! Almost an overload, but I managed to keep my head above water in almost every session.
The breakdown:
Day 1:
SQLCAT - Overall Lessons Learned from SQL 2008 Experiences
Quite a good bit of variety here, touching on:
Data Compression
Backup Compression
Performance Tuning (Extended Events)
SQL Server Reporting Services
Database Mirroring (compressed LogStream)
Encryption
Capitalizing on the SQL Server 2005 System Information
Way more info here than I was able to process...took a few notes to look up after Thanksgiving
Understanding and Troubleshooting Transactional Replication Performance
Lots of discussion on subscription streams/multi-threading and the read/write activity of the Log Reader and Distribution Agents. Got a few things to play with.
SQL Server 2008 Policy-Based Management - Sharon Dooley
Not a whole lot here I didn't pick up at the SQL 2008 launch in LA...still a very cool feature I won't get to use much in our environment.
Day 2:
Guiding your Query Plans (by Kalen Delaney)
Guiding query plans didn't actually enter the picture until late in the presentation, but it was fantastic nonetheless. Most of this session was about hints...Table, Index, Option, etc. A BUNCH of stuff that never crossed my monitor before. Three pages of notes' worth. Being able to force a recompile of a portion of the overall query is really slick. And the very simple new feature of assigning a value in the declare:
DECLARE @MyID INT = 123
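A quick sketch combining the two goodies (dbo.Orders and its columns are made-up names; OPTION (RECOMPILE) is the statement-level recompile hint I mean):
DECLARE @MyID INT = 123            -- declare and assign in one statement (new in 2008)
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @MyID
OPTION (RECOMPILE)                 -- recompile just this statement, not the whole batch or proc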
Smart Database Design - MVPs DrSQL and Paul Nielsen
Well outside of what I do from day to day, but pretty cool anyway, since I was a developer many moons ago. Worth the lack of elbow room just to hear from DrSQL and Paul live...too bad my on-call phone went a little nuts and I had to leave early.
Advanced Troubleshooting with SQL Server Extended Events
So far over my head I'm still not sure what I learned...good thing I have notes
Business Continuity with Backup and Restore - Peter Ward
I could probably have taught this session. Should have been labeled 200 level, but still good solid information on backup/restore and some of the new (05/08) options (Online restore, Partial, COPY_ONLY, etc.). Thought about arguing a point with Peter, but decided the way he presented it wasn't likely to cause harm.
Day 3:
Data and Backup Compression Lessons Learned
Some fairly impressive numbers here...300GB DB compressed to 45GB, 50% time savings, etc.
Money-maker: use sp_estimate_data_compression_savings to do custom analysis for customers considering whether to move to SQL compression. Assuming you have the install to do it on.
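If I remember the syntax right, usage looks something like this (dbo.MyBigTable is a stand-in name):
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'MyBigTable',
    @index_id = NULL,              -- NULL = all indexes
    @partition_number = NULL,      -- NULL = all partitions
    @data_compression = 'PAGE'     -- or 'ROW', or 'NONE'
It returns current vs. estimated compressed sizes, so you have numbers to show the customer before touching anything.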
Collecting and Analyzing Performance Metrics in SS2005/8
Got here late, stuck in the back, couldn't see or hear very well. Saw a little perfmon on the screen I think...
Writing Technical Articles (Kathi Kellenberger)
The only Personal Development session I attended, since I was on the company nickel. Interesting stuff. I've thought about teaching and writing a SQL Basics series for new or accidental DBAs that really will never do more than backup/restore or set up Dev systems. Kathi Kellenberger did a fine presentation, including various places to publish to, and what they pay (if anything). I'm completely convinced to NEVER get in on a book project :)
End-to-end troubleshooting for SQL Server 2005
Kevin Kline rocks. Many others have blogged this presentation, so I won't repeat here.
11/11/2008
A good friend of mine has a fairly simple community site, with 30K members, about 7K active.
SQL Server 2000, ASP Classic. DB and site on separate servers.
Problem 1: KerBam! SQL injected last Monday. Major trashing of data.
Problem 2: No backup since October 2, as the SQL Agent password had changed, so the Agent wasn't running.
Problem 3: Started the agent and it deleted the last full backup because it was older than 4 weeks.
SO: I get the call for help.
Time passes.
More time passes.
After a dozen uploads and downloads of .mdf/.ldf and backup files (from August), I am able to recover much of the data using the fantastic tools from Red Gate Software:
Log Rescue: Identified what got injected, into where and when
SQL Compare: enabled me to create a schema script to replace the relationships I had to remove to fix the data
SQL Data Compare: Helped me replace the trash data with what it looked like in August.
Also, a shout-out to Narayana Vyas Kondreddi for his Search and Replace code that at least got the bad URL out of the data we couldn't fix.
The site is back up in read-only, and my buddy is reviewing all the code one page at a time.
Lessons learned:
Validate your inputs!
Back up your data, and verify it!
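On the "verify it" part, even a minimal check beats nothing (names and path are made up; WITH CHECKSUM needs SQL 2005 or later, so it wasn't an option on this 2000 box, but RESTORE VERIFYONLY works there too):
BACKUP DATABASE CommunityDB TO DISK = 'E:\Backups\CommunityDB.bak' WITH CHECKSUM
RESTORE VERIFYONLY FROM DISK = 'E:\Backups\CommunityDB.bak'   -- confirms the backup file is readable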
Not only does this one get fries, he gets 10+ hours of recovery effort at no charge, just for having a really cool site I want to see come back up :)
Kevin3NF
9/17/2008
Too good to be true, but it is...
From: My Customer
Sent: Tuesday, September 16, 2008 9:37 AM
To: Kevin3NF
Cc: a bunch of people
Subject: Re: SQL Backup
Hi,
As per latest discussion with {Customer DBA} only one database will be backed up - "MyDatabase". Please exclude other three databases from backup process.
Differential backup should be from Monday [after taking of full backup] - Sunday.
-Customer
----- Original Message -----
From: Kevin3NF
To: My Customer
Sent: Tuesday, September 16, 2008 9:41 AM
Subject: RE: SQL Backup
Full backup at 12:15am and differential 15 minutes later at 12:30? We can do that...
----- Original Message -----
From: MyCustomer
Sent: Tuesday, September 16, 2008 9:43 AM
To: Kevin3NF
Subject: Re: SQL Backup
yes that's correct.
Apparently they anticipate a lot of traffic the first 15 minutes of Monday morning each week :)
Sigh.
Kevin3NF
8/07/2008
Now, to dig in and see which (if any) of the really cool new features moved to the Standard edition instead of Enterprise. Or even stuff from 2005 moving into Standard...
Post a comment if you already have the list :)
OK...here we go:
Features Supported by the Editions of SQL Server 2008
http://msdn.microsoft.com/en-us/library/cc645993(SQL.100).aspx
Backup compression: Enterprise only. No thanks, I'd rather buy Litespeed or Red Gate SQL Backup than pay this premium :( (syntax peek after this list)
Encryption (TDE): Enterprise only, as is Data Compression. Great feature, easier to use, worth the expense for some companies/industries
Resource Governor: EE only...cool feature, but I wonder how often it will get used in practice
Database Mirroring: Yes (safety full only) in Standard edition
Auditing: EE only. Someone is going to blow performance out the door by auditing everything...be judicious here folks!
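The backup compression syntax, for the curious, is about as simple as it gets (a sketch; database name and path made up):
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak' WITH COMPRESSION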
Capacity Specifications for SQL 2008 objects:
http://msdn.microsoft.com/en-us/library/ms143432(SQL.100).aspx
Maximum Number of Processors Supported by the Editions of SQL Server 2008
http://msdn.microsoft.com/en-us/library/ms143760(SQL.100).aspx
A Processor is still defined as a socket here, so 4 quad-cores gives you a Standard Edition SQL Server with 16 CPUs.
Memory Supported by the Editions of SQL Server 2008
http://msdn.microsoft.com/en-us/library/ms143685(SQL.100).aspx
Workgroup edition got a 1GB bump here.
And my favorite feature thus far is still the very configurable installer. I install at least one SQL Server per week and despise having to physically move the system databases and set the default locations for the user dbs after the install. There are (too?) many options in this installer that were simply not available pre-2008. Rockin'
So, there's some links and my take on a few pieces.
Fries and a shake to MS for not slipping the date to Q4!
Kevin3NF
6/26/2008
Scenario:
Major website in the IT industry, very well known. Perf issue, most likely related to inefficient queries and/or indexes and stats. Site timeouts all over the place.
Sent customer a list of long running queries and got this:
{Customer} says they are in the process of getting rid of the site. He said use gum and bailing wire for the time being. Thanks for you hard work.
Sadly, I am out of gum and bailing wire. Will spit and duct tape do?
Simply amazing.
Kevin3NF
5/22/2008
Customer: Please restore my database
Kevin3NF: Sure thing...what happened?
Customer: SQL Injection...spam in my data
Kevin3NF: OK...have you plugged the holes in the application?
Customer: Not yet...but our site is down
Kevin3NF: Talk to you tomorrow.
Repeat ad nauseam...
You get no fries for bad coding practices. Matter of fact, you get drive-through service only. And your DB is now read-only!
:)
Kevin3NF
5/16/2008
I just watched part of the SSWUG show 106, where SQL Server MVP Stephen Wynkoop dissected the current SQL Injection attack that is propagating itself around the net. The 'cool' thing with this one is that instead of passing in SELECT, UPDATE, etc. statements to your SQL Server, it's doing a CAST of a binary string into an NVARCHAR(4000) data type. That binary string translates to an UPDATE statement that craps up all of your tables, as long as drive space is available.
So...if you are filtering on keywords, your filters will not work.
Move to stored procs...now.
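And if a full move to procs isn't happening overnight, at least parameterize. A sketch of the difference (the Products table and @UserInput are made-up names):
DECLARE @UserInput NVARCHAR(200)   -- pretend this came straight from the web page
DECLARE @sql NVARCHAR(4000)

-- Vulnerable: user input is pasted into the statement and executes as code
SET @sql = 'SELECT * FROM Products WHERE ProductName = ''' + @UserInput + ''''
EXEC (@sql)

-- Safer: input is passed as a parameter and treated as data, never as code
EXEC sp_executesql
    N'SELECT * FROM Products WHERE ProductName = @p',
    N'@p NVARCHAR(200)',
    @p = @UserInput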
I saw the tail end of this yesterday. The only reason my customer didn't suffer worse is that they had a database size cap of 150MB (shared SQL Server), and we started looking into the unexpected growth.
Take care out there!
Kevin3NF
3/03/2008
By comparison to the SQL 2005 launch, Los Angeles was a dud.
Too many people seemed to overwhelm the facilities. Many of the sessions were packed to the gills, and there were long lines for everything.
The SQL Server "Ask the Experts" section in the Microsoft Pavilion was unstaffed, even after I found the guys that were supposed to be in it and showed them where it was.
The LA Convention Center stopped letting folks into the Partner Pavilion after about 4, so I missed the other half of the vendors I had intended to visit.
Bah, humbug.
What's cool? (I can spin anything...) I did learn about a number of new features in SQL 2008 that I had previously not paid any attention to, and the hands-on lab for Change Data Capture was a quick way to get up to speed on the ideas/concepts so I can share them with my customers.
Did you go? What was your experience?
Kevin3NF
2/26/2008
I'm headed out to LA today for the Windows/SQL/VS 2008 launch event, and have lots of tech stuff in my head to wonder about, but I keep amusing myself with the theme of this event:
Heroes Happen Here. "Celebrating and Inspiring Customer Heroes"
I really don't know what to expect. Are we going to get to meet the guy that saved the company from impending doom at 3AM (again)? How about the SQL developer from Redmond that found a last-minute bug in some cool new feature?
Or will there be speeches from real heroes, such as people who have saved lives, or invented new medical technologies?
Worst thought: At the SQL 2005 launch ("Rock the Launch"), there were celebrity impersonators all over the place...Cher, Tina Turner, Britney "Pre-crack-up" Spears...
Please don't tell me we are going to have hired actors walking around dressed like Police, soldiers, firemen and EMTs...
I'll report back what Microsoft calls a hero when/if I figure it out :)
If you're there, look for me....I'll be the middle-aged, balding guy with a gut, a goatee, a laptop and two phones. That oughta narrow it down for you!
Kevin3NF
1/31/2008
After 6 good years and 1 not so good, 3NF Consulting is shutting down. I want to thank all of my former clients, vendors, and most of all the outstanding crew of Access, SQL Server and Windows guys I have farmed work out to over the years. We did some good work, saved a few necks, and put out more than a few fires along the way.
Despite the demise of the company, I'll still be doing some side work as Kevin3NF, the 1099 guy, on a 1-2 hour basis. No long-term stuff as I am quite content at my full-time gig.
Thanks for reading...
Kevin3NF
1/25/2008
I was asked to give a once-over to a system not built by my team. SQL 2000 32-bit on Windows 2003 64-bit. 40GB RAM.
Min and Max supposedly set to 32GB, yet Perfmon is showing 40GB used by SQL Server. Hmm.
Freakshow noticed that the Min and Max Server Memory settings seemed a little high at 33000000. That setting is in megabytes, so 33,000,000 MB / 1024 / 1024 ≈ 31.4 terabytes of RAM. Sweet.
Linchi Shea from the .server newsgroup noticed it as well.
We set it to a more conservative 36GB.
You want room for the O/S with that?
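The fix itself is short; note that sp_configure wants the value in MB, so 36GB comes out as 36864 ('min server memory (MB)' works the same way):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 36864   -- 36GB, stated in MB
RECONFIGURE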
Kevin3NF
1/24/2008
Caller: My master database is 500mb, and I'm running low on drive space
Kevin3NF: 500mb is small...you're that low on space? What did you create in the master db?
Caller: Nothing
Kevin3NF: What about these three user tables named "Trace1," "Trace2," and "Trace3"?
Caller: Oh yeah...forgot about those. I'll delete them.
Kevin3NF: You want fries or a salad with that?
:)
If you are going to save your Profiler trace data to a table, please create a database named "Traces" and save it there...
The master database doesn't accrue much information day to day, so it should almost never grow to more than 100MB or so. It is more metadata than anything else.
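A sketch of the cleanup and the better habit (database and table names are just examples):
CREATE DATABASE Traces   -- a proper home for trace data

-- Move the squatters out of master:
SELECT * INTO Traces.dbo.Trace1 FROM master.dbo.Trace1
DROP TABLE master.dbo.Trace1
-- ...repeat for Trace2 and Trace3, then point Profiler's save-to-table at Traces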
Kevin3NF
Good:
Reporting Services offers a "set it and forget it" option in the install, to pre-configure using default values instead of making you do it manually...don't recall if this is available in 2005. Also offers a SharePoint-specific config. This could save a lot of frustration for the small business user.
Grrrr...
By default, the "report stuff back to the mothership" options are both selected (error reports and feature usage). Hopefully, this is just for the CTP and will be unselected by default in the RTM version.
Kevin3NF
1/23/2008
I have just started experimenting with Katmai...thought I would post as I go :)
Mistake Number 1 - working in a VMware Server environment on an 8GB virtual disk...oops. The executable is 1GB, the extracted files 1.2GB, plus the bits laid down...chewed up all my space. Fortunately Win2K3 warned me before aborting the install. And I hadn't even selected all the features.
Cool thing - Tons of control over where the data and log files will go. This is particularly helpful to me as we build a lot of SQL Servers for our customers and almost always have enough volumes to spread the files out. You can choose locations for system, user data, user log, tempdb data, and tempdb log.
Cool thing2 - you can handle all of your service startup accounts in the install process and get very granular, or just choose one account and apply it across the board with a button-click.
Install is all I've done thus far...off to get the sample databases...
Kevin3NF
1/22/2008
What would you like to see covered in a future post in the SQL 101 (or 201) series? I'm full of ideas (among other things), but there's not much point in posting things y'all aren't looking for...
If there's something that's always been sorta fuzzy, let me know and I'll do my best to translate it for you in plain ol' English. Just post a comment and away we go.
Kevin3NF
1/17/2008
More translating of SQL stuff into less technical terms for the new folks...
SQL Server (and most other database systems) offer the option of using indexes on your data to help queries go faster. The purpose of this post is to give the new SQL dude a quick mental connector while learning the concept.
Clustered Index:
Think of the White pages phone book you have at the house. Now, find the entry that has your phone number in it. Mentally, you think of your last name (parameter 1 in a query), then your first name (parameter 2). So if your last name is Gates, you flip directly to the G section. If your first name is Bill, you go to the B section within the Gates area. At that point, there may be more than one Gates, Bill entry, so your eyes start scanning through them for some other piece of identifiable info, such as a street or city (parameter 3) until you find the correct entry. You then slide over to the right, look at the phone number and return that to your brain.
The White pages are a "Two-column" clustered index on lastname, firstname in alphabetical order (Ascending). The data itself (the names) IS the index. No extra pages at the back of the book. Speaking of extra pages....
Non-clustered index:
Think of every technical book or textbook you've ever read. There is almost always a collection of additional pages at the back of the book called the index. These pages do not contain any of the data about the topic at hand...just pointers to where in the book you can find what you are looking for.
Imagine you are holding a SQL Server 2005 administrators book, and you want to find every reference to "Replication." Yes, you can look in the table of contents but that may leave out an entry found in the Backup/Restore chapter, or Performance troubleshooting. So, you go to the back of the book, look through the alphabetical list of Keywords for "Replication" and now you know that the word exists on pages 45, 255-298, and 453. You have a collection of pointers to the specific data pages in your book that may contain the information you need.
What if I don't have any indexes?
No clustered index: Imagine finding your name in a white pages that was not sorted alphabetically. You would have to start at the first entry and read every single row until you hit it (a table scan in SQL speak). Ugh.
No non-clustered index: Imagine me telling you to find the phrase "SQL Profiler" in that SQL book you bought, after I rip the table of contents and index pages out. Sounds like a fraternity hazing ritual for IT geeks ;)
How many can I have?
Clustered: 1. How many ways can you sort and print the data in the SAME book? 1.
Non-clustered: More than one, depending on the database platform and version.
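In T-SQL, the phone book looks something like this (a sketch; all names are made up):
CREATE TABLE dbo.WhitePages (
    LastName  VARCHAR(50),
    FirstName VARCHAR(50),
    Street    VARCHAR(100),
    Phone     VARCHAR(20)
)

-- The book itself: the data is physically sorted by last name, then first name
CREATE CLUSTERED INDEX CIX_WhitePages ON dbo.WhitePages (LastName, FirstName)

-- The index at the back of the book: pointers for looking someone up by phone number
CREATE NONCLUSTERED INDEX IX_WhitePages_Phone ON dbo.WhitePages (Phone)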
That's all for today...what indexes to have is not a 101 level discussion, other than to say...whatever you join or search on is a good candidate.
Now you are ready for part of your interview ;)
Kevin3NF
1/14/2008
My number one tip for those considering setting up a Notification Services application....
...stop.
Now, I love MS and most of their products...but not this one. Difficult to set up, no GUI at all for even basic tasks, and almost nobody inside MS Support (as of October 2006, anyway) knows the product well enough to talk reasonably about it. To the best of my knowledge there are only three books on it. Having one made me the "expert" at PSS on my last contract, until you got to a tech lead or escalation person.
So...don't. It's deprecated anyway, so that alone should be reason enough. :)
Kevin3NF
1/11/2008
SQL 101 - Replication vs. Log Shipping vs. Clustering
Continuing the "Englishification" of SQL Server for those new to the product...
These three terms are the most incorrectly used terms in all of SQL Server, not just by CEOs and pointy-haired managers but by some very sharp developers and more than a few experienced DBAs. If you don't know them, pull up a virtual chair for a 3-minute primer.
Clustering - it's all about high availability and uptime
In its most basic form, a Windows/SQL "cluster" is two or more servers (nodes) attached to shared storage - a SAN. Only ONE of the nodes is running the SQL Server instance at any given time. So...you can have a 4-node cluster with one SQL Server instance, and it will NOT be running 4 times faster. If the node you are running on suffers a meltdown, the Cluster service moves the instance to another node. Key point: This is NOT a fully redundant solution!!! If the SAN dies, your data is gone. Period. Go find your backups.
Log Shipping - move that data!
Log Shipping's sole purpose is disaster recovery. There is a secondary benefit: you can use the destination server as a reporting server if you set it up in a specific configuration.
LS is nothing more than a glorified backup/copy/restore process with a GUI and jobs (minimal sketch after the list):
Back up the data on Server A.
Copy the backup files to Server B.
Restore the files on Server B.
You cannot edit the data on Server B...just read it.
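Stripped of the GUI and jobs, that sketch looks roughly like this (database name, paths, and the copy mechanism are stand-ins):
-- On Server A:
BACKUP LOG SalesDB TO DISK = 'D:\LSOut\SalesDB_log.trn'
-- (a job copies the file from Server A to Server B)
-- On Server B:
RESTORE LOG SalesDB FROM DISK = 'E:\LSIn\SalesDB_log.trn'
    WITH STANDBY = 'E:\LSIn\SalesDB_undo.dat'   -- STANDBY = readable between restores; NORECOVERY = not readable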
Replication (no, I won't discuss the different types here):
Replication is all about distributed processing. This means having the data in two different places (Walla Walla, WA and Kissimmee, FL for example) so users don't have to depend on the server and WAN in the remote city. Or for sales/field personnel entering data from their cars.
Replication can be a partial DR solution, but understand that not everything gets replicated (security changes, for example), and only data changes flow automatically. Schema changes, new tables, etc. do not.
Summary:
- Clustering - High availability is its only purpose.
- Log Shipping - Disaster recovery/possible reporting server
- Replication - Distributed data processing with some DR benefit.
- NONE of these are going to increase performance!!!!
Yes, you can combine some of these. Set up two clusters in different cities and Log Ship between them. Now you have HA and DR. Expensive, but effective.
For all the SQL experts champing at the bit, ready to scream that I left out what a LUN is, or that I didn't discuss geoclustering, please see the post title...this is a 101 level post :)
Thanks,
Kevin3NF
1/10/2008
A recovery model is simply a choice made on each database that determines how much data you can recover if your db goes "toes up".
If you do not back up your database, you have chosen recovery model "terminated" or "update resume".
The 3 that are offered by Microsoft are:
- Bulk-logged (rarely seen, and generally more advanced, so I'm skipping it)
- Simple
- Full
Simple vs. Full is very simply a matter of how much data you can afford to lose if the database is lost and you are forced to restore from your backups.
Simple recovery model: does not allow backing up the transaction log, so you cannot restore any of the data in it. You can only restore to the point of the Full backup and any one Differential you may have.
Full recovery model: You can restore from t-log backups (assuming you have them), right up to a specific point in time. Reduced data loss, assuming the backup files are valid.
When to use:
Simple: When you do not care about the data, such as in a Development or Test environment where you can regenerate from another source. Also useful in a static, read-only database (assuming you have a full backup).
Full: Pretty much any live production database that has Inserts, Updates and Deletes happening.
Switching from one to the other:
Simple to Full: Immediately take a Full or Differential backup
Full to Simple: No specific action required, other than verifying regular data backups are in place and working.
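In T-SQL, the switch and the follow-up look like this (MyDB and the paths are stand-ins):
-- Simple to Full: the log chain starts with the next full (or differential) backup
ALTER DATABASE MyDB SET RECOVERY FULL
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak'
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'   -- now meaningful

-- Full to Simple: nothing else required
ALTER DATABASE MyDB SET RECOVERY SIMPLE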
Maintenance plan considerations:
If you have both Simple and Full recovery model databases in your SQL instance, and you create a Maintenance Plan to back up data and logs, you may run into an issue (at least in SQL 2000) where the automated deletion of old t-log backups fails. Make two plans: one for Full and one for Simple. I have no idea if this issue still exists in SQL 2005.
I hope this is clear...please feel free to comment.
Kevin3NF
1/09/2008
I posted this in the microsoft.public.sqlserver.server newsgroup (response to questions about log file space):
If you build a bookshelf (physical .ldf file) and fill it up with books (transactions), it's full.
If you loan 5 books to a friend (backup the t-log), there is space available on the shelf, but the shelf size didn't change, correct?
If you buy another book (DML transaction), it goes where one of the others was.
If you fill up your bookshelf and then buy 3 more books, your only choices (besides stacking) are to expand the size of the shelf (grow the physical .ldf file) or add a 2nd one (MyDB_Log2.ldf) to the wall. Or return the books (failed transaction).
Chopping off the end of the bookshelf (shrinking the .ldf file) makes no sense, nor does building a shelf that can hold 1000 books when you'll never have more than 100 there...wasted wall space (disk).
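To watch the shelf in action (MyDB and the path are stand-ins):
DBCC SQLPERF(LOGSPACE)                                -- shelf size and how full it is, for every database
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'   -- loan out the books
DBCC SQLPERF(LOGSPACE)                                -- same shelf size, lower percent used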
Hope that helps.
Kevin3NF