Ooops! Was that me? (Blog Challenge)

We have all made mistakes in our careers, so I thought I’d share one of mine as a quick tip to others so that you don’t make the same one.

Everyone has their SQL Alerts set up, right? If not, I have included a script below, and here is the MSDN link to find out more (https://msdn.microsoft.com/en-us/library/ms180982.aspx).

alert-list

For those who have set up their alerts, how many of you have remembered to set the DELAY BETWEEN RESPONSES setting?

alerts

When I worked at the Port of Virginia, I was a little less experienced in SQL and didn’t notice this lovely little option. I, of course, failed to set it. Can anyone guess what happened? YEP, we got low on resources in the wee hours of the morning and SQL kicked off the Error 017 - Insufficient Resources alert. Thousands of emails were generated, which took down the Exchange server and caused some other issues as well. The worst part was that all the emails had to finish processing before we could delete them from the system. I think when all was said and done it had created well over 250,000 messages.

So the moral of the story is: pay attention to this tiny little option when you set up your alerts. Your Exchange admin will thank you for it.

Blog Challenge

oops

Do you have an “Oops, was that me?” story to tell? If so, share it using the hashtag #sqlmistakes and link back to this blog, so we can all learn from each other. I can’t wait to hear your stories.

Create Alert Script
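Here is a minimal sketch of creating an alert with a response delay, based on the sp_add_alert procedure documented at the MSDN link above. The alert name, the operator name, and the 900-second delay are placeholder values; adjust them, and the severities you cover, for your own environment.

```sql
USE msdb;
GO

-- Create the alert; @delay_between_responses (in seconds) is the option
-- discussed above that throttles how often the alert can respond again.
EXEC dbo.sp_add_alert
     @name                         = N'Severity 017 - Insufficient Resources',
     @severity                     = 17,
     @enabled                      = 1,
     @delay_between_responses      = 900,
     @include_event_description_in = 1;   -- 1 = include the error text in the e-mail
GO

-- Tie the alert to an operator so somebody actually gets notified.
EXEC dbo.sp_add_notification
     @alert_name          = N'Severity 017 - Insufficient Resources',
     @operator_name       = N'DBA Team',
     @notification_method = 1;            -- 1 = e-mail
GO
```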

 

Run Book, Run!!!

run-book

How many of you actually have a “Hit-by-the-Bus” handbook? What is that, you ask? It is a document that explains how to execute all your jobs and SSIS packages. In addition, I preface mine with all the key elements someone might need, like where passwords are stored, architectures, backup times, where backups are stored, etc., and then dig into the job steps. The purpose of this document is so that someone with some SQL skills could step in if needed. You never know when you will be hit by a bus or win the lottery and someone has to take over for you.

Important things to note:

  • Step by Step with Pictures
  • Diagrams – Pictures are worth a thousand words
  • Plain English: Do this, then this, because of this, and watch out for that
  • Jobs: Rerun information, what to do if a job fails, and what not to rerun (and when)
  • Make a HARD Copy

Here is an example:

SERVER NAME

JOBS

LoadEDIDataandValidate: Imports a file \\EDI_FTP\CUSTOM_HOLD_RELEASE\EDI35020110908.log of EDI records that were sent from Gentran to Server A and Server B. It then validates that Server A and Server B have posted those records to their systems. Alerts are sent when something does not post within 15 minutes or a record is in QUEUE status on Server B for more than 60 minutes. Server A and Server B data are kept separate on purpose; do not combine those tables. As of 3/9/2015, it also sends out a text message if more than 50 records have not been posted.

Schedule: Runs daily every 15 minutes between 2:16 and 11:21 am. This corresponds to 15 minutes after Gentran begins and ends its daily processing.

Steps: Executes the SSIS package EDI350Import.dtsx and then executes 2 stored procedures: jobValidateEDIServerAEDI350ServerB and jobValidateEDI350.

Rerun: Can be rerun at any time. Right-click the Agent job and choose Start Job at Step… There is only one step.
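For reference, the same rerun step can also be done in T-SQL. This is just a hedged sketch: the job name comes from the runbook entry above, and since the job has only one step there is no need to specify a step name.

```sql
-- Start the job from the beginning (equivalent to "Start Job at Step..." on step 1).
EXEC msdb.dbo.sp_start_job
     @job_name = N'LoadEDIDataandValidate';
```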

sample

Here are some other examples of rerun information (try to be as clear as possible):

Rerun: Can be rerun prior to 4 pm. If run after 4 pm, you’ll have to manually change the date (@pdate) of the data being pulled. Always verify that no partial data was brought into the table before rerunning; clear out any data that was loaded.

Rerun: Do not rerun. Manually load any missing data to Server X, using date_billed as the key field for the data pull.

Rerun: This job will fail if there is a duplicate XXX number. You’ll need to resolve the duplicate before you can successfully rerun. It can be rerun prior to 4 pm. If run after 4 pm, you’ll have to manually change the date (@pdate) of the data being pulled. Always verify that no partial data was brought into the table before rerunning; clear out any data that was loaded.

Why Share My Knowledge?

Don’t try to build job security into what you do. I know many who worry about giving up their knowledge to others. For some, having the sole “how to” knowledge gives them a sense of job security. While to a point that might be true, it also locks you into your current position. Many who hoard their knowledge never advance, because they have made themselves indispensable in their current position: “We can’t move them because they are the only ones who know about such and such.” Why put yourself in that position? If you can’t ever be replaced, you also can’t move up.

As a Lone DBA, I find this run book to be vital. It allows me to direct someone to the book and walk them through running anything I need them to in my absence. It allows me to take a vacation or a day off while giving others the tools to get things done.

Why is it important to have a hard copy?

I’ve found over the years that having tangible steps in hand to follow and make notes on helps those who have to cover for me. It’s very easy for them to grab a book off my shelf and follow steps 1, 2, and 3. It also gives them a place to take notes as they go through the steps, which I can later use to modify the documentation for better clarity.

If you don’t have a run book, I highly suggest you take the time to make one. Keep in mind a run book is only a helping guide; I automate as much error handling as possible and build in code to minimize the use of this book. However, in my opinion it is invaluable. The book gives someone else what they need to cover for you, and when that day comes that you win the lottery, you will have left everyone with great notes on how to run things.

Now, off to buy that lottery ticket. Wish me luck!

Hide and Group Columns in SSRS Using a Parameter

Ever had users come to you and request another version of a report just to add another field and group the data differently? Today was such a day for me. I really don’t like having multiple versions of the same report out there. So, I got a little fancy with the current version of the report, added a parameter, and then used expressions to group the data differently and hide columns. For those new to SSRS, I’ve embedded some links to MSDN to help you along the way.

Current Report

The report gives summarized counts by invoice date. It currently has a ROW group using date_invoiced, and the detail row is hidden from the user.

current-report

row-group-2

group-exp3

New Version

To complete the user request to have Item Codes and Descriptions added to the report, I needed to find a way to group the data by Item and show the Item columns without disturbing the version of the report currently used by many consumers.

To Do:

  • Add Parameter
  • Set Available Values
  • Set Default Values
  • Add New Columns
  • Change Visibility
  • Change Grouping to group data using parameter

Step 1: Add Parameter

add-para-4

 Step 2: Set Available Values

add-values-5

Step 3: Set Default Values – I want to make sure my current users simply get their existing version of the report, so I set the default to No (N).

add-default-6

Step 4: Next, add the columns. I was lucky that the fields the user requested (Item Code, Item Desc) were already part of the dataset used, so no additional coding was needed in the stored procedure.

add-fields-7

Step 5: Next, change the Visibility attributes. You want to HIDE the column when the IncludeItemDetails parameter is not Yes (Y). I did this for both item columns.

visibility-8

visibility-9

Step 6: Next, I needed to change the grouping. The report is currently grouped by date_invoiced only. To make the data total by Item, I needed to group by Item only when the IncludeItemDetails parameter is Yes (Y). I did this using an IIF expression: if IncludeItemDetails = Y, then group on the field value; otherwise use a constant (0) so the group has no effect. Again, I did this for both fields.

grouping-10

expression-11

espression-12

You will see it’s relatively simple to do, and it prevents a whole new report version from being created. For you beginners out there, it’s a very easy way to start minimizing the number of reports you have to maintain. Try it.

 

 

T-SQL Tuesday #84 – Helping New Speakers

Ok everyone, here goes my first crack at replying to a T-SQL Tuesday. For those that don’t know what it is, it’s a monthly blog topic hosted by a member of the SQL community. It was originally started by Adam Machanic (t | b).

This month’s topic, hosted by Andy Yun (t | b), is Growing New Speakers, which I find to be a perfect topic for me to leap off from, since this was my first year speaking and blogging.

How did I get started?

I 100% blame Derik Hammer (t | b), who at the time was running my local user group. After attending just one meeting, I was “volun-told” I would be presenting in August. Yep, my name was now on the speaking calendar and I hadn’t even thought of a topic, let alone ever contemplated speaking.

My First Steps to Presenting

After the shock wore off, I sat back and began to think of anything of value I could talk about. Since it would be my first time speaking, I really wanted a topic I could simply talk about, not necessarily a technical talk. Thus my Lone DBA talk was born. Everyone has something of value in their career to talk about; for me this seemed logical.

Simple Steps to Get Started

Where to begin is always the hardest part after choosing a topic. This was my approach. Of course there is a lot more to it, but getting this far is a huge step forward.

  • Jot down a list of things you want to talk about
  • Then put them in a logical order
  • Then write a sentence or two about each line item

Just taking the time to do this will get you going.

Don’t Be Nervous (HA! Yeah Right)

It’s very hard not to be nervous. The way I “try” to get around this is to strike up a conversation with some attendees prior to the start of the session while I am standing up front. After the session begins, I pretend that I am still having that one-on-one conversation with them. For me it creates a “friendly” atmosphere rather than a teacher/student one. Now, my biggest problem is talking fast; I try REALLY hard not to, but it’s bound to happen as I get excited about the topic. My point is nobody is perfect at speaking and everyone has their faults, so don’t let that discourage you.

Lastly

Start with your user group, listen to feedback, have someone else review your slide deck, and most of all enjoy it. There is nothing like a “speaker high”. Being able to share your knowledge and influence even one person is very rewarding.

Challenge Accepted

My life for the last 2 years has been a constant battle of putting out fires with system performance; finally, user complaints have made getting this resolved my top priority.

Let’s see how I tackled the problem…

Symptoms:

rubix4

  • Very high disk latency: spikes as high as 300,000 milliseconds (ms) were not unusual (see the latency query sketch after this list)
  • Average latency: 900 – 15,000 ms
  • Memory Pressure
  • Slow User Experience
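For context, a common way to see per-file latency like the numbers above is the sys.dm_io_virtual_file_stats DMV. This is just a generic sketch (not necessarily how these figures were captured), and the averages are cumulative since the last SQL Server restart, so interpret them with that in mind.

```sql
-- Average read/write latency per database file since the last restart.
SELECT  DB_NAME(vfs.database_id)                              AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_latency_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```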

Problem:

  • Bad hardware
  • Over-provisioned VM hosts (what happens on one VM affects the others)
  • Old NetApp SAN
  • No infrastructure budget for new hardware

Challenge: Make the system viable with no hardware changes or tweaks

Step 1: Brain Storming (in no particular order)

  • Reduce I/O
    • I can probably tune a ton of old stored procedures
    • I need to do a full review of all indexes
  • Reduce blocking
  • Investigate daily data loads
    • How is the data loaded?
    • Can it be improved?

rubx3

Step 2: Reduce I/O & Investigate daily data loads

After doing some research, I found that we were truncating 48 tables, with over 120 million records, every day as part of our morning load. The process was taking over 2 hours to complete each morning and would often cause blocking. During this time users would run reports and complain that data was not returning in a timely manner. So I thought this would be a great place to start.

I also noticed we were reloading 8 tables once every hour to keep them “real time” for reports. This resulted in a total of 9.6 million records being truncated and subsequently reloaded, taking approximately 17 minutes of every hour.

Solution: Implement transactional replication instead of the hourly and morning truncate-and-reload of tables.
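For those who have never set replication up, the pieces involved look roughly like the sketch below. This is only a hedged outline with placeholder names: it assumes a distributor is already configured, and in practice most of this is driven through the SSMS wizards rather than raw T-SQL.

```sql
-- Enable the source database for publishing (a distributor must already exist).
EXEC sp_replicationdboption
     @dbname  = N'SourceDB',
     @optname = N'publish',
     @value   = N'true';

USE SourceDB;
GO

-- Create the publication and its snapshot agent.
EXEC sp_addpublication          @publication = N'DailyLoadTables', @status = N'active';
EXEC sp_addpublication_snapshot @publication = N'DailyLoadTables';

-- Add one of the formerly truncate-and-reload tables as an article.
EXEC sp_addarticle
     @publication   = N'DailyLoadTables',
     @article       = N'EDI350',
     @source_owner  = N'dbo',
     @source_object = N'EDI350';

-- Subscribe the reporting server to the publication.
EXEC sp_addsubscription
     @publication       = N'DailyLoadTables',
     @subscriber        = N'ReportServer',
     @destination_db    = N'ReportingDB',
     @subscription_type = N'Push';

-- A push distribution agent (sp_addpushsubscription_agent) still needs to be
-- created and the snapshot generated; the wizards handle those steps for you.
```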

Outcome: Once implemented, disk I/O dropped drastically and disk latency fell to an average of 200 ms. The morning load time dropped from 2 hours to 9 minutes, and the hourly load went from 17 minutes down to 5 seconds. The disk latency is still not optimal, but it is better; best practice says it should be below 20 ms.

This solution was difficult to accomplish because of all the work that went into it. Once the replicated tables were stable, I first identified which stored procedures were utilizing those tables (I used Idera’s SQL Search for this). Then I changed each procedure to read the tables from their new location.

Next, I had to change any SSRS reports that had hard-coded calls to those old tables (note: don’t do this; always use a stored procedure). Finally, I looked for any views that called the tables and adjusted those as well.

In two weeks’ time, over 500 stored procedures, reports and views were manually changed.

It is probably worth noting that this was all done in production, simply because we do not have a test environment for this system. Yes, I did get a few bumps and bruises from missing a few table calls in stored procedures, typos, and nasty collation errors that arose. These were bound to happen, and some changes I was not able to test during the day. All in all it went really well. Having a test environment would have alleviated these, but not all of us have that luxury.

rubix2

The OOPS: Unfortunately, not long after I implemented the first couple of tables, I began to notice blocking. When I investigated, I found it was replication. I had forgotten a very important step which, thanks to a blog post by Kendra Little, I was able to quickly identify and solve: I needed to turn on Allow Snapshot Isolation and Is Read Committed Snapshot On. Her blog was a HUGE help. You can read all the details about why this is important here: http://www.littlekendra.com/2016/02/18/how-to-choose-rcsi-snapshot-isolation-levels/ . Once those two options were implemented, replication ran seamlessly and the blocking disappeared.
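In T-SQL, those two database options look like this (the database name is a placeholder for the subscriber database; switching READ_COMMITTED_SNAPSHOT on needs near-exclusive access, which is what the ROLLBACK IMMEDIATE clause forces):

```sql
ALTER DATABASE [ReportingDB] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Rolls back open transactions so the option change can take effect.
ALTER DATABASE [ReportingDB] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```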

Step 3: Index Review

First of all, I always preach as a Lone DBA: don’t waste your time reinventing the wheel; use what is already out there. So I turned to the trusted scripts from Glenn Berry (B|T). You can find them here: https://sqlserverperformance.wordpress.com/2016/06/08/sql-server-diagnostic-information-queries-for-june-2016/ . I am not going to supply snippets of his code; feel free to download the scripts directly from his site to review.

I started by reviewing duplicate indexes and deleted or adjusted them where needed. Then I went on to look for missing indexes (where some of the magic happens). This reduced the amount of I/O, because proper indexing lessened the number of records that had to be read.

Now, just because these scripts stated indexes were missing, I didn’t simply create them; I evaluated their usefulness and determined whether they were worth the extra storage space and overhead. Glenn’s script gives you a lot of information to help you decide on an index’s effectiveness. As you can see with the first one in the result set, if the index were added, over 45,000 user seeks would have utilized it and the query cost would drop on average by 98.43%. Again, I didn’t arbitrarily add this index just because it was in the list. Once I determined I would not be creating a duplicate or similar index on the table, and given the potential for better performance, it was added.

index
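If you want a feel for where those numbers come from before downloading anything, below is a generic missing-index DMV query (deliberately not Glenn’s script). Treat the output as candidates to evaluate, exactly as described above, never as indexes to create blindly.

```sql
-- Missing-index suggestions, roughly ranked by seeks * estimated impact.
SELECT TOP (25)
        mid.statement                 AS table_name,
        migs.user_seeks,
        migs.avg_user_impact,         -- estimated % reduction in query cost
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns
FROM    sys.dm_db_missing_index_group_stats AS migs
JOIN    sys.dm_db_missing_index_groups      AS mig
        ON migs.group_handle = mig.index_group_handle
JOIN    sys.dm_db_missing_index_details     AS mid
        ON mig.index_handle = mid.index_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```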

Oh one more OOPS…(why not, learn from my mistakes)

After going through the index exercise and adding indexes to the tables (on the subscriber), I lost all of them except the primary keys. Yep, I made one change to a replicated table, the replication reinitialized, and all my indexes were dropped. Needless to say, I was not a happy camper that day. Lucky for me, each index I added had been scripted and put into a help desk ticket, so I was able to go back through all my tickets and resurrect each index I needed. Now, to be smart, I have scripted all of them into one file, so I can re-add them all if needed in the future. I haven’t found a way around this yet, so if anyone has any information on how to, feel free to let me know.
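If you want to build that one file of index scripts straight from the subscriber rather than from old tickets, something like the sketch below can generate basic CREATE INDEX statements from the catalog views. It is only a starting point: it ignores included columns, descending keys, filters, and index options, so review what it produces before saving it.

```sql
-- Generate simple CREATE INDEX statements for nonclustered indexes.
SELECT  'CREATE NONCLUSTERED INDEX ' + QUOTENAME(i.name)
      + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ('
      + STUFF((SELECT ', ' + QUOTENAME(c.name)
               FROM   sys.index_columns AS ic
               JOIN   sys.columns       AS c
                      ON  c.object_id = ic.object_id
                      AND c.column_id = ic.column_id
               WHERE  ic.object_id = i.object_id
                 AND  ic.index_id  = i.index_id
                 AND  ic.is_included_column = 0
               ORDER BY ic.key_ordinal
               FOR XML PATH('')), 1, 2, '')
      + ');'                                    AS create_statement
FROM    sys.indexes AS i
JOIN    sys.tables  AS t ON t.object_id = i.object_id
JOIN    sys.schemas AS s ON s.schema_id = t.schema_id
WHERE   i.type_desc = 'NONCLUSTERED'
  AND   i.is_primary_key = 0
  AND   i.is_unique_constraint = 0
  AND   t.is_ms_shipped = 0;
```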

Step 4: Performance Tune Slow Stored Procedures (the fun part for me)

Armed with Grant Fritchey’s (B|T) book on execution plans for reference, I began tuning any stored procedure I was aware of that was taking more than 2 minutes to run. In total, I tuned about 77 of them; most were report related or part of data loads. I found many benefited from indexes being placed on temp tables within the procedures. Others were doing too many reads because of bad WHERE clauses or joins.

Another thing I ran across was functions used in WHERE clauses or joins. One example is date conversion functions being applied to both the From and To dates used in a BETWEEN statement. The functions caused each date value to be processed before being evaluated by the WHERE clause, causing many more reads than necessary. To work around this, I read the data in and converted the dates into a temp table, then did my JOINs and WHEREs on the already-converted data. Alternatively, depending on what the statement was, I converted the value once and placed it in a variable for later evaluation.
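A hypothetical before-and-after of the variable approach is sketched below. The table, the column, and the fn_JulianToDate/fn_DateToJulian conversion functions are made-up placeholders to illustrate the pattern, not objects from the real system.

```sql
-- Before: the function must run against every row before the WHERE clause
-- can be evaluated, so an index on jde_date cannot be used for a seek.
SELECT  invoice_id, amount
FROM    dbo.SalesLedger
WHERE   dbo.fn_JulianToDate(jde_date) BETWEEN @FromDate AND @ToDate;

-- After: convert the two boundary values once, then compare the raw column.
DECLARE @FromJulian int = dbo.fn_DateToJulian(@FromDate);
DECLARE @ToJulian   int = dbo.fn_DateToJulian(@ToDate);

SELECT  invoice_id, amount
FROM    dbo.SalesLedger
WHERE   jde_date BETWEEN @FromJulian AND @ToJulian;
```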

There were so many more things I came across and tuned, such as implicit conversions, table spools, and sorts that were not optimal. All of these were fixed with small code changes. I am not going into all of that because this post would get quite long, but you get the point.

Happy Side Effects: After cleaning up the tables and implementing replication, I actually freed up 300 GB of storage and greatly reduced our backup and restore times.

rubix1

Summary:

Things are running much better now; introducing replication reduced enough disk I/O to keep the system viable. Latency now hovers on average between 2 and 200 milliseconds, which is a vast improvement. I do, however, still see spikes in the thousands of milliseconds, and users still complain of slowness when they run large ad hoc queries within the application (JD Edwards EnterpriseOne). Unfortunately, that goes back to the hardware and the application itself, which are things I cannot improve upon. The good news is I am hearing a rumor that we will be installing a SimpliVity solution soon. I am very excited to hear that. I’ll blog again once that solution is in place and let you know how it goes.

This Idera ACE Has Been Busy

This year has been a whirlwind so far, thanks to the Idera ACE program. For those that don’t know what that is …

What is an Idera ACE? (According to Idera)

ace

“ACEs (Advisors & Community Educators) are active community members who have shown a passion for helping the community and sharing their knowledge. We help the ACEs pursue that passion by sponsoring travel to select events and offering guidance for soft skill training.”

Requirements to become an Idera ACE:

  • Enthusiastic members & leaders of the SQL community
  • Accomplished contributors to the SQL community
  • Good speaker, writer and presenter
  • Demonstrated a passion for educating fellow community members

Being an ACE has been both a very busy and a very rewarding experience for me. Idera has given me the means to share my knowledge as a Lone DBA and help others who are in the same predicament make the most of it. Since October of last year, thanks to the generosity of the ACE program and the exposure it has given me, I have started my own blog, presented at a total of 9 SQL Saturdays and 2 user groups, hosted 2 Idera #SQLChats on Twitter (links below), and participated in a SQL Hangout with Cathrine Wilhelmsen (B|T).

hangout

So far, I have given my Lone DBA session to over 200 SQL professionals, tweeted in topic-specific Idera #SQLChats with a combined 600+ tweet interactions, and had 200+ views on a SQL Hangout video chat.

One of the biggest talking points I try to convey is the power of networking and getting “virtual co-workers”. Making those connections with others in the community is vital when you are a Lone DBA. I speak on the importance of building relationships with people who can help you with their experience and expertise. Being an ACE has allowed me to vastly grow my network of “virtual co-workers” by letting me travel to so many SQL Saturdays. I’ve had the pleasure of meeting so many speakers and attendees. I make it a point at each of these events to make new co-workers and offer any help I can to others.

The biggest reward for me comes after my session, when attendees do their homework. Yes, I assign homework. During the session, I ask each attendee to take advantage of what the SQL community has to offer by getting on Twitter and beginning to grow their own personal network. Usually within a few days, many of them have created a Twitter account and sent me a tweet. I then take the opportunity to introduce them to the #sqlfamily. I get a kick out of sitting back and watching each of them get involved in the community because of me. It makes me giggle every time.

Of course, all good things must come to an end. My year as an ACE is wrapping up in the next few months, and I just wanted to take a minute to say thank you to Idera for a wonderful program. I encourage everyone to take full advantage of these types of programs and make the most of what they have to offer. I urge those that do to not only take advantage for themselves but also to pay it forward. Give back to the community in any way you can. We can all benefit from each other’s shared experience and knowledge. The ACE program has really motivated me to get more involved and contribute to the #sqlfamily.

Stay tuned for what comes next for me.

SQL Saturdays

 Washington DC

ABQ, New Mexico

Richmond, Virginia

Atlanta, Georgia

Pensacola, Florida

Louisville, Kentucky

Kansas City, Missouri

            User Groups

Richmond Virginia

Nashville Tennessee

            SQL Chats

Building Name Recognition

Building Your Career as SQL Developer or DBA

 

Summit Submission Feedback Response

I’m It Survival Tips for the Lone DBA – Level 100

(Not Accepted: Higher rated session selected)

Track: Professional Development

As others have done, I will also share the feedback from my submission to speak at PASS Summit, in hopes it will lend some more insight into the process.

Abstract:

Are you the only database person at your company? Are you both the DBA and the Developer? Being the only data professional in an environment can seem overwhelming, daunting, and darn near impossible sometimes. However, it can also be extremely rewarding and empowering. This session will cover how you can keep your sanity, get stuff done, and still love your job. We’ll cover how I have survived and thrived being a Lone DBA for 15 years and how you can too. When you finish this session, you’ll know what you can do to make your job easier, where to find help, and how to still be able to advance and enrich your career.

Topic: Handling High Stress Situations

Prereqs:
None

Goals:

  • Show how to manage the people you work with (boss, developers, etc) to control expectations around your life and environment.
  • The importance of tools and how to build out the best tool set to support you in your job.
  • Discuss tips on building out your support resources (people, blogs, etc) to help you get through your day.

Feedback:

  • This is more related to dba track rather than prodev. Also is survival really career development? Many would say that working 15 years as a lone dba could equate to failure in some peoples eye’s and I would struggle to want to see this session based upon info provided.
  • Interesting topic; 1st/2nd/3rd person tense shift -bad. Borderline PD topic.
  • I like the title. Good topic and goals. I’d like to have more details in the abstract of what content to expect.
  • Well written abstract with clear goals and a well-developed outline. The topic is one that should appeal to a large audience. The title and abstract are catchy. Overall a really good abstract. Sounds like a session I would enjoy attending.

My Thoughts:
Honestly, I was a little taken aback by the first comment. I found it insulting and not helpful. I am not sure how working as a Lone DBA for 15 years can be seen as a failure, especially when those of us who do it manage the workload of multiple people by ourselves. After considering it, I forwarded the comment on to PASS as being inappropriate and unconstructive. I was pleasantly surprised at their response, and I give kudos to all the hard work that goes into reviewing the comments before they are sent out.

Secondly, I fully understand how some would feel that this is not a Professional Development session; maybe I should have put it under Database Administration. I still have mixed views on that. In any case, I have found this session to be well received, and I always have 15-25 in attendance at SQL Saturdays. Regardless of the feedback, I will continue to submit it to SQL Saturdays and to Summit next year. There are many Lone DBAs out there, and I will continue to lend them a hand by sharing my 15 years’ experience with them.

 

SSRS Report Won’t Render in VS Preview

I love getting a new laptop, but getting all the software reinstalled and making sure everything works can be trying. Last week, I was lucky enough to get a new one and spent two days getting it set up just right. At least, so I thought… once I started working on it, of course, BAM, I hit my first roadblock: Visual Studio, using SQL Server Data Tools, would not render any reports in the Preview tab.

Let the troubleshooting commence!

  1. Error Message? No help… gives me nothing useful

Capture

  2. Can I deploy the report to SharePoint and view it? (We use SSRS integrated mode.) Success!! This leads me to believe the issue probably lies on my local machine.
  3. Test datasets? Can I return data from my query or stored procedure connection in Query Designer? Yes. Did I test all my datasets? Yes.

Capture2

  4. Should I try to uninstall and reinstall? So I did just that. After 2 hours I was finally able to test, and guess what? NO GO!
  5. Did I install all the service packs for VS? Missed one, so I installed it and tested. Still no luck.

vas

—TIME TO TURN TO MY VIRTUAL CO-WORKERS ON TWITTER #SQLFAMILY—-

  6. Try running Visual Studio as Admin (suggested by fellow Twitter tweep, Martin Schoombee @sqlmartin). Tried… yep, no difference.
    Capture5 Capture4
  7. Finally, I was given a suggestion to delete my shared data sources and re-add them (suggested by fellow Twitter tweep, John Morehouse @SQLRUS).

I deleted the shared data source for the report I was testing and re-added it. Hit PREVIEW and BINGO, IT WORKS!

So now to see WHY???

Looking over all my data sources, I noticed that any of them that used “SQL Authentication” had the user blanked out. Any reports that used Windows credentials worked; of course, the first 5 in my project were all SQL Authentication, just my luck. So instead of actually deleting and re-adding all 30 shared data sources in my project, I was able to go through and just re-enter the SQL user names and passwords.

Double-click on the shared data source

Go to Credentials

If SQL Authentication, re-enter the user name and password

Click OK

Capture3

Questions still remained as to why my datasets tested OK and returned data. My guess is that it was using my network credentials to connect to the data source upon execution. That’s my only explanation, and the reason it didn’t dawn on me to check the shared data source connections. The second question is why the user names were wiped out. My assumption is that they are stored locally and were not carried over to the new laptop.

Since this was an interesting mystery, I figured I would write a quick blog post so anyone else who hits this issue has a reference. Hope it helps.

The Shield

small shield

How many of you are known as the “Grumpy DBA” or have a bad reputation with users because you are always saying no or making them wait? I know many DBAs who have this reputation. To avoid it, I use my manager as a shield, and I suggest you do too. As a Lone DBA with an extremely full plate, I learned that having that shield is necessary. It prevents me from being seen as the bad guy and protects me from work overload.

We all experience what I call “drive-bys”, when people ask for stuff on the fly. Telling someone “no” while they are waiting in your office can be hard to do and can reflect poorly on you. So how do you avoid that? While you probably cannot prevent the drive-by, you can fix the perception the user has as they walk away. When drive-bys occur, I take time to listen to the user’s needs, let them know I will look into it, and then follow up with my manager without giving a yes or no to the work. I’ve found this to be not only the best way to keep from becoming a “yes man” and trying to fulfill every request, but it also keeps me from having to say no.

Using your manager as a shield puts management of the workload on their shoulders instead of your own. This, in turn, keeps them apprised of the workload and prevents your plate from getting too full, without creating negative user perception. My manager has no issue saying no to users or prioritizing requests appropriately. Doing this removes you from being the bad guy and prevents the opinion that the users’ needs aren’t important to you.

The key to maintaining a healthy user relationship is making sure their needs are heard and that you are doing your best to give them what they need to be effective at their jobs. It’s easy to become the Grumpy DBA when you’re forced to be the naysayer. With my shield in place, I can tell users that I passed the request along and that their work is being prioritized. If they have any questions, they can follow up with my manager to see where their request stands.

So far this works well for me as a Lone DBA, and it has become vital in preventing me from becoming overworked, overwhelmed, and burnt out. If you don’t already have a shield in place, I recommend talking to your manager and seeing if you can work toward one.

Good luck!

SQL Family: The Wonder Years

Last week, Bill Wolf, aka @SQLWareWolf, and I somehow got onto the topic of high school pictures. So, in jest, I decided to post mine on Twitter with the hashtag #SQLHSPics and challenged others to do the same, only really expecting @SQLWareWolf to respond in kind. I was floored by over 100 picture responses from #SQLFamily. Many people went searching through attics and yearbooks, called relatives, and went to other great lengths to be part of it. As always, the response was heartwarming and hysterical, to say the least.

Tweethspics

The reason I am taking the time to blog about it is to reiterate how great it is to be part of this amazing community of SQL professionals. If you’re not already involved, I encourage you to get involved. These wonderful people not only provide me with mentoring, education, laughter, and mental breaks, but also a true sense of family. Not many know this, but I am going through some big things in my life, and that week was more difficult than most. The #SQLFamily unknowingly helped me get through it with a smile, and I am more grateful than you know.

Exhibit A: David Klee, @kleegeek (our winner for most laughs, re-tweets, and memes by far)

THEN &  NOW

CaUAoeVWQAAQLLl

klee

 

 

 

 

 

 

 

 

After this picture was posted, it led to a slew of responses and new hashtags, including #myfirstklee, which showed pictures of people’s reactions to David’s picture.

Capture Capture2 Capture3

react2 react react 3

Love to all my #SQLFamily and thank you!

Where else would you find highly professional people posting pictures of their most awkward growing years for us all to comment on freely?

You know I can’t end this without posting some of the pictures from that week!

Enjoy!

Hs1 hs2 hs3 hs4 hs5 hs6 hs7 hs8 hs9 hs10 hs11 hs12hs13hs14hs16