How I Recovered from the Google Panda Slap

A while ago I wrote about some websites that were hit by a Google Panda update. One of our readers also took a beating, but has since made a full recovery and left some great comments about his experience. He shared so much I thought it deserved an entire blog post. So I asked him if he’d be willing to share his story with all of you. Thankfully, he obliged…

I am just a little guy with a hobby website that aims to help people cook Italian food like my Italian grandmother did. I grew up Italian American and was lucky enough to spend time with my grandmother and learn how to cook like she did. There are many grandma secrets to Italian cooking, which I reveal to help others.

I’ve worked hard on this website for over 10 years, slowly building unique content, and Google loved me. Google and I had a nice relationship. Then came the Panda in April 2011. That is when Google broke up with me. It slapped me hard in the face and just left me standing alone and heartbroken. I did not know what I said or did to upset Google so. Google just up and left me. I lost 80% of my traffic overnight. Poof!

Once I picked myself up off the floor, I started to research just what the heck was going on! It was like a nightmare that I know a lot of other webmasters have gone through. There was much speculation across the forums but mostly, everyone was clueless about the Panda. What to do?

It was unbelievable what I had to go through to recover. In the end all the hard work was worth it. But during the process I thought I was going to go crazy and just totally lose it! I almost gave up completely and just ditched the entire website, but then something inside me said, “No way am I going to let 10 years of hard work just disappear!” I felt like Rocky getting ready for the big fight. A complete underdog going against a champion!

The good news for me is I was officially released from the Panda in April 2012, almost exactly 1 year later.

Panda 2.0 is when I got slapped, and I was not released until Panda 3.5, on April 21st, 2012.

I have listed “some” of the steps I took during that horrible year of trial and testing to make this happen. Maybe this will help others and save them from Panda and Penguin hell.

What was so painful about this slap is that I felt I was unfairly slapped. After reading all the articles about what might cause a Google Panda slap, I realized I was doing none of those things. I had all original content and was not practicing any evil black hat SEO stuff.

In the end I did see what may have caused the slap. Part of it was Google’s fault and part of it was mine. (I will share what was my fault below.) One of the biggest problems I was running into was that as soon as Panda came out, Google could no longer tell who the original authors of content were, or at least was having a very hard time doing so. Spun articles and copied content had gotten so bad that Google really struggled to figure out who the original authors were. In many cases, Google was giving original author credit to newer articles which were copies of my original content! Wow!

So here we go, some steps I went through to get released from the Panda:

1.) I condensed 500 pages down to 70 content packed pages! – This was a massive undertaking that nearly took me out completely! Innocently, I had a lot of duplicate content because of the nature of my content and the way I was presenting it. I had a lot of recipe photos with commentary that were divided up into parts: part 1, tons of photos with commentary; part 2, more of the same; part 3, more of the same, etc. Each part was a separate page. So condensing all the separate parts into one long page was an important change.

2.) I don’t have a single page on my site with thin content – I carefully went through every single page on my website and threw out any pages that had thin content. Especially pages where the template navigation was more than the actual page content! Also see step 1 above. That helped me get rid of thin content as well.

3.) I shaved off a whole bunch of advertising, with no above-the-fold advertising – This definitely cost me a lot of income from the site, but better to work on the traffic and then figure out the income later. I have since been able to “carefully” add advertising back on the pages, but slowly and not too much. I was guilty of having too much advertising on my pages. Yes, greed was there.

4.) I worked feverishly to speed up ALL pages on the site – I used page speed testing tools extensively on every single page of my site, which helped me a great deal in flagging areas that were slowing things down.

5.) I cleaned up my old HTML and used CSS formatting for all my text and menus – This was a problem of my own doing. I had very messy code because the site was so old. I just kept updating along the way with different HTML editors, never really overhauling all the code. So I had a bit of work to do cleaning up all my old code. This also helped a lot in the area of speed.

6.) I added a custom 404 page and 301 redirects. (THIS WAS EXTREMELY IMPORTANT!) – For those who do not know about this: a custom 404 page sends a visitor to a friendly “not found” page. If they are trying to get to a page that no longer exists on the site, they are placed on a page that helps them navigate to the new content. This also helps Google quickly figure out which pages I have done away with. This was something I was lacking, and it was a major help! It was also extremely important because I had deleted so many pages. Hundreds of deleted pages! I needed to make sure Google figured that out quickly!
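As a rough sketch of how a custom “not found” page can be wired up (assuming an Apache server with an .htaccess file; the page name is just a placeholder, not the author’s actual setup):

```apache
# Serve a custom "not found" page for any URL that no longer exists.
# /not-found.html is a placeholder for your own helper page.
ErrorDocument 404 /not-found.html
```

One caveat: the custom page should still return a real 404 status code, so search engines know the old page is truly gone rather than moved.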

I also added a 301 redirect. A 301 redirect points everything to the non-www version of my domain or vice versa. In my case I picked the non-www version because that is what most of my backlinks were pointing to. This was a huge fix as well. I did not even know this was an issue until much studying up on the subject. Most shared webhosting packages keep both the www and non-www versions of the domain live, which produces duplicate content in the eyes of Google. Extremely important to get this fixed! You also want to 301 redirect your index.html page (or whatever your homepage file is) to just your root domain. Again, this was a duplicate content problem: the same homepage was reachable at two different URLs. The 301 redirect cures this problem. This is called “canonicalization”. Here is how Matt Cutts breaks it down: SEO advice: url canonicalization.

Some more help in regards to “Canonicalization” and “Duplicate Content”
What is Duplicate Content?
Google Webmaster Tools Duplicate Content
Google Webmaster Tools Canonicalization
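For readers on an Apache server, the www/non-www and index.html fixes described in step 6 might look roughly like this in an .htaccess file (a sketch only; example.com is a placeholder, and mod_rewrite must be enabled):

```apache
RewriteEngine On

# 301 redirect the www version of the domain to the non-www version
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

# 301 redirect /index.html to the root so the homepage lives at one URL only
RewriteRule ^index\.html$ http://example.com/ [R=301,L]
```

Other servers (nginx, IIS) have their own equivalents; the important part is that the redirect returns a 301 status, so the backlink credit is consolidated onto one preferred URL.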

Google is trying to make the internet smaller so it is more manageable in regards to indexing. If you think about it, you could cut the indexable internet nearly in half if everyone picked a preferred domain and redirected the other. I personally think this is one of their hidden agendas.

7.) I optimized every single image on the site – A laborious process that was worth the effort for faster loading pages.

8.) I found all the websites/blogs I could find that copied my content and had them remove it – I cannot explain how painful this was to correct. This took months of hard work! The biggest culprits were blogs created on GoDaddy servers. There was no contact information on the blogs, and I had to work directly with the webhost provider. It took several emails and a lot of patience for each blog to get the copied content removed. This was all necessary so I could get Google to recognize that I was actually the original author!

9.) I had to get very social! – I created a Facebook and a Google+ page for the website and have been active on both, heavily active on the Facebook page. I have a static website and was not about to redesign the entire site into a new blog format so there could be comment interaction, etc. So to get the static website social, I had to create social pages that were interconnected and/or associated with my static website and carefully add social buttons throughout the site. I had to figure out how much social to add. Too much of this “on page social activity” can slow down page load times significantly.

10.) Added no-index to many pages that I felt Google did not need to index – This was a trial and error process and also a lot of research to figure out what Google considers to be a page with thin content.
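On a static site, the usual way to do this is a robots meta tag in the head of each page you want kept out of the index (a generic illustration, not the author’s exact markup):

```html
<!-- Keeps this page out of the index while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```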

11.) De-optimized where I had over-optimized – I also worked hard on de-optimizing content where I had over-optimized in the past for SEO purposes. I was having great success with SEO, so naturally I got greedy and kept on optimizing. This was one of the things that got me slapped! I was too key-phrase dense on many pages. I had to fix this. Google now much prefers natural content.

12.) Added rel=”author” tags and linked to my Google+ account for original authorship – Matt Cutts explains the rel=”author” tag here: Matt Cutts of Google on rel=”author”. This was extremely helpful in getting Google to see me as the original author of my content. There is also some good information on this here: Google Webmaster Tools Authorship and Using Authorship to Stand Out in Google SERPs.
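At the time, one common form of this markup was a link from each page to your Google+ profile (a generic sketch; the profile ID below is a placeholder, not the author’s):

```html
<!-- Placed in the <head>; replace the number with your own Google+ profile ID -->
<link rel="author" href="https://plus.google.com/112233445566778899000/posts">
```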

At first my reaction to the Google Panda slap was anger and frustration. However, now being on the other side of this I am very grateful to Google for this. It forced me to overhaul my website from the ground up which it definitely needed.

Happy Cooking, Happy Times and Share The Love!   ~ Anthony


  • August 6th, 2012 at 5:27 pm Wesley LeFebvre said:

    Hey Anthony,
    Thanks for putting together this great post!

    In regards to item 7, what types of things did you do specifically to optimize your images?

  • August 6th, 2012 at 7:49 pm Anthony said:

    I used a combination of Photoshop CS5 and Dreamweaver CS5. Dreamweaver has a nice built in image optimizer. Between the two I can compress an image while being careful not to lose too much quality.

  • August 6th, 2012 at 8:55 pm ncomputing said:

    Thanks for the article.

  • August 7th, 2012 at 4:58 am Anthony said:

    I almost forgot. I have to give a shout out to “Webmaster World”

    I’ve spent a LOT of time on their forums when the Panda first hit and learned much! A very friendly and helpful community there.

  • August 7th, 2012 at 11:12 pm Raj Srivastav said:

    These are the basic steps of Google’s quality guidelines. We always need to follow them; then there is no need to worry about any updates from Google. Thanks for sharing the steps to recover from Panda penalization.

  • August 7th, 2012 at 11:22 pm Wesley LeFebvre said:

    Hi Raj,
    I mostly agree. But with some of these recent updates it sure seems like nobody’s safe.

  • August 8th, 2012 at 4:07 am Anthony said:

    Raj – There is nothing basic about Google being confused and mistaking copied content for the original, and seeing Google mistake your original content for copied. That is exactly what happened to me when the Panda first reared its ugly head. That was a horrific event which took me hundreds of hours to correct.

    Some of these steps are basic Google guidelines, yes. However, it is no basic thing when Google mistakes your website for a spammy site that has copied content and you have to convince Google they made a mistake.

    Also, there are basic guidelines, but there is a balance of all these steps that requires much trial and error.

    I have spent a solid year of extremely hard work to get released from Panda’s grip.

    This article is not about designing a website that follows Google’s “new” basic guidelines. (That’s easy.) It’s about what is required to get released from Panda once Panda has placed you in limbo. It has been labeled as being “Pandalized”. It’s even worse when you were already following the guidelines and still got slapped! Overnight I saw countless websites that had copied content (my content, word for word), and they were in a better position than I was. An obvious confusion on Google’s part.

    I agree there were a few mistakes I made along the way prior to Panda that may have contributed to the slap. But for the most part, Panda had flaws when it first arrived and made several mistakes in analyzing website content. I have learned a great deal from this Panda event and in the end have become a better manager of my original content.

    I just wanted to spell out some steps I took that successfully pulled me out of a nightmare that I thought would never end.

  • August 8th, 2012 at 1:46 pm Wesley LeFebvre said:

    Hi Tony,
    That is definitely one of my biggest gripes too. When this site was initially hit by Penguin, the content was getting outranked by several blatant scraper/spam sites. That was extremely frustrating. Luckily, I’ve been able to clean up several backlinks and am finally outranking most of them again. But there are a few out there I haven’t had any luck getting removed. I think it’s pretty crappy that I’m getting dinged for stuff out of my control.

  • August 8th, 2012 at 1:59 pm Anthony said:

    Yes, it can be very frustrating. Perseverance and tenacity are in order. Just keep writing letters and threatening legal action, etc. Eventually you can get them removed. The hard part is when there is no contact information on the website/blog that is doing the copying. Then you have to start emailing the webhost provider directly, and that is a long process. Especially if the servers are in another country! There was one blog that copied an entire recipe of mine in full. It was a blog with no contact info, and it was hosted on GoDaddy servers. It took me several emails and about 4 months to get it removed, but I did succeed in removing that thorn. No easy task! But so worth it to have it gone. I noticed the difference in position quickly after it was removed and Google indexed accordingly.

  • August 10th, 2012 at 5:53 am Carlos said:

    Hi Anthony,
    thank you very much for your post and for sharing your experience. I feel as if I were you. What you said is exactly what happened to 2 of my sites. I lost 80% of my traffic although I just had unique quality content and no black hat.
    The same as you, there are also a few sites copying my posts exactly (even things surrounding the text) and I cannot get them to remove it.
    Furthermore, I don’t consider this to be our task.
    Google wants us to ask this question when doing SEO: would I do this if search engines didn’t exist? Well, if search engines didn’t exist, I wouldn’t spend a minute trying to make others remove copied content. I do my work, and Google has to do its work, which is knowing who the original source is.

    But I guess this will happen only in my dreams. So I’ll have to keep fighting for my sites instead of creating good content :(

  • August 10th, 2012 at 11:58 am Anthony said:


    Keep creating good content and don’t give up. I almost threw away 10 years of work and gave up in despair, but then something inside me said fight the machine! I’m glad I did. Fight hard to get that copied content taken off servers. Use legal action letters if you have to. Be consistent. If you can prove it’s your content legally, they will have to take it down. If the webmasters are not responding to you, or there is no contact information at all, then do a WHOIS on their domain and find out who is hosting their website, then contact the webhost providers directly about the copied content. Most webhost providers have official complaint forms you can fill out. They will respond; you just have to be a thorn in their side about it until it’s gone. Do some research on the term “DMCA Complaint”. Lots to learn there. I used that a lot.

    Also work hard at adding author information to the code of your pages, as I talked about in step 12 of the article. That was crucial in finally getting Google to see that I was the writer of the content. Go watch the original Rocky movie with Sylvester Stallone for inspiration and then fight back. You are the underdog in this, but you can fight!

  • August 10th, 2012 at 12:25 pm Carlos said:

    Thanks for the tips and motivation Anthony. I’ll have to fight because, as you said, I don’t want to throw away years of work.


  • August 12th, 2012 at 8:35 pm Jon said:

    Wow, what a great, thorough post. As unfortunate as it is being hit by these updates, it is cool that website owners are being forced to straighten up their websites.

  • August 13th, 2012 at 6:44 am Anthony said:

    For those whom this may help, in regards to step 6 I noted above: here is a good, relevant article I just read this morning which reveals much on the subject of canonicalization.

    Here Is Why You Need To Manage Your Canonicals Right

  • August 18th, 2012 at 2:19 am Carlos said:

    Hi Again Anthony,
    one more question for you. After having sent emails to the webmasters of the websites that copied my content, and having received no answer (nor have they removed the content), I want to start legal action, or better, find a way to tell Google or some other institution that those websites have full posts copied from my site.

    Did you do this as well? Which institution did you reach out to? Where can I complain to Google? What I’m not sure about is: if I am the author of the original content posted on my blog, is that enough to have the copyright of that content, or does it need to be registered somewhere in order to have the copyright?

    Sorry for so many questions, but maybe you already did some of this.

    Thanks again

  • August 18th, 2012 at 7:24 am Anthony said:


    Forget about trying to tell Google. Impossible route. It is way better to get the content taken down; then Google only sees your version and deems you the original author. What worked best for me was contacting the webhost providers directly and showing them proof that I was the original author of the content. There are several ways to do this. The Wayback Machine is one of them. So you show the webhost providers proof that you are the original author, and then you give them the specific page that resides on their servers which has the copied content. If the webhost providers agree that it is copied content, they contact the webmaster and give them a warning to remove the copied content or have their account terminated. There were a couple of instances where the blog that was copying my content was taken down completely because they were violating a lot of agreements with their webhost. The webhost providers will work with you if you can prove you are the original creator of the content.

    You will also want to make sure you have proper formatting of your DMCA Complaint letter.

    There are a lot of DMCA Complaint letter templates out there that give you a great place to start. Here are some templates:

    Also there are always services that will do this for you for $$$. If you don’t have the time, this is a good option. Here is a good one to start with, even if for advice.

    Hope that helps.

  • August 19th, 2012 at 1:22 pm Carlos said:

    Great advice Anthony, thanks!
    I already got 1 blog that copied my content to remove the post ;-) The other ones haven’t.
    Do you have any experience with how to contact the person/webhost of a blog on Blogspot? Since it’s a Google property, I don’t find much info in the WHOIS.

    Thanks again for your support.

  • August 20th, 2012 at 4:55 am Anthony said:

    Carlos, I’ve never had to deal with Blogspot in regards to copiers. The WHOIS is usually my go-to when there is no contact info on the blog. There is usually a technical contact in the WHOIS. If not, one method that sometimes works is to leave comments on one or several of their posts regarding the copied content. Those comments get to the owners.

    If none of the above works, I would recommend starting with the official Blogger help forum:

    Post your dilemma on their forum, I’m sure you will get some answers there.

    Good luck and don’t give up!

  • August 23rd, 2012 at 6:36 pm Rene said:

    I recently put a simple 5-page pure HTML site on an exact match domain I had been holding on to, so that it starts aging and getting indexed, just in case my main site doesn’t recover from Penguin/Panda/manual action. I spent a couple of days just writing the homepage text; the other 4 pages were contact us, privacy policy, terms of use, and disclaimer. 5 days after the site went live, it had been completely indexed under another domain. Someone forwarded and masked an old domain to mine, so that it looked like the content was actually on their domain. So when searching for the cached version of my site on Google, it would show my content but cached under the other domain. Luckily I was able to add canonical tags to my pages, and Google recognized my page as the original content. It was a pain in the ass to diagnose, but I learned a GREAT lesson. Whenever I add new pages or new content to my sites, I go and fetch as Googlebot from WMT so that I have proof that the content was on my site first.

  • August 24th, 2012 at 3:45 am Anthony said:

    Hey Rene,

    Wow, a pain in the butt indeed! Sorry for your troubles there. The “Fetch as Googlebot” feature in WMT is awesome. I use it every time I add a new page just to make sure Google sees it, but I never thought of it as a tool to prove to Google it’s your page and where it originated. Glad I’ve been using that :-)

  • August 27th, 2012 at 9:38 pm Glenn said:

    Hmm. Spent hundreds of hours, deleted/redirected hundreds of pages, changed onpage layout, de-optimized, re-optimized, added Authority, worked Social Media more heavily into the mix. And then magically released after 1 year.

    How does one know what REALLY mattered or made the difference? Was it a timed penalty? (Many Google slaps are.) Was it really just a lot of ‘freshness’ signals, or the removal of some key negative flags holding down the site?

    Is this the kind of life we really want to be living?!? Slaves to infinite, dynamic changing variables with little immediate Cause-And-Effect?

    Personally, I’m stepping away from feeding this Monster. Goog can love or hate my content, layout, backlinks, whatever. There’s 1001 other ways and places to get traffic from – and life’s too short to place it all in the hands of one corporation.

  • August 28th, 2012 at 6:07 am Anthony said:


    I can totally understand where you’re coming from and respect your thoughts on this. One thing I have found, though, over the years: if you make Google happy with your website, all other search engines are pretty happy with your website as well. Using Google’s standards as a guideline helps your website across the board. Yes, there are 1001 other ways to get traffic, I agree, but you cannot deny the high volume that comes from Google and the market share of search they dominate. It just can’t be ignored. Well, hard to ignore anyway.

    The above steps I took were trial and error, with much monitoring along the way.

    The magic moment, I really believe, is when Google “finally” agreed my original content was really my original content. I think that is what released me in the end; however, all the other hard work played a big part in this saga.

    In Google’s defense, I was forced to make my website a better experience for the end user and that has helped me.

    Although I still say the first few Google Panda roll outs were highly flawed and had many mistakes in regards to fully understanding content.

  • August 29th, 2012 at 5:37 am Anthony Baker said:

    I just read this excellent article in regards to Duplicate Content & Canonicalization!

    Tutorial: How to find and fix duplicate content on your website

    It relates to step 6 in the article above. I thought I would place it here for further explanation of the whole Duplicate Content & Canonicalization issue and how to go about tackling the problem.

  • August 30th, 2012 at 5:51 pm Wesley LeFebvre said:

    Thanks, Anthony. I’ll check it out.
