SEO Audit for Personal Injury Lawyers

We use a suite of tools that makes the process easy to follow and lets us eliminate issues one by one without missing anything. We’re going to break that process down step by step so you can follow along. Since every site is different, I’m going to walk through an example to show you how we do our personal injury lawyer SEO audits. This should give you some insight into how we approach them. Here’s the basic list of what we do:

  1. Go through our SEO Audit template.
  2. Run a SEMRush site audit.
  3. Run a DeepCrawl site audit.
  4. Create a Screaming Frog project.

I’m not going to go through the SEO Audit template on this page because I’ve included notes in each step, and trying to cover it with screenshots and written instructions would double the size of this page. If anyone really wants more detailed instructions for working through it, let me know in the comments and I’ll create a separate page.

With that being said, let’s get started.

I wanted to use an example of a site that I’m not familiar with, so I randomly chose Sacramento as a city, went to Valentin.app, and set my location to Sacramento. I searched for “personal injury lawyer” and the #10 result was AutoAccident.com. It’s a great domain that caught my eye, so I chose it for the example.

So now that I know what site I’ll be doing an SEO audit on, I’m going to head over to SEMRush.com and create a new project for it.

SEO Audit Create Project SEMRush

Once I have the project created, on the following screen I’m going to go to ‘Site Audit’ and click the ‘Set Up’ button.

On the following page the only setting I change is the maximum number of pages to crawl; the default is 100, so I change that to 10,000.

Now SEMRush is going to do its thing and start crawling. This process can take 10-15 minutes, or even longer for larger sites.

So while that’s running I’m going to fire up Screaming Frog, which is a desktop program, as well as DeepCrawl.

I should make it clear that Screaming Frog and DeepCrawl (which I’ll get into later) aren’t strictly necessary, as SEMRush covers the same functionality for the most part. I use Screaming Frog because I’ve been using it for close to a decade and I like its very basic interface, but also because it never hurts to have multiple data sources. With that being said, if you’re a personal injury lawyer doing SEO for your own website, you probably don’t need more than one. I’d go with SEMRush because of all the other functionality it offers, and you can download Screaming Frog since it’s free for the first 500 pages of a site. DeepCrawl.com also has an inexpensive option at $14 with a free trial if you want to run it once and see whether you get different results.


As shown in the screenshot above, you enter the URL of the site and hit ‘Start’. Where the button says ‘Pause’, it would have said ‘Start’ had I remembered to take the screenshot before clicking it.

I’m actually kind of shocked. It’s not uncommon to see a site health score in the 40-60% range, so the fact that this random example came in at 82% is a lot higher than I was expecting. However, this is just a walkthrough, so it will still serve its purpose. As you can see there are 14 errors, which is something we definitely want to take care of. The 2,860 warnings we definitely want to look at and see what’s causing them, but whether or not we take action depends. The notices are less urgent, but again, we’re going to take a look and see what the issues are.


So now let’s just work our way down the list.

The first error is for pages that couldn’t be crawled due to DNS issues. When I click on it, it shows me the pages:

If you look at these URLs you’ll notice they all have something in common: they’re all on a blog subdomain, i.e. blog.autoaccident.com. When I go to their site and click on Blog in the main navigation, it takes me to another site: sacramentoinjuryattorneysblog.com. So what I’m going to assume in this case is that they used to have their blog hosted at blog.autoaccident.com, then switched it to the sacramentoinjuryattorneysblog.com domain, but they still have stray links on their site pointing to the old blog. There are two main things I’d want to do about this:

  1. Remove those internal links to the old blog and update them to point to the new blog.
  2. Go through and make sure all of the old URLs at blog.autoaccident.com forward to the new blog, so that if there are links from other sites they don’t end up on a broken page. By forwarding these URLs to the new blog you pass the link juice along to it (see the quick check sketched below this list).
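
If you want to verify that forwarding yourself rather than spot-checking URLs by hand, a short script will do it. This is a minimal sketch, assuming you’ve already exported the list of old blog URLs from your crawler; the paths below are hypothetical placeholders, and it only looks at the first hop of each redirect.

```python
# Rough check: does each old blog URL return a 301 pointing at the new blog?
# The URL list is a placeholder -- swap in the export from SEMRush/Screaming Frog.
import requests

OLD_URLS = [
    "http://blog.autoaccident.com/some-old-post.html",     # hypothetical path
    "http://blog.autoaccident.com/another-old-post.html",  # hypothetical path
]
NEW_BLOG_HOST = "sacramentoinjuryattorneysblog.com"

for url in OLD_URLS:
    try:
        # Don't follow redirects automatically -- we want to see the first hop.
        resp = requests.get(url, allow_redirects=False, timeout=10)
    except requests.RequestException as exc:
        print(f"{url} -> request failed ({exc})")
        continue
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and NEW_BLOG_HOST in location:
        print(f"OK  {url} -> {location}")
    else:
        print(f"FIX {url} -> status {resp.status_code}, Location: {location or 'none'}")
```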

I’m going to take care of this using Screaming Frog, so for now I just make a note of it and keep moving to the next issue.


So this error is telling us that these two attorney profiles have the same meta description. On a DEFCON scale this is about a 0.01. If you have multiple practice area pages with the same meta description, I’d say that’s more like a 1 out of 5. Meta descriptions aren’t a direct ranking factor, but they act as a ranking signal because they can heavily affect your CTR. When it’s attorney profiles, though, it’s really a non-issue.

I would however still fix this because (a) it takes zero effort to write one new meta description and (b) the goal is to get the site’s health as close as possible to 100%. If you leave the quick wins behind, you’re shooting yourself in the foot.
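
If you want to see every duplicate meta description on the site at once instead of clicking through them one at a time, you can group pages from a crawl export. Here’s a minimal sketch, assuming a CSV export with “URL” and “Meta Description” columns; the file name and column names are placeholders, so adjust them to whatever your tool actually exports.

```python
# Group pages by meta description to spot duplicates in a crawl export.
# "crawl_export.csv" and the column names are placeholders.
import csv
from collections import defaultdict

pages_by_description = defaultdict(list)

with open("crawl_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        description = (row.get("Meta Description") or "").strip()
        if description:
            pages_by_description[description].append(row.get("URL", ""))

for description, urls in pages_by_description.items():
    if len(urls) > 1:
        print(f"{len(urls)} pages share this meta description: {description[:60]}...")
        for url in urls:
            print(f"  {url}")
```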

The next issue, redirect chains, is something I see happening all of the time, especially on older sites. You made your site in 1998 and then did a redesign every five years, changing your URLs each time: the first iteration redirects to the second, the second to the third, the third to the fourth, and so on. This is definitely an issue you want to take care of, and to do this I prefer to use the plugin called Quick Redirects. In a spreadsheet I make a column of all of the URLs from the first iteration’s URL structure and map those to the current URL structure. Then I repeat that process for every previous iteration so they all map to the current URL structure, and then import the whole list.

Important to note: the redirects only work for the exact URL path. So if you enter https:// and someone links to the page with http://, it will create a redirect chain. The same goes for the trailing slash, blog.autoaccident.com/ vs blog.autoaccident.com. As a best practice I include all variations of the old structures to make sure people are redirected in a single hop rather than through a chain.
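
To cut down on the spreadsheet busywork, here’s a rough sketch of how I’d generate those protocol and trailing-slash variations programmatically before importing them. The example mapping, output file name, and column headers are just placeholders; check the import format your redirect plugin actually expects before using something like this.

```python
# Expand each old URL into its http/https and trailing-slash variants so
# every version 301s straight to the new URL (no chains).
import csv
from urllib.parse import urlsplit

REDIRECT_MAP = {
    # hypothetical old -> new mapping; pull the real one from your spreadsheet
    "https://blog.autoaccident.com/four-elements-of-negligence":
        "https://www.sacramentoinjuryattorneysblog.com/four-elements-of-negligence",
}

def variations(url):
    """Yield the http/https and trailing-slash variants of a URL."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"
    for scheme in ("http", "https"):
        for suffix in ("", "/"):
            if path == "/" and suffix == "/":
                continue  # don't double up the bare root slash
            yield f"{scheme}://{parts.netloc}{path}{suffix}"

with open("redirects.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["old_url", "new_url"])
    for old_url, new_url in REDIRECT_MAP.items():
        for variant in sorted(set(variations(old_url))):
            writer.writerow([variant, new_url])
```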

In this example the redirect chain doesn’t even make sense: it’s the nursing home negligence page being redirected to the wrongful death page. I always make sure that redirects are done on a page-by-page basis. Backlinks pointing to your nursing home negligence page pass on less juice if they end up on a page about wrongful death, so it’s definitely worth spending the time to set up your redirects page by page instead of sending all 404s to the home page.


Last but not least we’ve got this one, where somebody accidentally put a period at the end of the URL, so it’s autoaccident.com./ instead of autoaccident.com/. This is another error I’ll take care of with Screaming Frog later on.

Now let’s move on to the warnings.

As you can see there are quite a few warnings, but at a quick glance I can see that it’ll be fairly easy to eliminate most of these.

  1. 1,544 issues with uncompressed JavaScript and CSS files.
    I’m not going to get into how to fix the first issue because with uncompressed JavaScript and CSS, and other page speed related issues, the fix can really vary from site to site. You can use a plugin like Smush, but most of the time the files with issues are hosted externally (see the quick check sketched after this list to confirm which files are actually uncompressed). If you love this type of stuff then by all means knock yourself out, but for the rest of us I’d recommend hiring someone on UpWork or a similar freelance site. Normally $50-$100 can get your site loading lightning fast and save you hours, if not days, of frustration.
  2. 1,268 pages have duplicate H1 and title tags.
    The next issue is page titles and H1 tags being the same, and without even looking at the pages, just by the sheer number of warnings, I can tell this is a theme issue. A lot of themes will use your page name as both the page title and the H1. I’d make the page titles unique as part of our on-page optimization and that would eliminate all of these warnings. Of course that’s easier said than done, but again, if you’re a personal injury lawyer doing your own SEO, I can’t stress enough the importance of doing keyword research and giving your pages unique titles. I’ve increased rankings for clients from page 3 to #1 overall just by changing their page titles.
  3. 11 pages have underscores in the URL.
    Moving along: underscores in the URLs. I’m a bit of a stickler when it comes to these things. I see a lot of gurus saying you shouldn’t change your site’s URLs, but I could give you example after example of changing URLs and seeing major changes (both good and bad) in the SERPs. If I think a URL structure is holding back the site, I’ll make wholesale changes. I might be white-knuckling it, because after all I’m human, but no guts, no glory. Let’s go!
    Back to being a stickler: when I was younger and dumber (early 2000s) I spent more hours than I care to admit in ‘flame wars’ arguing with people about how hyphens are better than underscores in URLs. I’d also argue that using pipes | in your titles was a huge mistake and that hyphens were the only way to go. I no longer lose sleep over such things, in large part because of how much smarter Google’s algorithm is, but at the end of the day underscores are ugly and have no place in my URL structures. Of course, make sure you forward the old URLs on a page-by-page basis, but this is definitely something I’d take care of.
  4. 10 pages have too much text within the title tags.
    Again, too much text in the title tags would be taken care of during our on-page optimization as we go through and write unique title tags for every page on the site.
  5. 9 links on HTTPS pages lead to an HTTP page.
    When secure (https) pages link to non-secure pages, it’s usually in the body of the page or post. Go to each page, update the hyperlink to the https version, and you’re golden.
  6. 7 pages don’t have enough text within the title tags.
    Again, titles with not enough text would be taken care of when we do the on-page optimization.
  7. 6 external links are broken.
    Broken external links usually happen when you cite a source in a page or post and that site no longer exists. I’d look for a similar source, but if there isn’t one then just remove the link and mention the source in writing.
  8. 2 external images are broken.
    Broken external images are a weird one. I wouldn’t expect them to be hotlinking images, but again, I’d remove those and host the images on the site itself.
  9. 2 outgoing internal links contain nofollow attribute.
    Outgoing internal links being nofollow is strange, so I took a look, and it’s because they copy/pasted a press release from a press release website and it contained nofollow links. Not a capital offense in my book, since we’re all guilty of silly mistakes, but this is a good example of why you do regular site audits and stay on top of issues like this. A nofollow link means you don’t trust the link or don’t vouch for it. Using nofollow on an internal link is not a good look, so I’d definitely want that taken care of ASAP.
  10. 1 page has low text-HTML ratio.
    The last warning, about low text-to-HTML ratio, is almost always, in my experience, caused by pages in a foreign language. A client of ours has a Chinese version of his practice area pages and these SEO tools all give us hell for them, so I ended up just ignoring those pages.
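
For the first warning above, here’s the quick compression check I mentioned. It’s a minimal sketch, assuming you have a list of script and stylesheet URLs from the crawl export; the asset URLs shown are hypothetical.

```python
# Which JS/CSS files come back without gzip/brotli compression?
# The asset URLs are placeholders -- use the list from your crawl export.
import requests

ASSET_URLS = [
    "https://www.autoaccident.com/wp-content/themes/example/style.css",  # hypothetical
    "https://www.autoaccident.com/wp-content/themes/example/script.js",  # hypothetical
]

for url in ASSET_URLS:
    resp = requests.get(url, headers={"Accept-Encoding": "gzip, br"}, timeout=10)
    encoding = resp.headers.get("Content-Encoding", "none")
    status = "compressed" if encoding in ("gzip", "br") else "NOT compressed"
    print(f"{status:>14}  ({encoding})  {url}")
```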

That’s it. In this example we pounded through the issues rather quickly, whereas in the real world things like redoing titles would take much longer. But if you start with the quick wins first, just by eliminating the errors and most of the warnings we’d probably have this site over 90% health by now, and it only takes a couple of hours. We’re not going to stop there, though; let’s check out the notices.

  1. 2,964 URLs with a permanent redirect.
    URLs with a permanent redirect are problematic because if they’ve been permanently redirected, they shouldn’t be linked to on your site (these crawlers only crawl pages that are linked from other pages). With an astronomical number like this, almost 3,000, I would assume the links are right in the navigation, and one quick change would eliminate a lot of these notices. If that didn’t do the trick and they were all ‘in content’ links, I’d do a find and replace in the database, because updating 3,000 links one by one is a task for the birds.
  2. 1,372 links to external pages or resources returned a 403 HTTP status code.
    This seemed excessive to me as well. I checked the error and it’s because they’re linking site-wide to a site called milliondollaradvocates.com. That site loaded fine for me, so for now I’d ignore this notice and assume it was down for the few minutes the site was being crawled. If it kept returning the error and it were my client’s secondary site, I’d try to troubleshoot why the crawler was getting a 403. If it weren’t my client’s site and they were linking to it for some other reason, I’d just ignore this notice (though I’d probably try to convince them to remove the traffic leak first).
  3. 826 pages have only one incoming internal link.
    Pages that have only one incoming internal link can be an issue if they’re important pages. For example, if you’re a car accident lawyer and your car accident practice page only has one incoming link, that would be a major issue. In this case, they link to vCards of their attorneys on the attorney profile pages, and that’s what’s triggering the notice. I’d ignore these.
  4. 649 pages need more than 3 clicks to be reached.
    Taking more than 3 clicks to reach a page is similar to the notice above. If they’re important pages, then I’d do something about it and make them easier to reach from the homepage. However, if they’re not important pages but I can’t remove them, I’d ignore this notice.
  5. 217 outgoing external links contain nofollow attributes.
    217 outbound links having nofollow is a bit of an issue. If they’re personal websites of people leaving comments, sure, I’d leave them. However, if they were nofollowing all of their sources in an effort to hoard their link juice, I’d remove the nofollow. I wouldn’t put their previous SEO company on blast over something like this, because while it’s a poor practice, it’s most likely a remnant of simpler days circa 2010-2015.
  6. 68 links on this page have non-descriptive anchor text.
    Links with non-descriptive anchor text are a user experience issue in my opinion. What this means is that there are a bunch of links on the site with anchor text like ‘here’ or ‘click here’. If it makes sense the way it’s done, I wouldn’t change it, but here it’s a bunch of instances of ‘In order to contact us, click here.’ I’d change that so it reads ‘Visit our contact page.’ with ‘contact page’ being the anchor text.
  7. 27 orphaned pages in sitemaps.
    Orphaned pages are pages that aren’t linked from anywhere. If they’re old drafts or leftover pages, I’d remove them. If there’s a valid reason for them to exist, I’d exclude them from the sitemap. I can’t think of a scenario where you’d want an orphan page in your sitemap.
  8. 23 pages are blocked from crawling.
    Pages being blocked from being indexed/crawled is somewhat normal. For example, on your blog, if you have categories with pagination like site.com/blog/category/page/2, I’d exclude those from being indexed and crawled. In this example, strangely enough, it’s pages with good content on them, like “What’s an ankle sprain?”. I checked to see if it was duplicate content and it’s not. I didn’t dig too deep, but this is most likely an error; someone dun goofed, and this is a big one. (There’s a quick way to confirm what robots.txt is blocking, sketched after this list.)
  9. 4 pages have more than one H1 tag.
    Pages having more than one H1 tag is something I’d fix during on-page optimization. You only ever want one H1 tag per page. You can just go in and change one of the H1s to an H2 and the problem is solved.
  10. 4 issues with blocked external resources in robots.txt.
    Blocked external resources is another weird one. They embedded a couple of TED talks on a page. I checked the robots.txt and don’t see how the resources are being blocked, so I’d need to figure this out at a technical level, which I’m not going to do right now. My hunch is that it goes back to the same issue as the nofollow links to external sites: someone was trying to be greedy with the link juice.
  11. 4 links have no anchor text.
    No anchor text is just an HTML error. I’d definitely fix this because there’s no reason not to.
  12. 2 subdomains don’t support HSTS.
    HSTS is just a setting; again, I’d fix this because there’s no reason not to (a quick way to check which subdomains are missing the header is sketched after this list).
  13. 2 resources are formatted as page link.
    Resources formatted as a page link is just a silly HTML error. Someone linked to an image the wrong way, so I’d fix this because there’s no reason not to.
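
For the blocked pages in notice 8, here’s the quick robots.txt check I mentioned, using Python’s built-in parser. It’s a sketch; the page path is made up, so plug in the URLs your audit actually flagged.

```python
# Confirm whether specific pages are blocked for Googlebot by robots.txt.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://www.autoaccident.com/robots.txt")
robots.read()

PAGES = [
    "https://www.autoaccident.com/what-is-an-ankle-sprain.html",  # hypothetical path
]

for page in PAGES:
    allowed = robots.can_fetch("Googlebot", page)
    print(f"{'allowed' if allowed else 'BLOCKED'}: {page}")
```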
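
And for the HSTS notice (12 above), the fastest way to see which subdomains are missing the header is to look at the response headers directly. Another minimal sketch; the fix itself is a server configuration change, and this just tells you where it’s needed.

```python
# Which subdomains send a Strict-Transport-Security (HSTS) header?
import requests

HOSTS = [
    "https://www.autoaccident.com/",
    "https://blog.autoaccident.com/",
]

for host in HOSTS:
    try:
        resp = requests.get(host, timeout=10)
    except requests.RequestException as exc:
        print(f"{host} -> request failed ({exc})")
        continue
    hsts = resp.headers.get("Strict-Transport-Security")
    print(f"{host} -> {'HSTS: ' + hsts if hsts else 'no HSTS header'}")
```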

Okay, so now I went back to Screaming Frog and it didn’t find the blog.autoaccident.com or the autoaccident.com./ links. SEMRush didn’t say what their referring URLs were, so instead I used the DeepCrawl.com audit. You just pop in the URL and make sure the crawl limits are high enough to cover all pages on the site. Once it’s done running, it takes you to this page:


DeepCrawl found the URLs and shows the pages that are linking to them, so I clicked on the Failed URLs and it takes you to this page:

The “URL” is the URL on the site that’s causing the problem, and the “Found At URL” is the page that links to that problem URL. I’m most curious about the blog.autoaccident.com URLs, so I’m going to look into the one found at autoaccident.com/airplane.html, which links to the “four elements of negligence” page on the old blog. I hit Ctrl+F, searched for “negligence”, and the second or third link was “negligence still occurs sometimes”.

And there we have it: we can see where the links to the old blog are. Now that I’ve confirmed my suspicion that it’s random links scattered throughout the content, I’d go to each page and update the links to the new blog one by one, again making sure the links go to the correct pages and not just the homepage of the blog.
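
If you’d rather not Ctrl+F through every page, a short script can list the anchors on a page that still point at the old blog. This is a sketch using Python’s standard-library HTML parser; autoaccident.com/airplane.html is just the page from this example, so run it against whatever pages DeepCrawl flags.

```python
# List every link on a page that still points at the old blog subdomain.
from html.parser import HTMLParser
import requests

OLD_HOST = "blog.autoaccident.com"
PAGE = "https://www.autoaccident.com/airplane.html"  # page flagged in this example

class OldBlogLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if OLD_HOST in href:
                self.hits.append(href)

finder = OldBlogLinkFinder()
finder.feed(requests.get(PAGE, timeout=10).text)

for href in finder.hits:
    print(f"{PAGE} still links to {href}")
```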

Now that all of the issues from the SEMRush report have been resolved, I’d run the audit again and make sure we’ve taken care of everything we can. Once that’s done, I’d run the DeepCrawl report again, see if it found any additional issues, and correct those as well.

With DeepCrawl and SEMRush you can schedule automatic reports and have them email you every month letting you know of any errors. I always make sure I take the time to go through the errors rather than just archiving the email without looking at it as some people might be inclined to do. You could have a 99% health rating in the SEMRush audit but the 1% could be a major issue that’s costing you money, so it’s always important to stomp out any issues as they come up rather than delaying.

Leave a comment below if you have any questions about running an SEO audit, I’ll be happy to answer all of your questions.
