Don't you hate finding dead links? Your visitors do!

This website link checker ensures your site visitors have a good experience by making sure your site does not have any dead or broken links. The tool will help you find and get rid of invalid URLs and missing (404) pages. It also provides data to help you check the SEO status and the validity of your HTML, all for your whole website in one go. And it's FREE!

Download it NOW!

The Website Link Checker Tool in action

It flags any broken links for your attention, indicating why a link is broken and which pages reference it. It checks pages, include files, external links and images.

But that's not all. It's also useful for Search Engine Optimisation because it examines each page found. The page data returned includes sizes, titles, meta tags, link references, link text and even HTML validation issues via Html Tidy!
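
To make this concrete, here is a minimal sketch of the kind of per-page data described above: HTTP status, content size, title and meta tags. It is written in Python purely for illustration; ClockWork itself is a Windows application and this is not its actual code.

    # Illustrative only: gather basic page data (status, size, title, meta tags).
    import urllib.request
    from html.parser import HTMLParser

    class PageDataParser(HTMLParser):
        """Collects the <title> text and <meta> name/content pairs."""
        def __init__(self):
            super().__init__()
            self.title = ""
            self.meta = {}
            self._in_title = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "title":
                self._in_title = True
            elif tag == "meta" and "name" in attrs:
                self.meta[attrs["name"].lower()] = attrs.get("content", "")

        def handle_endtag(self, tag):
            if tag == "title":
                self._in_title = False

        def handle_data(self, data):
            if self._in_title:
                self.title += data

    def gather_page_data(url):
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
            body = response.read()
        parser = PageDataParser()
        parser.feed(body.decode("utf-8", errors="replace"))
        return {"url": url, "status": status, "size": len(body),
                "title": parser.title.strip(), "meta": parser.meta}

    # Example: print(gather_page_data("https://example.com/"))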

The software works on Microsoft Windows and is distributed under a Freeware license. So it's FREE!

The site link checker starts at a specified web page and checks that it is valid. It then searches the page's HTML content and finds all the links it contains to other web pages and files. The process is repeated for each of the new web pages found, much in the same way Google and other search engines index websites. This sort of program is commonly called a robot (bot), crawler or spider.
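
The crawl itself can be sketched roughly as below. Again, this is an illustrative Python sketch rather than ClockWork's implementation: it starts at one URL, records its status (or the reason it failed to load), extracts the links it contains, and repeats for every new page on the same domain, up to a depth limit.

    # Illustrative only: a simple breadth-first crawl of one domain.
    import urllib.request
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class LinkParser(HTMLParser):
        """Collects href/src values found in a page's HTML."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.links.append(value)

    def crawl(start_url, max_depth=3):
        domain = urlparse(start_url).netloc
        queue = deque([(start_url, 0)])
        seen = {start_url}
        results = {}                        # url -> HTTP status or error text

        while queue:
            url, depth = queue.popleft()
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    results[url] = response.status
                    html = response.read().decode("utf-8", errors="replace")
            except Exception as exc:        # broken link: record why it failed
                results[url] = str(exc)
                continue

            if depth >= max_depth:
                continue
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                # only follow links that stay on the starting domain
                if urlparse(absolute).netloc == domain and absolute not in seen:
                    seen.add(absolute)
                    queue.append((absolute, depth + 1))
        return results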

This is still a beta release, so please send comments and suggestions to help me develop it in the right direction.

If you can't get it to work then check out Xenu's Link Sleuth, which was the inspiration for this project.

Quick Start

  1. Download the application installer
  2. Install the application by running the download
  3. Start the application (You will find it in Start->Programs->ClockWork)
  4. Click "New Verifier"
  5. Enter a name
  6. Enter a URL
  7. Click "OK"
  8. Click "Start Crawling"
  9. Wait...
  10. Click "Errored" to see a list of any broken links it has found
Then fix those broken links!

Features

  • Crawls based on a single domain/website
  • Verifies and loads internal web pages
  • Verifies internal includes such as image, JavaScript and CSS files
  • Verifies external links
  • Gathers page data such as title, size, type, speed, meta data
  • Provides lists of HTML format errors and warnings (via Html Tidy)
  • Analyses Robots.txt files
  • Users can specify a maximum depth to crawl
  • Normalises URLs to minimise duplication (see the sketch after this list)
  • Can exclude certain query string parameters from URL comparisons
  • Multi-threaded for speed
  • XML/CSV exporter to aid creating website reports
  • XML Importer
  • Keeps track of all link relationships
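
As a rough illustration of the URL normalisation and query-string exclusion features above, the Python sketch below (illustrative only; the ignored parameter names are made up) reduces URLs that differ only in host case, fragment or ignored parameters to one canonical form, so the same page is crawled only once.

    # Illustrative only: normalise a URL and drop ignored query parameters.
    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    def normalise(url, ignored_params=("sessionid", "utm_source")):
        parts = urlsplit(url)
        scheme = parts.scheme.lower()       # lower-case scheme and host
        netloc = parts.netloc.lower()
        path = parts.path or "/"
        # drop ignored parameters and sort the rest for a stable order
        query = urlencode(sorted(
            (k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in ignored_params
        ))
        return urlunsplit((scheme, netloc, path, query, ""))  # fragment dropped

    # Both of these produce http://example.com/page?x=1, so they are
    # treated as a single URL:
    #   normalise("http://Example.com/page?sessionid=abc&x=1")
    #   normalise("http://example.com/page?x=1&sessionid=def#top")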

Future Features

  • Move towards providing SEO Tips and advice
  • More advanced support for specifying which URLs to load and parse
  • Proxy Support
  • Cookie/Session support
  • Incremental verification based on last modified date
  • Add PageRank discovery
  • Add key phrase analysis

History

1.0.3
  • Added Meta tag capture and specific capture of robots data
  • Parsing now based on Html Tidy to XHTML then XML/XPath processing
  • Can view Tidy errors, warnings and a tidied version of a page
  • Speed increases by using HEAD requests
  • Added Robots.txt data gathering
  • Added SiteMap and RSS generator
  • Added ability to write your own XSLT report generators
  • Added ability to ignore query string parameters
  • General improvements and fixes
1.0.2
  • Initial Release