Why must your website pages be crawled and indexed by search engine crawlers?
The answer is simple: if your website's pages are not indexed by search engines, you are nowhere on the Internet; effectively, you do not exist on the web. It will be almost impossible for people to find your site.
Search engine crawler bots can easily crawl and index static HTML pages, because all the content is available in a clear, readable format and the bots can discover pages through clearly marked internal links.
| Features | faceFore SEO Tools | Other rendering service providers |
| --- | --- | --- |
| Detects Google & Bing/Yahoo crawler bots? | Yes | Yes |
| Creates static HTML snapshots? | Yes | Yes |
| Serves HTML snapshots to crawlers? | Yes | Yes |
| Generates sitemap XML? | Yes | No |
| Generates HTML hyperlinks for the created HTML snapshots? | Yes | No |
| Can set a unique page title & description on the fly for each created snapshot? | Yes | Unknown |
| Will you depend on a 3rd-party server? | No, you get everything on your own server | Yes, and they do not give you the full source code |
| Cost per month | 0 | US$ 100 to 360 plus |
| Cost per year | 0 | US$ 1,000 to 2,400 plus |
| Monthly page limit | Unlimited | 50,000 to 100,000 (extra US$ 0.50 to 1.50 per 1,000 pages if the limit is exceeded) |
| Security risk | None, because all the code runs on your server and you own the full script | You have to give a 3rd party access to your Ajax calls in order to render the pages |
| Requires PhantomJS or other middleware on the server? | No | Yes, most of them |
| Requires .htaccess or other server config file modification? | No | Yes, most of them |
| Requires port monitoring and burdens server memory? | No, because no redirection is needed | Yes, because they need to proxy/redirect requests to run the script on their server |
| Will you lose PageRank? | No, everything stays on your server | Maybe, because the snapshot is served from another location |
An almost all-in-one solution for your Ajax-driven, dynamic-content website, making it crawlable and indexable by search engines.
It sets a unique page title & description on the fly for every page, generates an XML sitemap, generates an HTML href hyperlink for every pre-rendered page, creates pre-rendered HTML snapshots, and serves those snapshots to search engine bots.
It handles this automatically, whether you are using hashbang URLs (#!) or HTML5 mode.
You can pass the page title & description in the function parameters, and the script will set the title tag and the meta description attribute of the pre-rendered static HTML page.
Every one of your pages can therefore have a unique title and description, which search engine crawler bots love.
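As a rough illustration (the actual product script is server-side PHP; the function and parameter names here are hypothetical), setting a per-page title and meta description in a snapshot can be as simple as injecting two tags into the document head:

```python
def set_title_and_description(html, title, description):
    """Inject a unique <title> and meta description into an HTML snapshot.

    `html` is the pre-rendered snapshot; `title` and `description` are the
    per-page values passed in by the caller.
    """
    head_extra = (
        f"<title>{title}</title>\n"
        f'<meta name="description" content="{description}">'
    )
    # Insert right after the opening <head> tag (assumes one exists).
    return html.replace("<head>", "<head>\n" + head_extra, 1)

snapshot = "<html><head></head><body>Hello</body></html>"
print(set_title_and_description(snapshot, "About Us", "Who we are"))
```

Real code would also HTML-escape the values; this sketch only shows where the tags land.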
Remember that your app will need to produce an HTML snapshot whenever it gets a request for an ugly URL, that is, a URL containing a query parameter with the name _escaped_fragment_. (Google best practices)
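For reference, the AJAX crawling scheme maps a hashbang URL to that "ugly" URL via the `_escaped_fragment_` query parameter. A minimal sketch of recovering the original fragment (the helper name is illustrative):

```python
from urllib.parse import urlparse, parse_qs, unquote

def fragment_from_ugly_url(url):
    """Recover the original hashbang fragment from an _escaped_fragment_ URL."""
    query = parse_qs(urlparse(url).query)
    values = query.get("_escaped_fragment_")
    return unquote(values[0]) if values else None

# http://example.com/?_escaped_fragment_=/products/42
# corresponds to http://example.com/#!/products/42
print(fragment_from_ugly_url("http://example.com/?_escaped_fragment_=/products/42"))
```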
A sitemap is an essential part of SEO (Search Engine Optimization): it presents a roadmap of your site's content to search engines.
A sitemap is a file where you can list the web pages of your site to tell Google and other search engines how your site content is organized. Search engine web crawlers like Googlebot or Bingbot read this file to crawl your website more intelligently.
Submit a Sitemap using Google Webmaster Tools. Google uses your Sitemap to learn about the structure of your site and to increase our coverage of your webpages. (Google support & guidelines)
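A sitemap file is plain XML with one `<url>` entry per page. A minimal generator sketch (the URL list is hypothetical, and the real script would add fields like `<lastmod>`):

```python
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Build a minimal sitemap.xml body from a list of page URLs."""
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(build_sitemap(["http://example.com/", "http://example.com/about"]))
```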
Internal links to the static HTML pages (snapshots), built with HTML anchors, expose the pages to search engines and spread link juice. They establish the site architecture, increase ranking power, and strengthen the overall SEO value of your website.
Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link. (Google guidelines)
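Generating these internal links amounts to emitting one plain `<a href>` anchor per snapshot. A hypothetical sketch (the page map and output shape are illustrative):

```python
from html import escape

def snapshot_links(pages):
    """Render a list of plain HTML anchor links, one per snapshot page.

    `pages` maps each snapshot URL to its link text.
    """
    items = "\n".join(
        f'  <li><a href="{escape(url)}">{escape(text)}</a></li>'
        for url, text in pages.items()
    )
    return f"<ul>\n{items}\n</ul>"

print(snapshot_links({"/snapshots/home.html": "Home",
                      "/snapshots/about.html": "About"}))
```

Because these are ordinary text links, they satisfy the "reachable from at least one static text link" guideline quoted above.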
You will not need to install an extra JS framework/library or any other third-party middleware to create HTML snapshots. It is not a complex job; a simple PHP or other server-side script can handle it well, as you will see from the included script. The difficulty is simply over-hyped.
You will not need to modify any server configuration files to serve the static HTML snapshots to search engine crawler bots. The included snippet detects the bot and serves the required HTML snapshot automatically.
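The bot-detection step typically boils down to a User-Agent check. A minimal sketch (the token list is illustrative, not the product's actual list, and the real snippet is PHP):

```python
# Common crawler User-Agent tokens (illustrative, not exhaustive).
BOT_TOKENS = ("googlebot", "bingbot", "yahoo! slurp", "duckduckbot")

def is_search_bot(user_agent):
    """Return True if the User-Agent string looks like a known crawler."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in BOT_TOKENS)

print(is_search_bot("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(is_search_bot("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"))
```

If the check matches, the server hands back the pre-rendered snapshot instead of the JavaScript-driven page.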
You do not need to redirect or proxy to our server or any outside server; everything happens on your own server, with no wasted server time or bandwidth. Some middleware consumes a lot of memory and crashes, so there is no need to risk your data and content.
We are not keeping anything secret on our server. Make a one-time payment and you get the full code, with no page limits on pre-rendering static HTML snapshots and no usage restrictions. It is all yours: use it in as many of your projects as you like.