Webmaster News - Edition 04 - March 5, 2019.

In this edition, we look at three different Google Updates, a Major SEO issue with WP Engine, check in on the battle between Rand Fishkin & Google, look at how to improve page loads with Instant Page, look at Chrome's new Scroll to Text feature, and more.

In This Issue:

SEO & Search News

Google Algorithm Update - 22nd February (Minor Update)

There was a small update on February 22, 2019. The absence of any obvious E-A-T (YMYL) categories indicates that this is more likely to be a quality-based update. You can read about the update in more detail here.

SEMrush Sensor 22nd of February 2019. © SEMrush.

Google Algorithm Update - 27th February (Major UK update)

This update was interesting as there appeared to be different changes happening in different locations, for instance the US & UK.

The UK update affected typical E-A-T categories such as health, and was considerably stronger than the US update. The US update lacked any obvious E-A-T categories, which makes me believe Google rolled out different changes in different countries. You can read more about the Google update here.

SEMrush Sensor 27th of February 2019 for the US. © SEMrush.
SEMrush Sensor 27th of February 2019 for the UK. © SEMrush.

Google Algorithm Update - 2nd March (Major Global Update)

This appears to have targeted the UK and US equally, and is a fairly significant update. I saw significant ranking increases with this update, having worked on content quality and E-A-T signals over the previous couple of months.

I go into more detail about my SEO strategy, and the Google update generally here.

SEMrush Sensor 2nd of March 2019 in the US. © SEMrush.

Major SEO Issue with WP Engine

Some genius (sarcasm) at WP Engine decided it would be a good idea to stop bots (including search engine bots, like Google) from crawling paginated pages past the 9th page on websites hosted by them. Instead, they would redirect the bots to the homepage.

With larger sites or sites with review pages, these paginated pages can be the only link to the content listed on those pages. If Google cannot crawl those pages, then it can do serious harm to your SEO efforts.

I honestly think this is a gross failure on WP Engine’s part, all under the guise of reducing server load (i.e., it allows them to fit more websites on their servers, and thus make more profit).

WP Engine responded to Beanstalk, who raised this issue with them. Here is the response from WP Engine’s VP of Web Strategy, David Vogelpohl:

As you point out, WP Engine has a default setting which redirects bots from certain deep pages such as those past the ninth page in pagination. This is done in order to optimize for performance, which you can read more about here.

To be clear, WP Engine does not redirect bots for your entire site, but just those pages listed in the article I linked above.

In general, WP Engine customers’ sites index easily and rank quite well in search engines (including all of WP Engine’s own websites, which use the bot redirects). However, as a member of the SEO community, I personally appreciate the concerns around bot redirects as it relates to Google bot. I also understand that for posts with no other internal linking, pagination links can often be the only internal links pointing to that content, and turning off bot redirects may be a site owner’s preferred approach.

We make it easy for WP Engine customers to opt out of the bot redirects simply by contacting support through the WP Engine portal. Additionally, we are working on providing more visibility into bot redirects and creating self-serve options for customers, themselves, to turn bot redirects on and off within the WP Engine portal.

The comment “In general, WP Engine customers’ sites index easily and rank quite well in search engines” does not impress me at all. It demonstrates a distinct lack of understanding of the importance of correct technical SEO.

A company of WP Engine’s caliber should know better.


Don’t use UTM parameters on internal links

Google’s John Mueller has confirmed in a recent Webmaster Hangout that using UTM links on internal URLs for tracking is a bad idea.

UTM URLs are usually used for tracking social, email, and other campaigns so you can identify where your traffic is coming from and assign it to a campaign.

John Mueller says that by using UTM parameters on internal links you are telling Google to crawl many different URLs, while telling it to index another (the canonical URL referenced on those pages). While Google can probably work out which URL you want indexed, you are using up your crawl budget and forcing Google to decide which URL should be indexed.
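As a simple illustration of the conflict (the URLs and parameters below are placeholders, not taken from the hangout), an internal link might carry campaign parameters while the page itself points Google at a clean canonical URL:

<!-- Internal link that appends UTM tracking parameters -->
<a href="https://www.example.com/blog/post?utm_source=homepage&utm_medium=banner&utm_campaign=spring">Read the post</a>

<!-- In the <head> of /blog/post, the "clean" URL is declared as canonical -->
<link rel="canonical" href="https://www.example.com/blog/post">

Google is asked to crawl the parameterised URL but to index the clean one, which is exactly the mixed signal Mueller describes.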

John Mueller says that they will not always index the right URL:

So in practice what would probably happen here is we would index a mix of URLs. Some of them we would index as the shorter version because maybe we find other signals pointing at the shorter version as well. Some of them we would probably index as the UTM version and we would try to rank them normally as the UTM version.

You can view the full discussion below:


Google Adjusts its Algorithms for YMYL queries

Google recently published a white paper that talks about how they fight disinformation across their products, including Google Search.

I have been discussing E-A-T (Expertise, Authoritativeness, and Trustworthiness) in many of my Google Update reports. It is interesting that Google chose to mention this topic in its white paper.

Google mentions that they “take additional steps to improve the trustworthiness of [their] results for contents and topics that [their] users expect [them] to handle with particular care.”

They refer to these types of pages as YMYL, or Your Money or Your Life. YMYL was introduced to Google Search in 2014, and they include “financial transaction or information pages, medical and legal information pages, as well as news articles, and public and/or official information pages that are important for having an informed citizenry.” The white paper continues:

For these “YMYL” pages, we assume that users expect us to operate with our strictest standards of trustworthiness and safety. As such, where our algorithms detect that a user’s query relates to a “YMYL” topic, we will give more weight in our ranking systems to factors like our understanding of the authoritativeness, expertise, or trustworthiness of the pages we present in response.

In the white paper, Google summarizes how their algorithms assess E-A-T:

Google’s algorithms identify signals about pages that correlate with trustworthiness and authoritativeness. The best known of these signals is PageRank, which uses links on the web to understand authoritativeness.

The summary continues on how they use Quality Raters to perform experiments and measure the quality of the search results on an ongoing basis.

I highly recommend you read the white paper in its entirety, as well as Google’s Search Quality Rater Guidelines.


Search Console notifications for Drop in Traffic

Google is now sending out Search Console notifications if you experience a drop in weekly clicks. The warning was received by Vance Moore III on February 20, 2019.

The notification appears to target a specific website query, although it does provide data for both site clicks and query clicks. Google also provides details on what it believes to be the cause of the query’s drop in clicks.


Google is working on ways to speed up website rendering

This relates to JavaScript Rendering, rather than websites that render well without JavaScript, such as static HTML.

For instance, at Pubcon 2018, Gary Illyes said that “Static HTML will get rendered the fastest.” He continued, “Google renders 98%+ of pages they crawl. It can be slow, it can take weeks sometimes.”

A more recent comment by John Mueller has confirmed that they are working on improving the speed at which they render websites:


Google Launches YouTube Series on JavaScript SEO

Google is launching a new series on JavaScript SEO. You can see the announcement video here.

The aim of the series is to explain how JavaScript and Google Search work together, and what that means for SEO.

The first video (the only video published so far) in the series looks at how Google crawls and indexes your web page. If your website contains JavaScript it will need to go through a rendering stage, which is quite “expensive”. The rendering stage is done at a later time.

While your page may be indexed quickly, not all the information on the page may be taken into account until it is rendered. Google says that rendering is done on a “best effort” and they cannot make any guarantees.

If you use JavaScript on your website, then following this YouTube series (https://www.youtube.com/playlist?list=PLKoqnv2vTMUPOalM1zuWDP9OQl851WMM) is highly recommended.

You can watch the first video below:


Google Updates target categories of queries, not categories of industries

If you have ever read my Google Update reports you will see that I usually comment on the type of industries affected. In particular, I like to see whether industries typically targeted by E-A-T are affected.

John Mueller, in a recent hangout, commented that Google Updates target categories of queries, not industries:

So from our point of view it’s usually not the case that we would say we need to do something specific to make the search results better for one particular industry, but rather we look at it the other way around and try to think about ways that we can improve search results with regard to relevance for users with specific types of queries.

Mueller continued on to say that the two things are related:

If we see, for example, that people are getting confusing information for medical queries, then maybe we need to improve how we recognize the relevance of search results for medical queries. And it’s not so much that we would target the medical industry and say we need to improve the way that these particular 10 sites are shown in the search results. But more that we see users are confused with this type of query, it’s something that’s confusing a lot of people, and we need to find a way to improve the relevance and quality of those results for those queries.

I’ve already noted above that Google will change its algorithms for YMYL queries, and that PageRank is an important factor for E-A-T.

For example, I mentioned in the last Newsletter how E-A-T metrics now transfer through links. It is not difficult to imagine why Google gives more weight to backlinks coming from trusted, authoritative domains.


Google vs Rand Fishkin - CTR as a Ranking Factor

In my last newsletter, I reported that Gary Illyes had some harsh words to say about Rand Fishkin (co-founder of Moz):

Dwell time, CTR, whatever Fishkin’s new theory is, those are generally made up crap. Search is much more simple than people think.

Well, the battle heated up following the AMA when Britney Muller pointed out a paragraph from Google’s Cloud Speech-to-Text API page that said:

For example, when you click a link in Google Search, Google considers your click when ranking that search result in future queries.

Rand Fishkin responded:

I do not believe Fishkin is correct here. I believe this relates to personal searches when you are logged in to your Google Account, not CTR.

It is certainly possible that Google uses CTR for its evaluation or testing stage, just like it does with its Search Raters. It would be too easy to manipulate if it were an actual Google ranking factor (think mass bots pretending to view a page).

I think I will side with Google on this one.


Should you disavow a domain sending thousands of spammy links?

In a recent Google Hangout, John Mueller was asked whether to disavow a domain that is sending thousands (or even millions) of spammy, off-topic links to your site.

John Mueller confirmed that disavowing the whole domain is exactly what the domain entry in the disavow file is for. He went on to say:

What happens in cases like that, where you see this completely random site linking, is that the site probably got hacked, and for whatever reason someone decided to drop all those links there when hacking the website.

Usually we are pretty good at picking that up and trying to keep the state from before the hack in our systems, so probably we’re already ignoring those links.
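For reference, a domain-level entry in the disavow file you upload to Search Console looks like this (the domain below is a placeholder):

# Disavow every link from this spammy domain
domain:spammy-links-example.com

A single domain: line covers all links from that domain, which is far more practical than listing thousands of individual URLs.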

You can view the whole conversation below:


301 and 302 redirects are similar for Google Ranking purposes

In a recent Google Hangout there was an interesting discussion as to the differences between 301 and 302 redirects.

A 301 redirect indicates to Google that the redirect is permanent. A 302 redirect, on the other hand, indicates that it is temporary. Here is what John Mueller had to say:

From a practical point of view there are two things that kind of play into here. […]

On the one hand we try to differentiate between whether we should be indexing the content using the originating URL, so in this case the search query, or should we be indexing the content with the destination URL, which might be /videogamesxbox/.

The 301 and 302 help us to make that decision. So the 301 tells us you prefer the destination page, and a 302 tells us you prefer the originating URL. […] We find in practice it’s really messy: people do things in really weird ways across the web and we still have to try to figure out what it is that they actually meant.

If we see a 302 redirect in place for a longer period of time then probably we’re going to assume that this is not a temporary thing but this is something that’s more of a permanent thing and we’ll start treating that like a permanent change.

The useful part here is that it doesn’t really matter which of these URLs is actually picked for indexing because we rank the page in exactly the same way. […] I would really focus more on which of these redirects is the right one to do for this situation and not worry about the SEO aspect because from an SEO point of view it’s really more which of these URLs do we show in search.
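To make the distinction concrete, here is a minimal Node/Express sketch; the framework choice and the paths are my own illustrative assumptions, not from the hangout:

// redirects.js - minimal Express sketch of permanent vs temporary redirects
const express = require('express');
const app = express();

// 301: permanent - signals that the destination URL is the one to index.
app.get('/old-xbox-games', (req, res) => {
  res.redirect(301, '/videogamesxbox/');
});

// 302: temporary - signals a preference for keeping the original URL indexed.
app.get('/xbox-deals', (req, res) => {
  res.redirect(302, '/videogamesxbox/');
});

app.listen(3000);

As Mueller notes, a 302 that stays in place long enough will eventually be treated as permanent anyway, and either way the page ranks the same; the status code mainly influences which URL is shown in search.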


Migrating to a domain with a bad history can cause issues

In a recent Google Hangout, at time 20:45, John Mueller confirmed that if you migrate to a domain with a bad history it can take time for Google to recognize that it is a completely new website.

In particular, if the domain that you’re moving to has a weird old history associated with it that may take a bit of time to kind of clear out for us to recognize that actually this new website is not related to the old one and we should treat this as kind of a new situation and not take the old situation into account.


Google: Don’t stuff content at the end of your Product Listing Pages

In a recent Google Hangout, John Mueller was asked the following question:

Many e-commerce websites optimize their categories by adding a big chunk of text under the product listings. Nothing except an h1 heading above the fold. I don’t consider this good usability since users need to scroll all the way down to read this. Does Google treat this content the same as any other or would you for improving rankings recommend putting the category text above the fold?

John Mueller replied saying that this was essentially keyword stuffing:

One of the reasons why websites initially started doing this kind of workaround is that it was really hard sometimes for us to rank category pages on ecommerce sites if there’s no useful information on that page, if there’s no context on that page. So as a workaround people started stuffing whole Wikipedia articles below the fold using a small font, sometimes using a link that says something like “more information” that pops out this giant article of text.

So from our point of view that’s essentially keyword stuffing. So that’s something which I would try to avoid.

I’d try to stick to really informative content and put that in place where you think that users will be able to see it. Especially if it is content that you want to provide for users. And more than that I would think about what you can do to make those pages rank well without having to put a giant paragraph of content below the page.

So things you could do here is kind of make sure that those pages are well integrated with your website so that we have clear context of how those pages belong to the website and what those pages are about. And another thing you can do is when you have that listing of products, make sure that there’s some information in those listings so that we can understand what this page is about.

So instead of just listing I don’t know 40 photos of your product, put some text on there. Make sure that you have alt text for the images, that you have captions below their images.

So that when we look at this page we understand oh there’s this big heading on top that’s telling us like this is the type of product that you have on your website and there’s lots of product information in those listings and we can follow those listings to get even more information, so that you don’t need to put this giant block of text on the bottom.

Obviously having some amount of text makes sense. So maybe shifting that giant block of text into maybe one or two sentences that you place above the fold below the heading is a good approach here because it also gives users a little bit more information about what they should expect on this page. So that’s kind of the direction I would head there.

I’d really try to avoid the situation where you’re kind of fudging a page by putting tons of text on the bottom of the page just because the rest of the page is suboptimal, and instead try to find ways to improve the page overall so that you don’t have to use this workaround.
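To make Mueller’s suggestion concrete, here is a hedged sketch of a category listing that carries its own context through the heading, alt text, and captions; the markup and class names are illustrative, not a Google requirement:

<h1>Xbox Video Games</h1>
<ul class="product-grid">
  <li class="product">
    <figure>
      <img src="/img/acme-wireless-controller.jpg" alt="Acme wireless controller for Xbox in black">
      <figcaption>Acme Wireless Controller for Xbox</figcaption>
    </figure>
    <a href="/products/acme-wireless-controller">View product details</a>
  </li>
  <!-- ...more listings, each with descriptive alt text and a caption... -->
</ul>

With context like this in the listings themselves, a short one- or two-sentence introduction under the heading is usually all the extra text the page needs.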

Re-designing your category pages might be expensive; developers do not come cheap. Coming up with a sophisticated design that makes important content prominent, while adding “enough” content to rank for your chosen keywords is also not easy.

While I think John Mueller is correct, I do not believe that adding supplemental information at the end of category pages is wrong if it is done carefully. As long as it is high quality, relevant, and adds useful information to the page, it has its place.

If you are just going to add spammy text that the user will not read, and you know the user would not want to read it, then I think Mueller’s comments are valid.

In other words, only add information to a page that is actually useful and intended to be read by the user. If you are just adding text for the search engines, then I suggest you re-examine your SEO strategy.


Google launches Domain Property in the Search Console

Google has officially launched domain-wide data in the Search console.

Prior to this launch, webmasters would have to verify all versions of their domain (http, https, www, and non-www) in order to view all aspects of the website in the Search Console.

Now, with the launch of the new domain properties, users can verify and see data for the domain as a whole. This includes all versions of the domain, including sub-domains.

You can create a Domain Property for your domain now, or alternatively, Google will be doing this automatically over the next few weeks.

You can see a screenshot of the new Domain Property setup notification below:

Google launches Domain Property in the Search Console. © Google

Should you divide large pieces of content into multiple pages?

In a recent Google Hangout, John Mueller was asked whether it is better to split a big 10,000 word piece of content into multiple pages or chapters for SEO purposes.

Here is Mueller’s reply:

Unfortunately, I think the answer here is it also depends in that sometimes people are looking for one big comprehensive piece of content and sometimes people are looking for individual pieces of content.

So I don’t know if it would make sense to always go into the combined route or the split route.

What I’ve noticed in working with our tech writers is that sometimes content performs in ways that you don’t expect, and it’s worth testing it to see how it works well for users kind of trying to figure out are people actually going through that content and getting something useful out of it.

Essentially, what Mueller is suggesting is to A/B test it and figure out which way of presenting the content is most beneficial for users.

The full conversation is quite interesting, so worth viewing:

Web Development News

Google launches revamped “Test My Site” tool

Google has recently launched a revamped version of its “Test My Site” tool that helps benchmark the speed of your website on mobile devices on a 4G network.

With the new Test My Site businesses can now see:

  • The speed of both their entire site and of individual pages
  • Whether their site/page speed is faster or slower compared to the prior month
  • Whether their site speed/page speed ranks Fast, Average, or Slow
  • How their site speed compares to others in the industry
  • The potential impact of site speed on revenue
  • A detailed list of recommended fixes to increase speed on up to 5 pages on their site
  • A complete report to share with their team

I think webmasters will find it useful; in particular, the ability for business owners to share a PDF report with their developers stands out.

I’ve tested the tool, and I must say that it feels inferior to their developer tool at web.dev. Oddly, web.dev also seems to produce slightly different speed test results. For example, this site performs worse in the Test My Site tool (probably due to JavaScript execution time on mobile devices with their limited CPUs).

As a developer I feel that web.dev provides a lot more detailed information as you can see below:

Web.dev Google Audit Tool. © Google

In addition, web.dev also provides information on Best Practices, Accessibility (i.e., making sure text colors are readable), Progressive Web Applications, and SEO.

If you haven’t used web.dev, I highly recommend you check it out.


WordPress 5.1 Released

WordPress 5.1 (named “Betty” in honor of jazz vocalist Betty Carter) has now been released, bringing with it a couple of notable additions:

  • Site Health Features - WordPress will now start showing notices to administrators who use long-outdated versions of PHP. WordPress will also check newly installed plugins against the version of PHP you are running and inform you if they are not compatible with your site.
  • Editor Performance improvements - WordPress continues to make improvements to the performance of the new Block editor. With this update, the editor should feel a little faster to start, and typing should feel smoother.

Chrome’s new Scroll to Text Feature

Chrome will soon let you share a link to a specific word or sentence on a page.

You can currently test the new feature in Canary, the nightly developer version of the Chrome browser (it installs as a separate browser rather than overwriting your existing one).

The functionality will work on all browsers using Chromium, which will include Microsoft’s new Edge browser, currently being rebuilt on Chromium (not yet released). It won’t work on Firefox or Safari, which may deter wider use, although I suspect it will be added to other browsers in due course.

The existing method for sharing a link to a specific point on a page is to use anchor links. I use anchor links for the table of contents on this page.
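For comparison, a traditional anchor link only works because the target element already has an id in the page’s markup, for example (the URL and id are placeholders):

<!-- Target somewhere in the page -->
<h2 id="seo-news">SEO &amp; Search News</h2>

<!-- Link that jumps to it -->
<a href="https://www.example.com/newsletter-04/#seo-news">Jump to the SEO news</a>

Scroll to Text removes that requirement: the person sharing the link no longer depends on the page author having added an id.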

The new Scroll to Text feature will allow you to highlight a specific word or phrase, for example:

www.example.com#targetText=alpha%20beta,psi%20omega

This will scroll to and highlight a block of text starting with “alpha beta” and ending with “psi omega”.

This will be extremely useful for referencing specific quotes in web pages and sharing on social media.


Improve page loads with Instant Page

I’ve recently installed Instant Page, a just-in-time preloading library that uses “prefetch” to preload a page just before a user clicks on a link leading to it.

It works by prefetching the page when you hover over a link just before you click it. On average, a user will hover over the link for 300ms before clicking. This is plenty of time to preload the page before clicking, making the page load appear to be instant.
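A heavily simplified sketch of the idea follows; this is not Instant Page’s actual code, just an illustration of prefetching on hover:

<script>
  // On hover, inject a <link rel="prefetch"> for the target URL so the
  // browser can start fetching the page before the click happens.
  document.addEventListener('mouseover', function (event) {
    var link = event.target.closest('a[href]');
    if (!link || link.dataset.prefetched) return;
    link.dataset.prefetched = 'true';

    var hint = document.createElement('link');
    hint.rel = 'prefetch';
    hint.href = link.href;
    document.head.appendChild(hint);
  });
</script>

The real library layers additional checks and safeguards on top of this basic idea.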

Instant Page is currently installed on this website, so take a look around and see how quickly everything loads.

To install Instant Page, you just need to add the following code snippet before the closing </body> tag:

<script src="//instant.page/1.2.1" type="module" integrity="sha384-/IkE5iZAM/RxPto8B0nvKlMzIyCWtYocF01PbGGp1qElJuxv9J4whdWBRtzZltWn"></script>

If you use WordPress, you can install Instant Page with this plugin.

An alternative to Instant Page is Google’s Quicklink. This works slightly differently, in that it preloads all links in the viewport, rather than just the link you hover over. It has various checks to ensure it doesn’t activate on mobile devices that are using data saver or are on slow connections.

If you want to aggressively prime your cache (especially if you use Cloudflare Page Rules for full-page caching), then this might increase your cache hit rate as well as preload pages. I will probably switch from Instant Page to Quicklink in the coming days. There is also a WordPress plugin for Quicklink, which you can find here.
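If you want to try Quicklink, the basic setup is similar: load the script and initialize it once the page has loaded. The version number and CDN path below are assumptions from memory, so check the project’s README for the current snippet:

<script src="https://unpkg.com/quicklink@1.0.0/dist/quicklink.umd.js"></script>
<script>
  // Once the page has loaded, prefetch the in-viewport links during idle time.
  window.addEventListener('load', function () {
    quicklink();
  });
</script>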

I highly recommend you install one of the two options above. They both work exceptionally well.


Bootstrap 4.3 released, includes Responsive Font Sizes

Bootstrap 4.3 has been released with over 120 combined closed issues and merged pull requests. Changes include improvements to Bootstrap’s utilities, preparation for version 5, and various bug fixes.

The key changes include:

  • New: Added .stretched-link utility to make any anchor the size of its nearest position: relative parent, perfect for entirely clickable cards!
  • New: Added .text-break utility for applying word-break: break-word
  • New: Added .rounded-sm and .rounded-lg for small and large border-radius.
  • New: Added .modal-dialog-scrollable modifier class for scrolling content within a modal.
  • New: Added responsive .list-group-horizontal modifier classes for displaying list groups as a horizontal row.
  • Improved: Reduced our compiled CSS by using null for variables that by default inherit their values from other elements (e.g., $headings-color was inherit and is now null until you modify it in your custom CSS).
  • Improved: Badge focus styles now match their background-color like our buttons.
  • Fixed: Silenced bad selectors in our JS plugins for the href HTML attribute to avoid JavaScript errors. Please try to use valid selectors or the data-target HTML attribute/target option where available.
  • Fixed: Reverted v4.2.1’s change to the breakpoint and grid container Sass maps that blocked folks from upgrading when modifying those default variables.
  • Fixed: Restored white-space: nowrap to .dropdown-toggle (before v4.2.1 it was on all .btns) so carets don’t wrap to new lines.
  • Deprecated: img-retina, invisible, float, and size mixins are now deprecated and will be removed in v5.

One of the major changes is the introduction of Responsive Font Sizes. You can see a demonstration below:

Responsive Font Size. © Bootstrap

At the moment, as a Bootstrap user, I have to use media queries to adjust the font size depending on the screen size. Responsive font sizes are not enabled by default: to enable them, you will need to use the Sass version of Bootstrap and set the $enable-responsive-font-sizes variable to true. Full instructions can be found here.
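Assuming you compile Bootstrap 4.3 from its Sass source, enabling it looks roughly like this (the file name and selector are illustrative):

// custom.scss - set the flag before importing Bootstrap's source
$enable-responsive-font-sizes: true;

@import "bootstrap";

// Use the font-size mixin instead of a plain font-size declaration
.page-title {
  @include font-size(3rem); // rescales on smaller viewports
}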

It works as follows:

  • Font sizes will rescale for every screen or device, preventing long words from being chopped off the viewport on small devices.
  • It will prevent the font size from rescaling too small so readability can be assured.
  • It can be implemented just by using the font-size mixin (or responsive-font-size property for PostCSS) instead of the font-size property.
  • The font sizes of all text elements will always remain in relation with each other.

I’ll be implementing responsive font sizes on this website within the next week or so.

Patch - version 4.3.1

If you have already upgraded to 4.3, you should ensure you upgrade to 4.3.1 to fix an XSS vulnerability. The patch also includes an update to the responsive font sizes introduced with version 4.3.


Bootstrap 5 to use Native JavaScript

There is currently a big push in the developer community to speed up the web and remove bloated libraries such as jQuery from frameworks. In fact, CSS-only frameworks such as Bulma.io are gaining popularity.

You will see below a copy of this site’s web.dev report. While the performance of this site is excellent on a desktop, it does suffer somewhat when loaded on a mobile device. The reason is not necessarily the download speed (a typical measurement of site speed), but the limited processing power of a mobile device’s CPU.

Web.dev Google Audit Tool. © Google

As you can see, the “First CPU Idle” could be a lot better. This is because I use the Bootstrap framework to build this site which includes the jQuery library for some of its functionality.

Thankfully, version 5 of Bootstrap will rewrite most of that functionality in native JavaScript, a move that should significantly improve the performance of this website and is a significant step towards removing bloat.

Major progress has already been made for Version 5, so hopefully, it won’t be too long to wait until it is released.

Version 5 should not feature too many changes to the codebase, so it should be relatively simple to update existing sites (although some breaking changes might occur):

Bootstrap 5 will not feature drastic changes to the codebase. While I tweeted about the earnestness to move to PostCSS years ago, we’ll still be on Sass for v5. Instead, we’ll focus our efforts on removing cruft, improving existing components, and dropping old browsers and our jQuery dependency. There are also some updates to our v4.x components we cannot make without causing breaking changes, so v5 feels like it’s coming at the right time for us.


Bootstrap docs are moving to Hugo

The Bootstrap documentation is currently powered by Jekyll, a static site generator. Static site generators produce flat HTML files, which are then uploaded to the web server as static HTML.

While using a static website might seem basic, nothing could be further from the truth. Today’s static site generators are incredibly powerful and use templates, Markdown files, and a wide array of data sources to produce very complex websites.

Bootstrap has announced that they are moving to Hugo, the same static site generator I use to build this site (thewebmaster.com). Hugo is the fastest static site generator in existence, thanks to being written in Go, the programming language created by Google. I am a big fan of this move.

Google to revise ad blocker-killing Chromium proposal following backlash

I reported in Edition 02 of my newsletter at the end of January that Google was considering changes to Chrome that would effectively destroy ad blocking and privacy protection.

Following a widespread backlash against the proposals, Chromium engineer Devlin Cronin said they will review the proposals:

I’d like to reiterate that all of these changes are still in the draft and design stage, as explicitly called out in the document and the tracking bug. The declarativeNetRequest API is still being expanded and is under active development, and the exact changes that will be implemented as part of Manifest V3 are not finalized. Feedback during this time is crucial, and we absolutely want to hear your comments and concerns.

Another clarification is that the webRequest API is not going to be fully removed as part of Manifest V3. In particular, there are currently no planned changes to the observational capabilities of webRequest (i.e., anything that does not modify the request). We are also continually listening to and evaluating the feedback we’re receiving, and we are still narrowing down proposed changes to the webRequest API.

While it is too soon to rejoice at the change of heart, it is an important signal that they are listening. I will keep you updated as to any further developments.


Web Hosting News

.Dev domains are now available in Public Release

One of the hottest new TLDs this year is “.dev”. The original announcement post by Google (the owner of the .dev TLD) promotes the domain to developers and technologists as a place to showcase their communities, tech, and projects.

In a similar way to .app and .page domains, the .dev domain will be secure by default. This means that it will only work when secured by an SSL certificate, as it has HSTS (HTTP Strict Transport Security) enabled by default.

.dev domains are available at all popular domain name registrars. The pricing for some of the most popular registrars is below:

  • Namecheap - $14.98 (renews at $16.98) + Free privacy
  • GoDaddy - $13.99 (renews at $19.99) + Free privacy
  • Name.com - $14.99 + $4.99 for WHOIS privacy

I highly recommend Namecheap as a registrar. It’s the one I use, and it is one of the best and cheapest registrars around.

I picked up the new personal domain “jonathangriffin.dev” at Namecheap as soon as it became available. I’ll probably put a personal portfolio or resume on it when I get some spare time.


GoDaddy reports impressive End of Year results

GoDaddy has reached a milestone of $3 billion worth of bookings in 2018, along with strong growth figures. You can view the full end of year results here.

GoDaddy Consolidated Fourth Quarter Financial Highlights. © GoDaddy

Here are the noteworthy highlights:

  • GoCentral, GoDaddy’s website builder, had a year of strong feature expansion. It evolved from a simple website builder to a syndication platform managing customers’ presence across social, reputation, and e-commerce.
  • GoCentral saw robust subscription growth in 2018, driven by improvements in conversion, retention and awareness. Engagement with features such as appointments, online store, and integrations with third-party platforms rose dramatically throughout 2018.
  • GoDaddy became the largest global host of paid WordPress instances and continues to invest in making WordPress simple, secure, and accessible to entrepreneurs and Web Pros alike.
  • GoDaddy continues to invest in the WordPress ecosystem through its products and contributions to the open source WordPress framework.
  • GoDaddy launched a partnership with Open-Xchange for a new branded email offering focused on emerging markets, complementing its partnership with Microsoft 365 in mature markets.

Overall, I think GoDaddy is doing extremely well with their offerings. They are not to everyone’s taste, with more competent users generally shunning them in favor of more sophisticated offerings, but they seem a great choice for small business owners with more limited knowledge.


Plesk launches new WordPress Toolkit 3.5

A new version of the Plesk WordPress Toolkit has been released. It includes 8 new security measures and a new WordPress installation experience. I suspect these changes have already been rolled out by hosting providers offering the Plesk Control Panel.

You can watch an overview of the new WordPress Toolkit below:

cPanel version 78 released

There are three main improvements with the latest cPanel release:

  • Backblaze B2 is now a Backup destination
  • The cPanel Email Authentication interface has been replaced with a new Email Deliverability interface, which offers suggestions on how to resolve common problems with email deliverability.
  • LiteSpeed and KernelCare services are now available from within cPanel

While it is likely to be of interest only to self-managed Cloud or VPS customers who have full access to the WHM admin interface, one of the most exciting additions is the LiteSpeed and KernelCare integration:

  • An admin is now able to purchase a LiteSpeed Web Server license direct from the cPanel store, and the system will attempt to install it automatically.
  • If your system supports KernelCare, you will see options to purchase a license within the Security Advisor interface or the Graceful Server Reboot interface.
Jonathan Griffin. Editor @ The Webmaster

About the author

Editor, Hosting Expert, SEO Developer, & SEO Consultant.

Jonathan is currently the Editor & CEO at The Webmaster. He is also an SEO Developer offering consultancy services, primarily to other web development companies. He specializes in the technical side of SEO, including site audits, development of SEO related features, and site structure & strategy.

In his spare time, Jonathan has a passion for learning. He regularly undertakes professional courses on subjects ranging from Python and web development to digital marketing and Advanced Google Analytics.

Read more about Jonathan Griffin on our About Page.