Wednesday, April 22, 2015

Rank Your Local Business – Helpful Introduction on Local SEO

The chart below shows a breakdown of the weighting of various ranking factors within local SEO campaigns. One thing to note is that, while a few factors differ from a typical SEO campaign (e.g. external location signals and My Business signals), links and on-page SEO still play a huge part. The main difference is the type of links you’ll want to focus on.



I’m going to talk you through some of the techniques that you can implement to get results from your local SEO campaigns.

The techniques below will help you rank well in local search results.

Google Local Business

Claim your Google My Business page. I’m not going to go into all the details of setting it up, because there are plenty of articles that explain the process.
All you need to know is that once you’ve set it up, you should include the following:
  • Add a long, unique description that’s formatted correctly and includes links.
  • Choose the correct categories for your business.
  • Upload as many photos as possible.
  • Add a local phone number to your listing.
  • Add your business address that’s consistent with that on your website and local directories.
  • Upload a high-resolution profile image and cover photo.
  • Add your opening times/days (if relevant).
  • Get real reviews from customers (I’ll come onto this).

NAP (Name, Address, Phone Number)

Consistency is key here. You need to ensure that you have your full NAP across your whole website (i.e. every page). Furthermore, you must use the exact same details/format when you mention your address on other websites (i.e. local citations).

You’ll also want to use Schema.org markup on your NAP to give the search engines all they need to display your company information correctly.

Here’s the code that you can adapt to your own website:

<div itemscope itemtype="http://schema.org/LocalBusiness">
  <p itemprop="name">COMPANY NAME</p>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <p itemprop="streetAddress">ADDRESS LINE 1</p>
    <p itemprop="addressLocality">CITY</p>
    <p itemprop="addressRegion">REGION</p>
    <p itemprop="postalCode">POSTCODE/ZIP</p>
  </div>
  <p itemprop="telephone">PHONE NUMBER</p>
  <div itemprop="geo" itemscope itemtype="http://schema.org/GeoCoordinates">
    <meta itemprop="latitude" content="LATITUDE" />
    <meta itemprop="longitude" content="LONGITUDE" />
  </div>
</div>
All you need to do is change the placeholder text (COMPANY NAME, ADDRESS LINE 1, and so on) to your own details. Simple.

Local Reviews

Local reviews have a direct impact on local search rankings, so you’ll want to spend some time acquiring them. 


Thursday, April 16, 2015

Google Site URLs Are Being Replaced with Site Name and Breadcrumb Path

Google announced today that URLs in search results are being replaced with the site name and a breadcrumb path showing the category. Put simply, Google is updating how results are displayed in the SERPs.



Conclusion: These changes are rolling out gradually, and they apply only to mobile results, not desktop.

Tuesday, July 1, 2014

Interview with Google’s Matt Cutts at Pubcon

I had the pleasure of sitting down with Matt Cutts, head of Google’s webspam team, for over half an hour last week at Pubcon. The discussion is below:
Stephan Spencer: I am with Matt Cutts here. I am Stephan Spencer, Founder and President of Netconcepts. Matt is Google engineer extraordinaire, head of the Webspam team at Google.
Matt Cutts: [laughing] Having a good time at Google, absolutely.
Stephan Spencer: Yeah. I have some questions here that I would like to ask you, Matt. Let us start with the first one: When one’s articles or product info is syndicated, is it better to have the syndicated copies linked to the original article on the author’s site, or is it just as good if it links to the home page of the author?
Matt Cutts: I would recommend linking to the original article on the author’s site. The reason is: imagine you have written a good article, and it is so nice that you have decided to syndicate it out. Well, there is a slight chance that the syndicated article could get a few links as well, and could get some PageRank. And so, whenever Googlebot or Google’s crawl and indexing system sees two copies of that article, a lot of the time it helps to know which one came first and which one has higher PageRank.
So if the syndicated article has a link to the original source of that article, then it is pretty much guaranteed the original home of that article will always have the higher PageRank, compared to all the syndicated copies. And that just makes it that much easier for us to do duplicate content detection and say: “You know what, this is the original article; this is the good one, so go with that.”
Stephan Spencer: OK great. Thank you.
The way of detecting supplemental pages through site:abc.com and the three asterisks minus some gobbledygook no longer works – that was a loophole which was closed shortly after SMX Advanced and after I mentioned it in my session. Now that it no longer works, is there another way to identify supplemental pages? Is there some sort of way to gauge the health of your site in terms of: “this is main index worthy” versus “nah, this is supplemental”?
Matt Cutts: I think there are one or two sort of undocumented ways, but we do not really talk about them. We are not on a quest to close down every single one that we know of. It is more like: whenever that happens, it is a bug to have our supplemental index treated very differently from the main index.
So we took away the “Supplemental Result” label, because we did not consider it as useful for regular users – and regular users were the ones who were using it. Any feature on Google search result page has to justify itself in terms of click-through or the number of pixels that are used versus the bang for the buck.
And the feedback we were getting from users was that they did not know what it was and did not really care. The supplemental results, which started out as sometimes being a little out of date, have gotten fresher and fresher. And at least at one data center – hopefully at more in the future – we’re already doing those queries on the supplemental index for every single query, 100 percent of the time.
So it used to be the case that some small percentage of the time, we would say: oh, this is an arcane query – let’s go and we will do this query even on the supplemental index. And now we are moving to a world where we are basically doing that 100 percent of the time.
As the supplemental results became more and more like the main index, we said: this tag or label is not as useful as it used to be. So, even though there are probably a few ways to do it and we are not actively working to shut those down, we are not actively encouraging people and giving them tips on how to monitor that.
Stephan Spencer: OK.

Tuesday, August 28, 2012

A Unique Application on YouTube - "Barfi" Movie Promotion

Great News for Ranbir Kapoor's Fans

Barfi is an upcoming Bollywood romantic comedy. The movie is directed by Anurag Basu, with Ranbir Kapoor, Priyanka Chopra and Ileana D’Cruz in the lead roles. It is expected to release on 14th September 2012, and the promotional buzz is already building.

How is Barfi being promoted on YouTube?

UTV Motion Pictures, the movie’s producers, have chosen a funny and interactive way to create awareness. They have opted for YouTube as the network for creating the online buzz, and a dedicated ‘Barfi’ tab has been created to host all the excitement.

Once you are on the tab, the video that plays has Ranbir Kapoor himself introducing you to the main lead, the film character Barfi. Ranbir goes on to share some funny bits about Barfi so that fans get to know him. Once that is over, the video turns into an interactive app. As a user, you can now interact with Barfi: you can ask him to show how he impresses girls, or you can change his mood and he will act it out for you.

Let’s say you want to see how Barfi dances to impress a girl, or how he flirts: you simply type that and Barfi will do it for you. Similarly, you can change Barfi’s mood by typing (nerdy, excited, angry, and so on) and Barfi will show the different moods in his own style. Remember that Barfi only knows what you want from him when you type it. And don’t try to be clever by typing something that doesn’t make sense; Barfi is smart and will make a face and walk away!

Is it innovative?

This is definitely an innovative and engaging idea from the makers of Barfi. The interactive app is not only interesting but thoughtfully designed. For example, the character Barfi in the movie is deaf and mute, so the app only responds when you type. Additionally, you have Ranbir himself guiding you on how to use the app.

The app scores full marks on design. It provides a set of suggested words that you can type, and Barfi will enact them for you. These are small but significant features that bring the user one step closer to online engagement.

Source : http://lighthouseinsights.in/utv-pictures-promotes-barfi-movie-on-youtube.html

Wednesday, August 22, 2012

Robots.txt “Disallow” and “No Index” Meta Tag - Explanation & Differences

If you are an SEO or are familiar with search engine optimization, the terms “Robots.txt” and “No Index” are somewhere in your vocabulary. If not, the explanation of these is fairly simple: both Robots.txt files and “No Index” meta tags are ways to keep search engines from reading and saving content to their database, known as their “index.” Why would you want to exclude pages from a search engine’s index? Another simple answer: To keep the engines from giving priority to unimportant pages at the cost of the good (i.e. converting) ones. So, let’s get into how the Robots.txt file and “No Index” meta tag operate.

The Robots.txt “Disallow”

Robots.txt is a file that you upload to your site’s root directory, so it is located at http://www.YourSite.com/robots.txt. The file contains directives for search engines. When it has a “Disallow” directive relating to a certain page, the search engine knows not to read that page. By telling a search engine not to read a page, you are signalling that the page is not important, and it will skip it. For the most part, this ensures that disallowed pages do not show up in search results. A minimal example is shown below.
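For illustration, here is a minimal robots.txt sketch (the paths are hypothetical placeholders). The User-agent line says which crawlers the rules apply to, and each Disallow line blocks crawling of one path:

User-agent: *
Disallow: /admin/
Disallow: /internal-search/
Disallow: /thank-you.html

An empty Disallow line (“Disallow:” with no path) would mean nothing is blocked for that user agent.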

However, “Disallow” means “do not read”, not “do not see.” Disallowing does not make pages invisible; it makes them not crawlable. If inbound links or citations exist to a disallowed page, search engines will still be aware of the page’s existence. They will simply be unaware of its content. And, in the rare case that someone does a search and there are no better results, a search engine will serve up a link to a disallowed page. The link will just be presented without a description.

(Also, as a side note, some smaller search engines don’t use the Robots.txt file. Therefore, disallowed pages will be crawled and indexed by them. )

The “No Index” Meta Tag

The “No Index” meta tag is a piece of code that you put in the head section of a page. Unlike a “Disallow”, the “No Index” tag allows a search engine to read and see the page, but states explicitly that the engine should forget it ever saw the page once it leaves. This instruction also applies to any links and citations pointing to a “No Index” page: forget they exist. Thus, the “No Index” meta tag prevents the page from appearing in any index in any form. Additionally, all search engines follow the “No Index” meta tag. A sketch of the tag is shown below.
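As a sketch, the tag sits inside the page’s head section (the surrounding markup here is just a placeholder):

<head>
<title>Example page to keep out of the index</title>
<meta name="robots" content="noindex" />
</head>

Using name="robots" addresses all crawlers that honour the tag; you can also target a specific crawler, for example name="googlebot".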

When to Use Robots.txt “Disallow” and When to Use Meta “No Index” Tag

In my opinion, the “No Index” tag is a more secure way of keeping pages out of an index. However, this method can also be harder to manage and keep track of, since it is applied on a page-by-page basis. The Robots.txt “Disallow,” on the other hand, is simpler to manage since it is one single file.

Every business should assess its own web needs, but for simplicity’s sake, the “No Index” meta tag is best used on pages where you need 100% certainty that they stay out of the index, or pages you are building in secret from your competition. In all other cases, the Robots.txt “Disallow” will do.

Do Robots.txt “Disallow” and the “No Index” Meta Tag Consume PageRank?

There has been much discussion over whether and how disallowed and “No Index’ed” pages consume PageRank (“PR”). Here is the answer: if you disallow a page but leave the incoming links to it “do follow,” the disallowed page will still consume PR. And because the page is disallowed, its outbound links are never read by the engines, so that PR is wasted; it cannot be passed on. A “No Index” page, however, can pass PR on if the links on the page are “do follow,” since an engine reads a “No Index” page but simply does not index it. The sketch below shows the combination that allows this.
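As a sketch of the combination described above (keeping a page out of the index while still letting its links pass PR), the robots meta tag accepts both directives at once:

<meta name="robots" content="noindex, follow" />

Here “noindex” keeps the page out of the index, while “follow” tells the engine it may still crawl the page’s links and pass PR through them.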

Source-http://blog.beacontechnologies.com/robots-txt-%E2%80%9Cdisallow%E2%80%9D-and-%E2%80%9Cno-index%E2%80%9D-meta-tag-what-%E2%80%98s-the-difference/