Saturday, May 26, 2012

@Avinash And +Think With Google Team On Shouting vs. Conversation vs. "Utility" Marketing

Here's a video of a recent hangout that @avinash did with the +Think with Google team. The topics discussed were:

  • Why is brand destruction so easy today?
  • Examples (from Seventh Generation and BMW, the good ones, to Gatorade, AT&T, and many more suboptimal ones).
  • What kind of an organization structure do you need to have an optimal brand experience on social media for your audience?
  • Shouting vs. Conversation vs. "Utility" Marketing.

And so much more....

The Key Takeaways From The Hangout:

· The distance between a company and its brand getting destroyed is just 2 pixels.

· Focus on sharing knowledge and information on social media that adds value to people's lives.

· Understand the power of the medium

· Until now, brands have used the broadcast aspect of social rather than the engagement aspect; when they fail to understand engagement, it turns into an amplification nightmare.

· It is like waking up every morning and walking the talk

· Every employee is a brand ambassador, and social media reflects the culture of the company.

· To maintain control over social media sharing on behalf of the company, there can be a rule book or broad guidelines for the employees who share on its behalf, but have faith and assume intelligence on the other end.

· The rule book and guidelines should be broad parameters rather than strict dos and don'ts.

· Review these parameters every 6 months and offer training to make social engagement and amplification beneficial for the company.

· Doing social effectively takes work, a shift in mindset and a new culture.

· Companies using social effectively include Cadbury, Esquire Magazine and BMW.

· People who get into social now are going to have a great advantage in the future.

Enhanced by Zemanta

Saturday, May 19, 2012

SEO Action Plan For The Knowledge Based Semantic Web

knowledge remix blog as a graph
knowledge remix blog as a graph (Photo credit: g.janssen)

The Wall Street Journal mentioned in a post some time back that "Google is aiming to provide more relevant results by incorporating technology called "semantic search," which refers to the process of understanding the actual meaning of words." We had posted our views related to that.

After the Panda and Penguin updates, Google on 16th May 2012 announced its Knowledge Graph in a blog post titled "Introducing the Knowledge Graph: things, not strings".

In that article it is explained that:

The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more—and instantly get information that’s relevant to your query. This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.

Google’s Knowledge Graph isn’t just rooted in public sources such as Freebase, Wikipedia and the CIA World Factbook. It’s also augmented at a much larger scale—because we’re focused on comprehensive breadth and depth.

This is the first step in converting an information-based engine into a knowledge-based one. This kind of engine is based solely on data and data relationships. Whatever useful information people search for offers valuable data for the Knowledge Graph, which can help others searching on the same topic later.
The search queries help Google add and correlate data, thereby enriching the knowledge-based search results. An information-based search engine takes the string of words from the search query, maps it against the content indexed in its database and displays in the SERPs the sites responding most strongly to the ranking factors in its algorithms.

In the case of a knowledge-based search engine, an additional dimension, the knowledge factor, is added, determined by the correlation, the relationships and how the content of the site is interlinked with other entities on the web. This is one way the Knowledge Graph makes Google Search more intelligent. The results are more relevant because the search engine understands these entities, and the nuances in their meaning, the way you do. This makes the search engine think more like the user.

Google says…

“We’ve always believed that the perfect search engine should understand exactly what you mean and give you back exactly what you want. And we can now sometimes help answer your next question before you’ve asked it, because the facts we show are informed by what other people have searched for. “

How does one optimize a site for such a knowledge based search engine?

· Have loads of relevant, knowledge-rich content in all forms on your site.
· Share that content all over the web so that it gets correlated to the relevant topics and the relevant industry.
· Every piece of content on the web is data. How well this data gets indexed and how well it gets interlinked is important.
· Represent the following content on the site as microformats or as structured data:
Businesses and organizations

To check your markup, use the rich snippets testing tool.

· Aim at establishing an authority and brand on social media.
· Participate in discussions to put forward your opinion.
· Have an active Google+ business page presence.
· Use Google+ as a blogging platform.
· Do not focus on guest blogging and blog commenting for link building; post comments and write guest posts to share your knowledge.
· Forget about keywords and focus on the keyness.
· Implement hCard markup for the address on the contact page.
· Do not neglect on-page optimization, as the knowledge quotient will apply only if the relevant pages respond to the basic ranking factors.
· Keep improving the technical aspects of SEO in response to the data received from Webmaster Tools.
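To make the hCard point above concrete, here is a rough sketch of a Python helper that emits hCard-style contact markup for an address block (the class names follow the hCard 1.0 microformat; the business details are made up for illustration):

```python
def hcard(name, street, city, region, postal_code, country):
    """Build a minimal hCard (microformat) block for a contact page."""
    return (
        '<div class="vcard">\n'
        f'  <span class="fn org">{name}</span>\n'
        '  <div class="adr">\n'
        f'    <span class="street-address">{street}</span>,\n'
        f'    <span class="locality">{city}</span>,\n'
        f'    <span class="region">{region}</span> '
        f'<span class="postal-code">{postal_code}</span>,\n'
        f'    <span class="country-name">{country}</span>\n'
        '  </div>\n'
        '</div>'
    )

# Hypothetical business details, for illustration only
print(hcard("Example Web Co", "1 Example Street", "Springfield", "ST", "12345", "USA"))
```

The resulting block can be pasted into the contact page and then verified with the rich snippets testing tool.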

More relevant information on the following links:


Tuesday, May 15, 2012

The Penguin Update And The SEO Ideology And Process

Penguins in bronze
Penguins in bronze (Photo credit: LodewijkB)
We all know that Google is all out to fight spam and is aggressively working to improve the quality of search results like never before. Since Feb. 2011 we have had a series of Panda updates, and now the Penguin update is the latest antidote added to the search algorithm for devaluing sites which adopt manipulative methods and spam inbound links to deceive Google into granting high rankings.

In simple words, Google is penalizing sites which have broken its rules and guidelines. Google has always considered SEO a positive and constructive action for achieving good search visibility. According to Google, SEO IS NOT SPAM. But there is a very thin line between SEO which is positive and constructive and SEO which uses manipulative methods to achieve the desired result. It is usually referred to as white hat vs. black hat SEO strategy.

White hat methods of SEO are very simple and straightforward; black hat techniques, on the other hand, are complicated and understood only by the people who adopt them, as they are not the standard norm.

Post Penguin update the SEO blogosphere is filled with posts on Over Optimization, Link Pruning, Natural way of acquiring links, etc.

But first of all, 'to optimize' means to achieve a perfect balance, nothing more and nothing less. According to the Merriam-Webster dictionary, to optimize is to make as perfect, effective, or functional as possible. In engineering, optimization is a collection of methods and techniques to design and make use of engineering systems as perfectly as possible with respect to specific parameters. So, if you have been optimizing sites as per the Google guidelines and adopting the right white hat off-page and on-page techniques, then I don't think this kind of an update should worry you as an SEO; let all the pandas and the penguins play to the glory of Google.
Antarctic: Signy Island - Adelie penguins (Photo credit: ¡WOUW!)

Google is becoming more and more efficient at detecting low quality content, duplicate content, innumerable low quality inbound links acquired by unfair means, doorway pages, keyword stuffing and unreasonable use of footer links with anchor text stuffed with keywords to manipulate and misguide Googlebot. But again, if you have not been doing this in the name of SEO, then there is no need to worry.

The Penguin update, if there is any (or it is just a false alarm by Google to instill fear and initiate self-correction), is basically making the SEOs who have been following the above methods go crazy, as it is playing on their guilty conscience and sending them on a rectification spree; the others, I think, do not have to bother at all. We have not made any changes on the sites managed by us, and practically 90% of our sites have shown an increase in targeted traffic and improved rankings since 29th April 2012.

I believe this series of Panda and Penguin updates will still take at least another 6 months to effectively churn out stable, quality results and penalize sites using manipulative methods to deceive Googlebot.

Many people have been writing about how SEO should be an invisible layer which enhances the search visibility of the site but does not show on the site. I think SEO is that technical layer applied to the site to make the search engine bots read and index what your visitors are reading. Your site communicates with the search engine bots not only via the HTML tags in the <Head> and <Body> sections but also via other files like robots.txt, sitemap.xml, .htaccess, HTTP headers, etc., and the medium of instruction is the webmaster tools the search engines provide for this. Hence, off-page optimization also includes how well you communicate with the search engine via these tools. Even the analytics used by the search engine to report the progress of your site give the search engine a lot of information regarding quality metrics.

In the long run, once the on-page optimization is set up and you focus on the natural way of building off-page SEO, you will see that the better your site performs, and the more your online sales, likes, +1s and links get shared on social media, the more automatic momentum the site gains, as all these metrics and signals boost the potential energy of the site, which in a way gets reflected in the kinetic energy flow in the SERPs.
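Since robots.txt is one of the channels mentioned above through which a site talks to crawlers, here is a minimal sketch, using only the Python standard library, of checking whether a given bot may fetch a page under a site's robots.txt rules (the rules and URLs are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; in practice you would point
# RobotFileParser at the live file and call .read() to fetch it.
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A crawler honoring these rules may fetch the home page
# but not anything under /private/.
print(parser.can_fetch("Googlebot", "http://example.com/index.html"))  # True
print(parser.can_fetch("Googlebot", "http://example.com/private/x"))   # False
```

The same module can be used to sanity-check a robots.txt before uploading it, so a stray Disallow does not block pages you want indexed.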

The initial on-page and other SEO techniques give the site its initial push, but for the site to keep doing well, users have to show that they like being on the site and complete the call to action of its pages. For instance, if it is an ecommerce site and online sales show an increase, with more and more repeat visitors buying online along with new visitors, then your site is surely going to get a boost from the search engines, as visitors trusting you with their credit cards proves that the trust factor of the site is high.

I think if we optimize the site, update it continuously with great content, share that content on the web and make it capable of being shared further by others, then SEO is like the initial push that a body at rest gets, which keeps it moving ahead unless it faces an unbalanced force. The Panda and Penguin updates are that unbalanced force, applied by Google to stop the momentum and motion of sites using unfair means, not of sites which are optimized according to its guidelines.

"You have to learn the rules of the game. And then you have to play better than anyone else." - Albert Einstein


Friday, May 11, 2012

Rewind To Review Your Past Voice With Your Present Say…. Is Comment Archiving A Good Idea?

I am sure we all in the web arena have posted comments on blogs, written blog posts or shared and had conversations on social media sites either at a personal level or at a professional level. Lately, I have been in the retro mood since I started working on server logs for SEO analysis.

I think it is a good thing to rewind the tape of life sometimes, get some insight into our past actions and clear our minds to plan for the future. Hence I decided to focus on the footprints created by WebPro Technologies by way of blog commenting in the past. This is not a client report, so there are no metrics to monitor, but a true self-analysis to know what we discussed and quoted on other blogs and social media sites, and how much of it we have adhered to. It also gives us a chance to analyze what our views were in the past as compared to the present changing scenario.

Post Panda and Penguin updates there are umpteen posts on natural links and link pruning, but we have always been of the opinion that the term "Link Building" itself is wrong: you do not build natural links, they get built in the process of the quality footprints you make on the sands of the web during your web journey.

One of the ways to judge the knowledge and ideologies of an SEO company is to read what they have written in the past by way of comments, blog posts, social media conversations, reviews and forum discussions on various topics and issues, and to see if they are still consistent in what they say and how those past opinions have shaped up in the real world as of today.

After all, the words written on the canvas of the web on various platforms are not mere words but content in different forms, around which the whole web world revolves. Authentic, valuable content should stand the test of time and add value to the authority factor of the author. I have been commenting on and discussing many topics related to the search industry and have also been sharing links to posts written by me when there was a strong correlation with the blog topic, and I never bothered if there was a 'no follow' attribute on the comment links or if I got any number of thumbs down for it, as my main purpose was to put forward an opinion I strongly believed in or felt about.

The past content posted by a person can say a lot about the knowledge, beliefs and long-term goals of that person. It can reflect the solidity of the viewpoints made by that person, and the present scenario can tell us if they have stood by what they said and whether there is any coherence between what they say and what they do. This kind of check can also make people think before they post and becomes a self-check for ensuring that quality content is published on the web.

Our business associate @wasimalrayes suggested that this could be a comment archive which can be added to the site and updated with every comment made on the web. I think it is a good idea which not only adds your web voice to your site but also acts as a self check tool for responsible content addition to the WWW.

We all have a blog archive, so why not a comment archive too? After all, comments are also mini blog posts posted by us on the web which reflect our opinion and perspective on the relevant topic.

What do you think about a 'Comment Archive' section being added to the site?

We would like to share some of the past comments, blog posts and social media conversations we have had on the web regarding various topics. Since the blogosphere is brimming with posts regarding the Penguin and the Panda updates, I'll start with Link Building:

Topic Link Building :

Some Blog Comments Made By Us In The Past Reflecting Our Views On Link Building:

Our Comment 
WebProTechnologies | January 3rd, 2012

All the predictions for this year are spot on. I agree with all the points predicted.

Regarding #3 I think Google might just give us a surprise this year by giving less importance to inbound links . Only the links which will come from trusted and high authority sites and editorial links will matter and will be taken into account. The major focus will be on the social media signals which will reflect the trust and the authority factor. Hence, in what context the links are being shared on social media and the discussions and reactions surrounding it will make a big impact.

Hence, stop the link building nuisance and focus on building quality content (in all forms, images, text, video, audio, etc. and share it on social media) and let the natural links get built...

Comment :
Bharati Ahuja
February 20, 2010 at 10:24 am

Totally agree.

In my opinion, everybody has just gone too far, thinking only about how to get more and more links. I am sure when the PageRank concept was framed, the main purpose must have been to judge the true goodwill and popularity of a website in direct proportion to the number of inbound links it has.

But with all these ethical and unethical methods of gaining more and more links the whole purpose is defeated.

If the site has good informative content and, with ethical SEO practices, ranks high in the search engines, then it automatically gets a lot of links from various sources.

The main purpose of a genuine searcher is to find what is available globally and locally. Once the searcher finds it, it surely gets added and linked by him in various ways.

Instead, all the energy and effort should be concentrated on building the website qualitatively in various ways by adding more varied content.

Don’t run after links. Let them come to your website genuinely.

Our Comment:
| May 21st, 2010

Aptly put at the very beginning of this post that link building is a task which is detested by all.

I am of the opinion that the term 'link building' itself is an incorrect term. Links do not have to be built; they should get built naturally as your website starts getting a wider web presence and preference.

Every link is like a vote for your site and the goodwill of your company, and that has to be earned as part of the web journey of the website.

If we focus on the quality content, have a good site internal linking architecture, have a site which is visitor friendly as well as robot friendly then getting high SERPs is not a difficult task.

Once you have high SERPs, trust me, there will be loads of directories and portals adding your site to their listings even without you knowing about it, as they too are looking for quality listings.

Once upon a time, a dmoz listing was something you always wished for once you submitted your site, as that surely was a valuable link. I don't know if it still has that importance, but I still manually add each site to dmoz.

Apart from a good qualitative site in all respects other genuine methods of gaining natural inbound links as your website goes from one milestone to another are as follows:

· Focus all your efforts on making the site informative, qualitative and content rich to get links automatically.

· Do not neglect the On-Page Optimization Basics and just go after links. (Very important from the SEO perspective.)

· Participate in social media networks for discussions and sharing of information, and mention links to the relevant pages of your website. (It need not always be the Home Page.)

· Have a social bookmarking button on your website.

· Make RSS feeds available on your website.

· Issue press releases periodically.

Our Comment:

WebProTechnologies | May 31st, 2011

Well, despite all the thumbs down, my opinion remains the same. Quality content on your website and a quality web presence across all the search options, blogs, discussions, social media, etc. will always be rewarded in an increasing manner in the long run by any search engine and will result in targeted inbound traffic.

We do a fairly good job on SEO and rankings without focusing on link building, and in the process we educate and train our clients to effectively maintain their blogs and social media accounts; in the bargain they end up getting quality links, and it has worked for us.

Our Archived Blog Posts On Link Building:

Topic 2 Social And Search Integration:

Blog Article / Social Media Post
Our Comment:

Yes, initially AltaVista was THE SEARCH ENGINE, and keyword spam was something Google had to work on to improve the quality of search results, for which it came up with the PageRank technology to add value and quality to search results.

But as every coin has two sides, this innovation also gave birth to link building spam, and despite the improvement in search results, which established Google as the topmost search engine, it polluted the web with unnecessary content clutter.

But as people kept flocking to the social media sites, the search engines thought of using public opinion as the criterion for quality and word of mouth. How well the search engines will integrate the social media signals, only time will tell.

But it is for sure that this will ensure more genuineness, as you cannot manipulate public opinion. SEO is what you say about your company; social media is what others say about your company. When both these messages are in sync, credibility is established. Hence authority, credibility, WOM and an overall presence are the demand of the day for true SEO, which in the long run will ensure natural, quality inbound links on its own.

So first work on content, establish an identity, authority and online credibility, and then the links will follow. If we go and see, that was the main goal of the PageRank technology, to check how many people vouch for a certain page's content, but with link spam it got negated. Now, with social media signals and the focus on quality content via the Panda Update, this will surely be taken care of to a great extent.

The best way to achieve a great online presence will be to have an equally great offline and real-time business presence.

I will not be surprised if in the coming year the blogosphere gets bombarded with blogpost meteors on "THE DEATH OF THE SPAMMY LINK BUILDING INDUSTRY" instead of SEO being dead.

Blog Article / Social Media Post
Our Comment:

Bharati Ahuja11th January 2012

I think the blending of social results into search is not only the inevitable evolution of search but also a reflection of what took place as civilizations evolved. We can just say that the stone age of search is over, and search now even has the ability to reflect what people in your community are talking about and recommending. It is basic human nature to search for a want, discuss it with peers to get their opinions, and then take a decision. We have been doing this for ages; now we just have to adapt ourselves to the virtual world for this kind of action.

To a certain extent I believe that if Google wants to improve the quality of search results and combat spam on the web, then yes, it is highly essential that the search engine can access data from a resource it has full control over. But from the search engine perspective, only time will tell how well Google succeeds in integrating social signals from other social media sites across the web; else, with the kind of hold Google has over the search market, it is going to be Google, Google all the way…

But it's surely not the end of SEO. In fact, all these changes are taking SEO to a more qualitative level.

  Archived Blog Posts On Our Blog:

Topic 3 “Not Provided Keyword Data” 

Our Guest Post On The Topic:

  Archived Blog Posts On Our Blog:

Blog Article / Social Media Post

Bharati Ahuja Jan 9, 2012

I think this piece is a great summary of how Google has been offering support to SEOs right from the start, but it can do much more, as it now has all the data, in fact even the data about social signals.

The awareness of SEO has also improved over a period of time, and if Google at this stage continues to share more and more information, it will become increasingly difficult for Google to maintain and improve the quality of search results. We saw that by 2010, content and link spam had grown to a great extent, for which Google had to come up with the Panda Update.

IMHO especially with regard to Keyword Referrer Data:

2011 was a year of changes and I think it is a period of transition to a better web and better search results as SEO is much beyond keywords and rankings.

When businesses are at a loss for the complete keyword data, the focus shifts to the search queries in WMT which have a good CTR, which is a true measure of quality over quantity.

This restriction makes the website owner think from a larger perspective and focus on the correlation of content and keywords rather than rankings. This will take SEO campaigns above the metrics of keywords and rankings; the focus will be on other quality metrics like CTR, conversions, bounce rate, etc., which will improve the quality of the web overall, as websites, besides being rich in content, will have to focus on good landing pages, a proper call to action, page load speed and good navigation, which will ensure a better UX.

This lack of data will draw the line of distinction between a PPC campaign and an SEO campaign. The quality metrics will be CR and CTR, which again will make the client focus on content and landing page design; rather than discussing keywords, the client will be open to discussing content and design, which again is a quality step towards a better web world.

Have shared my views also on

Sunday, May 6, 2012

Using Server Log Files To Analyze The Activity On The Server For SEO

Server Logs
Server Logs (Photo credit: novas0x2a)
Data, Data From Everywhere On The Server And Not A Byte To Benefit From....

All those who have been in the web solutions business since the early 2000s know that prior to Google Analytics, the most trusted analytics data was the log files on the server. In fact, those log files are still the most accurate, raw data available for the actual activity taking place on the server.

Server logs are created automatically, recording the activity on the server, and are saved as a log file on the server itself. Usually the log file is saved as a standardized text file, but this may vary depending on the server. Log files can be a handy tool for webmasters, SEOs and administrators. They record each activity on the server and offer details about what happened, when, and from where on the server for that domain. This information can record faults and help diagnose them. It can identify security breaches and other computer misuse. It can be used for auditing and accounting purposes too.

A plain text format minimizes dependency and assists logging at all phases. There are many ways to structure this data for analysis; for example, storing it in a relational database would force the data into a queryable format. However, that would also make it more difficult to retrieve if the computer crashed, and logging would not be available unless the database was available. The W3C maintains a standard format for web server log files, but other proprietary formats exist. Different servers have different log formats; nevertheless, the information available is very much the same. For example, the fields available are as follows (they may not necessarily be recorded in the same order on all servers):

· IP address

· Remote log name

· Authenticated user name: available only when accessing content which is password protected by the web server's authentication system.

· Timestamp

· Access request: "GET / HTTP/1.1". This is the request made. In this case it was a "GET" request (i.e. "show me the page") for the file "/" (the home page) using the "HTTP/1.1" protocol. Detailed information about the HTTP protocol is available in

· Result status code: "200". The resulting status code tells you whether the request was successful or not; "200" means success. For a list of possible codes, visit

· Bytes transferred: "10801". This tells you how many bytes were transferred to the user, i.e. the bandwidth used. In this case the home page file is 10801 bytes, or about 10K.

· Referrer URL

· User Agent

Following is the example of the data which was exported to Excel from the log file:

Example 1: - - [29/Apr/2012:05:04:56 +0100] "GET /blog/microsoft-windows-vista-ultimate-with-sp2-64bit-oem/ HTTP/1.1" 404 39621 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +"

Example 2:

On some servers the fields are listed in the log file before the data is recorded, as follows, and then the corresponding data for each date is displayed:

#Fields Per Record: date time cs-method cs-uri-stem cs-username c-ip cs-version cs(User-Agent) cs(Referer) sc-status sc-bytes

Data Per Record: 2012-05-01 01:19:17 GET /seo-web-design.htm - HTTP/1.1 Mozilla/5.0+(compatible;+bingbot/2.0;++ - 200 12288
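Fields like these can also be pulled out programmatically before exporting to a spreadsheet. A minimal sketch in Python (standard library only; the regular expression assumes the common/combined log format shown above, and the log line below is a made-up example):

```python
import re

# Pattern for a "combined" format access-log line:
# IP, remote logname, user, [timestamp], "request", status, bytes, "referrer", "user agent"
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) (?P<logname>\S+) (?P<user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
    r'(?: "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)")?'
)

line = ('66.249.66.1 - - [29/Apr/2012:05:04:56 +0100] '
        '"GET /seo-web-design.htm HTTP/1.1" 200 12288 '
        '"-" "Mozilla/5.0 (compatible; Googlebot/2.1)"')

m = LOG_PATTERN.match(line)
fields = m.groupdict()
print(fields["ip"])       # 66.249.66.1
print(fields["status"])   # 200
print(fields["request"])  # GET /seo-web-design.htm HTTP/1.1
```

Once each line is a dictionary of named fields, it can be written to CSV for Excel or fed straight into whatever filtering you need.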

Well, it is not as geeky as it looks; in fact, it is very simple. The data from the log files can be retrieved easily by importing the text data into Excel or by using standard software available for extracting data from log files, like "WebLog Expert"; the sample report generated can be viewed on

The analysis of log files can offer great insights into the traffic on the server; many times spam on the server or a hacking attack can be detected early, and the harm to the sites reduced to a great extent, as corrective action can be taken immediately. This can be a real boon to every SEO, as this data is reflected in WMT much later.
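As one illustration of early detection, here is a sketch that counts 404 responses in a batch of log lines, since a sudden spike of 404s is often a sign of bots probing for vulnerable URLs (the log lines are made up; the status code is assumed to sit right after the quoted request, as in the examples above):

```python
import re
from collections import Counter

STATUS_RE = re.compile(r'" (\d{3}) ')  # status code right after the quoted request

lines = [
    '1.2.3.4 - - [29/Apr/2012:05:04:56 +0100] "GET / HTTP/1.1" 200 10801 "-" "-"',
    '1.2.3.4 - - [29/Apr/2012:05:05:01 +0100] "GET /wp-admin/ HTTP/1.1" 404 512 "-" "-"',
    '1.2.3.4 - - [29/Apr/2012:05:05:02 +0100] "GET /phpmyadmin/ HTTP/1.1" 404 512 "-" "-"',
]

status_counts = Counter(
    m.group(1) for line in lines if (m := STATUS_RE.search(line))
)
print(status_counts["404"])  # 2
```

Comparing these counts day over day makes a spike stand out long before it would surface in Webmaster Tools.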

The data can be filtered by the fields which need to be tracked. For example, you can see in the image below how the WebLog Expert software shows the data graphically and numerically for the filtering we did to trace the Google, Bing and Baidu bot activity on a particular domain.

Keeping track of this data can give us information related to the crawling of the bots, downloads, spam attacks, etc. Of course, it is all raw data, and data in itself is meaningless; how you correlate it and connect the dots to come to correct conclusions and take the right decisions is what makes the difference.
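The bot filtering described above can be sketched the same way, by matching user-agent tokens in the log lines (the lines are hypothetical; note that real bot verification should also check the crawler's IP via reverse DNS, since user agents can be spoofed):

```python
from collections import Counter

BOT_TOKENS = ("Googlebot", "bingbot", "Baiduspider")

lines = [
    '66.249.66.1 - - [...] "GET /a HTTP/1.1" 200 100 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '180.76.5.1 - - [...] "GET /b HTTP/1.1" 200 100 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0)"',
    '66.249.66.1 - - [...] "GET /c HTTP/1.1" 200 100 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '10.0.0.7 - - [...] "GET /d HTTP/1.1" 200 100 "-" "Mozilla/5.0 (Windows NT 6.1)"',
]

# Count hits per search-engine bot by user-agent token
hits_per_bot = Counter(
    token for line in lines for token in BOT_TOKENS if token in line
)
print(hits_per_bot["Googlebot"], hits_per_bot["Baiduspider"])  # 2 1
```

The same per-bot counts are what a tool like WebLog Expert plots graphically when you filter by user agent.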

For me it is a déjà vu feeling, as when we did not have Google Analytics, server log files and Webalizer were the only resource. Sometimes going retro is the coolest thing to do, because some trends which seem new are actually very old.