Contact Centre Solutions

Anana is a major driving force for customer service in the digital age. We excel at digital and social strategy for forward-thinking brands, driving contact centres from a traditional support role to become an effective spearhead of 21st-century digital customer engagement.

We are an award-winning solutions innovator with a heavy focus on digital, mobile, social and multi-channel contact centres, founded upon many years of deployment and operational experience with large-scale integrated voice and interactive voice response solutions.

Using Anana's consulting, professional, engineering and development services, and our innovative solutions-based approach, banks, major service providers and top-tier retailers are transforming their traditional call centres.

Applications Innovation

Anana is a specialist integrator and developer of leading solutions for modern telecommunications network service providers. We provide highly innovative solutions for IMS and next-generation networks. Our applications are designed specifically for high-volume, mass-market service provider environments, where they drive subscriber revenues through enhanced services and revenue-generating applications.

We are the lead application development partner to Alcatel-Lucent's IMS division, and we developed and showcased the first 4G LTE-enabled mash-up applications, mixing social media with the dial tone: telephone conferencing triggered by Facebook posts, call control at the end-user application level, and much more.

Social Influence Reviews with Klout and PeerIndex

We are now regularly deploying social-media-enabled contact centre solutions. In these solutions, a key functional deliverable is the ability of the Genesys contact centre solution to measure the social influence of the customer and attach this data, as a score, to the inbound interaction.

With this data, alongside measurements of Actionability (what can be done about it?), Sentiment (how positive or negative is it?) and Classification (what is the interaction about?), we are able to apply significant logic to the routing strategy. The ability to decide what is important, and how to establish a priority for an interaction, is an extremely powerful addition to any customer service paradigm.
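By way of illustration only, here is a sketch of what such routing logic might look like. The weights, thresholds and topic labels are my own invented assumptions, not Genesys configuration and not our production strategy:

```python
# Illustrative only: field names, weights and thresholds are assumptions,
# not Genesys configuration or a production routing strategy.

def interaction_priority(influence, sentiment, actionability, classification):
    """Derive a routing priority (0-100) from the attached social data.

    influence:      social-influence score, roughly 0-100
    sentiment:      -1.0 (very negative) .. +1.0 (very positive)
    actionability:  0.0 (nothing to do) .. 1.0 (clearly actionable)
    classification: coarse label for what the interaction is about
    """
    # Hypothetical boosts per topic; a complaint outranks a compliment.
    topic_boost = {"complaint": 20, "query": 10, "compliment": 0}

    priority = influence * 0.5                    # influential customers first
    priority += (1.0 - (sentiment + 1) / 2) * 25  # negative sentiment raises urgency
    priority += actionability * 15                # favour what we can act on
    priority += topic_boost.get(classification, 5)
    return min(round(priority), 100)

# An influential, unhappy, clearly actionable complaint routes near the top:
print(interaction_priority(influence=55, sentiment=-0.8,
                           actionability=0.9, classification="complaint"))
```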

We have seen two common social-influence scoring engines, both of which offer readily accessible APIs for the developer to integrate the score into any scenario. We have been working with the @klout and @peerindex scoring mechanisms for some time now. Our customers are now beginning to ask us whether there is a significant difference in how Klout and PeerIndex measure social influence.

Now, there are literally hundreds of blogs that already discuss this question, so I am not going to attempt to replicate all of that content. What I am interested in is this: can I see an increase or decrease in the score on one system reflected in the other? In other words, if I increase my social activity for a given period, I should see my score on one engine increase. Can I see the same (within reason) increase on the other engine?

To this end, we created a simple XML (REST) based lookup against each of the associated APIs for my own Twitter account, @dave_t_pilot, and I interrogate the APIs for the latest score at very regular intervals. Essentially, I am looking for a range of things:

  • Evidence that slowing down social activity online affects both scores
  • Evidence that increasing social activity online affects both scores
  • Relative stability in the scores over a period of weeks or longer
  • Consistency in the returned results from the APIs
  • Appreciable parallel impact on the scores as they change; in other words, can I see both pointers moving together?
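For the curious, here is a minimal sketch of the kind of lookup described above, in Python rather than the tooling I actually used. The endpoints and response fields follow my recollection of the v1 Klout and PeerIndex APIs, so treat them as assumptions to verify; the keys are placeholders:

```python
# Sketch only: endpoints, parameters and response fields reflect my
# recollection of the v1 APIs; verify them before relying on this.
import json
import urllib.request
import xml.etree.ElementTree as ET

KLOUT_KEY = "YOUR_KLOUT_KEY"          # placeholder
PEERINDEX_KEY = "YOUR_PEERINDEX_KEY"  # placeholder

def klout_score(screen_name):
    url = ("http://api.klout.com/1/klout.xml"
           f"?key={KLOUT_KEY}&users={screen_name}")
    with urllib.request.urlopen(url, timeout=10) as resp:
        tree = ET.parse(resp)
    return float(tree.findtext(".//kscore"))  # assumed element name

def peerindex_score(screen_name):
    url = ("http://api.peerindex.net/1/profile/show.json"
           f"?id={screen_name}&api_key={PEERINDEX_KEY}")
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return int(data["pi"])                    # assumed field name

print(klout_score("dave_t_pilot"), peerindex_score("dave_t_pilot"))
```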

I understand that Klout recently made huge changes to their scoring algorithm. Indeed, I saw my own score go from the mid-to-high 50s down to 40 overnight. In the meantime, my PeerIndex score has been stable and steady at 49. This study does not attempt to argue the validity of the scores, or why one is so different from the other; it serves only as a benchmark test for the stability and effectiveness of social-influence scoring over a period of time.

Things I already know:

  • PeerIndex scoring is taxonomy-based: it provides details of the topics on which a user has relevance, as well as the score
  • Klout provides scoring to 2 decimal places; PeerIndex provides it to the nearest integer only (no decimal places)
  • Klout now provides information about "TOPICS"
  • I am not a developer, so accept the rudimentary nature of this study. The Paessler monitoring software made this task very easy, but not scientific!
  • I have noticed occasional errors from both APIs, with both suddenly reporting scores way off the median mark (a simple guard against this is sketched after this list). Does this signal danger for a contact centre routing strategy? Could the returned score be wrong? How often does the API return such bad scores?
  • The average response time for both APIs is about 2.8 seconds from request to response. Does this benchmark remain true over a month-long study?
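On that concern about rogue scores: a simple defensive pattern, my own suggestion rather than anything in the study tooling, is to hold a rolling median of recent scores and fall back to it whenever a freshly returned score strays too far from it:

```python
# My own suggestion, not part of the study tooling: reject a returned score
# that strays too far from the recent median before it reaches routing.
from collections import deque
from statistics import median

class ScoreGuard:
    def __init__(self, window=24, max_deviation=10.0):
        self.history = deque(maxlen=window)  # recent accepted scores
        self.max_deviation = max_deviation

    def accept(self, score):
        """Return a score safe to route on: the new score if plausible,
        otherwise the rolling median of recently accepted scores."""
        if len(self.history) >= 5:
            mid = median(self.history)
            if abs(score - mid) > self.max_deviation:
                return mid  # likely an API glitch; hold the line
        self.history.append(score)
        return score

guard = ScoreGuard()
for s in [48, 49, 48, 48, 49, 27, 48]:  # 27 mimics the anomaly seen later
    print(guard.accept(s))
```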

Methodology

I tend to tweet 6 to 8 times a day of my own volition. I also have several automated Twitter, Facebook and LinkedIn feeds, and Google News RSS feeds on my chosen subjects. I always tweet about my core subjects and maintain consistency, except for location-based social media updates from Foursquare. The automated updates to my social media accounts will continue for a period X, at which point I will turn them off. What effect will this have? I also tweet regularly with links to blog content, such as this study, or any number of specialist themes in customer service, contact centre solutions, customer experience and customer effort. I will attempt to rally several bursts of increased and very intense activity to monitor the underlying change in scoring.

First Interim Results - Day 15 of the Study

Note - in both cases I have scheduled a lookup against the API to run once every 5 minutes, 24 hours a day, for the report period.
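In code, that schedule could look something like the sketch below. It assumes the klout_score() and peerindex_score() functions from the earlier snippet (or any callables with the same shape); the CSV layout is my own invention, not the Paessler tooling actually used:

```python
# Sketch of the 5-minute polling schedule; `lookups` maps an engine name to
# any function taking a screen name and returning a score.
import csv
import time
from datetime import datetime, timezone

def poll_forever(lookups, screen_name, interval=300, path="scores.csv"):
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            ts = datetime.now(timezone.utc).isoformat()
            for engine, lookup in lookups.items():
                try:
                    writer.writerow([ts, engine, lookup(screen_name)])
                except Exception as exc:  # record the failure, keep polling
                    writer.writerow([ts, engine, "ERROR", repr(exc)])
            f.flush()
            time.sleep(interval)

# e.g. poll_forever({"klout": klout_score, "peerindex": peerindex_score},
#                   "dave_t_pilot")
```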

Klout Results

  • The Klout API responded to the score lookup 12,885 times (99.115%)
  • The Klout API failed to respond to the lookup request 115 times (0.885%)
  • Klout - monitored uptime of the API - 99.691%, downtime 0.309% (unavailable for a total of 1 hour and 10 seconds over the periods of attempted access)

Peerindex Results

  • The PeerIndex API responded to the score lookup 12,822 times (97.811%)
  • The PeerIndex API failed to respond to the lookup request 287 times (2.189%)
  • PeerIndex - monitored uptime of the API - 99.646%, downtime 0.354% (unavailable for a total of 1 hour, 8 minutes and 49 seconds over the periods of attempted access)
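The response percentages are simply successes over total attempts, which is easy to verify:

```python
# The reported percentages are just successes over total attempts.
for name, ok, failed in (("Klout", 12_885, 115), ("PeerIndex", 12_822, 287)):
    total = ok + failed
    print(f"{name}: {ok / total:.3%} success, {failed / total:.3%} failure")
```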

Social Network Usage in the Period

  • Twitter - routine use whilst avoiding untypical behaviour; I engaged with the timeline in my routine and normal manner
  • Facebook - routine use, typically only work related and generates little comment or feedback (I kept it this way on purpose)
  • LinkedIn - routine use
  • Blogs - I kept activity on the blog to a minimum and posted no new feed updates

Score Appraisal

  • Klout reports scores to 2 decimal places. Throughout the period of observation, the lowest score recorded was 44.08 and the highest 45.63; the average reported score was 44.53. Given the nature of my social media usage, these results are as expected.
  • PeerIndex reports scores to the nearest integer. Throughout the scoring period, the score moved between 48 and 49, with most reported scores at 48.

Anomaly Results

  • PeerIndex reported a consistent score via its API of 27 from 02:55 (GMT) to 07:25 (GMT) on 6th December 2011. For this 4.5-hour period the API remained constant at 27. Is this a hint that one of the connected 'measured' networks was offline, and that the PeerIndex API built scores based only on the remaining monitored social network activity?
  • After 20 days of running the lookups, I see more or less nightly drops to a baseline score of 27, but it quickly recovers to 55, typically within the hour
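These plateaus are easy to flag in the logged data. Again a sketch of my own rather than the study tooling: detect runs where the API reports exactly the same value for suspiciously long, and combine that with the median guard shown earlier before trusting a score for routing:

```python
# My own sketch: flag runs where the API reports the same value repeatedly,
# like the nightly plateaus at 27. A real filter would combine this with
# the deviation-from-median check shown earlier.
from itertools import groupby

def constant_runs(samples, min_length=6):
    """Yield (value, count) for every run of identical consecutive scores
    at least min_length samples long (30+ minutes at 5-minute polling)."""
    for value, run in groupby(samples):
        count = sum(1 for _ in run)
        if count >= min_length:
            yield value, count

scores = [48, 49, 48] + [27] * 54 + [55, 55, 54]  # a 4.5-hour plateau at 27
for value, count in constant_runs(scores):
    print(f"score stuck at {value} for {count} consecutive samples")
```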

Average API response times

  • Klout API - 3.216 seconds per lookup
  • PeerIndex API - 3.122 seconds per lookup

Summary

I am pleased by the high level of availability of both APIs. Klout appears to be performing marginally better, and I also like the fact that I get scores to 2 decimal places. If I were a user particularly looking to track my social network activity against an 'improvement' in score, I'd be more likely to use Klout, because it gives me more or less real-time feedback on the nature of the activities that are assessed.

Next Study Phase

The next phase of the analysis will be to manipulate my online behaviour to see how quickly the results change in accordance with the activity. No conclusions can be drawn from this first 15-day run-cycle; it merely establishes the baseline against which the manipulation of online behaviour may be assessed.

Social Activity Increase Test - First Milestone Trigger

I have been carefully increasing my social media activity on Facebook, LinkedIn and Twitter over the last 4 or 5 days. I have also used some automation tools to feed posts from the blog directly to the social streams at regular, but not heavy, intervals. I have noted a roughly 10% increase in my @peerindex score, from its flat baseline of 48 to 53. Should I now expect to see a corresponding increase in the @klout score? I expect so. I will maintain a watch on the scoring to see if the same or a similar increase occurs.

Results of this phase, Dec 18th

Despite remarkably increasing the rate of posting on Twitter (the primary channel), Facebook (secondary) and LinkedIn (third), the only difference occurred in the PeerIndex score, which jumped from 49 to 55 more or less overnight. Changes in Klout occurred only in the hundredths; in other words, remarkably changing my online behaviour made no difference to the reported Klout score. Even heavy engagement did not appear to make any difference; periods of heavy engagement seem to be rewarded with a Klout improvement of hundredths of a point.

Therefore, I see no result change in Klout. This suggests an update to the score happens only once every X weeks. PeerIndex, by contrast, appears to have quickly picked up the change in social engagement. Now I am confused: why is my Klout score moving by tenths of a point, seemingly randomly, each day? Klout seems to have taken absolutely no heed of the increased frequency, engagement, conversations and immersive social activity.

Next Phase - commencing December 18th

I have removed the following networks from my PeerIndex and/or Klout scoring authorisations.  In both cases, the only remaining social network is Twitter.

  • LinkedIn
  • Foursquare
  • YouTube
  • Instagram
  • Facebook

Let's see which of the two engines picks up on this change first, and to what extent removing all the other tied networks impacts the reported score.

Baseline at the start of the test:

  • Klout - 46.64
  • PeerIndex - 55

Update - December 19th

It took just under 24 hours for @klout to reassess the influence score after the removal of all social networks except Twitter; its new level is 42.43. The total decay from removing all other networks was therefore only around 4 points. This tends to suggest a heavy weighting on Twitter, which is correct, as it is my primary social network. The change has had a much more marked impact on the True Reach, Amplification and Network scores.

  • Klout True Reach decayed from 641 to 533 people
  • Klout Amplification Score decayed from 17% to 9%
  • Klout Network Score decayed from 30 to 22

I will update this post when I notice any decay in the PeerIndex score. The main control panel on PeerIndex is still stating that it was last updated 3 days ago; let's see how long it takes to update. I am now intrigued as to why the @peerindex score has remained completely static for almost a month, despite heavy social engagement. As a baseline, my total following has remained within plus or minus 10% of the starting figure of 930 followers. Given that I tweet and talk only about a pretty restricted range of 'on-topic' subjects in the customer service, customer experience and social media genres, the audience is quite restricted.

I also note that the XML API is not returning up-to-date scores for @klout. Three hours after the online score was updated by a forced refresh of the browser, the Klout API is still reporting the old score. I am now looking for two things:

  • How long will it take @klout to update the API feed? (This is a key one, because it is the score we use in the strategies when we route social media interactions to customer service agents)
  • How long will it take @peerindex to update the score in any form from its static 55?

My take so far is that Klout bases its score not so much on recent activity as on a window of online activity, say 2 or 3 months, scoring the content found in that window. On this basis, sudden changes in online behaviour have less impact on the score; for example, going away on holiday for a week may only marginally impact it. Removing all the other networks will have removed the immediate 'audience' and 'reach' from the scoring algorithm's view, but NOT the historic context. This is why the overall Klout score moved by less than 10%.
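A toy model makes the point. If the score were, say, a simple average over a 90-day window of daily activity (the window length and the activity figures below are invented purely for illustration), one very intense week would barely move it:

```python
# Toy model of the windowed-scoring hypothesis; the 90-day window and the
# activity numbers are invented purely to illustrate the effect.
def windowed_score(daily_activity, window=90):
    recent = daily_activity[-window:]
    return sum(recent) / len(recent)

baseline = [10] * 90            # 90 days of steady activity
burst = [10] * 83 + [40] * 7    # the same, with one very intense week

print(windowed_score(baseline))  # 10.0
print(windowed_score(burst))     # ~12.3 - a week's burst barely moves it
```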

At 14:45 British time, the PeerIndex score changed! It didn't go down; it went up! The API is now reporting 56 every hour, despite my having disconnected all the networks authorised for its algorithm 24 hours ago. I can only assume that this is a result of longer-term monitoring!

Christmas Break 2011

I am going to return both influence engines to their standard baselines and connected networks and enjoy a couple of weeks off over Christmas.  In the meantime the engines will be reporting their scores on this blog. See you next year!