Content Classification – The What, Where and Why of Targeting

August 3, 2013

As a Web publisher, it is in your interest to help advertisers reach viewers of your Web site with precision so that they can better connect and engage with those users. Online advertisers use various methods for reaching the right audience in the right places and at the right time. For example, they have used users' online activities, including browsing history, searches, social interactions and online purchases, to understand online users and then present them with their products & services at relevant times and on appropriate sites. Depending on the outcome of the active discussions around Do-Not-Track (DNT), advertisers might have to lean towards alternative ways to reach their desired audience – content targeting being one such option. Note that content targeting does not replace audience targeting; in fact, the two complement each other very well.

A publisher can accurately classify the content of their online presence. This lets advertisers combine audience & content targeting to zoom in on just the right users for their products & services in just the right places. Content targeting defines the what, why and where of the user.

For each page that the publisher maintains on its Web site, it can determine "what" the page is about. What are users looking for when they come to this page? For a blogger who specializes in, say, social media marketing, the answer is simpler than for a general news page, where articles change constantly and there is no simple way to classify the content into a few categories. For a site that sells residential real estate, the content topics are easier to define than for a site full of user-generated content covering a plethora of topics. Knowing what a page is about helps a relevant product/service provider match users' interests with its marketing message/offer for higher engagement.
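
To make the idea concrete, here is a minimal sketch of keyword-based page classification in Python. The categories, keywords and sample page text are all hypothetical; real ad platforms typically rely on standard taxonomies (such as the IAB content categories) and far more sophisticated classifiers.

    import re

    # Hypothetical categories and keywords, for illustration only.
    CATEGORY_KEYWORDS = {
        "social media marketing": {"facebook", "twitter", "engagement", "followers"},
        "residential real estate": {"bedroom", "mortgage", "listing", "realtor"},
        "general news": {"election", "breaking", "weather", "report"},
    }

    def classify_page(text):
        """Rank categories by how many of their keywords appear in the page text."""
        words = set(re.findall(r"[a-z]+", text.lower()))
        scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
        return sorted((c for c, s in scores.items() if s > 0),
                      key=lambda c: scores[c], reverse=True)

    print(classify_page("New listing: 3 bedroom home with a great mortgage rate"))
    # -> ['residential real estate']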

A publisher can also establish "why" users come to each page. Are they looking for information, and hence researching & reading about topics? Are they looking to share their opinions and thoughts with the world? Are they buying or selling goods and services on that page/site, or are they watching videos to entertain themselves? Knowing why users come to a page and how they interact with it helps a relevant product/service provider match users' activities with its marketing message/offer.

A publisher can also expose the location (URL) of a page for targeting so that advertisers know "where" users are going. If an advertiser has seen better engagement and results from previous campaigns on certain pages, it will want to repeat that success by targeting users of those pages (and other similar pages) in its campaigns.

As a publisher of online content, it is in your interest to have your content classified comprehensively and accurately so that advertisers buying ad space on your Web site know exactly what your site is about. Later we will discuss whether this classification is a one-time job or an ongoing activity. We will also discuss situations in which it is not easy to identify the topics of a page.

Do You Care Who is Watching Your Ads?

March 19, 2013

Advertisers in the US will spend about $65 billion on TV campaigns in 2013, compared to about $17 billion on online display ads. These advertisers measure the success of a TV ad campaign by a well-established metric called GRP (Gross Rating Point). The math for calculating GRP is simple: how many viewers were exposed to an ad, and how many times. The higher the GRP, the more successful a campaign is believed to be. Herbert Krugman's 3+ theory (http://en.wikipedia.org/wiki/Effective_frequency#Herbert_E._Krugman) forms the basis of this metric; it says that at least three exposures to a commercial are needed for effective communication. More recently, a Nielsen and Facebook study suggested that the optimal frequency of exposure for a social ad is at least 10 (http://aonianow.com/press/2010/06/10/nielsenfacebook-report-quaffs-and-the-value-of-social-media-ad-impressions/). The bottom line is that higher frequencies can generate brand awareness, which can then lead to more sales and profits for brand advertisers.
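
As an illustration of the arithmetic (with made-up numbers), GRP is commonly computed as reach, the percentage of the target audience exposed at least once, multiplied by the average frequency of exposure:

    def gross_rating_points(reached, audience_size, total_impressions):
        """Return (reach %, average frequency, GRP) for a campaign."""
        reach_pct = 100.0 * reached / audience_size      # % of the audience exposed at least once
        avg_frequency = total_impressions / reached      # exposures per reached viewer
        return reach_pct, avg_frequency, reach_pct * avg_frequency

    # Hypothetical example: 40,000 of 100,000 target households see the ad,
    # 3 times each on average -> 40% reach x 3.0 frequency = 120 GRPs.
    reach, freq, grp = gross_rating_points(reached=40_000, audience_size=100_000,
                                           total_impressions=120_000)
    print(f"Reach: {reach:.0f}%  Frequency: {freq:.1f}  GRP: {grp:.0f}")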

The question is: do we know who is watching these ads on TV? Are the right people in the room when the ad is shown? Are the viewers distracted by their smartphones, video games, social interactions or something else while the TV is on? More importantly, are advertisers accounting for the marketing dollars wasted on TV advertising? If viewers are not watching your ads, are your marketing campaigns successful? Are you already accounting for wasted ads, painfully aware of the low ROI from TV?

Recent announcements from BSkyB in the UK are a step towards addressing some of these issues, at least partially. BSkyB is launching a "tailored" advertising service, targeting ads to specific viewing tastes, household make-ups and even postal codes. Commercials for minivans will only be shown to households with children, and ads for top-of-the-range washing machines to higher-income homes. The TV ad industry in the US and elsewhere has always used location-based targeting, but this is the first time it is looking at other viewer attributes for targeting. This is the TV industry's attempt to "personalize" its reach.

The display advertising industry, on the other hand, figured out the answers to brand advertisers' personalization needs long ago. It can determine a user's browsing habits & online interactions not just at the household level but at the device level. For a long time, ads have been served online based on the nature of the site the user visits, the location & demographics (age & gender) of the user, the day of the week and time of day they are online, the content of the pages they read, their browsing history, and their online targeting choices & registration data. And more recently, by the games they play online, their online social influence & interactions, the speed of their Internet connection, the device they use to get online, and their spending online & offline.

Ads in the online world are personalized, far-reaching & not limited to prime time. Your marketing dollars are put to better use, with higher reach & returns! With more and more content being consumed in video form, advertisers are building campaigns that are consistent across multiple devices.

Viewers are spending more and more time online, whether consuming information, searching, interacting with apps, or staying in touch with friends & family. TV viewership is still strong and continues to play an important role in a company's overall marketing mix. Having said that, the online display industry is ready for increased spend, while providing advertisers a familiar measurement metric – GRP. And to top that, the online display industry is working towards measuring whether an ad was actually viewed by the intended viewer, thereby adding a "viewed" attribute to GRP. Advertisers can be more confident about the ROI of their marketing campaigns – not only was a relevant ad served, but there is also high confidence that the intended audience viewed it.

The digital media advertising industry, with its personalized ad solutions, is making a strong case for a budget shift away from the entrenched TV advertising industry!

360-Degree View of Your Customer

February 24, 2010

Let me start with an old story from India about five blind men and an elephant. Five blind men are taken to an elephant, without being told what object they are standing near, and are asked to identify it. The first finds the elephant's tail and declares that it is a rope. The second gets hold of the elephant's leg and identifies it as a pillar. The third reaches the elephant's tusks and claims it is a spear. The fourth finds the elephant's trunk and is sure the object is a python, while the fifth claims the object is a fan when he feels the shape of the elephant's ear.

As far as these men were concerned, each was right in his diagnosis. However, they were all looking at one elephant from their own points of view. This is how many businesses make the mistake of looking at their customers: from the viewpoint of individual departments. Sales looks at a customer from the opportunity and pipeline point of view, whereas marketing measures the same customer by their responses to various campaigns and their propensity to move from a lead to an opportunity. For the finance team, this customer is all about accounts receivable, creditworthiness and delayed payments. And for the support team, the same customer could be all about the number of support issues and the frequency of contact.

What happens if, for the sales team, a customer is an excellent one because there are several opportunities worth hundreds of thousands of dollars with them, but the same customer is not a good one for the finance team, because they don't pay on time, have considerable overdue payments, buy low-margin items, and have had several returns in the last twelve months? For a business to be prudent and successful, it needs to be aware of all aspects of its customers. Every customer-facing department should be able to see what the other departments are doing with a customer. A support rep should know that, before she annoys this customer over a support issue, there is $150,000 in the pipeline that the salesperson is trying to close this quarter. At the same time, before going into a meeting to close a deal, the salesperson should know that three critical errors have been pending for that customer for the last two days.

A CRM (customer relationship management) system integrated with a back-office ERP (enterprise resource planning) system is the key to getting a 360-degree view of your customers. This integrated approach has an advantage from the customer's point of view as well – no matter which department the customer reaches out to, she gets a consistent response, because employees now have access to the relevant information. Inter-departmental communication barriers are also reduced with the help of such systems, thereby increasing the customer's satisfaction in dealing with your company. Overall, your company gains a competitive advantage in maintaining good customer relationships, and your employees gain higher job satisfaction.
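
As a toy illustration of what a "360-degree view" means in practice, the sketch below pulls one customer's records out of hypothetical per-department data sets into a single summary; a real CRM-ERP integration would of course go through the vendors' own APIs or connectors.

    # All department data below is invented, for illustration only.
    sales = [{"customer": "Acme Corp", "opportunity": 150_000, "stage": "closing this quarter"}]
    finance = [{"customer": "Acme Corp", "overdue": 42_000, "days_late": 35, "returns_last_12mo": 4}]
    support = [{"customer": "Acme Corp", "open_tickets": 3, "critical": 1}]

    def customer_360(name):
        """Collect every department's records for one customer into a single view."""
        departments = {"sales": sales, "finance": finance, "support": support}
        return {dept: [r for r in records if r["customer"] == name]
                for dept, records in departments.items()}

    print(customer_360("Acme Corp"))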

There are many integrated CRM-ERP systems on the market from vendors such as Sage, SAP, Microsoft, NetSuite, Oracle, Infor, and Salesforce.com. Some have a SaaS (software as a service) offering, some offer packaged software, and a few offer both. Make sure you do your due diligence in evaluating a system that is well supported, affordable, has reference sites and is built on a good architecture. Good luck!

Happy Stakeholders!?

January 29, 2010

A software development project always has its stakeholders, whether internal or external. They could be customers, or members of the finance, sales or marketing teams. They have a lot at stake, ranging from revenues to personal recognition to project cost to, sometimes, their jobs. If, during the life of a project, you see some very anxious people in your meetings, chances are they are the stakeholders, who are often more keen to see the project succeed than the project contributors themselves.

The question is – do we know of any happy stakeholders? Is "happy stakeholder" an oxymoron? I don't know about you, but lately I have seen many happy stakeholders. They are the stakeholders of my Agile projects.

I have been involving our stakeholders at very early stages of projects, even during planning. They have a say in feature prioritization and in the scope of a feature's requirements (acceptance criteria). They are able to view and validate an early version of the application before the sprint ends, giving them an opportunity to fine-tune how the application works. They have liked the idea that, before the application is delivered to them as "done", they are able to verify what will be delivered.

One of our customers got so used to the idea of frequent validations and involved sprint plannings that she started pushing us for more features and more frequent updates. She saw that Agile methodologies help deliver incremental features often and with quality. We, or rather our Agile way of developing & delivering the application, were helping her look good in front of her management and colleagues.

In another sprint planning meeting, as the conference call ended, one of the stakeholders declared that this was the best project planning meeting he had ever attended and that he was very hopeful about the outcome. Everyone else in the room agreed with him.

Bottom line: Agile methodologies promote a collaborative way of planning, developing and delivering software projects. And I can vouch for it, as it has helped me keep the morale of the project contributors high – after all, happy stakeholders mean project success, which in turn gives all of us great satisfaction.

Warning & Disclaimer: Results may vary. Agile methodologies implementation, stage of the project, size of the project, expectations of the stakeholders, time for the project delivery and many more factors might affect the expected outcome. 🙂

Autobiography of a Performance User Story

January 22, 2010

I am a performance requirement, and this is my story. I just got built and accepted into the latest version of a Web-based SaaS (software as a service) application (my home!) that allows salespersons to search for businesses (about 14 million of them) and individuals (about 200 million) based on user-defined criteria, and then view the details of contacts from the search results. The application also allows subscribers to download the contact details for further follow-up.

I'm going to walk through my life in an agile environment – how I was conceived as an idea, grew up to become an acknowledged entity, and achieved my life's purpose (being nurtured in an application ever after). First, a disclaimer – the steps described below do not exhaustively describe all the decisions taken around my life.

It all started about three months back. The previous version of the application was in production with about 30,000 North American subscribers. The agile team was looking to develop its newer version.

One of the strategic ideas that had been discussed in quite some detail was upgrading the application's user interface to a modern Web 2.0 implementation, using more interactive and engaging on-screen process flows and notifications. The proposed changes were primarily driven by business conditions, market trends and customer feedback. Management had a vision of capturing a bigger slice of the market. The expectation was to add 100,000 new subscribers within twelve months of release, all from North America. A big revenue opportunity! Because the changes were confined to the user interface, no one thought about the potential impact on application performance. I was nowhere in the picture yet!

Due to the potential revenue impact of the user interface upgrade, the idea was moved high up the application roadmap for immediate consideration. The idea became a user story that moved from the application roadmap to the release backlog. Application owners, architects and other stakeholders started discussing the upgrade in more detail. During one such meeting, someone asked the P-question – what about performance? How will this change impact the performance of the application? It was agreed that the performance expectations of the user-interface changes should be clearly captured in the release backlog. That's when I was conceived. I was vaguely defined as – "As an application owner of the sales leads application, I want the application to scale and perform well for as many as 150,000 users so that new and existing subscribers are able to interact with the application with no perceived delays."

During sprint -1 (the discovery phase of release planning), I was picked up for further investigation and a clearer definition. Different team members investigated my implications. The application owner considered application usage growth for the next 3 years and came back with a revised peak number of users (300,000). The user interface designer built a prototype of the recommended user-interface changes, focusing on the most intensive transaction of the application – when a search criterion is changed, the number-of-contacts-available counter on the screen needs to be updated immediately. The architect tried to isolate possible bottlenecks in the network, database server, application server and Web server due to the addition of chattier Web components such as AJAX, JavaScript, etc. The IT person looked at the current utilization of the hardware in the data center to identify any possible bottlenecks and came back with a recommendation to cater to the expected increase in usage. The lead performance tester identified the possible scenarios for performance testing the application. At the end of sprint -1, I was redefined as – "As an application owner of the sales lead application, I want the application to scale and perform well for as many as 300,000 simultaneous users so that when subscribers change their search criteria, an updated count of leads available is refreshed within 2 seconds on the screen." I was defined with more specificity now. But was I realistic and achievable?

During sprint 0 (the design phase of release planning), I was picked up again to see the impact I would have on the application design. The IT person realized that supporting the revised number of simultaneous users would require purchasing additional servers. Since that process would take longer, his recommendation was to scale the number of expected users back to 150,000. Due to the short timeframe, the user interface designer decided to limit the Web 2.0 translation to the search area of the application and put the remaining functional stories in the product backlog. The architect made recommendations on modifying the way some of the Web services were being invoked and on fine-tuning some of the database queries. A detailed design diagram was presented to the team leads along with compliance guidelines. The lead performance tester focused on getting the staging area ready for me. I was reshaped to – "As an application owner of the sales lead application, I want the application to scale and perform well for as many as 150,000 simultaneous users so that when subscribers change their search criteria, an updated count of leads available is refreshed within 2 seconds on the screen." I was now an INVESTed agile story, where INVEST stands for independent, negotiable, valuable, estimable, right-sized (small) and testable.
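
To show how an acceptance criterion like this can become an automated check, here is a minimal sketch. The endpoint URL and the modest thread count are placeholders I made up; actually driving 150,000 simultaneous users would call for a proper load-testing tool (JMeter, LoadRunner, Locust or similar) rather than a script like this.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Hypothetical staging endpoint that returns the count of matching leads.
    SEARCH_COUNT_URL = "http://staging.example.com/leads/count?industry=software"

    def timed_request(_):
        start = time.perf_counter()
        urlopen(SEARCH_COUNT_URL, timeout=5).read()
        return time.perf_counter() - start

    # 50 concurrent "users" issuing 500 requests in total.
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = sorted(pool.map(timed_request, range(500)))

    p95 = latencies[int(0.95 * len(latencies)) - 1]
    assert p95 <= 2.0, f"95th percentile {p95:.2f}s exceeds the 2-second criterion"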

During the sprint planning and execution phase, developers, QA testers and performance testers were all handed all the requirements (including mine) for the sprint. While developers started making changes to the code for the search screen, QA testers got busy writing test cases, and performance testers finalized their testing scripts and scenarios. Builds were prepared every night, and incremental changes were tested as soon as new code was available. Both QA testers and performance testers worked closely with the developers to ensure functionality and performance were not compromised during the sprint. Daily scrums provided the much-needed feedback to the team so that everyone knew what was working and what was not. A lot of time was spent on me to ensure my 2-second requirement did not slip to 3 seconds, as that would have a direct impact on customer satisfaction. I felt quite important, sometimes even more than my cousin story, the search screen user interface upgrade! At the end of a couple of 4-week sprints, the application was completely revamped with Web 2.0 enhancements, with functionality and performance fully tested – ready to be released. I was ready!

Today, I will be deployed to the production environment. No major hiccups are expected, as during the last two weeks I was beta tested by some of our chosen customers on the staging environment. The customers were happy with the outcome, and so were the internal stakeholders. During these two weeks, I hardened myself and got ready to perform continuously and consistently. Even though my story ends today, my elders have told me that I will always be a role model (baseline) for future performance stories to come. I will live forever in some shape or form!

Performance Testing and Agile SDLC

January 22, 2010

Is the agile software development lifecycle (SDLC) all about sprinting, i.e. moving stories from the product backlog to the sprint backlog and then executing iterative cycles of development and testing? IMHO, not really! We all know that certain changes to an application can be complex, critical or have a larger impact, and therefore require more planning before they are included in development iterations. Agile methodologies (particularly Scrum) accommodate application planning and long-term, complex changes to the application in a release planning sprint called Sprint 0 (zero), which is primarily driven by business stakeholders, application owners, architects, UX designers, performance testers, etc.

Sprint 0 brings a bit of the waterfall process into agile, with two major differences – sprint 0 is shorter in duration (2-4 weeks) and the emphasis on documentation is not as heavy as in the waterfall method. In my experience, sprint 0 is more efficient when it overlaps: while the development team and testers are working on sprints of the current release, stakeholders, architects, application owners, business analysts, leads (development, QA, performance testing, user interface design), and other personas get together to scope, discuss and design the next release. Sprint 0 is executed like any other sprint, with contributors ("pigs") and stakeholders ("chickens") who meet daily to discuss their progress and blockages. Moreover, sprint 0 need not be as long as a development iteration.

I have seen organizations further divide sprint 0 into two sprints, i.e. sprint -1 (minus one) and sprint 0. Sprint -1 is a discovery sprint: it goes over the user stories to be included in the release and uncovers potential problems/challenges in the application, processes, infrastructure, etc. The output of sprint -1 is an updated release backlog, updated acceptance criteria for more clarity, high-level architectural designs, high-level component designs, user interface storyboards and high-level process layouts. Sprint 0 then becomes the design sprint that goes a level deeper to further update the release backlog and acceptance criteria, and delivers user interface wireframes, detailed architectural & component designs, and updated process flows.

The big question is, where do performance testing requirements fit in the agile SDLC described above? While "good" application performance is an expected outcome of any release, its foundation is really laid during the release planning stage, i.e. in sprints -1 and 0. We know that user stories describing the performance requirements of an application can impact various decisions taken about the application, both in its design and in its implementation. In addition, functional user stories that can potentially affect the performance of an application are also looked at in detail during release planning. Questions like these are asked and, hopefully, addressed: whether or not the application architecture needs to be modified to meet the performance guidelines; whether or not the IT infrastructure of the testing and production sites needs to be upgraded; whether or not newer technologies such as AJAX that are being introduced in the planned release can degrade the performance of the application; whether or not the user interface designs being applied in the planned release can degrade performance; whether or not making the application available to new geographies can impact performance; whether or not the expected increase in application usage is going to impact performance; etc. At the end of sprint -1, the team may choose to drop or modify some performance-related stories, or take on a performance debt on the application.

Going into sprint 0, the team will have an updated release backlog and acceptance criteria for the accepted user stories. During this sprint, the team weighs the application's performance requirements against the functional and other non-functional requirements to further update the release backlog. At the end of sprint 0, some requirements (functional and non-functional) are either dropped or modified, and detailed designs are delivered for the rest of the stories. Sprint 0 user stories then transition into the sprint planning sessions for sprints 1-N of the development and testing phase. Throughout these sprints, the application is tested for functionality, performance and other non-functional requirements so that at the end of every sprint, completed stories can potentially be released.

Agile methodologies also allow for a hardening sprint at the end of sprints 1-N, for end-to-end functional, integration, security and performance testing. The hardening sprint need not be as long as the development sprints (2-4 weeks) and is an optional step in an agile SDLC. This is the last stage where performance testers can catch any performance issues before the application gets deployed to production. But we all know that performance issues found at this stage are more expensive to fix and can have bigger business implications (delayed releases, dissatisfied end-users, delayed revenue, etc.). If the planning in sprints -1 and 0 and the subsequent execution in sprints 1-N were done the right way, chances are that the hardening sprint is more of a final feel-good step before releasing the application.

Are We Done Yet?

January 22, 2010

When is a user story considered done in an agile project? The answer depends on whom in the project I ask. A developer might consider a story done when it has been unit tested and its defects have been addressed. A QA person might consider a story done when its functionality has been successfully tested against its acceptance criteria. An application owner or a stakeholder might consider a story done when it has been architected, designed, coded, functionally tested, performance tested, integration tested, accepted by the end-user, beta tested, and successfully deployed.

Clearly, a standard is needed to properly define the term "done" in agile projects. The good news is that you can have your own definition of "done" for your agile projects. However, it is important that everyone on the team collaboratively agrees to this definition. The definition of done might vary with the adoption stage of agile methodologies in an organization (see figure below). During the early days of agile adoption, a team might agree that the definition of done is limited to Analysis, Design, Coding, and Functional and Regression Testing (the innermost circle). This means that the team is taking on a performance testing debt from each sprint and moving it to the hardening sprint. This is a common mistake, as most performance issues are design issues and are hard to fix at a later stage.

As the team becomes more comfortable and mature with agile methodologies, it expands the definition-of-done circle to first include Performance Testing and then User Acceptance Testing – all within a sprint.

Here are some tips for including performance testing in the definition of done:

  • Gather all performance-related requirements and address them during system architecture discussions and planning
  • Ensure that the team is working closely with the end-users/stakeholders to define acceptance criteria for each performance story
  • Involve performance testers early in the project, even in the Planning and Infrastructure stages
  • Make performance testers part of the development (sprint) team
  • Ensure that the performance testers work on test cases and test data preparation while developers are coding for those user stories
  • Get performance testers to create stubs for the external Web services being utilized (see the sketch after this list)
  • Deliver each relevant user story to performance testers as soon as it is signed off by the functional testers
  • Ensure that performance testers are providing continuous feedback to developers, architects and system analysts
  • Share performance test assets across projects and versions
  • Schedule performance tests for off-hours to maximize the utilization of time within the sprint
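
On the stubbing tip above, here is a minimal sketch of a stand-in for an external Web service, so performance tests do not depend on (or hammer) a third party. The port, the GET-only behavior and the canned payload are all hypothetical.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Always return the same canned response, mimicking the external service.
            body = json.dumps({"credit_score": 720, "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep the test console quiet

    if __name__ == "__main__":
        HTTPServer(("localhost", 8081), StubHandler).serve_forever()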

It is important to remember that even performance tests are code and should be planned just like coding the application, so that they become part of sprint planning and execution.

To me, including performance testing in the definition of done is a very important step in confidently delivering a successful application to its end-users. Only the paranoid survive – don’t carry a performance debt for your application!

Performance Testing for Agile Projects

January 22, 2010

Performance testing is an integral part of every software development project. When I think of agile projects, I think about collaboration, time to market, flexibility, etc. But to me, the most important aspect of agile processes is the promise of delivering a "potentially shippable product/application increment". What this promise means for application owners and stakeholders is that, if desired, the work done in an iteration (sprint) has gone through enough checks and balances (including meeting performance objectives) that the application can be deployed or shipped. Of course, the decision to deploy or ship the application is also driven by many other factors, such as the incremental value added to the application in one sprint, the effect of an update on the company's operations, and the effect of frequent updates on customers or end-users of the application.

Often, application owners fail to provide an objective assessment of application performance in the first few sprints, or until the hardening sprint – just before the application is ready to be deployed or shipped. That is an "Agile Waterfall" approach, where performance and load testing are kept aside until the end. What if the architecture or design of the application needs to change to meet the performance guidelines? There is also a notion that performance instrumentation, analysis and improvement are highly specialized tasks, which results in those resources not being made available at the start of a project. This happens when the business and stakeholders are not driving the service level measurements (SLMs) for the application.

Application owners and stakeholders should be interested in the performance aspects of the application right from the start. Performance should not be an afterthought. In agile, the application's backlog contains not only the functional requirements of the application but also the performance expectations for it. For example, "As a user, I want the application site to be available 99.999% of the time I try to access it so that I don't get frustrated and find another application site to use." Performance is an inherent expectation behind every user story. Another example might be, "As an application owner, I want the application to support as many as 100,000 users at a time without degrading the performance of the application so that I can make the application available globally to all employees of my company." These stories set the SLMs, or business-driven requirements, for the application, which in turn will define the acceptance criteria and drive the test scripts.
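
It also helps to translate a number like 99.999% into what it actually demands of the team; a quick back-of-the-envelope calculation shows the downtime budget such a story leaves:

    # Downtime budget implied by a 99.999% availability requirement.
    AVAILABILITY = 0.99999
    for period, minutes in (("year", 365 * 24 * 60), ("month", 30 * 24 * 60), ("day", 24 * 60)):
        allowed = (1 - AVAILABILITY) * minutes
        print(f"Allowed downtime per {period}: {allowed:.2f} minutes")
    # roughly 5.3 minutes per year, 0.43 per month, 0.014 per day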

It is important that, if a sprint backlog has performance-related user stories (and I'll bet nearly all of them do), its team has IT infrastructure and performance testers as contributors ("pigs" in Scrum terminology). During release planning (preferably) or sprint planning sessions, these contributors must spend time analyzing what testing must be performed to ensure that these user stories are considered "done" by the end of the sprint. Whether they need to procure additional hardware, modify the IT infrastructure for load testing, or work on automating performance tests, these contributors are active members of the sprint team, participating in daily scrums. They must keep constant pressure on developers and functional testers to deliver functionality for performance testing. After all, the success of the sprint is measured by whether or not every member delivered the final product that fully met the acceptance criteria, on time.

To me, performance testing is an integral part of the agile process, and it can save an organization money. The longer you wait to conduct performance tests, the more expensive it becomes to incorporate changes. So don't just test early and often – test functionality and performance in the same sprint!