It’s been announced that Rob Flaherty, CEO of the Ketchum PR consultancy group, is to be the keynote speaker at AMEC’s European Summit on Measurement in June. According to Ketchum’s PR Newswire announcement, he will be calling for a “unified effort to establish measurement as key tool”.
I don’t know Rob but have met his predecessors David Drobis and Ray Kotcher. I have a lot of time for the consultancy’s immensely likeable European CEO, David Gallagher, who, like me, is a former PRCA chairman. There are lots of very able people at Ketchum, so this “memo to Rob” is offered collegially.
Having researched the history and practice of PR measurement and evaluation since 1992, I have watched this issue come around again and again. Nothing new is said, and PR people continue not to evaluate.
I have delved recently into the International Public Relations Association (IPRA) archive, which starts in 1953, and am planning to write a paper on the practice topics that appear decade after decade. For industry leaders who think they have a new insight to offer: forget it; it has probably been said ten times before. There is little institutional memory in the business.
For Rob’s address to the Madrid jamboree, here is a timeline of PR measurement and evaluation (see also Watson, 2012).
1905/6: The Publicity Bureau of Boston, which Scott Cutlip says was the first PR agency in the US, developed the ‘Barometer’. It was a researched guide to the attitudes and interests of newspaper editors to help with accurate placement of editorial material.
1920s onwards: AVEs (advertising value equivalents) and multipliers start to be used in press agentry and publicity work in the US. They continue to this day.
1928 to mid-1940s: Arthur W. Page uses extensive opinion research to shape AT&T’s communications, public relations and customer-facing behaviours.
1950s onwards: Cutlip and Center’s PII (Preparation, Implementation, Impact) model of PR planning and measurement appears in the still-published Effective Public Relations. Generations of PR practitioners have been taught PII.
1977: James Grunig, in association with AT&T, begins sustained academic research into measurement and evaluation. This leads to a flowering of research and publication that continues to this day.
1990: Glenn Broom and David Dozier publish the still-excellent Using Research in Public Relations, which has been used extensively around the world.
1993: Walter Lindenmann, who worked for H&K and Ketchum, introduces his Three-Step Yardstick of Outputs, Out-takes and Outcomes. These terms have become the standard terminology of public relations measurement.
Late 1990s: The three-year 'Proof' campaign to promote best practice in PR planning, research and evaluation is launched in PR Week (UK) in collaboration with the PRCA and the then IPR (now CIPR).
2010: The Barcelona Principles, a benchmark statement of existing practices, are launched by AMEC and widely supported by PR organisations.
My case is that practitioners have been offered well-developed methods of PR measurement and evaluation from at least the late 1970s onwards. In 2008, a paper by Anne Gregory and me reviewed the range of methodology and called for practitioners to use it. No further basic research was needed, we said. There were no knowledge barriers; it was time to borrow Nike’s theme and for PR people to “just do it”.
There have been innumerable books written and industry initiatives conducted, but take-up by practitioners remains very low. Only recently, a Ragan survey found that around two-thirds of US practitioners had not heard of the Barcelona Principles. So it is not new methods that are needed; it is for practitioners to open their minds and change their behaviours.
So Rob, when you stand on the Madrid platform with your “roadmap on the future of PR”, please propose that practitioners take their own futures in their hands and apply the PR measurement and evaluation methods that have been around for decades. They are well-proven.
Best wishes, TOM
Here’s some reading to help you prepare the paper:
Broom, G.M., & Dozier, D.M. (1990). Using research in public relations. Englewood Cliffs, NJ: Prentice Hall.
Gregory, A., & Watson, T. (2008). Defining the gap between research and practice in public relations programme evaluation – towards a new research agenda. Journal of Marketing Communications, 14(5), 337-350.
Lindenmann, W.K. (2006). Public relations research for planning and evaluation. Gainesville, FL: Institute for Public Relations. Available from: http://www.instituteforpr.org/topics/pr-research-for-planning-and-evaluation/
Watson, T. (2012). The evolution of public relations measurement and evaluation. Public Relations Review, 38(3), 390-398. DOI:10.1016/j.pubrev.2011.12.1018.
Reply from Rob Flaherty:
Tom,
Thanks very much for your memo and thoughtful input. I am sure I will refer to this post and your 2012 article in Public Relations Review as I prepare my remarks. Like you, I consider myself a student of public relations history and have been immersed in the measurement conundrum for decades. I agree that it's not new methods we need but behavior change and wide adoption of standards. We come at this from the same angle. I'll try to advance the cause in Madrid. Thanks.