Businesses Should Guard Their Reputation

31/10/2007

Businesses need to become more proactive about managing their reputation online as the risks of operating on the web are on the increase, according to research from Gartner.

Gartner predicts that by the end of 2010 criminals will routinely use the internet to extort funds from organizations, threatening to damage their corporate reputation by ensuring that routine online search requests will return negative or even libellous results.

“If your business depends on a positive internet reputation, then you have little choice but to explicitly manage that reputation online,” said Jay Heiser, research vice-president at Gartner.

“The internet is like a bad-news Petri dish. Negative information multiplies and spreads with frightening speed and becomes virtually impossible to erase.”

Despite the plethora of reputational resources that are available to assess and help manage reputation — from PR agencies and competitive analysis companies to identity verification services and content analytics tools — a comprehensive ‘scan and alert’ mechanism for the internet does not yet exist, said Heiser.
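While no comprehensive tool exists, a rudimentary scan-and-alert loop can be improvised. The sketch below is a minimal Python illustration of the idea, using the third-party requests package; the search endpoint, the API key, and the list of negative terms are hypothetical placeholders, not any real service.

```python
# Minimal scan-and-alert sketch. The endpoint, key, and term list are
# illustrative placeholders, not a real reputation-monitoring service.
import requests

SEARCH_URL = "https://api.example-search.com/v1/search"  # hypothetical endpoint
NEGATIVE_TERMS = {"scam", "fraud", "lawsuit", "complaint", "boycott"}

def scan_brand(brand: str, api_key: str) -> list[dict]:
    """Query the (hypothetical) search API for the brand name and flag
    results whose title or snippet contains a negative term."""
    response = requests.get(
        SEARCH_URL,
        params={"q": brand, "count": 20},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    alerts = []
    for result in response.json().get("results", []):
        text = f"{result.get('title', '')} {result.get('snippet', '')}".lower()
        if any(term in text for term in NEGATIVE_TERMS):
            alerts.append(result)
    return alerts

if __name__ == "__main__":
    for hit in scan_brand("Acme Corp", api_key="YOUR_KEY"):
        print("ALERT:", hit.get("url"))
```

A production version would add scheduling, de-duplication against previously seen results, and real sentiment analysis instead of a fixed term list.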

“Reputational persistence is a unique internet phenomenon that traditional reputation specialists have never had to deal with. The fact is that where the internet is concerned the only way to counteract persistent negative information is to overcome it with a greater weight of positive information. This means getting to grips with internet reputation management.”

Heiser recommended that organizations seeking to proactively manage their reputations online take steps to understand the role that reputation plays in social and commercial relationships and work with PR and marketing to create a reputation management strategy.

He also said firms should take the trouble to educate their employees about how to assess reputation and look for new business opportunities in reputation enhancement.

Heiser also highlighted the importance of establishing a policy against allowing employees to place co-worker recommendations on peer reference sites such as LinkedIn or Ryze.


Why Competitive Intelligence?

30/10/2007

In today’s fast-changing business world, no one likes surprises. Being proactive rather than reactive is one of the most powerful ways to create value within an organization. This requires a continuous process of transforming information into intelligence so that you can manage the future. One of the best tools for making this process work is Competitive Intelligence.

Competitive Intelligence is not just about critical strategic thinking.

Competitive Intelligence can also provide numerous other benefits:

  • Learn more about your own organization – Competitive Intelligence can be used to identify what should be measured within your organization. Competitive Intelligence also focuses on the same critical success factors used in the Balanced Scorecard.
  • Anticipate trends that are unique to your business or industry.
  • Monitor competition, new products, new markets, regulatory action or other external factors critical to your success.
  • Research potential target companies for merger and acquisition.

If you need to better understand the basics of competitive intelligence, then download this white paper by Larry Kahaner, author of the book Competitive Intelligence.


CI Spider: a tool for competitive intelligence on the Web

26/10/2007

Abstract

Competitive Intelligence (CI) aims to monitor a firm’s external environment for information relevant to its decision-making process. As an excellent information source, the Internet provides significant opportunities for CI professionals as well as the problem of information overload. Internet search engines have been widely used to facilitate information search on the Internet. However, many problems hinder their effective use in CI research. In this paper, we introduce the Competitive Intelligence Spider, or CI Spider, designed to address some of the problems associated with using Internet search engines in the context of competitive intelligence. CI Spider performs real-time collection of Web pages from sites specified by the user and applies indexing and categorization analysis on the documents collected, thus providing the user with an up-to-date, comprehensive view of the Web sites of user interest. In this paper, we report on the design of the CI Spider system and on a user study of CI Spider, which compares CI Spider with two other alternative focused information gathering methods: Lycos search constrained by Internet domain, and manual within-site browsing and searching. Our study indicates that CI Spider has better precision and recall rate than Lycos. CI Spider also outperforms both Lycos and within-site browsing and searching with respect to ease of use. We conclude that there exists strong evidence in support of the potentially significant value of applying the CI Spider approach in CI applications.
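The abstract gives no implementation detail, but the core mechanism it describes, real-time within-site collection followed by simple indexing, is easy to sketch. The code below is not the authors' CI Spider; it is a minimal breadth-first crawler restricted to one host, with naive keyword indexing, written against Python's standard library.

```python
# Minimal within-site spider with naive keyword indexing: a sketch of
# the general approach the abstract describes, not the authors' code.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collect anchor hrefs and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.text.append(data)

def spider(start_url: str, keywords: set[str], max_pages: int = 50) -> dict:
    """Breadth-first crawl limited to the start URL's host; return an
    inverted index mapping each keyword to the pages that mention it."""
    host = urlparse(start_url).netloc
    queue, seen = deque([start_url]), {start_url}
    index: dict[str, list[str]] = {k: [] for k in keywords}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip unreachable or non-decodable pages
        parser = LinkAndTextParser()
        parser.feed(html)
        page_text = " ".join(parser.text).lower()
        for kw in keywords:
            if kw.lower() in page_text:
                index[kw].append(url)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index
```

The real system adds categorization (e.g., noun-phrase extraction and clustering) on top of collection; this sketch stops at the inverted index to keep the mechanism visible.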


Building a world-class CI function

23/10/2007

Dr Alessandro Comai and Dr Prescott have been working on a multi-stage research project whose purpose is to develop a model for building a world-class CI function (http://www.world-class-ci.com).
 
They are now trying to build norms around the model they have developed. They would therefore like to invite you to benchmark your function against the model, so that they can collect as much data as possible to build those norms.

If you are interested in participating, please contact Dr Alessandro Comai at alessandro.comai@world-class-ci.com and he will send you the link to their online survey.
 
This exercise will take about 20 minutes and will give you access to the model. You will be able to compare your Competitive Intelligence function against a world-class standard. Moreover, they will send you the norms by the end of October and offer you free access to their next multimedia tool (they hope to have it ready by the end of November).
 

Alessandro Comai
BSc in Engineering, MBA, DEA (Esade), PhD Candidate (Esade)
e-mail: alessandro.comai@world-class-ci.com
web: http://www.world-class-ci.com 
 


UK government agency to monitor blogs

11/10/2007

by Carlos Grande 

The COI, the UK government’s communications agency, is working on a way to monitor what people say about policy on blogs and internet forums for the media briefings it sends to ministers.

A project by the COI’s Media Monitoring Unit is considering how to add blogs to its regular summaries of government coverage in mainstream press or television.

The summaries are used across Whitehall from ministers to departmental communications teams, often as an early warning service on issues rising up the public’s agenda.

The blog project was in part prompted by departments’ concerns at being caught unawares by debates spread on the web.

It reflects the growing media profile of the format and the fact some individual bloggers are moving from niche self-publishers to establishment opinion-formers.

Clarence Mitchell, director of the MMU, said though there was debate about the objectivity of some bloggers, several were taken increasingly seriously within government.

Mr Mitchell said: “There’s a whole level of debate taking place online which simply didn’t exist before and departments feel they need to be fully engaged in that.”

He insisted any future service by the unit would not intervene in monitored blogs.

However, individual departments which took any service might choose to reply directly to bloggers’ criticisms – as they would to any commentator – or address points through general media statements.

Pilot studies have looked at pensioners’ online reactions to a recent budget and internet opinions on counter-terrorism measures. They have tracked web traffic generated as well as the tone of discussions.

The blog monitoring would need a sufficient number of individual government departments to agree to cover the extra costs involved. If this happened, MMU estimates a service could operate by the end of the year.

A growing number of companies already monitor blogs in sectors such as technology where online product reviewers can be highly influential.

Universal McCann, the media buyer, recently estimated that more than 50 per cent of UK respondents to an online survey said they had read a blog within the last six months and about 20 per cent had posted comments on their own.

The media buyer said this lagged far behind China and South Korea, where blogging – mostly devoid of politics in China – was more widespread, and less likely to be seen as self-interested than it is in the West.

The vast majority of blogs in the UK and the US are abandoned after a relatively short period of time or read by only a handful of friends or contacts.

Published on FT.com, August 15 2007


StOCNET: Software for the statistical analysis of social networks

05/10/2007

StOCNET is an open software system in a Windows environment for the advanced statistical analysis of social networks. It provides a platform to make a number of recently developed, and therefore not (yet) standard, statistical methods available to a wider audience. A flexible user interface utilizing an easily accessible data structure has been developed so that new methods can readily be included in the future. As such, it allows researchers to develop new statistical tools by combining their own programs with routines of the StOCNET system, providing faster availability of newly developed methods.

In this paper the authors show the current state of development. The emphasis is on the implementation and operation of the programs included in StOCNET: BLOCKS (for stochastic blockmodeling), p2 (for analyzing binary network data with actor and/or dyadic covariates), SIENA (for analyzing repeated measurements of social networks), and ZO (for calculating probability distributions of statistics). Moreover, they present an overview of contributions that will become available in the near future, and of planned activities with respect to the functionality of the StOCNET software. StOCNET is a freeware PC program, and can be obtained from the StOCNET website at http://stat.gamma.rug.nl/stocnet/.
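StOCNET itself is a Windows GUI rather than a programming library, so there is no API to call from code. Purely as an illustration of the descriptive statistics that such network analyses start from, here is a short sketch using the third-party networkx package (unrelated to StOCNET) to compute density, reciprocity, and in-degrees on a toy directed friendship network.

```python
# Illustrative only: StOCNET is a GUI program with no Python API.
# This sketch computes the kind of descriptive statistics that the
# statistical models in StOCNET's modules build on.
import networkx as nx

# Toy directed friendship network: an edge u -> v means "u names v".
G = nx.DiGraph([
    ("ann", "bob"), ("bob", "ann"), ("bob", "cem"),
    ("cem", "dee"), ("dee", "ann"),
])

print("density:    ", nx.density(G))        # observed ties / possible ties
print("reciprocity:", nx.reciprocity(G))    # share of ties that are returned
print("in-degrees: ", dict(G.in_degree()))  # popularity of each actor
```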



Source Discovery and Validation

02/10/2007

One of the most important phases of any project aimed at observing and analysing the (textual) information content generated by networks is certainly the choice of sources. In this respect, a manual selection of sources, carried out via keyword searches on search engines or by “browsing” the thematic structures offered by other resources (Xanga, Alexa, Technorati, etc.), immediately proves inefficient. Using these methods exposes the analyst to a number of critical issues, among them the risk of cataloguing an excessive number of sources, with the associated negative effects of redundancy or low information value (sources that are too specialised or too general push the actual result away from the optimal one), up to the most serious case, and the most frequent in real applications: selecting only the sources that enjoy the best positions in search engine rankings. In that case the selection is in fact made by the engines (and with parameters entirely different from the project’s requirements) rather than by the operator, who in essence has no conscious control over the criteria by which the choice is made but, far worse, has the illusion of such control.

A method that offers better guarantees in this respect is one based on ontologies. Building an effective formalisation of the project itself in all its components and relationships (including the concept of “source” and its ontology) can help the analyst in the discovery and validation of information sources. It is not a contradiction to found the whole project on the concept of “source”: what is a source, ontologically speaking, if not “information about a resource’s capacity to inform”? In intelligence the imperative is to “identify and extract significant information”; if the source itself is a particular kind of information (“…I know [information] that this specific source [resource] can give me information about…”), then it is right to proceed first with the identification and discovery of sources.

As for this specific project, which we have decided to treat as an intelligence problem, the intention is to attempt an ontological formalisation of it using Protege. Once that is done, we will proceed with the discovery, mining, and validation of the significant sources, from which all the information material needed for the later phases of the project will then be drawn.
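As a rough illustration of this idea, and emphatically not the project's actual ontology, the sketch below uses Python's rdflib to declare a tiny “Source” class, assert what one candidate source can inform on, and then “validate” sources against a project topic with a SPARQL query. The ci namespace and all names are invented for the example; a Protege-built ontology would express the same thing in OWL/RDF.

```python
# Sketch of ontology-driven source validation; every name here is
# invented for illustration, not taken from the project's ontology.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CI = Namespace("http://example.org/ci#")  # hypothetical namespace

g = Graph()
g.bind("ci", CI)

# Ontologically, a source is "information about a resource's capacity
# to inform": model it as a class with a coversTopic property.
g.add((CI.Source, RDF.type, RDFS.Class))
g.add((CI.coversTopic, RDF.type, RDF.Property))
g.add((CI.coversTopic, RDFS.domain, CI.Source))

# One candidate source (name invented) and the topic it can inform on.
g.add((CI.exampleFeed, RDF.type, CI.Source))
g.add((CI.exampleFeed, CI.coversTopic, Literal("consumer electronics")))

# Validation step: keep only sources declared to cover the project topic.
results = g.query("""
    PREFIX ci: <http://example.org/ci#>
    SELECT ?src WHERE {
        ?src a ci:Source ;
             ci:coversTopic "consumer electronics" .
    }
""")
for (src,) in results:
    print("validated source:", src)
```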