Notes
Article history
The research reported in this issue of the journal was funded by the HS&DR programme or one of its preceding programmes as project number 11/2004/10. The contractual start date was in January 2013. The final report began editorial review in July 2014 and was accepted for publication in November 2014. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HS&DR editors and production house have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the final report document. However, they do not accept liability for damages or losses arising from material published in this report.
Declared competing interests of authors
None
Permissions
Copyright statement
© Queen’s Printer and Controller of HMSO 2015. This work was produced by Davies et al. under the terms of a commissioning contract issued by the Secretary of State for Health. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Chapter 1 Introduction
Background: evidence and health care
It is likely that the quality and effectiveness of health care could be significantly enhanced if services were aligned more closely with the available research evidence.1,2 Improvements could be seen both in the nature and types of services and treatments provided and in the processes by which these services are delivered (e.g. their location and duration, the financial and organisational infrastructures underpinning them and the personnel involved).
There are growing bodies of research knowledge from a range of fields that have the potential to inform all aspects of health-care organisation and delivery.3 For example, studies on diagnostics, prognostics and therapeutics provide fine-grained understandings about the nature of ill-health, its assessment, potential causal pathways and likely trajectory, and the scope for amelioration through health-care intervention. Epidemiological and clinical epidemiological studies are augmented and complemented by a wide variety of health services research – research aimed at exploring the lived experience of ill-health, understanding health-care-seeking behaviours, and predicting responses of patients to service encounters (e.g. clinic attendance/reattendance and concordance with treatment regimes).
Knowing what to do, how to do it, and the likely consequences of clinical interventions are only parts of the knowledge picture to which research contributes.2 Substantial bodies of research contribute to important policy questions by, for example, mapping health inequalities and health-care disparities and the dynamics of these, tracking the consequences of system incentives, and clarifying the challenges and opportunities of various performance and regulatory frameworks.
Macro policy and frontline practice are not the only domains in which research has much to offer: on managerial and organisational matters too there is now a large and growing research resource that explores, for example, the dynamics of change and leadership in health-care organisations; the development and composition of multidisciplinary teams; and the costs/consequences of different models of service design.4,5 An important subset of this work is that devoted to understanding quality and safety issues in health care (improvement science).6 In this area, many diverse models of quality improvement have been developed or (more often) imported from industry: models such as ‘lean’, ‘six sigma’ or ‘business process reengineering’. Many of these now have associated research literatures offering insights and evaluations as to their impacts, challenges, prerequisites and (more rarely) costs.7,8
Evidence making a difference
Despite this growth of research activity and despite the considerable attempts to focus and prioritise research effort on areas of greatest knowledge need,9 it is widely recognised that the linkages between research effort, knowledge enhancement and informed action are not yet working to best effect.10–13 Better understanding of how research-informed knowledge can be shared and applied in health care remains a key challenge.
Recognition of this gap between what is already known and what is done in practice has led to a number of initiatives to bridge it, in health care and in other sectors such as education, social care and criminal justice. These initiatives exist alongside growing recognition of the challenges inherent in attempting to change complex social systems and of the need to attend to a wide range of political, psychological and organisational factors.14
Framing the problem
How the literature frames the problem of closing the gap between research evidence and policy and practice has altered over the past two decades. Traditional thinking on research use in health care suggested that it was a largely linear, rational, instrumental process and that the provision of particular organisational supports (e.g. continuing medical education, mechanisms to increase access to information and guidelines, and clinical audit) would be sufficient to ensure that health professionals’ practice was in line with the evidence. This view has been subject to increasing challenge from a growing body of evidence that suggests that, far from being a simple linear process, research use is instead an intensely social and relational process.15–17 This means that a range of interventions (around system design, organisational infrastructures and the facilitation of relational and interactive approaches) are required to better connect research to policy and practice. Such interventions seek to enable research-based knowledge to be combined with other forms of knowledge, to be tested, refined and assimilated, and to be integrated into the thinking and behaviour of individuals and groups.
Building on this interactive, social and situated understanding of how research-based knowledge actually has influence, more sophisticated models of knowledge mobilisation strategies have been developed. In health care, recent reviews of these1,16,18–21 have identified the diverse languages used to describe new models – such as knowledge translation, knowledge exchange, knowledge interaction, knowledge intermediation and knowledge mobilisation.22,23 These reviews have also begun to map out the empirical support for various mechanisms thought to promote effective knowledge sharing.
Terminological debates are important, as they have the potential to reveal unexamined assumptions about the nature of knowledge and the processes of knowledge sharing.3,24 Moreover, the multiplicity of terms can sometimes act as a barrier to clear communication and be an obstacle to engagement with non-academic actors.1,22,23,25 Accordingly, in this study we explore with our field sites the breadth and value of diverse terminology; here, for simplicity’s sake, we consistently use the broad term ‘knowledge mobilisation’ as a shorthand for the range of active approaches deployed to encourage the creation and sharing of research-informed knowledge. Such a convenience is not intended to short-circuit the legitimate debates about terminology, to which we return in due course.
As the ‘knowledge mobilisation’ literature expands, several developments are apparent. Knowledge mobilisation researchers20,26,27 have argued that the ‘complex systems’ theories that are beginning to permeate thinking on health-care delivery need also to be applied in thinking about knowledge use. There is growing emphasis on the need to attend to issues around power and conflict over what constitutes ‘knowledge’ in a given time and context,16,24,28–30 rather than treating it as an objective and unproblematic entity. There have been increasing calls31 for researchers to take a more multidisciplinary approach to knowledge use in health care, while recognising that seeking convergence across these diverse fields (e.g. technology, organisational learning or political science) is neither feasible nor desirable.16,32 The growth of implementation science as a discipline has called attention to the need for more robust evidence on all of the stages of implementation and on the ways in which organisational change is effected,33,34 including sustainability and scale-up.35 At the same time there has been recognition of the need for focused knowledge mobilisation rather than pursuing the ‘knowledge translation imperative’ in a scattergun manner.36 There have also been calls for careful consideration of existing ‘naturalistic’ knowledge exchange processes within an organisation before planning and implementing formal knowledge exchange interventions.37
Innovations in the field
Alongside, and at times independently of, these new ways of thinking about knowledge use, there has been a range of organisational and policy initiatives creating new structures, networks and configurations aimed at increasing the use of research-based evidence in health-care policy and practice. In the UK these have included the second round of 5-year funding (2013–18) for the Collaborations for Leadership in Applied Health Research and Care (CLAHRCs) and the second phase of funding (2013–18) for the UK Clinical Research Collaboration (UKCRC) Public Health Research Centres of Excellence, aimed at integrating research, policy and practice in public health. Internationally, we have seen the development of the Global Implementation Initiative (www.globalimplementation.org) and considerable investment by many countries in a range of knowledge mobilisation infrastructures and processes.11,12 The European Commission has recently undertaken several initiatives to endorse the use of evidence in policy-making in relation to children’s services.38
Innovation has also been seen in other areas of public service delivery, such as social care and education. For example, the Education Endowment Foundation (EEF) was established in 2011 as an independent grant-making charity dedicated to breaking the link between family income and educational achievement. In addition to identifying, funding and evaluating promising innovations, it works to encourage schools and other stakeholders to apply evidence and adopt innovations found to be effective. It has developed a widely used toolkit and is currently funding a series of studies to test various ways of improving research use. These include testing the effectiveness of research champions, research learning communities and a structured school improvement process. Many of the EEF’s funded studies are led by schools.
In social care, the National Institute for Health and Care Excellence (NICE) Collaborating Centre for Social Care (NCCSC) has been established to co-produce guidance about social care with people who use care, their families and friends, care providers and commissioners. The NCCSC is a consortium led by the Social Care Institute for Excellence (SCIE). SCIE has been at the forefront of efforts to involve users and carers in all aspects of its activities. It has also been innovative in the way in which it supports organisations to learn and use evidence, for example through its Learning Together programme of support that helps organisations to learn systems thinking and improve how they safeguard adults and children.
These parallel developments in other sectors have the potential to give additional insights for application in health care.15,39 In addition, cross-sector initiatives are springing up. One recent addition to the landscape here is the Alliance for Useful Evidence (www.alliance4usefulevidence.org/), which aims to provide a focal point for increasing the use of social research evidence in the UK in a wide variety of settings.
Linking theory and practice in mobilising knowledge
Despite rich conceptual development in the literature on knowledge mobilisation and a wide variety of practical initiatives to mobilise knowledge, to date there has been little systematic research effort to map, conceptualise and learn from these practical initiatives, or to investigate the degree to which they are underpinned by contemporary thinking as set out in the published literature. This gap is particularly apparent when looking at knowledge mobilisation at the ‘macro’ level: the activities undertaken by organisations that are major research funders, major research producers or key research ‘intermediaries’ (e.g. policy organisations, think tanks or boundary spanners). More attention has been paid to knowledge mobilisation at the organisational level (understanding service redesign and organisational change) and to initiatives focused at the individual level (e.g. on changing the behaviour of health professionals). This study aimed to fill this gap and to learn from the macro-level knowledge mobilisation strategies developed and applied in health care, both in the UK and internationally. In addition, the study aimed to learn from the experience of the social care and education sectors in the UK, which face similar challenges in applying social science research to the delivery of public services.
Study aims and objectives
The overall aim was to harness the insights from a growing body of new approaches to knowledge creation, sharing and use and to draw out practical lessons that could be used to make current and future NHS initiatives around research use more effective.
The study had three key objectives with associated research questions (RQs):
1. Mapping the knowledge mobilisation landscape:
   - RQ 1a: What knowledge mobilisation strategies have been developed in health care (in the UK and internationally) to better promote the uptake and use of research?
   - RQ 1b: What analogous knowledge mobilisation strategies have been developed in social care and education within the UK?
2. Understanding the models, theories and frameworks that underpin approaches to knowledge mobilisation:
   - RQ 2a: What models, theories or frameworks have been used explicitly – or can be discerned as implicit underpinning logics – in the development of the knowledge mobilisation strategies reviewed?
   - RQ 2b: What evidence is available from existing reviews and secondary sources on the mechanisms of action of these models, theories and frameworks?
3. Learning from the success or otherwise of these enacted strategies:
   - RQ 3a: What evaluative data are available on the success or otherwise of enacted strategies (i.e. the strategies and approaches being used by agencies), and what do these data suggest are the most promising approaches to successful knowledge mobilisation?
   - RQ 3b: What formative learning has accumulated through the practical experience of the programmes as implemented?
Concluding remarks
Having set out the rationale for this project, and laid out the study aims and objectives, we have organised the rest of the report as follows. The next chapter (see Chapter 2) sets out the methods of the interlocking strands by which we explored these RQs. Chapter 3 presents the findings from our review of reviews, including the creation of a conceptual map. There then follow three chapters that present, analyse and interpret the data collected: Chapter 4 focuses on the interview data; Chapter 5 provides an account of the findings from the web survey; and Chapter 6 provides an integrative analysis across theory and practice by exploring the deeper patterns of agency activities and emphases. The final chapter (see Chapter 7) develops the discussions further and returns to our original research aims as a means of drawing further conclusions.
Chapter 2 Methods
Introduction
This chapter describes how the different stages of the study were carried out. In brief, data were collected in the following ways:
- Desk research (literature) (months 1–15): we conducted a review of published reviews (71 reviews uncovered in total) in order to map the theoretical and conceptual literature. A key output from this work was a visual and textual ‘conceptual map’ of key issues in mobilising knowledge.
- Desk research (agencies) (months 1–15): we identified key agencies for further examination (major research funders, research producers and key research intermediaries: 186 in total), gathering basic descriptive information on their knowledge mobilisation activities from websites and other publicly available resources. Health-care agencies were explored internationally (with the obvious limitations imposed by the need to find English-language resources); social care and education agencies were limited to those in the UK. With regard to education agencies, our focus was on those concerned with education in schools (i.e. ages 4–18 years).
- Interviews (months 4–12): in-depth qualitative interviews with key individuals in agencies, analysed using a thematic framework, supplemented the data gathered from desk research (52 interviews with 57 individuals drawn from 51 agencies).
- Web survey (months 13–15): a bespoke web survey was used to add greater breadth to the understanding drawn from earlier strands of the work (response rate 57%; n = 106).
- Participatory workshops: two workshops (month 6 and month 16) were used to generate discussion and greater insight into our emergent findings (28 and 25 participants, respectively).
- International advisory board: we used regular teleconferences and e-mail discussion with our international advisory board (eight members) to deepen and strengthen the work.
Table 1 shows how the RQs map to the different strands of the study.
TABLE 1 Mapping of the RQs (1a and 1b, 2a, 2b, 3a and 3b) to the study strands listed above
An account of each of these aspects is provided in sequence, but in practice there was considerable interplay between the different strands of the project. For example, the interviews were shaped by the review of reviews and the desk research on key agencies; the content of the web survey drew on the review of reviews and on the interview data; and the initial development of the conceptual map was accomplished using the review of reviews, but the conceptual map was then developed further by the addition of the knowledge mobilisation archetypes (see Chapter 6), which were derived from the interview data.
Ethics committee approval
In accordance with the policies of the Health Services and Delivery Research (HS&DR) programme and the University of St Andrews, we applied for (and received) ethical approval from the University Teaching and Research Ethics Committee of the University of St Andrews.
A large part of the work involved accessing publications and collating information which was already in the public domain or gathering data about activities that are carried out in the public domain. As such, no significant ethical issues applied to the study beyond the standard ethical requirements common to all research with participants (informed consent; voluntary participation; safeguarding confidentiality). Those requirements were met by the following actions during the study:
- providing participants with detailed information about the study
- advising participants that participation was voluntary and that they were free to withdraw at any time without having to provide an explanation
- obtaining written consent for the interviews and making clear to potential participants that attendance at the workshops or participation in the online survey would be regarded as consent to take part in the research
- advising interview participants that the nature of the study was such that it would be inappropriate to withhold details of organisations carrying out specific knowledge mobilisation initiatives or programmes (particularly as much of this knowledge was already in the public domain) but that data would not be attributed to individuals
- advising interview participants that they were entitled to ask for sensitive data to be kept confidential or disclosed only in aggregated/anonymised form
- advising workshop participants that the workshops would be conducted under the Chatham House rule [i.e. that participants agree that they are free to use the information received but not to reveal the identity or affiliation of the speaker(s) or of other participants]
- advising survey participants that the quantitative survey data would be published in aggregate form and would not refer to individual organisations or respondents, and that where free-text quotations were used they would not be attributed to specific organisations or individuals but would refer only broadly to sectors (e.g. health-care research funder), unless specific permission had been obtained from the respondent.
During the conduct of the study, no significant ethical issues or concerns arose, although we did gain a heightened sense of the sensitivities of talking about individual agencies by name even when there was extensive information on their knowledge mobilisation activities in the public domain. For this reason we have refrained from identifying agencies by name in the findings section of this report.
Desk research
Review of research reviews of knowledge mobilisation
The first stage of constructing the conceptual map involved reviewing key research reviews of knowledge mobilisation to identify the main knowledge mobilisation models and strategies in health care, education and social care.
We were already aware of many of the key reviews in this field from our prior work in this area. In order to augment this collection systematically, we contracted with The King’s Fund Information and Library Service (health and social care databases) and with the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) (education databases) to design and carry out systematic searches of key databases in each field. We had detailed discussions with the information scientists, both in advance of and during the searching process, to clarify the objectives and parameters of the search and to refine the search terms. Details of the searches, including the databases searched and the search terms used, are given in Appendix 1.
Excluding duplicates, the search yielded 142 items from the health and social care databases and 362 items from the education databases; the larger number from the education databases resulted from their more limited indexing facilities, which led to a greater number of less relevant items being retrieved.
The abstracts were then read by three members of the research team (at least two for each of the searches), who allocated ‘scores’ to each review on the basis of its apparent relevance to the study: 1 = include (core paper); 2 = of some interest; 3 = marginal; and 4 = not relevant. Papers that attracted a score of 1 or 2 from either or both of the assessors were then retrieved and read in full by one member of the research team (AP). Papers meeting both of the following criteria were retained: (1) a substantial research review of the field of knowledge mobilisation; and (2) relevance to health care, social care or education. This included conceptual development papers that contained within them a substantial review section relevant to our concerns. We identified further relevant reviews from the reference lists of our key reviews and drew on the expertise of members of the advisory board and our key contacts in social care and education to check whether there were any additional key reviews (published or in press) that we had missed. Thus, we were satisfied that we had identified the main research reviews relevant to the development of a conceptual map.
This process resulted in our identifying a total of 71 reviews (listed in Appendix 2), which formed the core set of reviews. At the outset of the study we had anticipated that there would be around 30–40 relevant reviews; however, the process described above uncovered rather more. This somewhat increased the scale and scope of the task, but provided a rich set of conceptual concerns on which we could draw.
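To make the screening rule concrete, the short Python sketch below illustrates the decision logic described above (an illustrative reconstruction, not the study’s actual tooling; the record identifiers and scores are hypothetical):

```python
# Illustrative sketch of the abstract-screening rule described above.
# Scoring: 1 = include (core paper); 2 = of some interest;
#          3 = marginal; 4 = not relevant.
# The identifiers and scores below are hypothetical examples.

abstracts = [
    {"id": "R001", "scores": [1, 2]},
    {"id": "R002", "scores": [3, 4]},
    {"id": "R003", "scores": [4, 2]},
]

# A paper was retrieved for full-text reading if either (or both) of the
# assessors scored it 1 or 2.
to_retrieve = [a for a in abstracts if any(s <= 2 for s in a["scores"])]

print([a["id"] for a in to_retrieve])  # ['R001', 'R003']
```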
Initial development of the conceptual map
The initial development of the conceptual map was carried out as part of the desk research and is described here and in Chapter 3. The conceptual map was then developed further by the addition of knowledge mobilisation ‘archetypes’, which were derived from the interview data; this part of the process is described in a later chapter (see Chapter 6).
The aim was to provide a ‘map’ of the knowledge mobilisation terrain as it applies to key research funders, producers and intermediaries developing their knowledge mobilisation strategies. The objective was to develop a tool that could be used both for analytic purposes (for the exploration of data gathered later in the study) and for communication purposes (as a practical way into the literature for agencies considering the development of their knowledge mobilisation strategies). The first stage of developing the conceptual map was to set out visually the domains identified inductively through our reading of the major reviews of knowledge mobilisation. One member of the research team read all of the reviews and prepared a document for the other researchers summarising the key points from each review as they related to the focus of the study, that is, the knowledge mobilisation strategies and approaches of research funders, research producers and research intermediaries; on average, each review took up around one side of A4 in this summary document.
Having read the summary document and drawing on their existing knowledge of the literature, the research team then discussed the key domains that needed to be included in the conceptual map; the aim was to distil the core conceptual areas underpinning knowledge mobilisation and to do so with knowledge of the existing typologies but without seeking to follow or reproduce these. This discussion resulted in six domains being agreed on: knowledge of all kinds; connections and configurations; purpose(s) and goals; people, roles and positions; actions and resources available; and context.
The members of the research team then independently reread the summaries of 10 of the key reviews to check whether these initial six domains appeared sufficient to capture the core conceptual areas. One member of the research team then read through the whole summary document again to extend this initial ‘checking’ process to the full set of key reviews (i.e. to check whether all of the key issues raised by the reviews fitted well under these domains or whether additional domains were needed). No further domains were added to the conceptual map as a result of these two processes.
Once the main domains for the conceptual map had been agreed, these domains were used to structure the further exploration of the key reviews (e.g. to consider the key concepts, issues and theories covered under each domain, the evidence underpinning them and the implications for agencies). The findings from this work are set out in Chapter 3.
Mapping of the key agencies
Early in the study we carried out careful mapping of the key research funders, major research producers and key research intermediaries (e.g. research collation agencies, think tanks, charities, professional and membership organisations) in health care in the UK and internationally, and in social care and education in the UK. Our focus was on agencies that had substantial investment in programmes of knowledge mobilisation rather than on delivery units, policy settings or other sites of research application. Throughout the study we used the terms ‘agencies’ and ‘organisations’ interchangeably as useful shorthand terms to cover a variety of organisations.
In order to compile a list of organisations to research, we first drew up an initial list of potential organisations in each sector from our own previous work and from key reviews. In the case of health care, we included organisations in the main English-speaking countries/regions outside the UK known to be active in knowledge mobilisation; in the case of social care and education, we included organisations in the UK alone, in line with the remit of the study. For the UK, we also compiled a cross-sector list in recognition that many UK organisations work across more than one sector. We then consulted two key contacts from each of the social care and education sectors for their comments on the initial lists (social care and cross-sector, and education and cross-sector, as appropriate) and their suggestions of additional organisations. This additional stage was carried out in preparing the lists of education and social care agencies in recognition that these sectors were less familiar to two members of the research team.
The initial lists were then circulated to contacts in each sector so that they could be ‘validated’ (described below) and expanded further. The social care and education contacts were different from those who had been consulted when the initial lists were being drawn up. All of the contacts at each stage were individuals who were known to have considerable expertise and experience of both knowledge mobilisation and their sector. Individuals were asked to review the lists of organisations in their region/country and sector as follows: UK social care and cross-sector (two contacts); UK education and cross-sector (two contacts); UK health care and cross-sector (three contacts); continental Europe health care (three contacts); health care USA (three contacts); health care Canada (four contacts); and health care Australia and New Zealand (three contacts).
Reviewers were asked to give a ‘broad brush’ rating to each organisation on the list using a scale from 1 (lowest) to 3 (highest) to show which of these organisations were responsible for a significant scale of knowledge mobilisation activity and/or pursuing especially innovative approaches to knowledge mobilisation. We were not looking for absolute consensus but instead were seeking a pragmatic assessment from experts in the field to help us to compile a list that was comprehensive and yet manageable within the constraints of the project.
As an additional check on organisations that we might have missed, participants were also asked to add any key organisations to the list. The process was augmented by e-mail and telephone conversations with some of the participants, which provided additional detail on organisations, particularly those outside the UK with which the research team was less familiar.
We then used the ratings and the additional comments that participants provided to allocate organisations to three groups according to the following criteria:
- Group A: major players in scale or degree of innovation in knowledge mobilisation: selected for interview (preceded by website research); included in the sample for the web survey (n = 55).
- Group B: agencies with significantly interesting knowledge mobilisation activities: included in the website research, with the possibility of an interview if the website research uncovered particularly interesting initiatives that merited further research by the team; included in the sample for the web survey (n = 131).
- Group C: organisations that carried out knowledge mobilisation activities as part of their role and were, therefore, part of the wider knowledge mobilisation field, but that did not appear to be conducting knowledge mobilisation activities that were significant in terms of scale or degree of innovation: no further action (this broader hinterland of knowledge mobilisation agencies was not fully enumerated and included all of those agencies that came to our attention but which were not considered further for more detailed review).
In order to seek wider confirmation that the list of organisations we had compiled was comprehensive, we posted messages to two key e-mail discussion lists used by the UK research communities (and with some reach beyond the UK): the JISCMail Health Services Research list (around 550 subscribers) and the JISCMail Evidence Use list (around 200 subscribers). We asked subscribers to review our provisional combined list of organisations and to contact us if they were aware of any prominent research funders, major research producers or key research intermediaries that were particularly active or innovative in knowledge mobilisation but did not appear on our list. These postings resulted in our being notified of 20 additional organisations: six were added to group A (website research, interview and web survey) and eight to group B (website review and web survey); the remaining six fell outside the remit of the study or were subsumed under other organisational entities already considered.
At the end of this mapping exercise we had a list of 186 organisations for inclusion in the study, of which we had categorised 55 organisations as being in group A (30 in health care, 10 in social care, seven in education and eight cross-sector organisations) and 131 as being in group B. Analysis of the activities of these organisations addresses RQs 1a and 1b, and the organisations themselves are listed in Appendix 3. Interview organisations appear in Appendix 3 in bold font.
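The composition of this final sample can be checked arithmetically; the following minimal Python sketch uses only the figures reported in this section:

```python
# Arithmetic check of the mapped sample, using the figures reported above.
group_a = {"health care": 30, "social care": 10, "education": 7, "cross-sector": 8}
group_b_total = 131

assert sum(group_a.values()) == 55                    # group A organisations
assert sum(group_a.values()) + group_b_total == 186   # total organisations mapped
```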
In line with our RQs, our aim was to map the knowledge mobilisation strategies that have been developed at the agency level, to understand the models, theories and frameworks underpinning them, and to learn from their success or otherwise (so addressing RQs 1a, 1b, 2a, 3a and 3b). Thus, our primary objective in compiling the list of organisations was to ensure that we had captured the main strategies, approaches and innovations in use. Our intention was not to compile an inventory of all of the organisations carrying out knowledge mobilisation in their respective sectors, and our final sample of agencies should not be interpreted in this way.
This sampling of agencies underpins our interview strategy and web-based survey work described later in this chapter and reported in Chapters 4 and 5.
Review of evaluations of knowledge mobilisation work by the agencies
As part of our review of agencies’ knowledge mobilisation work from their websites, we looked for evidence of evaluations by the agencies of these activities, we considered the evaluation methods used, and we looked for any formative or summative learning reported in the evaluations (addressing RQ 2b). We supplemented these web searches by specifically asking interviewees for information about any informal or formal, internal or external evaluations of their knowledge mobilisation activities, the criteria and methods used for such evaluations and the key findings. Where such evaluations were publicly available, we requested copies of relevant material from the interviewee or downloaded them from the agency’s website as appropriate. We report findings from this work in Chapter 4.
Interviews
The aim of the interviews was to supplement the picture gained from the website review by exploring in more depth key issues about each organisation’s approach to knowledge mobilisation. The specific objectives in relation to the RQs were to elicit clearer understandings about the models, theories and frameworks that (explicitly or implicitly) underpinned the development of specific knowledge mobilisation strategies by agencies (RQ 2a); to identify formal and informal evaluative work (RQ 3a); and to tease out the local learning from implementation challenges (RQ 3b).
The organisations selected for interview were those responsible for a significant scale of knowledge mobilisation activity and those highlighting especially innovative approaches. The detailed process of selecting organisations to approach for interview was described above. We approached 55 organisations by e-mail and invited them to participate. The approach included brief details of the study together with links to the project website and to the project webpage on the National Institute for Health Research (NIHR) HS&DR programme website. Attached to the e-mail were a participant information sheet, giving further information, and a consent form (see Appendix 4 for both documents). Four of the 55 organisations approached did not proceed to interview because they declined to participate or did not respond to the invitation. In total, we conducted 52 interviews with 57 individuals from 51 organisations: five interviews had two participants, and for three organisations we carried out two interviews after they suggested additional participants. Two interviews spanned the work of two organisations. The majority of the interviews were conducted by telephone, although seven were conducted face to face. Most of the interviews lasted for 30–60 minutes, although a few extended to 80–90 minutes.
The interviews were semistructured and, therefore, followed a topic guide (see Appendix 5) while allowing scope for participants to raise other issues. The main topics covered were: the interviewee’s perception of the organisation’s role in knowledge mobilisation and the main innovative activities; the terminology commonly used in the organisation to describe these activities (e.g. knowledge transfer or knowledge exchange); the history, origin and development of the knowledge mobilisation approaches used; any models, theories and frameworks that had been used in developing and using these approaches; the nature of and results from any formal or informal evaluations that had been carried out by the agency or of which the agency was aware; and any formative learning or practical experience that had accumulated through the agency’s use of the approach.
Prior to each interview, the organisation’s website was reviewed to obtain initial information on its remit, knowledge mobilisation approach and knowledge mobilisation activities. Interviewees were supplied with a further copy of the participant information sheet, together with a broad list of topics that the interview would cover, to assist in brief preparation if they wished. Interviewees were asked to sign the consent form prior to the interview. This included giving consent for the publication of identifiable or attributable data (i.e. data that would be clearly attributed to the participant’s organisation, although not to a named individual), except where the interviewee requested that material be anonymised or kept confidential or where material was particularly sensitive (interviewees were advised that such data would be anonymised, e.g. by aggregating findings). The research team reflected further on this requirement during the process of analysing the interview data. The initial rationale, as set out in the information sheet given to interview participants, was that attributing data to organisations was important for contextualising the data and maximising the potential learning for other organisations from the study; furthermore, many of the knowledge mobilisation initiatives were already in the public domain. However, it became clear during analysis that much of the interview material was sensitive (e.g. around formative learning in relation to individuals’ roles or ongoing issues involving key stakeholders) and, as a result, needed to be anonymised. It was, therefore, agreed that all of the interview data would be anonymised; we allocated each organisation a number and these numbers are used to label verbatim quotations in this report.
All three academic members of the project team were involved in conducting the interviews and in data analysis to make the interviewing task manageable within the time scale and to contribute to the richness and robustness of data collection and analysis. Additional assistance with conducting 13 of the interviews was given by an experienced research assistant. This meant that the 52 interviews were spread fairly evenly across the four interviewers. The team held regular discussions during the interview phase of the study to consider the emerging data and to ensure consistency of approach between interviewers.
The interviews were taped and transcribed. Thematic analysis formed the first stage of data analysis: the data were first reviewed to draw up a preliminary framework of emergent themes around the development of knowledge mobilisation approaches by the agencies. This initial framework was augmented and adapted with reference to the main themes underpinning the interview topic guide. The revised framework (Table 2) was then applied iteratively to the data. Disconfirming data were sought out at each stage. This stage of the analysis led to a detailed account of the knowledge mobilisation activities of the 51 organisations, their underpinning theories, the factors that had led to their development and the formal and informal learning that had resulted. These findings are the dominant focus of Chapter 4.
TABLE 2 Thematic framework applied to the interview data

Main theme | Subthemes
---|---
How the agency sees its role in relation to knowledge mobilisation | Terms that are commonly used in the agency to refer to knowledge mobilisation activities
 | The main groups that the agency sees as their audience/users
 | Patient and public involvement (PPI) activities around knowledge mobilisation
The main knowledge mobilisation activities, approaches and strategies | Activities or approaches that the agency describes as innovative
The origins of these activities, approaches and strategies being used in the agency | Factors contributing to the development of these knowledge mobilisation activities, approaches and strategies
 | Models, theories and frameworks used in developing and using these activities, approaches and strategies
 | Planned or recent changes in knowledge mobilisation activities, approaches and strategies and the origins of these
 | The influence of other organisations on the agency’s knowledge mobilisation activities, approaches and strategies
Formal and informal evaluation of the agency’s knowledge mobilisation activities | Formal and informal evaluations of the agency’s knowledge mobilisation activities carried out by or on behalf of the agency: the nature of these evaluations, the evaluation criteria used, the main findings and any changes that resulted
 | Evaluations planned for the future
Formative learning and practical experience | Advice that the interviewee would give to a colleague in a similar organisation
Any additional themes emerging from the data |
By this stage of the analysis, it had become clear that there were patterns of knowledge mobilisation activity emerging from the interview data and that there was scope for deriving knowledge mobilisation ‘archetypes’ from these data. Secondary analysis of the interview data was therefore carried out: from repeated reading of the interview transcripts and an assessment of them in the light of the literature reviewing and the conceptual map, eight archetypes emerged inductively from our data. This is described more fully in Chapter 6.
Web survey
The aim of the web survey was to provide some assessment from the wider field of key agencies involved in knowledge mobilisation in health care, social care and education of the degree of consensus and contention about the emerging findings. The interviews focused on those agencies responsible for a significant scale of activity and those using particularly innovative approaches, but the web survey utilised a broader group of agencies as described earlier.
Web surveys are a relatively new form of survey, but emerging research literature suggests that they provide results that are comparable with those of traditional surveys and are often of higher quality, and that they have the additional benefits of speed of data collection, reduced cost and reduced potential for human error.40–42 The research literature does suggest that web surveys may encounter response problems when they are conducted with groups of people who are infrequent or unconfident users of the internet.43 However, the recipients of the web survey in this study were a highly information technology (IT)-literate group of individuals who worked in largely desk-based jobs at research-related organisations: key research funders, major research producers and key research intermediaries. Thus, the web survey was an appropriate research method to use with this group.
The research team drew up the web survey, drawing on the published reviews and on the emerging data from the review of websites and the depth interviews. The survey consisted of a brief section on agency type (covering the geographical location of the organisation, the sector and whether the organisation was predominantly a research funder, producer or intermediary), followed by six main sections, which are briefly described below. The final version of the survey is available at www.st-andrews.ac.uk/business/km-study/documents/kmstudy-text-of-web-survey.pdf.
Terminology
The first section, on terminology, explained that the survey was ‘using the term knowledge mobilisation to cover activities aimed at sharing research-based knowledge’ and asked respondents to indicate all other related terms commonly used in that organisation (e.g. knowledge transfer, knowledge translation or research use). The list of terms was drawn from the key reviews.
Knowledge mobilisation activities
This section, consisting of three questions, provided a list of activities that might be carried out as part of a knowledge mobilisation strategy and asked respondents to indicate which of these activities were carried out in that organisation and with what frequency (often; sometimes; planned for the future; never/does not apply). The list of activities was drawn from the reviews and from the interview data. It included (but did not signpost as such) some activities which might be associated with ‘push’ approaches, some that are related to ‘pull’, some which might be associated with ‘linkage and exchange’ and some which specifically involved patients or service users.
Models and frameworks used in knowledge mobilisation
The third section invited respondents to indicate which of a list of models and frameworks were used in developing knowledge mobilisation activities. The models and frameworks listed were those described most often in the key reviews and in the interviews. Respondents were invited to name any other frameworks the organisation used in developing its knowledge mobilisation activities.
Ideas around knowledge mobilisation
The fourth section asked respondents to indicate, on a Likert scale, the degree of agreement with key propositions drawn from the published reviews and from comments made in the interviews.
Developing knowledge mobilisation activities
The fifth section asked respondents to weight in terms of importance (very important; fairly important; not that important; do not know/does not apply) factors that might be considered in developing knowledge mobilisation activities (e.g. acceptability to stakeholders; attention to context; feasibility of evaluating the approach; previous use in that organisation). These factors were drawn from the reviews.
Evaluating the organisation’s knowledge mobilisation activities and measuring impact
A final section concerned the evaluation of the organisation’s knowledge mobilisation activities. Respondents were first asked to indicate whether or not the organisation evaluated their knowledge mobilisation activities (little/no formal evaluation; some evaluation; a comprehensive approach to evaluation) and then to indicate the relative importance of measuring different types of impact (e.g. change in users’ attitudes and intentions; impact on outcomes for service users). The types of impact were drawn from the reviews.
Finally, free-text boxes were provided for respondents to add additional comments or clarifications at appropriate points during the survey. A further free-text box was provided for any further comments at the end of the survey.
Reviewing, piloting and running the survey
International advisory board members (in conjunction with colleagues where possible) reviewed the draft survey and suggested changes which were then incorporated into the survey. The final version was piloted for comprehension and ease of use with a small number of colleagues who were familiar with the field but were not themselves due to receive the survey; these responses were not included in the survey findings.
The survey was sent out by e-mail to a named recipient in each of the 186 agencies. To deal with situations where individuals had more than one institutional affiliation, the covering e-mail specified which organisation was of interest. Recipients were asked to advise the research team if an alternative contact in that organisation would be more appropriate. In addition to careful piloting of the survey to assess comprehension and ease of completion, a range of further strategies was used to ensure a good response rate: sending a personalised invitation to participate in the survey; emphasising to participants the relevance of the study and the importance of participation; sending prompt follow-up reminders, together with an invitation to nominate an alternative contact if appropriate; and changing the wording of reminders rather than sending out successive reminders with identical wording.44–48
A total of 186 contacts were sent the survey with a request for completion and, after two reminders (one after 2 weeks and one after a further 3 weeks), usable data were obtained from 106 of these, giving an overall response rate of 57%. Fifty-one respondents replied to the first invitation, 30 to the first reminder and a further 25 to the second reminder. The level of missing data was generally low (as shown in the results set out by question in Chapter 5), although, as explained in Chapter 5, around one-third of respondents skipped the question on models, theories and frameworks used in knowledge mobilisation. Analysis of the survey data involved compiling descriptive statistics (e.g. frequency counts of use of particular models or frameworks; extent of agreement/disagreement with key propositions) as well as thematic content analysis of free-text comments.
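The reported response figures reconcile as follows; this minimal Python sketch uses only the numbers given in this section:

```python
# Usable web survey returns by wave, as reported above.
responses = {"initial invitation": 51, "first reminder": 30, "second reminder": 25}
surveys_sent = 186

total_usable = sum(responses.values())
assert total_usable == 106  # usable responses in total

print(f"Overall response rate: {total_usable / surveys_sent:.0%}")  # 57%
```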
Workshops
We held two stakeholder workshops during the study. In line with the thinking behind the study (that there is no neat separation between knowledge production and knowledge use), these workshops were not used primarily as dissemination mechanisms but instead formed an integral part of the enquiry methods. The first workshop was held early in the study (month 6) and the second towards the end (month 16). The aims of the workshops were to share the research thinking, emerging data and analysis, and future empirical and analytic direction, and thereby to identify (a) areas for refined data gathering, (b) ways of articulating more clearly the mapping domains and supporting literatures, (c) creative ways of presenting findings and insights from our data, and (d) mechanisms for encouraging key actors in the UK to interact with our emerging findings. By actively engaging with and harnessing the insights of potential research users in this way, we aimed to enhance the impact of the research by ensuring that our explorations and the outputs from the study were meaningful to, relevant for and readily usable by these organisations. Participants were invited from key research funders, major research producers and key research intermediaries in health care in the UK and from equivalent social care and education organisations. In addition, we sought participation by a number of key academic experts in the field (largely from the UK, but also including some members of our international advisory board). To promote continuity, we invited participants from the first workshop to attend the second workshop. We were keen to ensure that the voice of service users involved in knowledge mobilisation in the agencies was also represented at the workshops, and we therefore invited relevant agencies to send two participants to each workshop: a lay person (e.g. a service user/public member of a funding panel or board) and a member of staff.
The first and second workshops were attended by 28 and 35 participants respectively (plus the study team). A list of participant organisations at each workshop is given in Appendix 6.
The workshops used a range of interactive methods and processes to ensure the active engagement of all participants. Participants were actively encouraged to explore and critique the findings from their own organisational standpoints. The first workshop (month 6) provided an early opportunity to discuss the emerging findings and informed the subsequent data collection and analysis. The second workshop (month 16) provided an opportunity for further discussion of the data and for refining the study’s conclusions. The second workshop also enabled us to discuss with the stakeholders creative ways of engaging stakeholder organisations with the findings and insights from the study, including the potential for developing a practical tool from the study findings.
Engagement with the international advisory board
We recruited from the study team’s existing network of professional contacts an advisory board of international experts in the field of knowledge mobilisation (n = 8) to provide additional expertise and ‘insider’ knowledge throughout all stages of the study. Given that the intent of this study was primarily to inform developments in the health sector, all of these experts were drawn from this background. The composition of the advisory board is given in Appendix 7. Members were involved throughout the study. Via e-mail, telephone contact and in person, they advised on the theoretical aspects of the research; provided information on key agencies and key reviews; provided insights into the knowledge mobilisation strategies in their own countries; assisted with access to key agencies and individuals; commented on the emerging findings from the study and on the developing conceptual map; commented on the draft web survey; and participated in the two workshops.
Concluding remarks
While the various stages of data gathering and analysis have been set out sequentially, many aspects of data collection, collation, analysis and interpretation took place in parallel, and there was much interplay between the different strands. Subsequent chapters elaborate (where appropriate) on the detailed analytic approaches deployed to make sense of the data. The remainder of this report unfolds as follows. The next chapter (see Chapter 3) explains how we made sense of the conceptual material contained in the 71 reviews, leading to the creation of a ‘conceptual map’ of the field customised to the interests and perspectives of agency knowledge mobilisation. Chapter 4 describes the key findings from the interview data; Chapter 5 does the same for the web-based survey data. Chapter 6 develops the argument that certain patterns, relating to different aspects of the conceptual map, are identifiable in agency practices and accounts. We articulate these as specific ‘archetypes’ of knowledge mobilisation (eight in total); these are derived inductively and are presented for discussion. Finally, Chapter 7 assesses the implications of the findings for the development of more effective knowledge mobilisation strategies.
Chapter 3 Mapping the conceptual literature
Introduction
An early part of the initial desk research for this project involved a ‘review of reviews’ in the area of knowledge mobilisation. Our aim here was to identify and understand the main knowledge mobilisation models, theories and frameworks in health care, education and social care. Further, we also wanted to develop some accounts of the conceptual thinking that lay behind these framings. The search strategy used to uncover and select the relevant reviews is described in Chapter 2, and led to 71 reviews being included. These are listed in Appendix 2.
In reviewing this substantial set of reviews, our aims were pragmatic and twofold. The first was to develop a set of understandings that could be used for analytic purposes in exploring the data gathered in other parts of the study (website review; depth interviews; web-based survey). This related to RQ 2a, which explores the logic(s) underpinning agency activities:
What models, theories or frameworks have been used explicitly – or can be discerned as implicit underpinning logics – in the development of the knowledge mobilisation strategies reviewed?
The second goal of the review work was to develop a mapping that might have utility to the agencies as they sought to find ways into the complex and growing conceptual literature, something that might be useful to them in developing their knowledge mobilisation strategies. This second goal will receive further development as part of our ongoing collaboration with agencies.
Having uncovered the 71 reviews, we used an inductive, iterative, dialogical process within the research team (and subsequently, with the advice of the advisory board) to distil the key domains (see Chapter 2). As the domains surfaced and were fleshed out (six in total), repeated reading of the reviews was used to provide an account of issues within each of the domains. This chapter provides an integrated account of these domains alongside a visual map.
Understanding the academic literature that links knowledge, knowing and change is a challenging and boundary-less task: relevant literatures sit in a wide range of disciplines (psychology, sociology, organisation studies, political science and more) and ideas appear and reappear within and across these disciplines in sometimes chaotic fashion. In making sense of these conceptualisations it is abundantly clear that debates have not proceeded in a wholly linear, cumulative or convergent manner. While there are some areas of widespread agreement, there are also areas where contestation, problematisation and conflict are evident.
In developing this review we have been guided by a number of framing choices:
- We have concentrated on that literature which has itself attempted to review the field rather than seeking to collate and synthesise across primary work whether theoretical or empirical (i.e. this is a review of reviews).
- We have focused on reviews in the three key areas of application of health care, social care and education.
- We have sought to review work that specifically addresses the creation, collation, communication, implementation and impact of research-based knowledge (albeit that such knowledge is often seen in the context of other forms of knowledge/knowing).
- We have provided an account that speaks to the action-oriented concerns of the agencies at the heart of this study (i.e. funders, major research producers and intermediaries); that is, we have kept in focus the agencies’ needs to develop practical knowledge mobilisation strategies and portfolios of specific activities.
- We have sought to lay out the key fault-lines of debate rather than forcing order and convergence where the literature does not readily support this.
Reading across these reviews, we first identify a wide range of models, theories and frameworks that have been used to describe and inform knowledge mobilisation. Looking at these models and the empirical work that has been carried out to explore and (occasionally) evaluate them, we can discern a number of insights into the effectiveness of particular approaches. We then read across these models and the wider conceptual literature to create a conceptual map that surfaces key issues, debates and conceptualisations. These are discussed under the six domains that emerged inductively from the set of reviews (see account in Chapter 2). Finally, we note the limited literature that has explored the roles that the public and service users can play in mobilising knowledge.
Models, theories and frameworks, and associated evaluative work
The review papers document a bewildering variety of models, theories and frameworks. Even within individual review papers, 60 or more distinct models are sometimes considered (e.g. Ward et al. 19 and Graham et al. 1), but the actual set of models included varied considerably between review papers. From this review of reviews, we extracted the key models that seemed to have potential for use in knowledge mobilisation work in the kinds of agencies that we were considering. Additional checking of this list with our advisory board and other knowledgeable experts in the field reassured us that we were capturing the main models of interest. The set of distinct models and frameworks is listed in Box 1 and elaborated on further in Table 3. These formed the basis of some of our discussions with agencies (see Chapter 4) and the development of the web-based survey (see Chapter 5).
Box 1 Models, theories and frameworks of knowledge mobilisation identified from the review of reviews

The Institute for Healthcare Improvement (IHI) Model for Improvement (Langley et al. 199649).
Plan-Do-Study-Act (PDSA) cycles (Kilo 199850).
Ottawa Model of Research Use (OMRU) (Logan and Graham 199851).
The Promoting Action on Research Implementation in Health Services (PARIHS) Framework (Kitson et al. 199852).
Push, pull, linkage and exchange (Lomas 2000;10 Lavis et al. 200653).
Knowledge Dissemination and Utilisation Framework (Farkas et al. 200354).
Lavis et al. ’s framework for knowledge transfer (five questions about the research, four potential audiences) (Lavis et al. 200355).
Mindlines (Gabbay and le May 2004;56 Gabbay and le May 201157).
The Greenhalgh model for considering the diffusion of innovations in health service organisations (Greenhalgh et al. 200431,58).
The Levin model of research knowledge mobilisation (Levin 200459).
Walter et al.’s three models of research use (Walter et al. 200460).
The Knowledge to Action (KTA) Cycle (Graham et al. 20061).
Collaborative knowledge translation model (Baumbusch et al. 200861).
The Interactive Systems Framework (ISF) for Dissemination and Implementation (Wandersman et al. 200862).
The Knowledge Integration framework (Best et al. 200863).
The three generations framework (Best et al. 2008;63 Best et al. 200964).
The Consolidated Framework for Implementation Research (CFIR) (Damschroder et al. 200965).
The Critical Realism and the Arts Research Utilization Model (CRARUM) (Kontos and Poland 200966).
Normalisation Process Theory (May et al. 200967).
Participatory Action Knowledge Translation model (McWilliam et al. 200968).
Ward et al.’s conceptual framework of the knowledge transfer process (Ward et al. 200919).
The Knowledge Exchange Framework (Contandriopoulos et al. 201021).
The National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) KTA Framework (Wilson et al. 201169).
Knowledge translation self-assessment tool for research institutes (SATORI) (Gholami et al. 201170).
School Improvement Model (EEF71).
Table 3 Models, theories and frameworks: key features

Model or framework | Key features |
---|---|
The IHI Model for Improvement (Langley et al. 199649) | The Model for Improvement asks three questions, which can be addressed in any order: What are we trying to accomplish? How will we know that a change is an improvement? What changes can we make that will result in improvement? |
Plan-Do-Study-Act (PDSA) cycles (Kilo 199850) | PDSA cycles are a tool from the quality improvement field that has been applied to knowledge mobilisation activities. Originally developed by Shewhart as the Plan-Do-Check-Act cycle, PDSA cycles have four steps: Plan: plan the change to be tested or implemented Do: carry out the test or change Study: study the data before and after the change and reflect on what was learned Act: act on the information and plan the next change cycle The rationale for PDSA cycles comes from systems theory and the concept that systems are made up of interdependent interacting elements and are, therefore, unpredictable and non-linear: small changes can have large consequences. Short-cycle, small-scale tests, linked to reflection, are seen as helpful because they permit experimentation and enable teams to learn on the basis of action and its observed effects. |
The Ottawa Model of Research Use (OMRU) (Logan and Graham 199851) | The OMRU has six key elements to be assessed, monitored and evaluated before, during and after knowledge translation initiatives. The first three are assessed as barriers or facilitators: practice environment; potential research adopters; and the evidence-based innovation. The other three elements are research transfer strategies; evidence adoption; and health-related and other outcomes. The OMRU is an example of a planned change theory (i.e. one that views innovation adoption as a deliberately planned process). |
The PARIHS Framework (Kitson et al. 199852) | The PARIHS Framework was developed to address problems with linear models and their failure to account sufficiently for context. It proposes that successful implementation is a function of the relationship between evidence, context and facilitation. It can be operationalised as a practical tool through diagnostic questions and a grid-plotting device to assess ‘readiness’. |
Push, pull, linkage and exchange (Lomas 2000;10 Lavis et al. 200653) | This framing differentiates between: ‘push’ efforts, which are led by the producers or purveyors of research who push research out into policy or practice settings; user ‘pull’ approaches, in which research users ‘reach in’ to the research world to obtain information to help with a decision they face; and linkage and exchange approaches, which involve the development of a partnership between the producers or purveyors of research and the (potential) research users. |
Knowledge Dissemination and Utilization Framework (Farkas et al. 200354) | The Knowledge Dissemination and Utilization Framework is a typology of diffusion strategies intended to help researchers to integrate knowledge utilisation into the research process. It distinguishes strategies according to goal (increased knowledge; increased knowledge and positive attitudes; increased competence; increased utilisation over time) and audience (researchers; providers and administrators; consumers or families). |
Lavis et al.’s framework for knowledge transfer (five questions about the research, four potential audiences) (Lavis et al. 200355) | Lavis et al. provide an organising framework for a knowledge transfer strategy. The framework can be used to evaluate a strategy as a whole over long periods of time or to evaluate and fine tune elements of it over shorter periods of time. Five questions provide the organising framework: What should be transferred to decision-makers (the message)? To whom should research knowledge be transferred (the target audience)? By whom (the messenger)? How (the transfer processes and supporting communications infrastructure)? With what effect (evaluation)? |
Mindlines (Gabbay and le May 2004;56 Gabbay and le May 201157) | This theory proposes that health professionals combine knowledge and information from a wide range of sources into ‘mindlines’ (internalised, collectively reinforced tacit guidelines), which they use to inform their practice. Diverse forms of knowledge are continually being built into these mindlines and result in ‘knowledge-in-practice-in-context’. The authors propose that closer, more realistic relationships between the worlds of research, education and practice are needed to ensure that research findings feed into this ‘knowledge-in-practice-in-context’. |
The Greenhalgh model for considering the diffusion of innovations in health service organisations (Greenhalgh et al. 200431) | This major systematic review of 495 sources (213 empirical and 282 non-empirical papers) identified 13 distinct disciplinary research traditions that may be relevant in considering factors affecting dissemination and implementation. The conceptual model for considering the determinants of diffusion, dissemination and implementation of innovations is intended to be a memory aid for considering the different aspects of a complex situation (‘illuminating the problem and raising areas to consider’) and not a prescriptive formula. The authors note that the components do not represent a comprehensive list of the determinants of innovation: they are simply the areas on which research has been published. |
The Levin model of research knowledge mobilisation (Levin 200459) | Levin’s model emphasises three kinds of contexts for the use of research: one in which research is produced, one in which research is used and the third that consists of all of the mediating processes between the two. There are multiple dynamics at play within each context and many connections between the contexts. Some people or groups operate in more than one context. The model sees research use more as a function of systems and processes than of individuals. |
Walter et al.’s three models of research use (Walter et al. 200460) | The three models of research use proposed by Walter et al. and derived from a review of the social care field are: the research-based practitioner model; the embedded research model; and the organisational excellence model (described further under Purpose(s) and goals, below). The review found that evidence of the effectiveness of the three models is largely absent and that there is little evidence about potential barriers and enablers to their development. |
The KTA Cycle (Graham et al. 20061) | The KTA cycle is based on a review of 31 different KTA frameworks. It illustrates the key requirements of linkage and exchange between knowledge development and implementation and makes explicit the action-reflection process. This framework divides the KTA process into two concepts, knowledge creation (funnel) and action (cycle), with each concept comprising ideal phases or categories; in reality the boundaries between the two concepts and their ideal phases are fluid and permeable. The action phases derive from planned-action theories, frameworks and models (i.e. deliberately engineering change in groups that vary in size and setting). Both local and external knowledge creation or research can be integral to each action phase. |
Collaborative knowledge translation model (Baumbusch et al. 200861) | The collaborative knowledge translation model is based on a collaborative relationship between researchers and practitioners. It consists of a knowledge translation cycle (the process dimension) and a sequence of collecting, analysing and synthesising knowledge (the content dimension). |
The Interactive Systems Framework (ISF) for Dissemination and Implementation (Wandersman et al. 200862) | The ISF provides a heuristic for understanding systems, functions and relationships relevant to dissemination and implementation. The authors argue that traditional research-to-practice models tend to focus on separate functions in dissemination and implementation rather than on the infrastructures or systems that support and carry out these functions. The ISF uses aspects of research-to-practice models and of community-centred models. It suggests three systems: the Prevention Synthesis and Translation System; the Prevention Support System; and the Prevention Delivery System. The authors suggest that the framework can be used to identify the key stakeholders and to consider how they could interact. |
The Knowledge Integration framework (Best et al. 200863) | Best et al. define knowledge integration as ‘The effective incorporation of knowledge into the decisions, practices and policies of organizations and systems’ (p. 322).63 The Knowledge Integration framework builds on the National Cancer Institute of Canada (NCIC) 1994 cancer control framework and looks at the types of science (basic, clinical and population) and the domains of inquiry (individual, organisational and system/policy). Best et al. comment that, although this is not explicit, the framework recognises the field as a complex adaptive system (i.e. options must reflect local context while guiding overall system change). The authors argue that a systems-oriented approach to knowledge into action is lacking from other models. The model attempts to provide guidance that will align knowledge to action strategies across individual, organisational and system levels, recognising that actors in knowledge mobilisation processes occupy positions in systems that affect their outlook and behaviour. The model is operationalised as a nine-cell matrix characterising the philosophy and some possible implementation strategies appropriate to three domains of inquiry (individual, organisational and system/policy) and three types of science (basic, clinical and population). |
The three generations framework (Best et al. 2008;63 Best et al. 200964) | Best et al. suggest that there have been three generations, or types, of models: linear (dissemination, diffusion, knowledge transfer, knowledge utilisation), relationship (knowledge exchange) and systems (knowledge integration). The authors define knowledge integration as ‘The effective incorporation of knowledge into the decisions, practices and policies of organisations and systems’.63 |
The Consolidated Framework for Implementation Research (CFIR) (Damschroder et al. 200965) | The CFIR consolidates 19 different conceptual frameworks relevant to implementation research and suggests that implementation is influenced by intervention characteristics, the outer setting, the inner setting, the characteristics of the individuals involved and the process of implementation. The starting point for this framework was the 2004 synthesis by Greenhalgh et al.;58 the CFIR builds on this work by reviewing 17 other models that were not specifically included in the Greenhalgh review. The CFIR has five major domains, each with its own constructs: intervention characteristics (eight constructs), outer setting (four constructs), inner setting (12 constructs), characteristics of the individuals involved (five constructs) and process of implementation (eight constructs). The framework suggests that implementation may necessitate the use of a range of strategies that exert their effects at multiple levels. |
The Critical Realism and the Arts Research Utilization Model (CRARUM) (Kontos and Poland 200966) | The CRARUM is an adaptation of the OMRU, borrowing additional concepts from critical realism (generative mechanisms, structural and agential powers) and incorporating arts-based methodologies with the potential to foster critical awareness and transformation. It is a more dynamic model than OMRU, with more room for agency by the adopters of research. |
Normalisation Process Theory (May et al. 200967) | Normalisation Process Theory draws on social constructivism and implementation theory. Its key elements are the social organisation of the work (implementation), making practices routine elements of everyday life (embedding) and sustaining embedded practices in their social contexts (integration). It defines activities, mechanisms and actors’ investments that are crucial to the outcome of an implementation process. It can be operationalised as a planning or evaluation tool by means of questions about the work required for normalisation and the actors who accomplish it. |
Participatory Action Knowledge Translation model (McWilliam et al. 200968) | The Participatory Action Knowledge Translation model combines a social constructivist approach with elements of the PARIHS Framework and the KTA framework. It takes the components of PARIHS (evidence, context, facilitation) and the idea of a dynamic relationship between ‘science push’ and ‘demand pull’ from the KTA framework. |
Ward et al.’s conceptual framework of the knowledge transfer process (Ward et al. 200919) | This conceptual framework of the knowledge transfer process emerged out of a review of 28 different models which seek to explain all or part of the knowledge transfer process. Five common components of the knowledge transfer process were problem identification and communication; knowledge/research development and selection; analysis of context; knowledge transfer activities or interventions; and knowledge/research utilisation. These five components are connected via a complex multidirectional set of interactions (i.e. the individual components can occur simultaneously or in any given order and can occur more than once during the knowledge transfer process). The review identified three types of knowledge transfer processes: a linear process, a cyclical process and a dynamic multidirectional process. The authors suggest that their framework can be used to gather evidence from case studies of knowledge transfer interventions; they comment that at present it is analytically and empirically empty: it does not contain details about the relative importance or applicability of each of the five components or details of the practical actions which can be associated with the components. Note that the authors subsequently tested the framework empirically and refined it37 (see the discussion of empirical testing below). |
The Knowledge Exchange Framework (Contandriopoulos et al. 201021) | The Knowledge Exchange Framework focuses on collective knowledge exchange systems (i.e. interventions in organisational or policy-making settings rather than those aimed at changing individual behaviour) and on active knowledge exchange interventions (rather than passive information flows). The framework has three basic components of knowledge exchange systems: the roles of individual actors in collective systems; the nature of the knowledge exchanged; and the process of knowledge use. It situates knowledge exchange interventions in larger collective action systems with three core dimensions: polarisation; cost-sharing; and social structuring. |
The NCCDPHP’s Knowledge to Action Framework (Wilson et al. 201169) | This framework emerged out of the search for a common language between scientists and practitioners at the US Centers for Disease Control and Prevention. The framework is informed by diffusion theory, which distinguishes three phases of implementation: research, translation and institutionalisation. Each of the three phases is underpinned in the framework by general and intervention-specific supporting structures (e.g. organisational capacity, champions, financial resources, technical assistance, leadership and political will). Evaluation (depicted by a ‘bar’ running across the whole framework) is also an important component of the framework: it is multifaceted, present through each of the three phases and inherent in each of the components. |
The knowledge translation self-assessment tool for research institutes (SATORI) (Gholami et al. 201170) | The SATORI was designed to assess the status of knowledge translation in research institutes through group discussions and consensus. This tool consists of 50 statements about requisites, resources and strategies for facilitating knowledge translation in research institutes. Use of the tool enables research managers and researchers to identify strengths and weaknesses of knowledge translation within their institution and subsequently to develop interventions that could improve their organisation’s knowledge transfer infrastructure and capacity. The tool was developed to assess knowledge translation activities from the ‘push side’ perspective and covers four main domains, each with its own subdomains. |
School Improvement Model (EEF71) | The School Improvement Model is currently being evaluated by the EEF. It consists of five steps. Step 1: decide what you want to achieve (identify school priorities using internal data and professional judgement). Step 2: identify possible solutions (external evidence, e.g. in the EEF’s Toolkit, can be used to inform choices). Step 3: give the idea the best chance of success (apply the ingredients of effective implementation). Step 4: did it work? (evaluate the impact of your decisions and identify potential improvements for the future). Step 5: secure and spread change (mobilise the knowledge and use the findings to inform the work of the school and to grow or stop the intervention). |
Major models, and the testing of these
The models listed are set out in chronological order of publication (see Box 1). They have diverse underpinnings and assumptions, and draw on distinct disciplinary concepts from psychology, sociology, organisation development, implementation science and political science. They vary in the extent to which they draw narrow or more inclusive boundaries around what counts as knowledge, and some differ in their primary areas of application (e.g. being either policy or practice focused).
Many of these models are primarily descriptive of the processes around knowledge creation/flow/application, and they tend not to be explicit about the necessary configurations, actions or resources that will underpin successful knowledge mobilisation. That is, they do not readily provide prescriptions for knowledge mobilisation strategy or operations. 80 In addition, with a few notable exceptions, the models have received only limited empirical testing. We briefly discuss here five of the frameworks from Box 1 that have been tested empirically to some degree: the Promoting Action on Research Implementation in Health Services (PARIHS) Framework;52 the Knowledge to Action (KTA) framework;1 the Ottawa Model of Research Use (OMRU);51 the Consolidated Framework for Implementation Research (CFIR)65 and the conceptual framework of the knowledge transfer process developed by Ward et al. 19
The PARIHS Framework does draw attention to important components of research use (such as the availability of credible evidence; or the presence of effective facilitation) but again, like other models, it is less specific about how these concerns translate into planned actions. It has, however, been subject to more evaluation than most of the frameworks we considered. A paper by Helfrich et al. 81 reviews 24 articles on this framework (six core concept articles from the original PARIHS authors and 18 empirical articles). The authors note that no studies used the framework prospectively to design implementation studies and suggest that this is an important omission in relation to this and other frameworks. The papers reviewed suggested a number of improvements to the framework, for example that the subelements could be specified more clearly. A paper published the following year82 builds on the critical synthesis of the literature81 and suggests revisions and a set of tools that researchers can use to apply the revised framework prospectively and comprehensively.
A later research paper83 provides a rare example of prospective use: the framework was applied to guide decisions about intervention design, data collection and analysis processes in an implementation trial focused on reducing perioperative fasting times. In evaluating this use of the framework, the authors concluded that, although individuals are implicitly included in the three elements of evidence, context and facilitation, the role of individuals needed to be explicitly added to the framework. They noted that the past decade has seen a shift away from a focus on individuals towards a greater focus on context and how that affects implementation, but suggested that this risks downplaying the many individual-level factors (e.g. beliefs, attitudes, values and motivations) that can have an impact on the behaviour of individuals and groups/teams. In their study using the framework, they found that the interaction between the three elements of evidence, context and facilitation was influenced by both individual and team behaviour.
The KTA framework developed by Graham et al. 1 is one of the few that have been evaluated as a model for planning and evaluating knowledge mobilisation strategies. For example, published studies describe the use of the model for the development and implementation of interprofessional protocols,84 for the development of a children’s health participatory action project85 and for the implementation, monitoring and evaluation of a strategy for mentorship in academic medicine. 86 However, although these studies pointed to some benefits from using the KTA framework, its theoretical basis (planned action theories) has not been separately evaluated for its adequacy as an explanation of underlying mechanisms, and so the model as a whole – like most others – has yet to be fully substantiated. 19
The OMRU has featured in some empirical studies. For example, a comparative case study of technology adoption in hospital settings87 used parts of OMRU (characteristics of innovation, characteristics of potential adopters, characteristics of environment) to guide question and topic selection for interviews and for data coding. The study explored discrepancies between awareness and adoption, highlighting the role of champions, resources, ways of reaching consensus and willingness to take risks, but did not explicitly attempt to validate OMRU. A study the following year85 initially tried to use OMRU in a children’s health participatory action project but found it to be difficult to apply in participatory action research and in a community setting.
Another of the frameworks that has been empirically tested is the CFIR,65 which was tested using a post hoc, deductive analysis of 11 narrative accounts of innovation in health-care services and practice in England. 88 The authors suggest that their study may be one of the first evaluations of the framework. They developed a matrix comprising the five domains and 39 constructs of the framework to examine the coherence of the terminology, to compare results across contexts and to identify new theoretical developments. They comment that some concepts (in particular leadership) seem underdeveloped in the framework and that important elements are missing from it, including scale-up, spread and sustainability, and the engagement of patients and members of the public.
Finally, the conceptual framework of the knowledge transfer process developed by Ward et al. 19 from 28 different models of the knowledge transfer process was evaluated in use by the authors through a study of a knowledge broker intervention in a large UK mental health service. 37 The knowledge broker intervention included three types of knowledge exchange activity: information management; linkage and exchange; and capacity building. Data from the fieldwork were then used to revise the original framework. Revisions to the framework included the following: tighter definitions of each of the five components; noting that all five components occurred on multiple occasions within each team and that at times multiple components were relevant simultaneously; placing greater emphasis on actively exploring the influence of contextual characteristics; and noting that an exclusive focus on one type of knowledge use (e.g. instrumental only and not instrumental and political) seemed to constrain the spread and sustainability of knowledge. A revised diagram of the framework was created illustrating the point that the five components could occur separately or simultaneously and not in any set order and illustrating some of the possible connections between them.
Looking across these empirical studies, we can see that they provide useful accounts of these five models in use. However, none of the models have been comprehensively evaluated, and the majority of the other models in the literature have been subject to even less empirical testing. Indeed, given their descriptive rather than prescriptive orientations, verification and validation may be more realistic prospects than evaluative testing.
The challenges of evaluation
Models aside, what of the evaluative work around specific actions, interventions and mechanisms for knowledge mobilisation? One observation that runs deep throughout the literature is that measuring knowledge use89 and assessing what interventions promote that knowledge use are in their infancy. 13 There are only a small number of implementation studies of specific knowledge mobilisation mechanisms, and many of these are of poor quality. 18 For example, many studies fail to define what they mean by research use or to define outcome measures clearly;13 the validity and reliability of the outcome measures are rarely reported;90 subjective measures of research use are commonly used; and many studies are retrospective, thus risking recall bias and incomplete data. There is, therefore, a lack of practical guidance or robust empirical evidence on many of the likely components of knowledge mobilisation strategies. 18,21,91,92
Box 2 lists a number of areas where reviews have established that there is little in the way of an empirical support base; Box 3 draws on the work of the Cochrane Effective Practice and Organisation of Care (EPOC) group to list some types of intervention aimed specifically at professional behaviour change for which there is a growing body of evidence; and Table 24 in Appendix 8 augments these by reading across the major reviews to summarise the key observations made on evaluating knowledge mobilisation work.
Box 2 Areas where reviews have established little empirical support

- Use of systematic reviews in policy-making.
- Evidence-response mechanisms.
- Knowledge mediation.
- Knowledge management tools.
- Deliberative dialogues.
- E-media, including social media.
- Informatics interventions that support knowledge mobilisation.
- Research knowledge infrastructure.
- Collaborative/exchange programmes.
- Knowledge co-creation.
- Application and influence of non-instrumental (conceptual) research.
- Roles, responsibilities and activities of the different actors in knowledge mobilisation.
- Sustainability of complex knowledge mobilisation interventions.
Derived from the following key reviews and sources: Greenhalgh et al. 2004;58 Walter et al. 2005;93 Mitton et al. 2007;18 Levin 2008;91 Tetroe et al. 2008;11 Contandriopoulos et al. 2010;21 Chambers et al. 2011;94 Perrier et al. 2011;95 Boyko et al. 2012;96 Murphy and Fafard 2012;97 Oborn 2012;92 and Pitchforth et al. 2013. 98
Box 3 Interventions aimed at professional behaviour change for which there is a growing body of evidence

Printed educational materials.
Educational meetings.
Educational outreach.
Local opinion leaders.
Audit and feedback.
Computerised reminders.
Tailored interventions.
Derived from Grimshaw et al. 2012. 76
Despite such evidence gaps, many knowledge mobilisation interventions are promoted even when they have not yet been properly evaluated. 10,99 Indeed, the literature often promotes singular techniques relating to linear processes despite the emerging agreement about the limitations of such models in knowledge mobilisation. 21 Even where systematic reviews are available that point to discrete interventions aimed at professional behaviour change that do appear to have some effect (see Box 3), the key role played by the interaction between the intervention provider and the context means that it may be difficult to use interventions with confidence in other contexts:100
Externally valid evidence pertaining to the efficacy of specific knowledge exchange strategies is unlikely to be forthcoming . . . the best available . . . advice for someone designing a knowledge exchange intervention will probably be found in empirically sound conceptual frameworks that can be used as field guides to decode the context.
Contandriopoulos et al., p. 46821
Thus, a sound conceptual understanding of the various issues at play – and their dynamic interaction with context – becomes central to the design of effective knowledge mobilisation strategies. In the end, this may be more important than evaluative efforts in the knowledge mobilisation field that seek to isolate the impact of discrete interventions in an attempt to create a menu of ‘proven’ approaches. What may be as important is equipping agencies with the resources that enable them to conduct robust evaluations-in-context of knowledge mobilisation activities that are designed according to the ‘best available’ evidence. Our review of the literature identified some of the key papers and reports that can help agencies to consider how to measure research use and evaluate the impact of their knowledge mobilisation activities, and these are listed and summarised in Table 4.
Table 4 Key papers and reports on measuring research use and evaluating the impact of knowledge mobilisation activities

Paper/report | Summary |
---|---|
Davies and Nutley. Learning More about How Research-Based Knowledge gets Used – Guidance in the Development of New Empirical Research (undated)101 | This discussion paper provides guidance on types of research use, conceptual frameworks on research use, approaches to assessing research influence and methodological issues and challenges. |
Popay and Collins. The Public Involvement Impact Assessment Framework Guidance (2014)102 | This guidance is primarily aimed at researchers who wish to assess the impact of public involvement in their research. It provides a framework for developing an impact assessment plan and can be used when research ideas are first being developed or in the context of ongoing research projects. |
Centre for Research on Families and Relationships and SMCI Associates. Knowledge into Action for NHS Scotland: Methods, Strategic National Projects and an Evaluation Framework – Report to NHS Education for Scotland and Healthcare Improvement Scotland (2013)103 | This report includes a literature review of evaluation approaches and outlines an evaluation framework based on contribution analysis. The evaluation framework includes a process map for evaluation, templates for the development of a project-level outcomes chain and a monitoring and review plan, a guide to indicators and a worked example. |
DFID. Research Uptake: A Guide for DFID-Funded Research Programmes (2013)104 | Intended as a beginners’ guide, the guide provides information on DFID’s approach to research uptake and some practical advice for designing a research uptake strategy and monitoring research uptake. |
Guthrie et al. Measuring Research: A Guide to Research Evaluation Frameworks and Tools (2013)105 | The guide provides a comprehensive list of evaluation tools and techniques. It discusses when each might be useful, examines the advantages and disadvantages of different approaches and looks at the context in which each of the 14 frameworks has been used before. The guide provides a decision tree for funders, policy-makers and researchers who want to evaluate research and need practical guidance on how to choose the appropriate approach. |
Kok and Schuit. Contribution Mapping: A Method for Mapping the Contribution of Research to Enhance its Impact (2012)106 | This paper describes a new approach to research monitoring and evaluation that aims to assess contributions instead of impacts. |
Banzi et al. Conceptual Frameworks and Empirical Approaches used to Assess the Impact of Health Research: An Overview of Reviews (2011)89 | The paper provides an overview of the most widespread frameworks for the evaluation of research impact, the dimensions of impact that they evaluate and the main indicators that they use. |
Bhattacharyya et al. Methodologies to Evaluate the Effectiveness of Knowledge Translation Interventions: A Primer for Researchers and Health Care Managers (2011)107 | The paper describes methodologies for evaluating the effectiveness of KT interventions and suggests how to choose between them and how to combine them with qualitative studies to assess mechanisms of effect. |
InSource Resource Group. The CAPTURE Project: Reviewing KTE Indicators and Data Collection Tools (2010)108 | This report gives the findings from a literature search of tools designed to measure KTE, conducted in connection with the Canadian Partnerships Against Cancer CLASPs initiative. The authors found that there are very few examples of concrete tools for measuring KTE outcomes or successes. They proposed a three-part evaluation strategy for KTE for the CLASPs. |
Straus et al. Monitoring Use of Knowledge and Evaluating Outcomes (2010)109 | This paper considers conceptual, instrumental and persuasive knowledge uses and outcomes (patient; provider; system/society) and gives examples of measures and strategies for data collection. The authors suggest that the MRC framework for the evaluation of complex interventions may be useful as it provides researchers with an iterative step-wise approach to evaluation. |
Panel on Return on Investment in Health Research. Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research (2009)110 | This report was sponsored by 23 different organisations including the Canadian Health Services Research Foundation. The aim was to investigate if there was a ‘best way’ (best method) to evaluate the impacts of health research in Canada and to consider whether there are ‘best metrics’ that could be used to assess those impacts or to improve them. Based on this assessment, the report proposes a new impacts framework and a preferred menu of indicators and metrics that can be used for evaluating the returns on investment in health research. |
Kuruvilla et al. Describing the Impact of Health Research: A Research Impact Framework (2006)111 | This paper describes the Research Impact Framework. It has four categories of impacts (based on a literature review and empirical analysis of selected research projects): research-related impacts; policy impacts; service impacts – health and intersectoral; and societal impacts. The authors list and describe in detail key descriptive categories within each category of impacts. They suggest that the framework could have a range of uses. |
Hanney et al. The Utilisation of Health Research in Policy-Making: Concepts, Examples and Methods of Assessment (2003)112 | This paper provides a comprehensive review of the literature on health policy-making and the potential for research to inform the policy-making process and considers methods for assessing research utilisation in health policy-making. |
In sum, then, despite the lack of focused and cumulative evaluation work (and the difficulties associated with so doing), a collective reading of what the reviews say about the models and about the wider conceptual literature does allow some insights to be gathered which can inform the development of knowledge mobilisation approaches by agencies. These form the basis of the conceptual mapping, to which we now turn.
Six domains of interest
Discussion within the team of the key issues emerging from the reviews, and iterative attempts to group these, led to the emergence of six broad domains (or conceptual groupings). These were:
- purpose(s) and goals (implicit or explicit)
- knowledge (of all kinds)
- connections and configurations (between people; between organisations)
- people, roles and positions
- actions and resources available, and
- context of operation (different in kind from the other five domains, but influential and interactive with each of them).
A simple schematic representation of these domains is shown in Figure 1.
Each of these six domains is now discussed to mark out the contours of debate covered in the reviews within the purview of that domain. We present these domains as practical ways into a complex and sprawling literature rather than as definitive accounts of the complexities covered. As part of our ongoing work with knowledge mobilisation agencies, we will be considering what tools they would find useful to enable them to explore these domains and to build on these insights in developing their knowledge mobilisation strategies.
Purpose(s) and goals
In developing knowledge mobilisation strategies, a central concern is the question: to what end is the knowledge being mobilised? Agencies of different kinds will have different views on what they are trying to achieve (and how), and such concerns are central in the literature. While clearly closely related to the second domain (knowledge types and sources) and the fifth domain (actions and resources), a consideration of the purpose and goals of a knowledge mobilising agency can be informed by a wide array of literature.
There are multiple definitions of knowledge use in the literature, with no one definition dominating. 21 Underneath some of these disputes is a concern to tease out the extent to which research-based information is dominant in the aims of knowledge mobilisation efforts. That is, do we wish to see evidence-based actions, evidence-informed actions or just activities that are evidence-congruent or evidence-aware? The centrality or otherwise of research is a key consideration for agencies, then, and this is an issue picked up further when we consider types of knowledge.
Given a central and somewhat privileged position for research-based knowledge, one common and recurring typology55,90,113 suggests that there are three main types of research use: instrumental, conceptual and symbolic.
- Instrumental/direct use: applying research findings in specific and direct ways to influence decision choices.
- Conceptual/indirect use: using research results for changing understanding or attitudes, including introducing new conceptual categories, terminology or theories (sometimes called ‘enlightenment’ uses). 114
- Symbolic/political/persuasive use: using research findings to legitimise and maintain predetermined positions, including the ‘tactical use’ of research, for example justifying inaction while awaiting further study.
These different types of use may be affected by many different factors. For example, direct use is associated with well-defined decision-taking; conceptual use may be longer term and more percolative in style; and symbolic use may be closely associated with political argument and the use of mass media. It is likely, however, that all three will be seen simultaneously in some settings (e.g. in government agencies).
Categorisations such as that above can be further refined by additional analysis within each category. For example, instrumental use may be concerned with decisions that impact on professional processes, individual patient outcomes or aggregate economic outcomes,115 and each of these can become targets or goals of agency activities. An alternative view55 might look for impacts on knowledge-use processes (such as knowledge being seen, discussed and cited), intermediate outcomes (such as key actors’ awareness and attitudes) and decision outcomes (evidence-supported change). Combinations and hybrids of these typologies of use and impact are, of course, possible, and a consideration of these may help to sharpen agencies’ strategic goals.
The categorisations above are related to, and have some overlap with, the influential typology produced earlier by Weiss. 114 Although framed as seven ‘meanings’ of research use, these could also be interpreted as types of use in themselves. While Weiss’ typology emerged from empirical work on policy-makers, it has also influenced thinking on research use in practice contexts. 15
Weiss argued that research-based knowledge is used in seven different ways (adapted and extended here from the original):
- Knowledge driven: where research produces knowledge that might be relevant to public policy decisions. This is closely aligned with the direct and instrumental use noted above (and is central to strategies of research ‘push’) but can also encompass more conceptual uses of research where the knowledge shared is more theoretical or conceptual in nature.
- Problem-solving: where research is sought out that can provide empirical evidence to help solve contemporary policy problems. This again is most often associated with an instrumental approach (and with notions of research ‘pull’), but can also encompass some conceptual rethinking.
- Interactive: those engaged in policy or practice seek information from a variety of sources to help make sense of their problems and develop solutions; associated with ‘linkage and exchange’ approaches to knowledge mobilisation; may encompass both direct and indirect uses of research.
- Political/symbolic: policy-makers and others search for knowledge to help justify their positions, and so research-based information becomes ammunition for whichever side finds it most useful. Intermediary agencies with well-defined value sets may seek to exploit such opportunities.
- Tactical: where those who could be (and perhaps should be) research users fund or require new research to avoid taking action. Funders and research producers may each see opportunities here for longer-term gains in research-based knowledge.
- Enlightenment: where research has gradual influence over time in shaping conceptualisations of the issues and framings of the policy agenda. Recognition of these slower percolative processes may lead agencies into planning for the longer term by focusing on social processes.
- Societal: where policy interest, public concern and professional interests are meshed and stimulated by new research findings. Such a broader view enlarges the scope of agencies to encompass much broader sets of stakeholders.
Other commentators have sought to expose and critique the limited areas of application for research-based knowledge. Drawing on Habermas’ framework of ‘knowledge-constitutive interests’, Murphy and Fafard97 define three types of knowledge use:
- instrumental, that is, problem-solving (using research to make decisions)
- hermeneutic, that is, explanatory (facilitating greater understanding of the social world)
- emancipatory, that is, equity-seeking (critically analysing institutional forces as a means of advancing social justice).
While having overlaps with other categorisations of use, the authors here suggest that most conventional knowledge mobilisation approaches focus on instrumental, problem-solving use. They suggest that social research has, or should have, other goals that are aimed at hermeneutic and emancipatory objectives. Such a reading brings to the fore political considerations: whether an agency seeks to work with the grain of existing policy and practice presumptions, or to challenge these from a values-based position. 15,29,116
Moving into more practice-based models of research use, Walter et al. 60 propose three models. These are described as inductively derived archetypes, and actual practices often combine more than one model:
- The research-based practitioner model of research use: in this model it is largely the responsibility of the individual practitioner to keep abreast of research and ensure that it is used to inform their practice.
- The embedded research model of research use: in this model research use is largely achieved by embedding research in the systems and processes of practice (e.g. via standards, policies, procedures and tools).
- The organisational excellence model of research use: in this model, the key to successful research use largely rests with the leadership, management and organisational arrangements of service delivery organisations and with their promotion of a ‘research minded’ organisational culture.
The models are not seen as prescriptions for action but may be useful at different times and in different service settings. 117 Understanding of these archetypes of use may help agencies direct activities supporting the uptake of research in practice.
So, for agencies developing knowledge mobilisation work, it may be important to consider the types of ‘use’ they are aiming to influence and to what end. Such considerations do not stand alone, but are interconnected with the other domains mapped in Figure 1, most especially that concerned with understanding the nature of knowledge and the role played by research within that.
Knowledge
Although at first sight it might seem obvious what knowledge mobilising agencies are seeking to mobilise – that is, research-based knowledge – in fact the reviews we explored reveal considerable complexity and nuance to the nature of that knowledge. Both descriptive and prescriptive models see unpacking what is meant by knowledge as a central concern. For example, one well-known framework55 sets out five questions for research organisations to consider in relation to knowledge mobilisation. The first question in this framework concerns what should be translated, transferred or exchanged: the knowledge of knowledge mobilisation.
There is no clear dominant definition of knowledge in the literature. 21 The different philosophical paradigms on which the knowledge mobilisation models draw (explicitly or implicitly) make different assumptions about the nature of knowledge. For example, positivism assumes that knowledge can be uncovered and expressed in generalisable laws, constructivism holds that knowledge is socially constructed and that there are multiple truths, and critical theory analyses the relationship between knowledge and power.
A range of types of knowledge are identified in the literature, and these can be grouped in various ways. For example, types of knowledge can be grouped according to the source: does the knowledge arise from structured data gathering (empirical knowledge), from practical experience (experiential knowledge) or from abstract discourse and debate (theoretical knowledge)? Another grouping in the knowledge mobilisation literature contrasts explicit knowledge (such as can be set down in guidelines) and the tacit knowledge held by individuals and groups. 118 Tacit knowledge may be used to inform decisions in the practice setting but may not be susceptible to defining and describing explicitly. Amalgamations of explicit and tacit knowledge in clinical contexts have been referred to as ‘mindlines’. 57
One theory of knowledge creation119 holds that there is a close relationship between tacit and explicit knowledge. It suggests that new knowledge is created most rapidly when conversion between different forms of knowledge occurs continually (e.g. from tacit to explicit and from explicit to tacit). Given this, Oborn et al. 30 suggest that the negotiation and exchange of tacit knowledge in practice settings has been largely overlooked by knowledge mobilisation researchers so far. A related categorisation, drawing on work by Aristotle, distinguishes between episteme (scientific knowledge), techne (craft knowledge) and phronesis (situation-specific practical wisdom and the ability to apply generic knowledge to the current case). 24
In another framing, institutional knowledge is differentiated from individual knowledge, and local knowledge from external knowledge. Such distinctions are made based on whether the knowledge arises from individual or shared experience, and whether the knowledge is created in situ or is imported from elsewhere. A further stream distinguishes between knowledge as data and knowledge as ideas, asserting that data, information and knowledge lie on a continuum and differ in the extent to which human processing and judgement are needed. 120 Such literature also considers the extent to which knowledge has been processed, synthesised, ‘recycled’, reinterpreted or adapted; and if the knowledge is specific to a particular issue and context, or whether or not it is more general. Similar notions underpin the ‘knowledge to action’ framework. 1 Here, knowledge creation is composed of three phases, each involving a greater degree of processing: knowledge enquiry (first-generation knowledge), knowledge synthesis (second-generation knowledge) and the creation of knowledge tools such as practice guidelines and algorithms (third-generation knowledge).
Methodological categorisations of research-based knowledge also abound. 121 These go beyond a simple distinction between quantitative and qualitative findings: there are a variety of more-or-less hierarchical distinctions, often with implicit or explicit endorsements as to their validity. 122,123 More prosaically, there is debate about whether single studies should be disseminated at all, rather than only synthesised accumulations across portfolios of work. 55,124
Knowledge, or knowing?
But should we even be talking of knowledge as a separate isolatable ‘thing’? If knowledge is seen as socially embedded then separating ‘it’ from its context begins to look problematic. Perhaps, instead, we need to think more of knowledge-in-context – or ‘knowledge-in-practice-in-context’, as Gabbay and le May describe ‘mindlines’. 57 Such considerations lead to a series of challenging questions: who is (or should be) involved in setting the research agenda and in deciding what issues warrant the production or collation of research-based knowledge? Who is involved in producing that knowledge and what are the power dynamics around what is defined as knowledge?92,125 Who defines who are the relevant stakeholders and by what processes are they involved? Is such knowledge produced by research ‘experts’? Or is knowledge co-produced by potential users and researchers, and what are the benefits and disadvantages of this? Many of these issues link to those explored in the domains of ‘people and roles’ and ‘context’.
Several authors21,30,56,57 argue strongly from empirical study that research-based knowledge does not occupy a privileged position. Instead, it sits alongside and competes with other forms of existing, structured and contextualised knowledge (e.g. professional knowledge and professional judgement). It follows, then, that there is not a direct correlation between attributes of the knowledge (e.g. the internal validity of the research-based knowledge) and the likelihood of subsequent use. 21 For example, professional consensus-based guidelines may be valued more than research-based guidelines, despite having a weaker evidence base. There is, thus, an ecology of knowledge, where research-based knowledge must compete with other ways of knowing for influence.
Taken together, then, these observations have a number of implications for knowledge mobilising agencies. First, they suggest that agencies may need to develop mixed portfolios of activities that are heavily shaped by the types of knowledge under consideration. Second, actionable messages for decision-makers may more properly be seen to come from syntheses and systematic reviews rather than from single studies. 55,124 This would suggest that research organisations should focus their research mobilisation efforts on bodies of research-based knowledge. Third, agencies may need to consider the difference between information or data and knowledge;126 these may require different kinds of interaction between researchers and users and hence different kinds of knowledge translation training and support. More challengingly, knowledge mobilisation agencies may need to consider how they can support the interaction and integration of different types of knowledge, including perhaps deliberative processes that seek to surface hidden assumptions and tacit knowledge.
Finally, although there may be no absolute correlation between the attributes of research-based knowledge and its subsequent use (as it competes with other forms of knowing in the local context), it is still important to consider the attributes of research that help to make it more conducive to uptake: for example, whether the research-based knowledge is perceived by the potential users to be credible, accessible, relevant, based on strong evidence, legitimate and endorsed by respected opinion leaders. 93 Tailoring the format and presentation of knowledge products to the intended users can also make the knowledge that they contain more accessible. 80,127
Connections and configurations
Mobilising knowledge is about making connections. Our agency-based view of activities brings to the fore the need for agencies to connect and communicate in sometimes new and innovative ways. This may mean capitalising on existing networks or building anew. The reviews we uncovered offer insights into which strategies might be developed and how.
Much of the literature we uncovered discusses the complex institutional, professional and social environment within which knowledge is created, flows (or gets stuck) and is applied. While some of these discussions lay heavy emphasis on ‘context’ as a mediator (which we discuss later in this chapter), there is also more specific consideration of the role of specific networks of interests or the practical configurations of agencies, organisations and relationships. As such connections and configurations are amenable to planned intervention or influence by agencies, it seemed important to tease out the literature’s preoccupations here.
A framework that is increasingly well known (and resonant with other framings) is the ‘three generations’ framework. 20 This proposes that there have been three stages or generations of thinking about knowledge to action processes: linear approaches, relationship approaches and systems approaches. The authors set out the characteristics of each of these approaches and suggest conditions under which such approaches might be more or less appropriate. While these approaches are often linked to historical developments, with ideas of progression of thinking from ‘simple’ linear models to ‘complex’ systems thinking, it may be more helpful to think of these as parallel models of the knowledge mobilisation system with contingent application and different strengths and weaknesses.
Linear models of research flow
Linear models of connectivity have dominated the literature, and such thinking can be seen underpinning many of the models and frameworks in use. The research-based practitioner model60 and the embedded research model60 (discussed in the ‘purpose and goals’ domain) are examples of linear models; many other models with ‘rational, linear’ assumptions can be found in the literature. 128 Sitting within the linear conception (and, to a lesser extent, within the relational view) is the ‘two communities’ perspective: the idea that there are two separate social worlds of knowledge production and knowledge application, and that there is limited interconnectivity between these. However, more expansive views of knowledge (as discussed earlier; see Knowledge) contribute to a weakening and in some cases a demolition of such neat categorisations.
Despite being very widely used in health care and elsewhere,20,128 linear models have received significant critique: they tend to see ‘knowledge’ as a transferable product; they place much emphasis on individuals and their rational cognitions; and they fail to address notions that knowledge is translated into practice in a social, collective and situated manner. 15,30,32 An additional concern is that the evaluative research around knowledge mobilisation has tended to evaluate linear approaches rather than more complex forms,30 providing both symbolic and practical encouragement to organisations to continue to use these approaches.
Relationship models
A shift from linear approaches to more relational approaches has been observed in the health sector and generic management literatures since 2000. 16 One of the underlying premises of relationship models is that learning is a social and situated process. Relational models therefore tend to see knowledge mobilisation as having a political dynamic in which there is negotiation around competing meanings of ‘knowledge’ and ‘evidence’, and around issue framing and problem definitions.
In relationship models, the emphasis is on ‘linkage and exchange’,53 suggesting a greater degree of engagement with potential users than is implied by ‘push’ or ‘pull’ approaches. 11 The degree of engagement ranges from dialogue between researchers and practitioners through to collaborative engagement in producing research evidence (co-production) and in working together to implement evidence (e.g. in action research approaches or quality collaboratives). 129 A recent study,130 which may be the first to map the work of knowledge brokering organisations, found that the organisations carried out a wide range of brokering functions, including building partnerships, raising awareness, capacity building, implementation support and policy influence. Relational approaches emphasise ongoing, interactive processes of collaboration between research producers and research users around formulation of RQs, production of research evidence and sharing of research findings. 18,80 Relationship approaches draw on a range of theories, including principal–agent theory; communities of practice; social capital; organisational learning; sociocultural learning; and resource-dependence. 131 Key features of relationship approaches to knowledge mobilisation are an emphasis on accountability, reciprocity and respect for the other party’s knowledge.
A common critique of relationship approaches32 is that many models and approaches fail to fully acknowledge the implications of conflict over what constitutes knowledge, and give insufficient attention to meaning/power negotiations. Ferlie et al. 16 suggest that postmodern accounts that emphasise power are a further stage on from relational models. A further concern is that the relationships that are possible will depend on the skill sets and personalities of those involved; many researchers may feel most confident in talking about research findings to their academic peers. Such relationships are also affected by organisational turbulence: if there is high turnover in policy or practice (or academic) settings then it will be more difficult to develop ongoing relationships. 132 Some of these issues reappear when we discuss the domain of ‘people, roles and positions’.
Systems thinking
There is no consistent use of the term ‘systems thinking’, although Best et al. 64 encapsulate it as an approach that ‘recognises that relationships are shaped, embedded and organised through structures that mediate the types of interactions that occur among multiple agents with unique rhythms and dynamics, worldviews, priorities and processes, language, time scales, means of communication and expectations’ (p. 628). There is, however, increasing support for the idea that health systems need to be seen as complex assemblages of interlocking networks that cannot be understood in terms of linear and ‘rational’ relationships but are instead conditional, contextual and relational. 133 Reviews suggest that, although the knowledge mobilisation literature is now beginning to embrace systems thinking, practical tools and strategies have yet to emerge. 64,133 In addition, reviewers suggest that there are many key aspects of a systems approach to knowledge that have not yet had sufficient attention, including the nature of evidence and knowledge, the role of leadership and the role of networks. 20 Exploring this further, Contandriopoulos et al. 21 suggest that there are three core aspects of systems that influence knowledge use within that system: polarisation (the extent to which the potential users share similar opinions and preferences); cost-sharing (the distribution between research producers, intermediaries and users of the resource costs associated with knowledge use); and social structures (e.g. formal and informal communication networks).
The evolution of thinking around connections and configurations has highlighted the limitations of ‘two communities’ thinking, suggesting that standard push approaches are unlikely to result in practice or policy change. Agencies that take a relational view, and that work within and through existing networks, or that seek to build new networks, can draw on a wide array of concepts and theories to help shape their actions. In doing so, they will need a nuanced understanding of the role of power, and insights from political science may be of some help here. Although there is increasing support for a systems approach in principle, a lack of practical tools and detailed guidance means that it has been difficult to operationalise these ideas into innovative knowledge mobilisation strategies.
As agencies devise new knowledge mobilisation strategies that capitalise on these insights, we can expect to see them work more within and through networks of interested parties. Sometimes these approaches will capitalise on, and aim to shape, existing networks (e.g. naturally occurring communities of practice); at other times agencies will seek to create and support new networks to further their goals. Fully exploiting the potential of a systems-based view of the world is currently hampered by a lack of operational models and convincing case examples. 20,27,134
People, roles and positions
Agencies interested in mobilising knowledge will act through co-ordinating the actions of their own people and through co-opting the skills and resources of others. In part, this is about the configurations and networks created or utilised as previously discussed, but within these we can discern distinct roles that are performed by agencies and by individuals.
In this project, we have conceived of a threefold role-based typology of knowledge mobilising agencies: funders; research producers; and research intermediaries (of course, some actual agencies take on multiple roles and many exist in hybrid forms). To this we can add various types of ‘audiences’ for research:58 other researchers; members of the public and service users; practitioners; managers; and policy-makers. Several authors (e.g. Contandriopoulos et al. 21) point out that none of these are discrete categories and that individuals may belong to more than one group; this fact of multiple identities may constrain or facilitate an individual’s actions around knowledge mobilisation. A further categorisation of ‘insiders’ and ‘outsiders’ (e.g. researchers within government departments or external researchers from universities working with government departments) highlights the potential for individuals’ actions to be determined in part by their status in the social context. For example, studies have shown that internal researchers within government departments have greater access to ministers than do external researchers. 135
Moving on from this broad categorisation of audiences or stakeholder groups, Lavis et al. 58 suggest two key questions in defining a narrower group of stakeholders in a given situation: who can act on the basis of the research knowledge and who can influence those who can act? These questions helpfully give prominence to the issue of power, which many authors16,32 suggest has been neglected in relation to understanding of knowledge mobilisation.
There has been little empirical work on the actual or potential roles and responsibilities of different knowledge mobilisation actors. 11 There is, however, a strong focus in the literature on ‘knowledge brokers’ and other ‘mediator’ roles, and a growing number of empirical studies136–140 have investigated these roles. A range of functions has been suggested, including problem definition; research synthesis; facilitating access to research knowledge; developing outputs that are more accessible to users; and developing and brokering networks and other connections. 135,141 Linking and mediator roles have been promoted in many organisational settings and are perceived by health organisations to be an important component of the organisational infrastructure to encourage evidence use. 142 However, one review21 suggests that the structural position of brokers within organisations may mean that they have most scope to intervene in contexts where there is low polarisation of views (i.e. where actors already share similar views on key issues) and significant user investment in knowledge exchange, and that they may have limited ability to have an impact on the many networks that exist outside formal communication channels.
Conceptual uncertainty remains around who should perform knowledge broking and what activities should be encompassed by the role. 81,135 For example, it is unclear whether or not and in what ways knowledge broker roles are different from other roles such as opinion leaders, facilitators, champions, change agents or linking agents. 143 There are, nonetheless, some different theoretical assumptions behind the use of such terms: opinion leaders are typically seen as long-term ‘insiders’, whereas change agents are typically seen as ‘outsiders’ who have a short-term role in facilitating action around implementation. Knowledge brokers may come from diverse backgrounds, which then suggests roles that link or span different communities.
Many research funding agencies now make stipulations about what their funded researchers have to do in terms of knowledge mobilisation, but these requirements are often limited to more traditional activities (e.g. engagement with potential research users, formal reporting and perhaps some ‘translation’ and dissemination). There is rarely any evaluation as to the effectiveness or impact of these requirements. 11 Indeed, some caution that researchers may not be the most appropriate people to undertake knowledge mobilising roles133 and that it is unrealistic to expect researchers to develop the broad range of skills required for effective knowledge sharing. 126
Leadership (including endorsement of the evidence from expert and peer opinion leaders) is regarded in the literature as important in knowledge mobilisation,93 but the requirements of roles here remain underspecified. Although leadership has been examined in other literatures, the precise nature of leadership and its defining qualities have not been fully addressed in the knowledge mobilisation literature. 20
While roles matter, some authors argue that there has been disproportionate emphasis on individuals and their roles in relation to knowledge mobilisation. They argue that this focus ignores three key issues: that it is unrealistic to expect researchers to develop the broad range of skills required;126 that sustainable knowledge mobilisation requires multilevel systemic changes80 alongside appropriate technological and organisational infrastructures;78 and that greater attention needs to be paid to the organisational systems in which individuals work and which strongly affect what they are able to do. 77 Such critiques draw attention to other domains of our conceptual map.
One group that has largely been absent from the knowledge mobilisation literature is the public or service users. While patient and public involvement (PPI) has been strongly encouraged in research, the literature has been largely silent on the potential knowledge mobilisation role of these groups. One exception here is more recent work that has considered the evidence base on patient-direct and patient-mediated knowledge mobilisation interventions. 78,144 The gaps, challenges and opportunities of greater involvement of the public, patients, other service users, clients and parents are discussed in a subsequent section, but suffice it to say that their role is not widely considered in the knowledge mobilisation literature.
In sum, agencies may find it useful to map their audiences or groups of stakeholders using the broad categories outlined above, differentiating between those who can take action directly (act on the evidence) and those who can influence those who can act, or those who can shape the context within which that action occurs. Moreover, intermediary roles need further elaboration and analysis that takes account of the other domains of the conceptual map (e.g. knowledge types; purpose and goals; existing networks and configurations; and local context). Currently, there is insufficient empirical evidence on the impact of knowledge brokers and other mediating roles, but early findings suggest that they are most likely to have impact when they are working to bring together previously isolated groups and are credible, skilled and well supported.
Actions and resources
To further their goals, agencies need not only to find partners and identify audiences, but also to develop action plans and deploy resources. The actions taken will depend on the underlying model of knowledge mobilisation being used (explicitly or implicitly), and the resource requirements differ between models. Most of the models uncovered in this review have not yet been tested as prescriptions for practice, so it is not clear how suitable they are for planning and evaluating knowledge mobilisation strategies. 19 Many models provide a quite general overview of knowledge mobilisation rather than analysing the key features and intended effects of specific knowledge mobilisation interventions. 96 They thus leave unaddressed the specific actions required and the resources needed. Indeed, many models describe how change occurs rather than directly addressing the planning of change initiatives. 75
Some sets of activities that might form the first step in operationalising a knowledge mobilisation strategy have been identified in the literature. For example, Walter et al. 145 highlight the key underlying mechanisms that can be used to build research impact: dissemination; interaction; social influence; facilitation; and incentives and reinforcements. In practice, many strategies will involve a judicious mix of these, and how to select the appropriate mix and emphasis remains to be addressed.
Taking a holistic view of encouraging research use, one review in social care60 sets out a collection of imperatives, each of which might suggest collections of (resourced) activities that need to be planned. These include ensuring a relevant research base; ensuring access to research; making research comprehensible; drawing out the practice implications of research; developing best practice models (e.g. pilot or demonstration projects); requiring research-informed practice (e.g. through regulatory influence); and developing a culture that supports research use (an inherently multifaceted endeavour). Again, these broad categorisations leave much detail to be fleshed out by agencies in their local context, given their resource constraints.
Agencies wishing specifically to advance the field of knowledge mobilisation may draw on the five functional areas outlined by Holmes et al. 126 These are: advancing the science of knowledge mobilisation; building capacity; managing specific projects; funding knowledge mobilisation activities; and advocating for greater knowledge use. Again, such categorisations may provide a starting point for agency strategy development.
Holmes et al. 126 drew on work by Kitson and Bisby to set out nine actions that funding agencies could take to support knowledge mobilisation:
- require the involvement of research users throughout the research cycle
- support activities to increase the ability of researchers to communicate with users
- provide forums for knowledge users and researchers
- require a knowledge mobilisation action plan for all funded projects
- provide training and support to granting panels for the assessment of knowledge mobilisation plans
- include knowledge mobilisation costs as eligible expenditures
- fund activities that facilitate easier access to research data by knowledge users
- require open-access publishing
- fund rapid response programmes to address urgent policy and practice issues.
Agencies could use this list of imperatives to help set priorities for action. 126
The narrative synthesis of conceptual frameworks by Wilson et al. 146 reviewed 33 knowledge translation frameworks, of which 20 were designed to be used by researchers to guide their dissemination approaches. Twenty-eight of the frameworks reviewed were underpinned (at least in part) by one or more of three theoretical approaches: persuasive communication; diffusion of innovations theory; and social marketing. The authors noted that, although 10 UK funders of health services or public health research made reference to dissemination in their research funding application guides, only one [the Economic and Social Research Council (ESRC)] specifically provided a dissemination framework for use by funding applicants and grant holders. The authors suggested that, as a first step, funding agencies could specifically encourage the adoption of a theoretically informed approach to dissemination activities for their grant holders (e.g. by requiring the use of one of the theoretically informed frameworks). Another narrative review147 covers 61 dissemination and implementation models and discusses key considerations around model selection and adaptation.
Other reviewers have noted79 that many organisations are increasingly developing practices (such as portals, websites and online interactive spaces) despite the limited evidence in support of such initiatives. Thus, many of the practical actions taken by agencies may have more to do with their face validity, stakeholder acceptability and the availability of local expertise than with coherent strategy or supportive literature. 11
All of the above suggests that agencies need to take action across a number of spheres. This draws attention to the potential for balanced and multifaceted activities. While some reviews suggest that multifaceted approaches are more effective than single interventions,148 some authors caution that multifaceted approaches may not always be appropriate: there is a risk of a ‘scattergun’ approach, and the effectiveness of multicomponent approaches will depend on the interaction of the different mechanisms within particular contexts. 117 One review of strategies used in public health149 found that simple or single strategies were in some cases as effective as complex multicomponent interventions, and suggested that this was because key messages might be diluted or harder to comprehend in complex multiple interventions. Multifaceted approaches are also likely to be more costly than single interventions and consideration needs to be given to how the different components might interact. 76
Disaggregating some of the above broad categories, a major review in 2012150 collated 68 specific implementation actions, grouped according to six key implementation processes: planning, educating, financing, restructuring, managing quality and attending to the policy context. The authors differentiated between discrete, multifaceted and blended implementation actions (blended was defined as the use of a number of discrete approaches, addressing multiple levels and barriers, interwoven and packaged as an implementation intervention with a brand or protocol). The review challenged the notion that there are only a limited number of strategic actions available, but, as it was a narrative review, no attempt was made to assess the methodological quality of sources or the empirical evidence for the actions listed. The resourcing of these strategies was also left unaddressed by this review.
An empirical study of research funding agencies in a range of countries, including Canada and the UK,11 found that many had minimal resources for knowledge mobilisation. Similarly, studies in academic institutions suggest that the majority lack the infrastructure resources to support knowledge use in policy and practice. 135 Yet, one of the key lessons from a cross-sector review is that knowledge mobilisation requires financial, technical, organisational and emotional resources. 93 A widely shared assumption in the literature is that producers, intermediaries and users will invest and cost-share in knowledge mobilisation to the extent that they perceive such investment to be advantageous. 21 However, how such shared plans can be negotiated, or the basis on which they are founded, is rarely properly explored. One review of knowledge mobilisation approaches found that where there was no viable cost-sharing mechanism or where most of the burden fell on producers or intermediaries, significant research use (with the exception of political uses) was unlikely. 21
This account of the actions and resources needed for effective knowledge mobilisation has many implications for agencies. It draws attention to the wide array of actions needed, the breadth and diversity of actions available, the complex and vexed issue of resourcing these, and the need for coherent, interlocking and mutually reinforcing actions within and across agencies. It also draws attention to the significant gap between the articulation of a process of knowledge mobilisation (seen in many of the models, theories and frameworks) and the translation of those accounts into workable, practicable, properly resourced strategies. That is, much of the conceptual background reviewed in this chapter does not readily lend itself to the creation of action plans for agencies. At the heart of these difficulties lies an uncertainty about whose role it is to facilitate knowledge mobilisation. In some senses, effective knowledge mobilisation is a system property, and yet individual actors and agencies have to operate independently, with little certainty of co-ordination. Creating the conditions for shared goals, co-investment and co-ordinated actions remains a major challenge.
Context
Knowledge mobilising agencies are usually alert to the potentially facilitating or (more usually) inhibiting effects of the local environment on their efforts. While some see context as a ‘given’ that simply needs to be taken into account (‘context dictates the realm of the possible’),21 for others it is an active ‘ingredient’ in any successful knowledge mobilisation strategy. As Greenhalgh et al. assert: ‘the multiple (and often unpredictable) interactions that arise in particular contexts and settings are precisely what determine the success or failure of a dissemination initiative’. 31
The ‘context is important’ strand in the knowledge mobilisation literature has a long history in organisational research. 151 The processual-contextual perspective (e.g. the content, context and process framework)152,153 has informed a number of studies in the change literature in recent decades. 154 It is reflected in Pawson and Tilley’s well-known ‘CMO’ (context–mechanism–outcome) configuration in realist evaluation155 and in the PARIHS Framework,52 which was developed in part to address the lack of attention to context in earlier models. 72 The importance of context in quality improvement in health care and the key empirical findings from the literature have recently been explored in a publication for the Health Foundation. 151
In relation to influencing policy, a recent review156 also emphasises the importance of an analysis of context and refers to two frameworks: the ‘three Is’ of political science (institutions, interests and ideas) and the framework proposed by Contandriopoulos et al.,21 which considers issues in terms of their polarisation, salience and familiarity. Similar frameworks are available when looking at the uptake of policy interventions. For example, the Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) framework20 emphasises that it is the broader contextual factors that influence adoption, implementation and maintenance and so decision-makers need to balance evidence on effectiveness against these factors.
Analytic approaches to context typically divide it into inner/internal and outer/external, although many authors emphasise that the interaction between these adds to the challenges of assessing and addressing context. Among the aspects of internal context that authors have suggested are relevant to knowledge mobilisation are organisational structures and processes (e.g. the impact of modes of governance on research use; conditions that affect the facilitation and prioritisation of research activity, including incentives and levers; and the degree to which service user preferences are accurately known and prioritised); organisational cultures (e.g. the distribution of power between different groups, perspectives on whose responsibility it is to encourage evidence use, current norms and practices, the climate for innovation); and organisational facilities and resources (e.g. time, equipment). In relation to external context, the knowledge mobilisation literature emphasises three key related aspects: the social and political climate/culture; the degree of environmental stability; and the extent of interorganisational communication and norm-setting. In relation to social and political culture, authors have highlighted a range of issues, including the general climate for research production and use (e.g. how those who fund research, universities, researchers and research users support and value knowledge mobilisation efforts) and the influence of policy directives (e.g. the increasing requirement that schools attend to evidence about effective practice as part of continuing professional development, leading to an increase in the number of school leaders taking on knowledge broker roles). 127
There is broad agreement, therefore,33,75,157 that context is an important (if poorly understood) mediator. It is a feature of many models of barriers to knowledge uptake,100 and analysis of context is one of the five common components shared by the majority of models of the knowledge mobilisation process. 19 Indeed, a large proportion of the knowledge mobilisation literature is made up of analyses of ‘barriers and facilitators’. 18 For example, there is strong evidence for a wide range of generic barriers to effective research impact, including, for researchers, a lack of resources, a lack of skills and an absence of professional reward for research impact activities and, for research users, competing organisational pressures, an organisational culture that does not value research, a preference for other sources of evidence and a suspicion that research may displace professional skills and experience. 145 In a similar vein, the Cochrane EPOC group classifies barriers to change into nine categories (information management, clinical uncertainty, sense of competence, perceptions of liability, patient expectations, standards of practice, financial disincentives, administrative constraints and a miscellaneous category). 76
Although identifying and addressing key barriers is recommended in many knowledge mobilisation models as an important consideration when choosing a strategy, many important barriers affecting knowledge use (e.g. difficulties arising from working in multiprofessional teams) are long-standing and complex, and not easily addressed in practice. 76 Moreover, this marked emphasis in the literature on understanding barriers and facilitators has been critiqued for leading to a narrow ‘technicist’ understanding of knowledge mobilisation rather than one that is attentive to knowledge mobilisation as an interactive and deeply situated process. 37
In contrast, the contextual approach taken by Ward et al. 37 proposes that a detailed understanding of local interpersonal interactions, shared experiences and networks may be particularly useful in considering how opportunities for knowledge mobilisation emerge or are constrained within an organisational setting. Thus, they see context as playing a dynamic and interactive role with local actions, not simply existing as a passive and inhibiting backdrop. This perspective emphasises the importance of assessing the existing ‘naturalistic’ knowledge exchange processes that are already occurring (e.g. in relation to other innovations or change programmes), and of building on these when planning formal knowledge mobilisation interventions.
It has also been noted that contexts are multiple rather than singular. Levin’s model of knowledge mobilisation75 refers to three types of contexts for the use of research: the context in which it is produced; the context in which it is used; and all of the mediating processes between these two contexts. Emphasis is thus placed on the multiple dynamics at play within each context. Other authors (e.g. Nicolini et al. 158) emphasise the extent to which any one sector (e.g. health care) will have different subsectors within it (e.g. clinical research, health services research, health policy) that may require different approaches to knowledge mobilisation. In that sense, contexts are not just multiple, parallel and perhaps overlapping, but are also nested. Indeed, the ‘complex adaptive systems’ perspective20,27,78 emphasises how the different levels of the system affect each other: interventions at one level are affected by, and affect, factors at other system levels. 35
There are differing views about the extent to which and how contextual factors can be managed or even influenced. Many authors (e.g. Greenhalgh et al. 31) emphasise that while context is important, it is also unpredictable and not easily controlled. Knowledge mobilisation activities are embedded within a system and changes will be sustained only if attention is paid to the factors that influence that system. 64 This rules out simple prescriptions for approaches that will apply across a range of contexts and points to the need to design, tailor, refine and evaluate any knowledge mobilisation approach with reference to the particular setting and alongside those who will be responsible for implementing the changes. 31,64 Advocates for an integrated knowledge mobilisation research approach (i.e. collaboration between researchers and knowledge users) emphasise that research knowledge has to be integrated with contextual knowledge (e.g. population data, local expertise, knowledge of the characteristics of the local setting) and that this integration is more likely to happen if the potential users are involved in the research process from the outset. 159
For agencies seeking to develop knowledge mobilisation strategies, then, a thoroughgoing and realistic evaluation of context remains central. However, while ‘context’ is a key heading in many models and frameworks of the knowledge mobilisation process, it is variably conceptualised and differentially understood. Moreover, there is divergence of views about whether context is a passive (usually inhibitory) backdrop or a potentially modifiable and co-optable ‘resource’ for the knowledge mobilisation effort. What is clear is that it is inadequate to treat context as merely a catch-all term for all that is not modelled: such an approach will disguise vital issues such as goal misalignment, power disparities and political practices. As yet, however, tools to assess and disentangle the role of context in knowledge mobilisation are insufficiently developed.
In developing our conceptual map from the review of reviews we were struck by how little attention was given in the knowledge mobilisation literature to the ultimate recipients of public services: service users, their carers and families. It is to this element that we now turn.
Involving the public and service users in knowledge mobilisation
Although there has been increasing emphasis over the last two decades on the involvement of patients and members of the public in carrying out research, rather than only as research subjects,102,160 the evidence base on the impact of such involvement is currently limited. 102,161 Specific consideration of these groups as one of the potential audiences for knowledge mobilisation interventions is not a strong feature of much of the research literature on knowledge mobilisation. This may be unsurprising given that it has been argued that PPI in health research in the UK at least has been relatively ‘invisible’ despite the clear policy driver. 162
Public deliberation methods (e.g. citizens’ juries) are increasingly being used in health policy and in health priority setting in the UK, the USA and Canada, but more research is needed to understand and measure their use and impact. 163 Some authors (e.g. Oxman et al. 164) have made suggestions for interventions to engage the public in evidence-informed policy-making through the mass media, through civil society groups (e.g. patient and carer organisations and statutory organisations) and as consumers, and for measures to enable them to have greater influence in these settings. 165 They note, however, that there is little evidence to date about the effects of public engagement in health policy.
In thinking about patients and members of the public as an audience for knowledge mobilisation interventions in the practice field, a recent review144 grouped these interventions into two categories. The first category is patient-direct knowledge translation interventions, which aim to influence patient outcomes directly (by improving patients’ knowledge and potentially thereby improving their health behaviours, their use of health services, etc.). This includes interventions such as mass media campaigns, patient decision aids or self-help groups. The second category is patient-mediated interventions in which the aim is to change the behaviour of health professionals and thereby affect patient outcomes indirectly; interventions in this category include coaching to enable patients to communicate more effectively with health professionals and question cards prompting patients to ask specific questions during consultations. The Cochrane EPOC group defines patient-mediated interventions as those involving the collection by patients of new clinical data (e.g. blood pressure readings) which are given to health professionals. Stacey and Hill144 have expanded this definition in the knowledge mobilisation context to include interventions targeted at patients that aim to improve knowledge use by health professionals; these are mainly interventions directed at improving patient–health professional communication.
Current evidence suggests that patient-direct interventions to improve patients’ health literacy have the most consistent positive effects on patients’ knowledge and, to a lesser extent, on their experience and use of health services. For example, question prompts and coaching have been shown to increase patients’ knowledge of their condition and their participation in clinical decision-making, while the use of patient decision aids can increase patients’ participation in decision-making, their awareness of the available treatment options and their perception that the subsequent decision about treatment was in accord with their values. 144 There is insufficient evidence as yet to show whether or not patient-mediated knowledge mobilisation interventions have an impact on health professionals’ behaviour. The substantive knowledge base used for any of these interventions (e.g. the evidence underpinning a decision aid or question prompts) is clearly critical to their functioning as a mechanism for increasing knowledge use.
A systematic review of PPI in health and social care research166 found that, although the evidence base is limited, it is possible to identify a number of impacts that PPI has had on research and the research process, including on aspects that may contribute to subsequent knowledge use (e.g. the identification and prioritisation of research topics, improving the feasibility of research designs, better recruitment to studies and increased dissemination of results). The authors comment that there has been relatively little theoretical development or conceptualisation in this field and suggest that one of the priorities is to develop a comprehensive theoretical model and instruments that could be used to measure the impact of PPI. They note that there have been positive and negative impacts from PPI and suggest that PPI should be regarded as a complex intervention that requires due account to be taken of ‘what works, for whom, in what circumstances and why’. 155 Two HS&DR programme-funded studies using a realist evaluation approach to assess the nature, processes and impacts of public involvement in research are currently in progress in the UK (www.nets.nihr.ac.uk/programmes/hsdr; study reference numbers 10/2001/36 and 10/2001/41).
Further progress has been made since that review was carried out. A recent large multiphase study on the impacts of user involvement in health and social care research has reviewed the evidence and conducted new empirical work on the values and impacts associated with public involvement. 161 The study has resulted in a framework to assist researchers in developing public involvement plans and assessing the impact of public involvement: the Public Involvement Impact Assessment Framework (PiiAF). 102
Our conclusion from the knowledge mobilisation literature is that the potential for involvement of the public and service users is currently underdeveloped in the field. Where attention has been given, it has largely been directed at engaging the public and service users with the research production process – not with the communication, reinterpretation and actioning of any knowledge so created.
Concluding remarks
Taken together, the six domains of our conceptual map, the elaborations of arguments within each, and – most importantly – the interactions between them provide a dynamic account of knowledge creation, communication and action. Our subsequent empirical work (see Chapters 4 and 5) shows that different agencies are differently focused on the various domains, with varying assumptions and framings. Our suggestion is that more systematic investigation by agencies of all of the domains (and their interactions) may help them to surface assumptions, highlight tensions and create greater coherence.
Chapter 4 Findings from the interview data
Introduction
This chapter reports the findings from the 52 interviews with agencies in health care, social care and education. We firstly set out how agencies described their role in relation to knowledge mobilisation: how central knowledge mobilisation was to their activities and mission and whether their knowledge mobilisation approach emphasised research products, knowledge brokering or implementation. Secondly, we explore how they had arrived at this approach: what had driven its development and what factors had contributed to it. Lastly, we examine how agencies were evaluating their knowledge mobilisation approaches and what they had learnt from formal evaluations and from practical experience.
Throughout this chapter we use the term knowledge mobilisation. We explained to interviewees that we had chosen to use the term knowledge mobilisation in the study as a broad term to cover activities aimed at sharing research-based knowledge and encouraging its use in policy and practice. We asked interviewees whether or not that term was used in their organisation and what other terms were in common use (this issue is also addressed by the wider survey reported in Chapter 5). We found that, although some agencies were using the term knowledge mobilisation (including one agency that had specifically chosen the term because to them it was the one that suggested the least linearity), a range of other terms were more commonly used. These included getting evidence into practice, knowledge exchange, knowledge translation and variants of evidence-based and evidence-informed policy and practice. Some interviewees made specific comments on the term knowledge mobilisation: some felt that it was too narrow in its emphasis on ‘knowledge’ or that the term gave the wrong emphasis: ‘we don’t think of ourselves as a knowledge mobilisation organisation because it isn’t knowledge we’re trying to mobilise, it’s practice and policy’ (17).
Although many different terms were in use, there was considerable agreement that the language used around knowledge sharing or evidence use could cause communication difficulties and that it was very important in external communication to use language that the organisation’s stakeholders could relate to:
What we’re trying to do is communicate in ways that will engage people and bring them in and allow them to see that the things we’re talking about are absolutely germane to their everyday activity . . . and we don’t want to get confused in layers of terminology.
13
One agency stated that they used the term knowledge mobilisation only when talking to academics, and that otherwise they tended to talk about research impacting on practice. In summary, many of the organisations in our study were well aware of the debates in the literature around terminology in this field but were more concerned about ensuring that in communicating with their target audience(s) they used terms that were relevant and accessible to these groups. The terms used by interviewees in talking to the research team may not, therefore, reflect those that they used in their own knowledge mobilisation activities.
How agencies defined their role in knowledge mobilisation
In this section we firstly consider how central knowledge mobilisation was to the organisations we interviewed. Secondly, we look at how they defined their role in knowledge mobilisation and consider these roles in relation to three broad categories: research products, brokering and implementation. Lastly, we discuss how agencies described their target audiences and, in particular, to what extent they saw service users, patients or other lay groups as part of their target audience.
The centrality of knowledge mobilisation
We were interested in the extent to which interviewees saw knowledge mobilisation as a key component of the organisation’s work. Although we had selected agencies for interview on the basis that they were major players in knowledge mobilisation (in terms of either scale or the degree of innovation), there were significant variations between agencies. Some funding agencies had at their core the need to encourage the implementation of the research that they funded: ‘the idea that research should be useful is very, very core to everything that we do’ (19).
One interviewee described how their organisation’s mission was to excel in the creation of new knowledge and its translation in order to bring about a stronger and more effective health-care system and improved health for the population. They emphasised that providing the tools to enable researchers and others to move knowledge into practice was a core part of their work: it was not enough merely to create the new knowledge. This affected every stage of the processes they followed in funding research:
So the philosophy being if you’re really going to do this you need to bring end-users and the whole gamut of stakeholders into the discussion even before the application for something goes in. And so we have a number of kinds of integrated knowledge translation programmes.
8
Another agency had a different explicit goal in mind for which encouraging the use of evidence was one of the key mechanisms:
So, if you like, we’re trying to narrow one gap, the attainment gap between rich and poor, by narrowing another, which is the gap between research and practice.
18
Some producer agencies saw knowledge mobilisation activities as an integral – and funded – part of every research project that they conducted, while for other producer agencies, knowledge mobilisation activities were part of some but not all research projects carried out.
Unsurprisingly, given that the agencies had been selected for interview on the basis that they were regarded as particularly innovative in knowledge mobilisation or as involved in knowledge mobilisation on a large scale, and that this had been flagged up as one of the topics that the interviews would cover, most interviewees were able to describe their organisation’s overall approach to knowledge mobilisation. However, some interviewees struggled to define the activities that stemmed from this role:
. . . we are very clear that our purpose in commissioning the research we do and making awards to support research is to try to influence what happens in practice. However, I think we haven’t really found a way to crack that yet.
41
We asked interviewees to describe how they saw their organisation’s role in relation to knowledge mobilisation. These descriptions could be clustered into three broad categories: roles that emphasised products (describing ‘push’ activities and in some cases activities aimed at encouraging ‘pull’);10,53 roles that emphasised brokering; and roles that emphasised fostering implementation. It is important to note that these role categories overlapped within individual organisations: in many cases organisations were undertaking a combination of activities.
Emphasising products
Some agencies had a specific role in relation to the collation and synthesis of research evidence and its subsequent dissemination. Others described how they saw producing products to convey messages from the research that they funded or conducted as just one of their roles. Several interviewees emphasised the importance of moving beyond pure academic outputs and presenting material in different forms depending on the target audiences. Even agencies that conceived of themselves as brokering agencies sometimes felt that they defaulted to pushing products: ‘we all talk about we should be concerned about pull rather than push, but . . . when we looked at what we did most of the time it was still a pretty high push component’ (7).
In developing products, agencies often worked in partnership with others. Several agencies worked with managers and practitioners to co-produce products that drew on the research literature and on practical experience and that used accessible language. In addition to using a range of products to convey research findings, many organisations used a range of formats (e.g. casebooks, toolkits, online material) to share success stories around implementing evidence-based practice.
Some organisations were using more traditional methods (e.g. producing guidelines and reports) while others had begun to use different approaches (e.g. TV for practitioners, videos, animations) to convey messages in more creative and accessible ways. Two agencies described how they had set up ‘artist in residence’ posts to encourage the use of a wider range of approaches to presenting ideas and information from research. One agency had used young people as actors to present the words and experiences of participants in a research project involving young people. Many agencies provided online resources and some had taken steps to make these resources more interactive and responsive to the specific questions brought by the user. One organisation used route maps (similar to public transport maps) as a way to set out research findings and to encourage practitioners to undertake ‘journeys’ tailored to their particular questions and starting point:
In one case, they kind of taped the route map out on the hall floor and everyone went and stood on a station they were interested in, just to get to grips with everything that’s there and being offered to them, and to think about how they set up enquiries . . . Some of the tube lines have got more complex. A station might have three levels on it, you know, a very complex theoretical piece . . . a kind of bigger picture with the underpinning rationale for a continuing professional development leader, and a getting started place for a class teacher or whoever.
17
For some agencies the emphasis of their work was to provide decision-makers with objective research analysis and synthesis and the main activity was to conduct such analysis and to present it in a form which was useful to the target audience, recognising the constraints of that particular context:
I think that what we do very well is we review the existing academic literature, interpret that in a policy context, and . . . do that in a way where we’re very transparent about what the gaps are in the literature.
3
The nature of the synthesis is driven by their views and that’s a very important issue for us. We don’t believe that any synthesis is necessarily useful for everyone.
7
We know that these folks are dealing with institutional constraints, interest group pressure and so on, and all we’re trying to do is try to make it easier to have research evidence be one input into their decision-making, but not the first and foremost one necessarily.
36
Agencies achieved this tailoring through approaches on a spectrum ranging from detailed discussions with the commissioning organisation to refine the research question, through to holding more general receptions to share the agency’s work with senior policy-makers. A strong theme that emerged from many agencies was that of enabling potential users (e.g. policy-makers, managers or commissioners) to understand more fully both the nature of the questions they were asking, for which research evidence might be relevant, and the nature of the available research evidence and its limitations:
I think, the advantage of our model is that people in my role develop a much closer relationship to the commissioners within the local authority . . . usually they sit down and say, ‘We don’t know where to start really, what do you think?’ And then we can talk it through . . . helping them understand both the potential but also the limitations of the knowledge base.
12
You’re helping people by holding up a mirror and helping them really to answer their own questions, to an extent . . . sometimes what they want is synthesis or particular bits of research but, in our experience, mostly what they’re wanting is not what they initially thought.
7
This, in some cases, meant that the ‘product’ became less important than the earlier process of discussion:
I know from conversations with people in the [government department] that oftentimes they actually find an output . . . no longer that relevant. What they found really helpful is it helped them think through stuff.
3
We see the conceptual uses as being particularly powerful at that front-end about what are we actually talking about. Is this a supply problem, is it a distribution problem, is it a physician payment problem, is it something else? Because once you have that agreed, a much narrower set of options then appear relevant and a much narrower set of things need to be searched for in terms of evidence . . . In the area of health systems it’s not that often that the answer is so definitive that it lends itself to an instrumental use.
36
Some agencies did most of their work in response to external commissions for research and analysis; others had an underlying ‘mission’ around research impact that dictated the activities that the organisation undertook:
The other question, the one that really impressed me most when I came to the [name of organisation] is that the primary consideration is what kind of difference will this project make? And that’s terrific to be able to think about, that’s the right thing to think about, and that is definitely the primary question that’s asked.
9
One interviewee commented on how this was very different from their experience of working in an academic setting:
When we start out on any projects or have ideas, we’re constantly thinking about the impact this is going to have and for who . . . Doing something just because we find it interesting isn’t . . . a sufficient reason for doing it. There always has to be a, you know, who’s this going to impact on, who’s the audience for this.
39
Several agencies that conducted research emphasised the importance of their being seen to be non-partisan [‘we try to be straight-shooters on an issue where there are not a lot of straight-shooters’ (32)], while other interviewees spoke of some organisations as having a particular perspective and ‘voice’:
Everybody knows what the [name of organisation] thinks, or the [name of organisation], what take they’re going to have on something, and I think this comes back to our ambition. We want to be objective, and so you kind of weigh it a bit. Yes, we could comment, but at what point does it become partisan almost or political?
3
Some agencies that focused mainly on research products had expanded their role over time and were starting to consider how they could become more involved in developing the capacity of those in the field to use evidence. Others saw their role purely in terms of producing robust evidence, or as being the custodian of a set of public data that only they held, enabling people to use those data to inform policy and practice.
So, while most agencies were involved in producing research-based products in one form or another (and few focused solely on such products), there was a sense that creating and sharing products had an attraction and a momentum that were hard to resist. Innovation in the field was present (with new types of products being produced), but balancing this production of artefacts with more interactive, dialogical activities remained a tension.
Emphasising brokering
We included in our sample agencies that were seen primarily as intermediaries. For these agencies, but also for others, a main focus in knowledge mobilisation was enacting a knowledge brokering function and supporting interactive exchanges around research. The term ‘knowledge broker’ was not always used explicitly within organisations: ‘that is not a term that’s generally used in this organisation, but I think that is a large part of our function’ (35).
While for some organisations the agency itself was the broker, other organisations had employed individuals in knowledge broker and boundary-spanning roles:
We’ve worked at three levels in terms of our partnerships . . . one has been at the commissioning level, one has been at the managerial level, be that executive level or middle managers, and the other has been working with frontline staff . . . What, for me, has been one of the significant things is how we’ve developed people within [name of organisation] who have clear boundary spanning roles that they operationalise very effectively at those different levels, and how those people also act as knowledge brokers.
23
Several agencies put on a range of events to foster dialogue (e.g. seminars, round tables, ‘Cafe Scientifique’ public discussions or lunchtime meetings in government buildings):
We hold mini-dialogues where we might bring together . . . 25 to 50 people who were really interested in this particular issue. These are key interests who actually have the ability to do something differently in provincial or territorial governments or regional health authorities or medical associations or life insurance companies or whatever, depending on the issue . . . [The researcher] would present the findings of their research and then we would start asking some questions about how people could actually use this information in their work.
30
Facilitating and supporting networks of practitioners, managers and researchers was a key way that many agencies performed this brokering role:
That network was conceived and built as a network that would put together the basic scientist with clinical scientists, with health services kind of people, with population health kind of people or social scientists. So it put people together, starting 14 years ago. The first couple of years was like babble [laughs] . . . today I would say that particular network . . . has a huge legacy of having created, within that particular pool of people, an understanding that wouldn’t exist in all areas.
8
Several interviewees emphasised the organisation’s role in providing the opportunity (whether face to face or in online networks) for people who would not normally meet to discuss different perspectives and to consider pluralistic methods and the integration of research evidence alongside practitioner and service user expertise:
This intermediary brokerage around supporting and initiating interactions, basically building relationships between practitioners and researchers, and also between practitioners and other practitioners about their use of research.
2
We see ourselves as trying to inform, to help policy-makers access research findings, and [to help them to] commission . . . research that’s useful to their needs, and we see ourselves helping researchers by knowing what policy agencies are interested in and by forming partnerships and research questions that are of interest to policy-makers.
44
We see our role as the sort of water carriers, sitting between policy-makers and scientists, walking between one and the other, carrying information, changing the way one group thinks in order to be able to speak to the other group.
11
Other approaches that agencies were using to try to break down barriers between ‘researchers’ and ‘users’ included providing teaching for pre-registration education programmes for practitioners; having research users on grant funding panels and giving equal weight to assessments of potential impact and assessments of scientific merit; quarterly research meetings that provided an opportunity for local practitioners to meet and discuss with researchers how the research conducted locally fitted with the problems that local practitioners were facing; and fellowships and secondments to enable practitioners and policy-makers to work with the research organisation. One organisation had created a ‘media centre’ to connect researchers and journalists and a parallel centre to connect researchers and practitioners. Another interviewee described the establishment of a ‘rapid response’ research advisory service. The service was publicised by a video on the organisation’s website and by postcards and other promotional materials to explain what the service offered and how to access it. The service was managed by a researcher who had the skills and experience to be able to interpret the initial enquiry and either respond to it or broker it to other individuals or organisations as appropriate.
Some agencies saw their role as being to provide a space to enable people to ‘take a step back’ and think creatively about what sharing evidence might look like, both in relation to their own context and more broadly. Many organisations emphasised the need for humility and openness about the lack of obvious ‘answers’ in many situations:
So much of this is people skills, you know, sitting down, being prepared to say I don’t know . . . we are not there to tell people what to do, we are there to bring people together and provide them with the structures to think and talk and arrive at their own conclusions about what they want to do.
11
We don’t try to offer people the answers, but we just try to create the framework within which people can think strategically and imaginatively. That’s really hard . . . we deliberately, I think, wanted to be an organisation that spent a lot of time listening and making sense of things with people, and then feeding back some tools. Say, OK, here’s one way of looking at your situation, how does that help you make sense of it?
26
The importance of discussions leading to practical action was emphasised by several organisations. They were keen to bridge gaps in the existing infrastructure and to enable groups to talk to each other as the precursor to collaborative practical projects:
Right from the word go we said we didn’t just want to be a talking shop, we didn’t want to be a discussion group . . . we used what came out of those discussions as a foundation for real collaborative projects where we try and identify gaps in the infrastructure . . . and then pursue those and try to get them off the ground.
2
Advocacy for research-based evidence and its value in decision-making in policy and practice was another component of the brokerage role for some agencies:
We certainly don’t see evidence as the only way decisions are made, and in fact it’s rather a small part in lots of decisions, but we had to make the case of why it’s valued and not just take it for granted. It’s a fairly obvious thing to do, but particularly from the point of view of our target audiences, i.e. decision-makers, who have so many other pressures. So I think we have to continue to make the case and engage with the real world of politics and suboptimal decision-making.
16
One agency described how part of their strategy was to put a narrative or 'human face' around the numbers, enabling policy-makers to understand and remember the data more easily. For other agencies, a key role was encouraging researchers to engage more fully with the worlds of policy and practice, and supporting them to do so:
We do think that we have a role in promoting the importance and the benefit to the wider academic community of being involved with wider society, and that we will help them and provide them with tools, knowledge, resources, people, to be able to understand how they can best transmit the knowledge that they’ve created, but also benefit from wider knowledge in society.
43
I think also to get [researchers] not to think of it as some extra thing that is really kind of challenging but to think of it as sort of integrated within their project. And I think if you asked a straw poll of academics and said ‘Do you know what knowledge exchange is?’ people might get a bit panicked, but then if you said to them, ‘Do you want to influence practice with the research?’, everyone would probably say yes, and then maybe give you lots of ideas [laughs] about what they’re doing. So I think it’s trying to kind of demystify it in some way.
25
The interviews, then, demonstrated the enthusiasm and commitment of many agencies towards bridging and linking across multiple worlds. Moreover, they brought to light a range of innovative strategies for accomplishing this, and recognition of the need to link talk and tools to action.
Emphasising implementation
Some organisations saw their role as supporting evidence-based improvement in frontline services. For example, some health-care agencies had comprehensive implementation programmes working with all of the key stakeholders: bringing together clinical experts in their chosen specialties with experts in knowledge mobilisation and with information analysts; determining with health commissioners and providers those areas of care most likely to benefit from changes in practice; and amassing the most up-to-date evidence and working with providers to implement agreed changes on a trial basis initially, and then rolling them out more widely, dependent on ongoing evaluation showing that the improvements were being achieved and sustained:
. . . hands-on implementation in the real world, very messy work, where we have partnered with our NHS organisations to work with them on their priorities and, in so doing, to bring in a rigorous approach to implementation and knowledge mobilisation that often NHS organisations have not used so much in the past.
23
Key to such work was often a philosophy of co-production:
That was some initial work which we deliberately set out to do to capture some of the innovation that was happening in our NHS partners at the time . . . because it was a very useful way of actually acknowledging to the NHS partners that they had a lot of expertise to share and contribute . . . It wasn’t just [the agency] coming in as the so-called expert.
23
If the research is co-produced, if it is co-owned and co-defined and there is involvement and uptake, as the work proceeds, then you do away with those artificial distinctions [between the producers and the users]. You are just people round a table who bring different skills to the mix. One is not better, one doesn’t produce a product that you then have to try and implement, you’ve actually worked jointly on a problem that is evolving as you go along.
24
Typically, such agencies used a range of approaches from improvement science or quality improvement [e.g. Plan-Do-Study-Act (PDSA) cycles, implementation bundles] and provided ‘capacity building’ training to foster research appreciation and research skills through training events, fellowship programmes and secondments for health and social care professionals:
We then have stuff around skills and confidence for the workforce . . . So that’s really working with people to demystify some of this, to show them that it is actually core to their everyday business.
13
We’re really talking about . . . turning research evidence into usable formats at the point of care . . . but also [about] teaching health professionals how to go about using that information at the point of care in their everyday decision-making.
31
The implementation supports provided by one large agency included tools and examples from a range of facilities (so that organisations could find one similar to themselves); pieces of evidence combined in ways that made implementation and spread easier (e.g. implementation 'bundles' of the 'necessary and sufficient' components); practical tools and steps, including information tailored for different audiences and generic tools (e.g. a budget plan, quality improvement measures) that saved individual organisations from having to devise their own; and well-respected champions (e.g. from professional bodies). This agency's underlying philosophy was to be an 'ally' to organisations implementing change:
. . . so that the facilities feel that we’re on their side, and this is about helping you to help your patients and to make your life easier . . . it is a friendly ‘let’s do this together because we want to’; kind of the intrinsic motivation as opposed to extrinsic motivation.
30
Another agency employed ‘implementation fellows’ embedded in bridge-building roles in hospitals, with initial funding, education and training provided by the agency and subsequent funding provided by the hospitals themselves. They were now investigating the possibility of these individuals acting as regional hubs and providing more of an outreach role beyond their primary organisations.
One agency described how they saw their role as enabling organisations to take the next step towards implementation: providing them with practical suggestions applicable at their level:
I think often we see that people are too theoretical . . . or they’re too dogmatic about a particular approach, and people go, ‘well that’s not going to work here’, and then quickly just push it off to the side.
30
. . . we get into the [basics] of how could you actually start doing this . . . if you wanted to get started on this tomorrow you could just take this one document and start working through it.
30
Several interviewees emphasised their organisation's role in helping to bring implementation science to life: providing consultancy services and tools to help service providers re-examine their services, and equipping them with tools for data collection and analysis. The aim was to turn implementation science into practical tools for research and analysis. They recognised that, although theories of change were important, what services often needed were practical models to help them think about their own circumstances:
It’s often review of existing services, trying to re-engage what they’re doing with the evidence . . . Services that have been up and running for a long time have often drifted quite a long way from initial conception and, they have a particular kind of implementation reality which they have to respond to. So we’ve been trying to develop . . . services which help them review that and then take the information from that kind of review back into their practice.
21
There were several ways in which agencies sought to embed implementation more fully and to ensure that their activities around implementation had a stronger and more sustainable foundation beyond the individual projects. One way was by adding to the wider knowledge base:
I’m very committed to being part of a scientific community and making sure that the work we do is contributing to theory building and the collective gradual accretion of empirical evidence about what works in implementation and what doesn’t . . . there’s a very active kind of feedback loop between what we’re doing out there in the field, working alongside practitioners with our sleeves rolled up, and policy-makers too, and then feeding that back into how we’re thinking about the theory.
21
This required engaging with research funders to try to persuade them of the value of more robust and insightful implementation research:
. . . gradually persuading funders to start thinking of things as an implementation as well as an impact evaluation. And then increasingly we’ve started to see funders commissioning just implementation work.
21
A second way to embed implementation more fully was to extend the geographical spread of promising practices. Several organisations used a ‘train the trainer’ model so that research capacity building programmes spread more widely. Other agencies had an explicit regional or national remit:
One of the things that we aim to do as a national organisation is to spread the outcomes of work that’s making a difference across the country, share it with others . . . once you’ve evaluated, [and found that] this project has really made a difference, then we do webinars [and workshops] on it to share that across the country.
30
I think what we’ve learned, particularly in social care, is that essentially an organisation like ours is primarily about influence . . . we might influence using certain levers but we’re basically about influence. So our strategies have been about . . . increasing our reach into the sector, looking at ways in which we can become better known, and then actually have people use our products.
14
In sum, some agencies that funded research had always had a mandate for implementation; for others, a focus on implementation was a more recent development. Agencies used a range of methods to encourage researchers to consider the potential for implementation, either at the initial funding bid stage or through additional funding or recognition for knowledge mobilisation activities at the end of the project. Conversely, some agencies that funded service innovation projects sought to ensure that these were underpinned by research, requiring those bidding for implementation funds to demonstrate the evidence base for their proposals as part of the bidding process. A concern with sustainability and spread was evident in many of the interviews.
Target audiences
Inevitably, the target audiences of the agencies we interviewed were closely linked to their overall role in relation to knowledge mobilisation and to the structures of their sector. Most commonly for agencies the main target audiences were policy-makers and/or those working to deliver services (e.g. professionals, managers or commissioners). Many agencies had a broad range of groups as their target audience:
Our main stakeholder groups for all of our network activities include policy-makers and government and other places, system planners, researchers, clinicians and service providers, people with lived experience, and family members.
28
For others it was broader still, ‘from bench to bedside and back’, including groups such as basic scientists and even those in other sectors (e.g. agriculture).
Some interviewees commented on how important it was to keep the target audience in mind:
It is very easy, in my experience, for producers of knowledge products to become very caught up in the methods of production and to forget about, or not pay sufficient attention to, who it’s actually for. So we have tried to get much, much better at understanding our audience.
14
For each initiative I think we try to get clear on who we’re really serving.
30
Target audiences were not fixed but could evolve over time; some agencies found that the work they initially did with one group was increasingly used by other groups. Conversely, one social care agency found that the voluntary sector organisations with which the agency initially worked struggled to identify their knowledge gaps. The agency therefore switched to working additionally with local authorities as partners and found that these organisations had a much clearer sense of their own knowledge gaps and could work more effectively with the agency to improve practice within the sector.
It was not uncommon for agencies to start with a defined audience (e.g. researchers and government) and then to broaden this out to other groups:
Now we’re talking to professionals, we’re talking to people who are actually doing the implementation and also other financers, I mean like insurance companies, that want good-quality care.
27
Another agency described how national legislation resulting in major changes to the structure of the health-care system had opened up a range of new local policy-makers, commissioners and community bodies as part of their audience. Other changes in the UK health and social care sectors, aimed at bringing health and social care more closely together, had affected the work of some agencies, which found that they now had larger and more diverse audiences with which they were less familiar:
The integration of health and social care is going to broaden our audience significantly, which is quite a challenge, because then it’s about building new networks and connections, which is where a lot of the energy goes in terms of both promoting our organisation and its work and sharing knowledge and information.
35
Interviewees commented on the need to tailor approaches and products for the different audiences (e.g. service commissioners or frontline practitioners). At times, however, there was tension over how to allocate resources and attention across the various target audiences:
We’re trying to do this work with schools and we think that that’s wonderful, but you know, policy-makers have money and some leverage [laughs] and we would like to influence them very much.
7
A lack of resources could also prevent agencies from differentiating between sectors of their target audience (e.g. between different health professional groups):
We have thought a long time about segmenting them, there’s a lovely long set of documents about how we describe them to define the audiences [laughs] but that’s largely for market research, because when it comes to producing the guidance we don’t have the resources to tailor the products for those different professional groups.
40
The extent to which research producers worked alongside potential users varied; some agencies did a lot of partnership work in carrying out the research:
The other thing that’s really core to the way we work is we do a lot of partnership work, so we do a lot of close work with external organisations. So all of our research projects would have non-academic partners of some sort on them, either as partners and researchers, or in an advisory capacity . . . we’re trying to embed relevance in the project from the start, and we’re trying to open up communication channels with potential users from the start.
19
Another agency described how involving potential users from the outset made sense not only in terms of increasing the relevance of the research but also in terms of increasing the likelihood that it would be disseminated and used: ‘You’re building a kind of sea of champions’ (33).
Identifying and understanding audiences was a core component of many agencies' work. However, tensions became discernible as agencies expanded their view of who the relevant audiences were and tried to cater for those audiences' diverse needs. Knowing to whom you were trying to transmit messages, or between whom you were attempting to broker, often brought with it a clearer understanding of the challenges rather than simplifying the actions required.
Involving service users and members of the public
We asked interviewees whether the organisation involved service users (e.g. patients, carers, parents, pupils or clients) or members of the public in their knowledge mobilisation activities. Some organisations had a clearly articulated approach to involving service users and/or the public in their wider activities, but only rarely did this involve specific roles for these groups in knowledge mobilisation. For example, some health research funding agencies had a requirement that the patient perspective be included in all research proposals and they had both patient panels and patient representatives on the commissioning bodies that made decisions about research funding. Another funder described how they held around 200 ‘Cafe Scientifique’ public discussion events every year. Other interviewees stated explicitly that their target audiences were groups other than the public or service users (e.g. policy-makers or practitioners):
I’m not sure we have the resources to access that community [the general public] and I’m not sure that’s the best way of achieving impact either somehow.
3
One interviewee commented that the thinking about including service users and members of the public within knowledge mobilisation activities was at an early stage in the sector:
In terms of knowledge mobilisation I don’t think people are necessarily thinking about it from that perspective, about patients or the public being an agent for that, but I think they probably could be.
35
One interviewee from a health research organisation commented that, although their partner organisations were increasingly familiar with involving patients and members of the public in research projects, it was less common to involve them in service improvement projects and so this had been a challenge, not least in identifying groups from which to draw participants. The interviewee also felt that it was important to add to the currently limited evidence base on involving patients and members of the public in implementation work and so they had conducted a literature review and written up some case studies to help other NHS organisations to learn from their work.
Another interviewee from a health research organisation described how she believed strongly that it was important to involve patients in knowledge mobilisation:
. . . there’s a quote in [Crossing the Quality Chasm] about ‘the transfer of knowledge is care’, and I feel there’s something really profound and important about that, that when a practitioner service is caring for a patient it’s not just the tasks and activities and processes of care, it’s how that knowledge that they both have is actually translated into making things better for the person. So I’ve always wanted very much to make stronger links between the resources of knowledge that are designed for patients and the public, and processes of knowledge that are designed for practitioners and professionals.
38
For some, the inclusion of service users was driven less by the ethos of the service and more by pragmatic concerns:
I think the voice that actually helps bring all of the other voices together is the patient. If a basic scientist can say it clearly to a patient then they can say it clearly to a policy-maker and vice versa, and without, I think, having that voice at times, we tend to be too stuck in our own disciplines.
8
One agency commented that, although the inclusion of service users in determining the organisation’s activities was not new, what was new was that service users were now being asked to consider broader service and system issues:
So it really stretches people, and it can be positive or negative because they’re being asked for advice where they have to interpret their lived experience to provide a response, rather than just explain what their experience was.
28
For one health research organisation, their approach to providing patients with information had changed over time: initially they funnelled published information for patients through health professionals, but more recently they had begun to involve patients and their families directly in all aspects of their work, from protocol development to drafting evidence summaries. Another agency described how this side of their work had recently expanded because of the skills of a particular colleague and that it was increasingly becoming a core component:
. . . for us it’s become essential, because I see that the inputs coming out of that are profoundly complementary to what we’re getting out of the stakeholder dialogue . . . it’s almost one of those cases of I can’t believe we thought we could get away without doing this.
36
Another interviewee also commented on how valuable it was to have infrastructure to support activities with service users and members of the public; this agency had a member of staff who had experience working with this client group and who was able to help to equip staff with the necessary skills and to support the user group.
The need to avoid tokenistic involvement was a strong theme of many comments on this issue. One interviewee commented that engaging the public was a stated objective of the organisation but that in practice this was perceived to take second place to other work:
I think, bearing in mind how some of my colleagues have spoken about it, that they think it’s a poor relation to engagement with particularly the business sector.
43
The balance between producing knowledge products for the public and for professionals was raised by an interviewee from another organisation with a dual focus:
If you talk to different groups there is a differing set of opinions about what’s the balance of our focus on a public citizen audience versus a professional audience. We do in fact have a number of key areas of work where we produce pieces that are for both audiences.
42
Interviewees commented that one of the main benefits of having service users or patients so closely involved in the research was that it gave the research greater credibility and impact. Equally, they felt that it was important that both the researchers and service users benefited from their involvement.
In summary, involving service users or members of the public in knowledge mobilisation activities was not a primary focus for the majority of the organisations we interviewed. Although for some organisations (particularly in social care and in mental health services) this was routine, other organisations, while sympathetic to the principle, struggled to develop such activities in the face of competing priorities and in the absence of clear guidance in the literature or individuals with specific skills and experience in this area. Many interviewees were concerned that any such involvement should be meaningful rather than tokenistic and that it should provide benefit to the service users as well as to the research.
Drawing conclusions on agency roles
We commented at the opening of this section that these three role categories (emphasising products, brokering and implementation) were broad and not exclusive: in many cases, agencies were doing a combination of activities that did not fit neatly into one of these categories. There were also evident tensions in the agencies as they sought to balance their activities. In particular, the pull back towards the creation and promulgation of concrete products was palpable, and contrasted with the open-ended uncertainty of pursuing interactional activities and implementation support. What began to emerge as the data analysis proceeded was that agencies exhibited patterns or ‘bundles’ of practice around knowledge mobilisation, and we develop this analysis further in Chapter 6.
Looking across the agencies, we can see that there was differential engagement with different domains of the conceptual map (see Chapter 3). For some agencies their main emphasis was on knowledge products, suggesting a preoccupation with collating and synthesising particular forms of knowledge, while also being alert to the potential users of that knowledge. Other agencies emphasised brokering, suggesting a greater degree of engagement with the issues raised under the ‘connections and configurations’ domain of the conceptual map: what networks, roles or processes were most conducive to bringing diverse groups together around sharing research knowledge? For other agencies, their main emphasis was on implementation, suggesting engagement with particular areas of the ‘purpose and use’ domain and also close attention to the domains around ‘actions and resources’ and ‘people, roles and positions’. Context was an important consideration for all of the agencies, no matter whether their emphasis was on products, brokering or implementation.
Having discovered the rich range of knowledge mobilisation practices in agencies, we now turn to trying to understand how these portfolios of practices have come about.
Developing knowledge mobilisation in agencies
The interviews explored how agencies had developed their knowledge mobilisation activities and examined the relative influence of different factors. We also talked to interviewees about their views on innovation in knowledge mobilisation and asked them to what extent the academic literature had influenced the development of activities in their organisation.
In this section, we first explore what interviewees said about innovation in knowledge mobilisation. Next, we review the extent to which models, theories and frameworks from the knowledge mobilisation literature had influenced the organisation’s choice of knowledge mobilisation approaches. After this, we explore in turn what emerged as the other main drivers behind the knowledge mobilisation approaches: taking a strategic approach; the influence of funders and other powerful stakeholders; the role played by changing conditions in the sector; and the role played by earlier experience or evaluations. This final point acts as a bridge to the concluding part of this chapter, which examines in depth how agencies are using evaluation to shape their practices.
Innovation in knowledge mobilisation
We asked interviewees to highlight what they saw as the innovative knowledge mobilisation activities in their organisation. This question prompted reflections from some interviewees as to the very nature of innovation in knowledge mobilisation. One interviewee commented that the question raised issues about the state of the field: as there was a lack of established practice in knowledge mobilisation, almost any activity could be deemed to be innovative. Another interviewee commented that ‘innovation’ had become ‘something of a bandwagon’, and that given the broad spread of activity in the knowledge mobilisation field (with some agencies taking a more proactive stance than others) there was a tendency for some organisations to claim as innovative practices which would be regarded by another organisation in the field as established and mainstream: ‘I’m always a bit wary of claiming to be innovative or at the cutting edge, although we do seek to be’ (13).
Someone gave this whole talk about innovation and then talked about putting on an exhibition in the city centre and I was like, hmm, haven’t we been doing that for quite a long time? Is that innovative?
19
Some interviewees perceived that what they were doing was not particularly innovative for the sector or for the knowledge mobilisation field, but that it was new for their organisation. One agency saw as innovative the fact that they were now treating research as a whole process and wanting to involve different parties in it. Among the measures funding agencies were taking to make research more useful was working with researchers who submitted unsuccessful funding bids to refine their research questions and designs, rather than simply rejecting the application. They were also broadening the range of research designs that they funded and encouraging the publication of a wider range of studies, including more observational studies and so-called 'failed' studies:
You’re not going to be able to publish, you know, nice observational studies in top journals. And that’s something that we can influence also, as a financer. Also things like paying for publication of research which has failed [to find things] because people can learn a lot from that.
27
This agency was also focusing on de-implementation as part of the process of implementation: ‘OK, if this is new and cost-effective, what do we need to stop?’ (27).
Sometimes interviewees perceived that although the approach itself was not new, they had been able to go further with it than other organisations. For example, one agency commented that they saw their online network as innovative because it was working well, whereas their earlier experience of online networks was that they often struggled to gain initial momentum and to sustain involvement in the medium term. They considered that one of the success factors in their case was that the online network built on an existing ‘people-driven’ network.
Many interviewees commented that they did not think that their organisation was particularly innovative. They recognised that they were still using traditional approaches and needed to encourage more engagement with research knowledge:
So at the minute we kind of just put stuff on our website and, and hope people see it and use it. I think we’re looking to kind of draw attention to that a bit more.
6
The way we work is pretty much still research outputs modified to suit users, and essentially presented to users with a chance of some interaction, but I wouldn’t describe it as more complicated than that.
48
In discussing innovation in knowledge mobilisation, interviewees fairly frequently raised the use of social media. Some were enthusiastic about its potential to spread research messages significantly further (e.g. by alerting potential audiences to research findings):
Of course it’s limiting in terms of getting sophisticated messages across, but you can refer to the full-blown journal articles in your tweet and just have this incredible scale of potential audience, and very quickly. So yeah, I think it’s not going to be the answer in isolation at all, of course not, but it’s definitely a significant part of the mix.
16
Another interviewee described how they were using social media not only to promote research products but also to encourage interaction and feedback on those products:
So instead of just saying, here’s this great document, you know, send it out and use it, it would be, you know, here’s this fantastic document, come speak about it on our collaborative spaces and let us know how you’re using it so we can try to use it as evaluation as well.
33
Others commented that there were challenges in using social media because it was a relatively new mechanism and one about which some researchers and some organisations had concerns.
Overall, although we had purposively selected agencies on the basis that they might be innovative, we found that few agencies were comfortable with this term and that there was much ambivalence about whether or not innovation was in fact being achieved. Many respondents played down their organisation's achievements in knowledge mobilisation and were reluctant to claim the label 'innovative'.
The influence of models, theories and frameworks
Interviewees were asked about what lay behind their organisation’s approach to knowledge mobilisation, and specifically whether or not and how any specific models, theories or frameworks from the literature informed the approach. The extent to which the knowledge mobilisation literature had influenced organisations in developing their knowledge mobilisation activities was very variable. Responses lay on a spectrum from organisations whose approach was not underpinned by any obvious literature, through to those who were drawing in explicit and detailed ways on published models (Box 4 lists the main sources cited).
- The PARIHS Framework (Kitson et al. 1998 52).
- The KTA Cycle (Graham et al. 2006 1).
- Push, pull, linkage and exchange (Lavis et al. 2006; 10 Lomas 2000 53).
- The three generations framework (Best et al. 2008 63).
- The IHI Model for Improvement (Langley et al. 1996 49).
- PDSA cycles (Kilo 1998 50).
- The Greenhalgh model for considering the diffusion of innovations in health service organisations (Greenhalgh et al. 2004 31).
- Walter et al.'s three models of research use (Walter et al. 2004 60).
- The Consolidated Framework for Implementation Research (Damschroder et al. 2009 65).
- Fixsen's Stages of Implementation Model (Fixsen et al. 2005 33).
- The Levin model of research knowledge mobilisation (Levin 2004 59).
- School Improvement Model (EEF 71).
- TRiP-LaB's Develop, IMplement, Evaluate (DIME) approach. 167
- After Action Review (Bray et al. 2013 168).
- Ovretveit's three approaches to widespread change (Ovretveit et al. 2011 169).
- Dimensions of school leadership (Robinson 2011 170).
- Joanna Briggs Institute model of evidence-based health care (Pearson et al. 2005 171).
- Buxton and Hanney's payback framework (Buxton and Hanney 1996 172).
- QUERI Six Step process. 173
- RE-AIM framework (Best and Holmes 2010 20).
- Statistical process control.
- Stakeholder mapping.
- Contribution analysis.
- Process mapping.
- Implementation science.
- Systems theory.
- Knowledge ecosystems.
- Ecological systems theory.
- Network theory.
- Cognitive–behavioural theories.
- Organisational change theories.
IHI, Institute for Healthcare Improvement; QUERI, Quality Enhancement Research Initiative; TRiP-LaB, Translating Research into Practice in Leeds and Bradford.
Note
Many organisations had developed their own models and used these instead of or in addition to those listed above. Others adapted multiple models.
Among those organisations that were drawing on the literature, the degree of use varied from the explicit adoption of one or more published models to a more general reliance on theories or approaches set out there (e.g. basing the organisation's approaches on the concept of 'linkage and exchange', on relational approaches to knowledge mobilisation, or loosely on systems theory). Where interviewees named particular authors (rather than models) as having influenced their thinking, the same few names were mentioned (e.g. Weiss, Lavis, Lomas, Best, Davies and Nutley).
Some organisations positioned themselves very explicitly in relation to the literature and had a clear rationale for this choice:
We’ve looked at what seems to make sense to us, but also importantly what makes sense to our NHS partners . . . this was something that they could understand and identify with. So that’s been a key issue in terms of the different frameworks that we’ve selected.
23
One agency had commissioned its own expert review to inform the development of their knowledge mobilisation strategy; the interviewee described how this theoretical underpinning had helped to strengthen the agency's conviction about the approaches they were developing and had increased organisational support for them.
Another agency had, in consultation with experts in knowledge exchange and with policy and practice audiences, developed their own model to guide their work:
We needed to have a model that spoke to a broader range of knowledge and who is engaged in driving the process, not solely from the perspective of developing a research project . . .
42
Another organisation described how they had been strongly influenced by the work of Lomas and by Best’s three generations framework. 63 They wanted to challenge the dominant model of knowledge exchange in their sector (public health) and to bring a more relational approach to sharing knowledge:
The theoretical model that dominated everything was that medical model, that linear view of knowledge implementation, you know, knowledge, you produce it, you publish it, the message speaks for itself, people take it up, and if they don’t they’re stupid or lazy or wilful. That kind of very clear message. And, so there were a number of us . . . (who had) those early discussions about how can we run counter to this tide?
24
They emphasised that they were not following the models slavishly and suggested that a cooking metaphor was a good depiction of their approach:
We’ve got in our head how to cook translational cake or how to bake in this style, and we pull it in but not necessarily as a recipe, it is just an underpinning kind of set of skills and orientation.
24
Several organisations described an eclectic approach, drawing on more than one model, or on a range of ideas and approaches: ‘we were quite explicit that we would draw on several models because we didn’t feel that any one model was adequate.’ (1)
We’re very open-minded. But I’m sure some clever philosopher of science could say, well you’re actually very narrow [laughter], but there isn’t one particular approach I think that we’d say would be dominating.
16
Some agencies did not rely on the knowledge mobilisation literature but drew on related literatures instead or in addition. The quality improvement or implementation science fields were used by some agencies as sources of methods and tools for their implementation work (e.g. small cycle learning, statistical process control). Some interviewees had a disciplinary background other than knowledge mobilisation (e.g. organisational theory, cognitive–behavioural science, political science) that informed their work and whose literature they tended to draw on instead:
I guess maybe because my PhD was in political science and I’ve had so much exposure to that . . . it just kind of pervades all of my thinking. You know, I’m constantly thinking about policy legacies or path dependency and I’m thinking about different types of institutional constraints.
36
Other agencies developed initiatives first and then drew on the literature at a later stage. One interviewee described how the organisation’s work had initially developed organically over time and how he had only later drawn on concepts around communities of practice to inform further development.
Some organisations did not look to the knowledge mobilisation literature for inspiration; instead, they found ways to learn from those already working successfully in knowledge mobilisation across a range of fields:
It’s more in the people who have been working really productively and in particularly sort of focused spaces, where the real richness is.
26
Another agency described how they drew less on the literature and more on an ‘experience-based knowledge base’: they brought together people with experience in particular areas (e.g. collaboration, scaling up) to learn from their experience and then produced guides to help share this learning with others:
We ask them, OK, what did you do and what would you do again if you had to do it all over again, and what would you advise someone else who has to start doing that kind of thing now?
27
This interview question prompted many comments on the existing literature and on the adequacy of current models; several interviewees voiced frustration that the existing models and frameworks were overly complex, hard to operationalise or limited in scope:
Some of it is quite difficult, the language that’s used, sometimes it’s quite hard to put all of these disparate groups’ thinking together in a cohesive way, and say well what does that actually mean in practice?
35
I think theory is incredibly important and making conceptual distinctions is very important, but I rather felt that a lot of the complicated models with arrows going in every direction haven’t been particularly helpful. So we’ve tried to kind of always go back to basics and say what are the really crude basic building blocks here and what are we looking at.
7
One interviewee suggested that a further problem with models was that basing knowledge mobilisation activities on any one model was likely to prove restrictive, given the likely changes in contextual circumstances, policies and ideas in the sector. Another interviewee went further and expressed a personal preference for ‘getting things done’:
I have a bias here, I just want to get things done and I have very limited patience for elaborate theoretical frameworks. That’s just my own particular bias and it reflects my background in policy and in administration as opposed to academia perhaps.
15
Others commented that the evidence base was largely around facilitating change at a local level and that there was very little in the literature that could help national agencies to determine their approach. There were also comments that the challenges of knowledge mobilisation in the health, social care and education sectors made it hard to apply simple approaches from other fields:
Now that’s not to say that we can’t learn from how people do their marketing strategy and get people to buy stuff, but it’s just that it’s slightly more complicated than that, I think.
6
Interviewees who said that their organisations were not using the literature explicitly typically explained that knowledge mobilisation theories and frameworks were not at the forefront of individuals' minds on a day-to-day basis, although specific theories of change might be considered in relation to specific projects. We observed some gaps between 'espoused theory' and 'theory in use'. 174 For example, in some cases models and tools that were fairly prominent on an agency's website were not referred to by the interviewee, suggesting that these were not at the forefront of the work of that individual or organisation. Conversely, some organisations were using theories implicitly without acknowledging them explicitly. For example, one interviewee commented: 'I think we're shamelessly atheoretical [laughter]. It's a very pragmatic environment' (46); yet another interviewee from the same organisation described how their knowledge mobilisation approaches were designed to use peer pressure between organisations to encourage evidence use: an approach that implicitly drew on existing theories of organisational change.
In summary, some organisations were drawing on models, theories and frameworks from the knowledge mobilisation and/or other literatures (in some cases following an extensive literature review and consultation with experts in the field). Others had developed their own models and tools either from scratch or as adaptations of published work. Use of several models in order to combine the most appropriate features of each was common. No one model or set of models appeared to dominate. We observed a fair degree of frustration with the limitations of the existing models, theories and frameworks, which many interviewees perceived as overly complex and hard to operationalise. It was, therefore, not surprising to find that some organisations had developed their knowledge mobilisation approaches somewhat independently of the published literature, taking a more pragmatic approach.
Strategic and pragmatic considerations
More often, strategic and pragmatic considerations (rather than theoretical thinking) underpinned the knowledge mobilisation activities discussed in the interviews:
We come together and we say, all right, let’s look at the data, what is it going to take, what do we know, and then let’s talk about what we’re hearing in the field, what are people struggling with, and then design a strategy to meet that.
30
Early development was often described as fairly pragmatic, with greater clarity and the development of a more focused strategy emerging over time:
As the projects have developed it’s become easier because we know we’re trying to achieve something that’s . . . become . . . clearly defined as we’ve gone along.
2
Realistic assessment of the potential for change was also a driver. For one agency, being strategic meant choosing to do research only on policy areas that could feasibly be implemented in a reasonable time frame, and not focusing on those that were too far outside the prevailing discourse and policy direction.
However, organisations also commented that a focused and strategic approach did not rule out the need for adaptability:
That’s been the whole narrative of our evolution as a group, that we keep thinking that, oh well once we add this programme, we’ll be sorted out, you know, once we have [name of programme], surely that will help solve the problem, or once we have briefs and dialogues going alongside [name of programme], well that will be enough. And then over time we just realise that there’s so many pieces that need to be in . . . I sometimes call it a healthy eco-system for using evidence in decision-making.
36
Although some organisations described strategic approaches to developing knowledge mobilisation activities, the development of knowledge mobilisation often appeared as a combination of strategy and serendipity (emergent approaches, and ‘muddling through’):
The approaches we take – none of them are as we envisaged, they’ve all morphed in relation to need and contextual changes, and have evolved over time.
24
Thus, for many organisations, the development of their knowledge mobilisation activities arose from a combination of pragmatism and other factors rather than from strategic vision alone.
The influence of funders and other powerful stakeholders
Key drivers for many agencies, affecting the content or clinical focus of the programmes undertaken or the nature of the research to be shared, included political pressure from government or national bodies, changes in external reporting requirements for stakeholders (e.g. around readmissions or medication use) and research evidence 'horizon scanning' by leaders within the organisation. The remit given by their funder or governing body was also a major driver for many agencies, particularly those funded by governments. They recognised that their work was often driven by high-profile policy initiatives and that this gave them less control over their overall direction than some other organisations (e.g. universities):
There is also a very clear agenda from government . . . research funders have got to get into the space of knowledge mobilisation.
43
I would say that there has been a pressure or a push towards prioritising projects that typically go into decisions for money, or to important policy decisions.
45
Key individuals too had often played a strong role in shaping an organisation’s approach to knowledge mobilisation. For example, one interviewee described the influence of the organisation’s founding director:
He had a vision and he was able to communicate that vision to the funders, but also to other people working in the centre, about why these things were important, and that we shouldn’t just be ploughing on, we should be thinking about how we want our research to have an impact.
25
Another agency had been heavily influenced by the prior experience that key individuals had gained working in other national and local organisations involved in practice change.
In sum, funders (e.g. governments) or other powerful stakeholders outside the organisation or senior figures within it played a key part in influencing the content and nature of knowledge mobilisation activities for many of the agencies we interviewed.
Changing conditions in the sector
Recognition of changing conditions in their sector was another key driver for many organisations. For example, one agency described how they had considered that the types of ‘wicked’ problems that the public sector was facing in a difficult economic context meant that a social innovation approach was more likely to succeed than more traditional approaches:
We need to put our limited resources into things that might work, and test and learn and experiment . . . There’s a hunger to do things differently, to learn from the past or from the present, but learning quickly, so if you fail, fail fast and fail low cost . . . You need to just keep having new stabs at it.
16
One agency had been conceived at a time when funding was relatively generous and it was possible to have a vision of ambitious programmes with a substantial evaluation component built in. By the time the agency came to fruition it had become necessary to focus on what could be done internally and more cheaply. Several interviewees referred to responding to the economic downturn as a factor affecting the development of their knowledge mobilisation activities. One agency noted that, although this was not a prime driver for them, they were aware that economic conditions in the sector meant that there were fewer networking opportunities for practitioners. This meant that the agency’s role in encouraging communication through its own network was becoming increasingly important. Another agency responded to emerging pressures on media organisations: they recognised that the media was an important source of information on health policy and that many media organisations were cutting back because of economic constraints and no longer had the expertise to cover health policy stories adequately. The agency, therefore, set up a news service to ensure broad coverage of health policy issues and their implications.
Changes in the national political environment had also prompted changes in knowledge mobilisation activities: as a result of devolution, organisations in Scotland and Wales had gained new policy audiences for research evidence. Changes in the structure of the sector (e.g. new organisations in the field) led some organisations to change their approaches over time. For example, some health-care organisations in the UK needed to consider the role of the new Academic Health Science Networks and how best to work with them in knowledge mobilisation. Organisations in the social care sector in the UK were increasingly considering how best to link up their knowledge mobilisation activities with those of health organisations (and vice versa), in response to sector-wide developments aimed at aligning health and social care provision more closely.
Agencies had also been affected by changes in the university environment and in university ‘drivers’. Several interviewees, from a range of countries, commented that ‘impact’ had become a stronger focus for the organisation in the past couple of years:
I think the university’s thinking has changed around what matters. When I first started, I went to a seminar where some esteemed academic said ‘Well if you’re going to get money these days you’ve got to say you’re going to disseminate your research’ [laughter] in a very kind of cynical way which you just would not hear now. So the world has changed so much over that 10 years.
19
One agency noted that engagement outside academia was increasingly being accepted as the norm and that academics were increasingly aware that they needed to do that. This had led to growing interest in what the agency could provide to support academics in fulfilling that expectation.
Thus, changing conditions in their sector (e.g. new challenges arising from economic constraints or political developments; a growing discourse about impact) had prompted many of the organisations we interviewed to alter their knowledge mobilisation approaches in several ways: changing the focus or content of their activities, providing new services or programmes to address emerging deficits, or widening their target audiences to embrace new stakeholders.
Doing things differently because of earlier experience or evaluations
Some organisations made changes in their knowledge mobilisation approaches in the light of experience or evaluations. Several agencies described how an approach tried in the organisation’s early days had since given way to something different, learning in part from experience. One example of such a shift was given by an agency that described how it now considered implementation issues right from the start of the research process:
It was like: OK, at the end of research we think about implementation. And now we’re thinking about implementation from the beginning, like what are the knowledge needs of these target groups, what kind of questions do they have, what kind of research can you do to answer these questions then?
27
Interviewees gave examples of how early experience had led to new initiatives or to changes in approach. One organisation described how earlier ‘ad hoc’ initiatives around knowledge translation (e.g. involving the user from the outset) led to the development of a long-term user network to ‘avoid having to start from scratch’ each time. Another agency described how they were using knowledge broker roles differently: they had initially used volunteers as knowledge brokers, before realising that the role would be stronger and more sustainable if they had paid employees in that role. Another agency described how they now had people in these roles as part of a wider team rather than as individuals, and that these individuals had a knowledge brokering function alongside a wider project management role. Others spoke of learning the importance of securing both clinical and managerial support to enable brokering teams to work effectively:
You have to show that this is an organisational priority and that the team going through has the support of their leadership at the highest level to get this work done.
5
Change or growth in the agency itself was another driver for change in the knowledge mobilisation approach. One agency that had initially seen its role as that of an independent body assessing research or practice shifted to a more collaborative approach of supporting practice; in addition, it expanded its remit to include related areas (education, health) affecting its target audience. Another, which had originally envisaged being a ‘light touch’ hub with around 500 contacts, found that it attracted considerably greater interest and that the number of contacts grew to three times that figure. This brought new challenges in ensuring effective co-ordination and avoiding duplication between parts of the network. Another agency sensed that its earlier activities around effective co-production and knowledge exchange were becoming mainstream and was therefore moving to consider evidence use in practice more closely.
Feedback from stakeholders was also a driver for change in knowledge mobilisation approaches in some organisations. One agency received feedback that health professionals saw the organisation as an ‘ivory tower’ and realised that they needed to be clearer both about the organisation’s role and about how research evidence sat alongside other forms of knowledge including practitioner knowledge and patient preferences. Another organisation was alerted to new implementation challenges facing their health professional audience:
We were beginning to get feedback saying how are we supposed to manage this, what should we do with the other? So it was much more in response to feedback from users and the organisations we were sending the guidance to, rather than government or national bodies.
40
Some agencies drew on formal evaluations to guide the development of their approaches. One agency had had their work reviewed by an international panel; this had resulted in recommendations and additional resources aimed at increasing the agency’s knowledge mobilisation work.
Another interviewee described how the focus of the agency’s work had changed from an emphasis on trying to influence national legislative change to an emphasis on working through local policy-makers and practitioners. The driving force for this had been an evaluation of the effect of a successful publication:
Everybody liked it, it was all wonderful. And then when we did the evaluation of the dissemination we found that actually it didn’t really change anybody’s behaviour . . . It was successful because it was speaking to what people were doing before . . . So I wanted to achieve change and we thought that if we could do that, it would be local.
11
Feasibility research was used by one agency to explore demand and to enable broad scoping of the initiative’s parameters before developing a major programme:
We tested how journalists and the press were currently using evidence to inform their work in educational reporting. We found out that it was quite haphazard and it wasn’t particularly systematic in any way. So there was a place for a brokerage type of service that could help connect research and researchers to the press and the media.
2
We observed that, in general, organisations were more likely to draw on their own experience or evaluations than on those of other organisations. When asked whether there were other organisations that the agency looked to in developing knowledge mobilisation approaches, several agencies commented that they did not look to others as much as they believed they ought to:
I think we’re probably a bit too insular with probably not a very wide knowledge of what other organisations are doing.
41
However, other agencies did describe learning from cross-organisational work with a local research centre, or copying approaches (e.g. the use of social media) that had become an organisational norm:
Somebody suggested we use Facebook and Twitter. That’s not based on the research on theories either, it’s what all agencies with self-respect do.
45
Several agencies described being influenced by the work of Canadian organisations (e.g. the Canadian Health Services Research Foundation), which were perceived to have been in the vanguard of knowledge mobilisation:
That’s why we have quite a strong emphasis in our work on knowledge brokering, because that was something that they had quite a bit of debate around at that time.
44
Learning from earlier experience, then, or in some cases from more formal evaluations, was one of the factors that helped to shape agencies’ knowledge mobilisation activities. In general, this meant learning from their own work and experiences (of which more shortly): few of the organisations we interviewed looked to other organisations for learning in any systematic way.
Drawing conclusions on how knowledge mobilisation develops
Looking across the organisations we interviewed, we found that a range of factors had contributed to shaping the knowledge mobilisation approaches in use. Agencies were building on local experience and tacit knowledge; internal evaluations; personal inclinations and capabilities; and the interests of key individuals in the organisation. Methodological developments in the field also played a part for some agencies, although often in an indirect or diffuse way. The influence of funders or other powerful stakeholders and changing conditions in the sector were also significant drivers. We were interested to note that, while many organisations drew on earlier experience or on more formal evaluations in designing and carrying out their knowledge mobilisation activities, few had systematic mechanisms for learning from other organisations. The next section explores in more detail what agencies had learnt through their evaluations and from their practical experience.
Learning from evaluations and formative experience
In this final main section we first consider how organisations talked about evaluating knowledge mobilisation activities and what they said about the challenges of evaluating their strategies and operations. We then describe the substantive evaluations that had been carried out by organisations in our study and consider both the approaches taken and the key findings. The section concludes with an account of the formative learning that interviewees described.
How agencies talked about evaluation
Interviewee organisations fell into two broad categories: those that described a range of evaluation activities and those that were somewhat apologetic in describing what they saw as only limited activity in this area. Common to both groups were an expressed belief in the importance of evaluating knowledge mobilisation activities, a perception that robust evaluation posed considerable challenges and an aspiration to do more evaluation in future.
Many agencies emphasised that they saw evaluation as an important part of their knowledge mobilisation activities and some had sought additional funding to be able to evaluate the knowledge mobilisation side of their work. Some reflected on the importance of evaluation both to inform future work and also to be seen to ‘practise what we preach’:
We do it for ourselves, so (a) everything we ever do we debrief, and then (b) we debrief lines of activity.
17
We’ve set up an evaluative framework that measures the [programme] itself as a vehicle for taking evidence and getting it into action and creating change, and the extent to which the [programme] was or wasn’t in and of itself an enabler or a barrier to getting things done . . . we’ll reflect on that before we do another one.
5
Despite strong recognition of the need for evaluation, not all agencies felt able to rise to the challenge. Some acknowledged that evaluating their own activities was not something that they had yet been able to do, perhaps because of resource constraints, the relative recency of activities in this area or the rapidity of change. Some organisations evaluated only their knowledge products; others were evaluating broader activities but wanted to make their evaluations more detailed or nuanced.
Many agencies relied on informal measures of how well their approach was being received by service users and other stakeholders, or on broad measures such as the renewal of contracts by clients. When asked how they knew how well they were doing, one agency responded: ‘Well – [pause] – we don’t really know, is the proper answer to that. We only know in the sense that most of our work is commissioned and we know whether the funder is happy with what we’ve achieved’ (7).
A degree of hesitancy and apology was, therefore, often expressed around the issue of evaluation in knowledge mobilisation:
I think most of my colleagues would agree that this is an under-appreciated and under-studied area [by us], despite the fact that we have a specific programme whose mission is these very issues . . . There’s relatively little formal, proper, rigorous, independent and objective evaluation of the sort that should be conducted.
37
Of course, some of the agencies interviewed were considerably more active in evaluation work, for example building a combination of internal and external evaluation into each project or programme: ‘you must be constantly thinking, always changing’ (4). Often these evaluations used a mixture of formal approaches (e.g. specific evaluations of individual programmes) and more informal ones (e.g. collecting impact stories, taking informal soundings from knowledge brokers). These evaluations fed into future strategy and, interviewees reported, often resulted in significant changes to programmes.
Some organisations sought to track whether or not the information they put out was being used by policy-makers, for example in government hearings on a policy issue or in the research and policy papers published by government agencies. One agency described how they evaluated their work through a range of informal and formal, continuous and one-off mechanisms including regular reporting to the government department about impact on policy, and evaluation with users involved in their knowledge broker services:
We kind of do a structured process about every 2 or 3 years, but every review or other kind of broker experience we interview the person 6 months after the review’s all finished, to explore whether it met their needs and what use they’ve made of the product.
44
Through these evaluations they had learnt of the mismatch in perspectives between researchers and policy-makers about the role that research played in driving policy:
We talked mainly to researchers, and essentially researchers felt that they played a big role in agenda setting, and policy-makers really don’t feel that; they feel that agendas are set by the community and by politicians.
44
For some agencies, although evaluation (e.g. providing impact case studies) was mandated by their funder, or applied when programmes had substantial resources invested in them, it was also seen as an important mechanism for learning and for planning changes in direction:
Some of it is mandated by [those] that give us our money, but much more concretely we are doing it to really use our own data to drive programme change.
8
So everything that we set up, we think of as both trying to contribute to better decision-making, but also as a laboratory to try different stuff out. For everything, we try and subject it to some kind of an empirical evaluation.
13
Many agencies collected only basic metrics (e.g. website activity, seminar attendance) but interviewees recognised the limitations of such measures:
It’s very easy for us to produce annual reports saying 80% of people who came to our events said they were good or very good, but that’s not going to be really telling you much more than most people would expect really, given the quality of the speakers and such.
12
We already know how many people access our web pages to look at a particular piece of research, for example. What we surely ought to know, which is the killer question, is what do they then do as a result of doing that and is it actually doing anything to their practice?
6
Other organisations had more sophisticated approaches. For example, one agency was using a contribution analysis approach to tease out what their particular contribution was:
What we’re really interested in is that longer-term impact and the contribution which we may make alongside the morass of other activity that’s going on at any one time.
13
Another agency we interviewed had an evaluation framework that had been developed as part of the externally commissioned review that led to its knowledge mobilisation strategy. This aimed to assess different levels of impact: outputs; level of reach; changes in knowledge, skills, attitudes and behaviours; measurable change in practice or decisions; and measurable change in health and social services outcomes.
Some interviewees recognised that there might be advantages in making their informal internal learning more widely available; however, there were concerns around the resources needed to collate and write up this material, about the ethics of publishing material which had been collected from individuals for other purposes and about the risk of ‘overselling’ learning from a small organisation. Several organisations described aspirations to do more evaluation of their knowledge mobilisation activities in future.
The challenges of evaluation
Interviewees from organisations that sought to evaluate their knowledge mobilisation activities, and those from organisations that struggled to do so, expressed a common theme: the challenges of evaluation. A central concern, and a reason many interviewees gave for the limited amount of evaluation of knowledge mobilisation activities conducted by their organisation, was the methodological difficulty of conducting robust evaluations: the challenges of assessing impact; the difficulty of teasing out the contribution of different strands of activity or mechanisms in a busy sector; and the desire to minimise the reporting burden placed on busy stakeholders:
Obviously we want to promote evaluation, but it’s just got to be proportionate and we’re just too small to do anything that meaningful.
16
It’s really hard to measure impact . . . Have we had an impact on [specific outcome]? Well I would suspect we could say, very confidently, no, in a kind of linear way. Have we had an impact on the learning culture of our partner agencies? I would say very much so. But how would you evaluate that and measure that?
12
Others commented on the need for flexibility around the measures used to assess progress in knowledge mobilisation activities:
Sometimes the measures change as well, you know, sometimes we think we want to be looking at numbers signed up, or we’ve realised it’s more important if they’re engaged, and it doesn’t matter how many sign up.
30
It was also seen to be important to consider carefully what programmes were aiming to do and to assess the results against those objectives: ‘. . .we were ending up in the position where we would do an evaluation of a programme and you realise you can’t really evaluate it because you don’t know what it was meant to do’ (34).
The challenges of assessing and discerning impact meant that several interviewees described how, despite their personal beliefs or the beliefs espoused by the organisation about the need for broader or more robust measures, it was often simplest to focus on process measures or on simple metrics (e.g. website use, downloads of publications or attendee satisfaction at events):
We’ve always struggled, from the policy side, to understand the impact that we want to have or measuring whether we do have it or not. So I think, by default, we tend to defer to those more sort of process type things.
39
Overall, agencies usually recognised the importance of evaluation but struggled to turn that recognition into practical action. With a few key exceptions, the challenges of delivering wide-ranging and convincing evaluations meant that agencies often focused on short-term process measures and subjective, often anecdotal, reports.
Substantive evaluations
Throughout the desk research and the interviews we specifically sought to identify any substantive formal evaluations of knowledge mobilisation activities (whether by the agencies themselves or by external bodies). We looked especially for evaluations that had led to written reports that were publicly available or could be made available to the research team. Website review and specific questioning of interviewees, supplemented by additional e-mail and telephone contact, resulted in the collation of 18 published evaluation studies produced by or on behalf of seven of the agencies in our study. Table 5 sets out the details of these studies: the evaluation methods; the criteria and measures of success; the logic model or theory of change; the knowledge mobilisation mechanisms tested; any contextual evaluation; and the main findings. The studies are set out alphabetically by name of organisation as follows:
- Canadian Institutes of Health Research (CIHR)
- Centre for Effective Services (CES)
- Centre for Research on Families and Relationships (CRFR) (four evaluations)
- Economic and Social Research Council (three evaluations)
- Michael Smith Foundation for Health Research (MSFHR) (three evaluations)
- Social Care Institute for Excellence (three evaluations)
- ZonMw (The Netherlands Organisation for Health Research and Development) (three evaluations).
Canadian Institutes of Health Research
Topic/scope of evaluation: Evaluation of CIHR’s KT Funding Programme (2013)175
Evaluation methods:
Criteria and measures of success: The published study protocol (McLean et al. 2012176) provides a detailed evaluation matrix with indicators specified for each of the evaluation questions
Logic model/theory of change: A funding programme logic model is included in the study protocol and final report; the final report comments that this can be summarised as Involve-Influence-Act
Mechanisms of knowledge mobilisation tested: Eight evaluation questions:
Contextual evaluation: The environmental scan provided context and evidence surrounding the role of a funding agency in KT processes, the known successes and limitations of various KT funding programmes, and KT evaluation. This is said to provide the contextual base for the remainder of the data collection phases
Main findings:

Centre for Effective Services
Topic/scope of evaluation: Independent review of CES outcomes and impact 2008–11 (published in 2012)177
Evaluation methods:
Criteria and measures of success: The review defined indicators for each of the three early outcomes in the logic model:
Logic model/theory of change: A revised logic model is included in the 2012 review report
Mechanisms of knowledge mobilisation tested: This was not an evaluation of specific mechanisms but an overall evaluation of what CES had achieved in relation to its aims. Brief comments are made about the effectiveness of the following activities:
Contextual evaluation: The review includes a high-level analysis of the external environment and its implications for CES, including comments on:
Main findings:

Centre for Research on Families and Relationships (CRFR): evaluation 1
Topic/scope of evaluation: Reflections on 10 years of KE at CRFR178
Evaluation methods: Personal reflections of a co-director of CRFR
Criteria and measures of success: Not discussed
Logic model/theory of change: Not explicitly discussed, but see the comments under main findings
Mechanisms of knowledge mobilisation tested: General reflections on:
Contextual evaluation: General reflections on the implications of the recession for KE work
Main findings: CRFR’s distilled approach to KE:

CRFR: evaluation 2
Topic/scope of evaluation: Growing Up in Scotland (GUS) longitudinal study: impact report179
Evaluation methods:
Criteria and measures of success: Specification of a ‘results chain’ with corresponding monitoring criteria:
Logic model/theory of change: Contribution analysis framework, which incorporates a results chain (see the entry above)
Mechanisms of knowledge mobilisation tested: Dissemination and engagement activities
Contextual evaluation: Not discussed in the report
Main findings: No specific comments on the relative effectiveness of different dissemination and engagement activities. GUS data were found to be used in four main ways:

CRFR: evaluation 3
Topic/scope of evaluation: About Families Project Report (2013)180 and About Families: What We Have Learned about Evidence to Action (2013)181. This was a partnership project to facilitate the use of evidence in practice; it produced demand-led briefings and action-planning resources and supported organisations to use evidence
Evaluation methods: An independent evaluation with stakeholders is mentioned but, apart from discussing the action-planning case studies, neither report outlines the evaluation methods. Case studies were based on reflections from stakeholders
Criteria and measures of success: Self-reported use of About Families resources and support
Logic model/theory of change: An evidence-to-action model is presented in the report
Mechanisms of knowledge mobilisation tested: Working in partnership to:
Contextual evaluation: Not discussed or evaluated in any detail
Main findings: What worked well:

CRFR: evaluation 4
Topic/scope of evaluation: Case study of participatory research (Briefing 66; evaluation led by CRFR, but participatory research project conducted by De Montfort University)182
Evaluation methods: Document review; stakeholder interviews
Criteria and measures of success: Indicators identified for each element of the impact journey (uptake, use and impact)
Logic model/theory of change: Research contribution framework; the impact process comprises research uptake, use and impact
Mechanisms of knowledge mobilisation tested: Participatory research methods, which included:
Contextual evaluation: Some general comments about context; it is not clear how this was evaluated, but the report does say that context had a significant effect on the extent of impact
Main findings: The project changed:

Economic and Social Research Council (ESRC)
General comments: The ESRC (through its Evaluation Committee) evaluates the quality and impact of all of its investments; the methods vary according to the size and nature of the investment. Both academic and non-academic impact are assessed; the focus here is on the latter. To date, the Evaluation Committee has produced three reports that summarise the ESRC’s work to evaluate non-academic impact: Taking Stock (2009), Branching Out (2011) and Cultivating Connections (2013). The entry below summarises the main points from all three reports in relation to the effectiveness of different knowledge mobilisation strategies and mechanisms. The evaluation work is oriented to assessing the nature and extent of impact rather than the mechanisms of impact, but the evaluations make some comments on how and why impact is generated (or not)
Topic/scope of evaluation: Taking Stock (2009);183 Branching Out (2011);184 and Cultivating Connections (2013)185
Evaluation methods: Experimenting with different mixed-method approaches to impact assessment. Evaluations to date have used:
Criteria and measures of success: Identifies three main types of impact:
Logic model/theory of change: The ESRC’s overarching impact model was published in the Taking Stock report. They have also experimented with the Payback model; Content, Process and Context models; and logic chain models
Mechanisms of knowledge mobilisation tested: No specific knowledge mobilisation mechanisms are evaluated
Contextual evaluation: The reports note the importance of the fit between research content and context, the role of serendipity and the problem of disentangling multiple contributions to an impact
Main findings: The following key determinants of impact are highlighted:

Michael Smith Foundation for Health Research (MSFHR): evaluation 1
Topic/scope of evaluation: Defining MSFHR’s KT role126
Evaluation methods: MSFHR determined its KT role through:
Criteria and measures of success: Developed 12 KT planning assumptions that were used to guide the work (see table 2 in the 2012 paper)
Logic model/theory of change: Developed a model of five key function areas for agencies involved in KT:
Mechanisms of knowledge mobilisation tested: This was not a test of a particular KT strategy or intervention, but the paper includes brief evaluative comments on two KT programmes:
Contextual evaluation: Narrative comments on the health sector context, particularly the knowledge-to-action gap, and on what other agencies in the sector are doing to support KT
Main findings: Concludes that MSFHR should focus on four broad areas:

MSFHR: evaluation 2
Topic/scope of evaluation: A KTE framework for Health of Population Networks (HOPN)186
Evaluation methods: Strategy developed by:
Criteria and measures of success: A five-point research-to-action paradigm was used to categorise existing HOPN activities:
Logic model/theory of change: See the previous entry
Mechanisms of knowledge mobilisation tested: This was not a study to assess specific knowledge mobilisation mechanisms, but it does include some evaluative comments on the existing KTE activities
Contextual evaluation: Investigation of the British Columbia context as part of developing the strategy
Main findings: The report concludes that:

MSFHR: evaluation 3
Topic/scope of evaluation: MSFHR uses the CAHS framework for assessing impacts and returns on research investment110
Evaluation methods: CAHS framework developed by:
Criteria and measures of success: A menu of metrics and indicators was developed for each of the main categories of impact:
Logic model/theory of change: The CAHS impact framework demonstrates how the five main categories of impact (see the previous entry) are linked
Mechanisms of knowledge mobilisation tested: The framework is primarily an impact assessment framework, but the report suggests that it has the potential to be used in the assessment of specific knowledge mobilisation mechanisms
Contextual evaluation: There is recognition that more needs to be done to separate the contribution of health research from other causal factors
Main findings: The main finding is that it is possible to elaborate an impact assessment framework that is good enough to be used by all health research funders in Canada

Social Care Institute for Excellence (SCIE): evaluation 1
Topic/scope of evaluation: Overview of SCIE’s profile and impact187
Evaluation methods:
Criteria and measures of success: Website activity; recognition of SCIE in the sector as a whole; extent of, and attitudes to, ICT use within the sector (see also the impact evaluation criteria in the next entry)
Logic model/theory of change: Not evident
Mechanisms of knowledge mobilisation tested: See the next entry
Contextual evaluation: Evaluation of e-readiness; evaluation of SCIE’s marketplace position; some comments about organisational receptiveness, although the impact evaluation is not cited as a source of this assessment (see the next entry)
Main findings:

SCIE: evaluation 2
Topic/scope of evaluation: Independent impact evaluation of SCIE, conducted in 2006 and reported in Goldman 2007188
Evaluation methods: Survey of 13 stakeholder groups; three case studies in the areas of fostering practice and social work education (using telephone interviews, focus groups and a questionnaire). The evaluation was based on user groups’ self-reported uses of SCIE’s resources and support
Criteria and measures of success: There is a long-term plan to assess whether or not SCIE makes a difference to the experience of adults and children who use care services, but the evaluation focuses on intermediate outcomes:
Logic model/theory of change: Implicit theory of change:
Mechanisms of knowledge mobilisation tested:
Contextual evaluation: General comments about the difficulty of disentangling the impact of SCIE from other factors affecting practice and organisational change, but no evidence of an evaluation of context
Main findings: Survey findings:

SCIE: evaluation 3
Topic/scope of evaluation: Independent evaluation of SCIE’s user and carer engagement and the operation of its Partners’ Council (conducted in 2010–11 and reported in 2012)189
Evaluation methods:
Criteria and measures of success: General views of interviewees
Logic model/theory of change: Nothing explicit, but the evaluation recommendations indicate an emerging theory of change for improving user/carer engagement, which involves moving towards a model of co-production
Mechanisms of knowledge mobilisation tested: User and carer engagement
Contextual evaluation: General comments about the enormous challenges of the changing context/new climate within which SCIE works, but no discussion or evaluation of this in the 2012 report
Main findings: A general view that SCIE has a user-engagement ethos and practises what it preaches; a need for engagement to have a greater influence on corporate priorities; a need for strengthened internal support for engagement and an organisational culture that supports participation; merit in rethinking engagement as co-production (starting from users/carers being considered as equal partners and co-creators of practices and programmes); and a recognition that online forums bring some benefits but that online dialogue is not easy to support and is often underutilised

ZonMw (The Netherlands Organisation for Health Research and Development): evaluation 1
Topic/scope of evaluation: Commentary on a report (in Dutch) that included (1) a review of funded studies, (2) a review of published research on implementation interventions and (3) a study of implementation infrastructure in the Netherlands190
Evaluation methods: The review of ZonMw-funded studies analysed documentation on project plans and results for 79 studies (spread across health-care sectors and types of research). The review of published research analysed 141 systematic reviews of implementation interventions. The study of implementation infrastructure was a qualitative study based on 28 interviews, two expert meetings and a document analysis
Criteria and measures of success: No explicit criteria for measuring success are discussed in the commentary paper
Logic model/theory of change: No theory of change or logic model is discussed
Mechanisms of knowledge mobilisation tested: The report sought to identify the factors associated with the success of implementation programmes
Contextual evaluation: Emphasises the importance of context and argues that the following contextual factors need to be considered in planning, describing and evaluating implementation interventions:
Main findings: The review of completed studies found:
The study of implementation infrastructure found that each subsector appeared to have its own infrastructure. Concern was expressed that:

ZonMw: evaluation 2
Topic/scope of evaluation: ZonMw’s co-ordination of 10 large-scale improvement programmes that used research into effective treatments and care interventions to decide which improvements to make191
Evaluation methods: Case study of overall co-ordination through studying the 10 programmes. Data were collected via:
Criteria and measures of success: Defined four different approaches for linking research to practice in implementing large-scale improvement programmes:
Criteria for judging the success of the improvement programmes were not clearly set out on paper
Logic model/theory of change: No systematic theoretical framework was used to guide data collection
Mechanisms of knowledge mobilisation tested: ZonMw’s oversight and governance of the delivery of the improvement programmes by programme organisations, which worked with services to help them improve; the ZonMw programme co-ordinator role
Contextual evaluation: Narrative discussion of the analysis of context for each programme, which found that:
Main findings: See the contextual evaluation entry above for findings in relation to context. The improvement programmes signalled a shift towards a greater implementation role for ZonMw. Its overall approach to linking research to practice in the improvement programmes is described as network governance. The weight of evidence suggests that the network structure and process worked: it enabled research to be incorporated into the implementation process. The 10 summary lessons for organising successful large-scale improvement programmes included:

ZonMw: evaluation 3
Topic/scope of evaluation: Study of six of the 10 improvement programmes referred to in the previous entry, testing hypotheses about the factors that may predict successful implementation192
Evaluation methods: A literature review identified the 17 hypotheses for testing; these were discussed and revised with the independent evaluators of the six programmes. The hypotheses were tested through a ‘quantitative summarisation of evidence for systematic comparison’. The co-ordinator, programme manager and evaluation team for each of the six programmes independently decided on a score for implementation success and also used a scoring system to express their understanding of the evidence in relation to each of the hypotheses; these ratings were subsequently discussed and refined
Criteria and measures of success: A guide was developed for assessing the evidence in relation to each hypothesis. It was expected that evidence would be drawn from the independent evaluations of the six programmes and other programme data
Logic model/theory of change: A theory-based approach to evaluation, built around the hypotheses developed from the literature review and discussions
Mechanisms of knowledge mobilisation tested: The 17 hypotheses tested were grouped into three categories:
Contextual evaluation: Contextual factors were examined via the hypotheses falling under this heading
Main findings: Factors that most influenced programme implementation:
Research designs that provided less certain answers to a number of questions were perceived to be of more use to decision-makers than those that provided a single answer to the ‘does it work?’ question
Discussion of the main findings from the substantive evaluations
The main findings to emerge from our analysis of these evaluations are summarised in this section. We consider first the focus of evaluation activity and the evaluation frameworks and methods before moving to a review of the main findings to emerge from the evaluations. We were aware of two further substantial bodies of evaluative work relating to knowledge mobilisation activities in organisations included in our study: (1) the NIHR-funded external evaluations of the CLAHRC programme,193 which complement the wide range of internal evaluations conducted by individual CLAHRCs; and (2) the evaluations published in Implementation Science in 2008–9194 of the US Department of Veterans Affairs Quality Enhancement Research Initiative (QUERI). As these are both substantial bodies of work in their own right, we do not seek to reproduce them here.
In examining agency evaluation reports we see that the focus of much of the evaluation activity is on assessing either the impact of the agency as a whole or the impact of one or more of its main programmes of activity. These impact evaluations seek to describe, evidence and, to some extent, measure impact. Generally, less attention has been paid to evaluating how these impacts have been achieved, although there are usually some observations about the factors associated with positive impacts.
There are some evaluation studies of particular knowledge mobilisation approaches and activities. These include evaluations of partnership working, of linking and knowledge brokering roles, and of programmes that fund and support service improvement.
The evaluations of impact were not always guided by an explicit evaluation framework but in the majority of cases they were underpinned by a theory of change or logic model. The logic models identify a chain of interlinked activities and potential results. As such, they distinguish between engagement activities (and user perceptions of these), intermediate outcomes (e.g. changed awareness and understanding), broader outcomes (e.g. changed policy or practice) and final impacts (e.g. improved health and well-being). In many of the models, indicators of success were identified for each stage in the chain.
Most studies recognised the importance of context and the difficulty of disentangling multiple contributions to an impact. However, relatively few of the evaluation studies included an evaluation of context, and those that did tended to provide a high-level, narrative analysis of the agency’s external environment rather than treating that environment as being in dynamic interaction with the agency’s activities.
In general, the evaluations used case study methodologies. Within these case studies, data collection and analysis methods frequently included analysis of agency activities and user feedback on these activities; document review (including policy documents and media coverage); and stakeholder surveys, interviews and focus groups. Many of the evaluations focused mainly on the last of these categories, and they usually relied on stakeholders’ self-reported use of an agency’s research resources and services.
The evaluation studies generally confirmed the observations and insights developed from the literature in Chapter 3. These include the need to tailor research-based resources to the preferences and access needs of different audiences, and the finding that interactive knowledge mobilisation approaches and practices are generally more effective than passive dissemination (although there were examples of service organisations using an agency’s evidence briefings without any interaction with that agency). The use of multiple knowledge mobilisation activities was also confirmed as generally more effective than the use of single activities. Several of the studies found that agencies, programmes and projects benefit from creating and supporting ongoing networking activities between researchers, policy-makers and practitioners.
Knowledge mobilisation activities that adopted a partnership approach, involving researchers and practitioners working together to facilitate evidence use, were also generally found to be successful where there were ‘meaningful partnerships’ and clear partnership working processes. Partnership working has downsides, however, including its resource-intensive nature. Some agencies had also found it difficult to strike a good balance between maintaining agency independence and directly supporting service providers.
The methods used in research–practice partnerships vary, and the evaluation studies reported successful applications of an action-planning methodology and the use of service improvement methods. Some of the evaluation studies emphasised the need to appreciate that research-based knowledge is not simply adopted by service providers; instead, knowledge changes during the process of being used in practice (again, consistent with the literature reviewed in Chapter 3). Indeed, one of the evaluation studies found that successful partnership working involved combining different types of knowledge in an action-planning process.
Some of the evaluation studies provide general support for a co-production approach that involves research users in all stages of the research and research use process. However, one of the evaluation studies notes that many of the more active strategies to facilitate research use tend to be project based and this has implications for the sustainability of these activities once project funding comes to an end.
Several of the evaluation studies comment on the need for agencies to work with other organisations in their sector in order to influence the environment in that sector. This is seen as being an important route to increasing user demand for research-based knowledge. Liaison with other organisations in the sector (research funders, research producers and research intermediaries) is also deemed necessary to avoid unnecessary duplication of knowledge mobilisation activities (resonating here with the emphasis on ‘connections and configurations’ explored in the conceptual map; see Chapter 3).
Overall, then, evaluation activity around knowledge mobilisation work carried out by our agencies is rather uneven, but what there is does provide some coherent messages of practical value to agencies. The messages from this empirical work in the grey literature are largely congruent with the conceptual literature reviewed in Chapter 3, and begin to provide a platform for local innovation and experimentation.
Formative learning/practical experience
We asked interviewees what they had learned about knowledge mobilisation from practical experience in addition to, or alongside, more formal evaluations. Some of the insights offered related to domains identified from the literature and represented in the conceptual map (see Chapter 3), most often the domains relating to context, roles and connections.
The strongest theme from the interview data on learning from practical experience was observations on context. These included generic observations: the importance of attending to the research users’ context and challenges, and tailoring the knowledge mobilisation approach accordingly; the need to use local contextual knowledge and to take advice on what ‘levers’ to pull; and the importance of ‘riding’ policy waves and capitalising on other political developments:
What we have done therefore is work on some of the NICE quality standards. Some of it’s been linked to CQUINs [Commissioning for Quality and Innovation payments], but some of it has been additional to CQUINs, and with hindsight has enabled our organisations to get ahead of the game. So it’s about understanding what the national drivers are, what the pressures are on an organisation, as an example, and capitalising on those.
23
Interviewees also made specific observations about the health-care, education and social care sectors in which they worked. One interviewee commented on how much they had learnt about doing implementation in the very turbulent context of the current NHS, and how this required persistence and adaptability:
The Trust wanted us to use a particular facilitative model of quality improvement teams who would be ward-based and who would lead on the implementation of national quality standards, to get the evidence into practice. And this had been agreed at various levels of the organisation and written into our proposal, and almost at the point we were ready to start, the person in the Trust who was responsible for facilitating the QIPP [Quality, Innovation, Productivity and Prevention] teams was out of a job, and the pressures on the wards were such that they couldn’t take forward this initiative. So we very quickly changed to a best practice champion model and drew upon the research on that.
23
The challenges of working in the NHS context also included the onerous approval processes in some organisations, delays in granting honorary contracts to enable secondments, and workload pressures in the NHS that made it harder to release staff to work with other organisations on research or knowledge mobilisation. Taking account of such challenges was seen as essential in developing effective knowledge mobilisation strategies.
One of the most pervasive challenges across sectors was the stamina needed to make the case for research application and implementation:
I think one’s got to just be brave about that, you know . . . keep working at it, keep struggling with it, keep playing with the language and the different ways of describing what we do, and keep working at persuading people why it’s important. You have to have a bit of a passion for it, I think. The best folks in this field are like that . . . they have a real commitment to passing that on to other people because they feel it is worthwhile and valuable and useful.
21
In the early days, part of that process of persuasion meant ‘going the extra mile’ to build up a reputation and to persuade people of the value of the work. It also required sustained humility on the part of agencies: not claiming to know all of the answers but instead working in an atmosphere of support and co-operation.
Another common challenge was seen as arising from the constant ‘redisorganisation’195 of all three sectors, with increasing fragmentation, shifting relationships between central and local government and the introduction of legislative and policy changes that could have negative or (sometimes) positive impacts on encouraging research use. For example, one interviewee described a recent growth in interest in the use of evidence in the education sector in the UK: ‘There’s been a huge shift I think. Things like when the Cabinet Office got interested, that created more high-level political interest . . . particularly in the last 18 months [there have been] a lot more of these networks and groups and coalitions coming together’ (2). Waiting for, and responding to, propitious circumstances was seen as crucial.
However, in the public health sector in England, many developments were perceived to have been disadvantageous. Significant changes had been made in the public health infrastructures, with public health moving from the NHS to local authorities and with subsequent changes to the role of Directors of Public Health. One interviewee commented on how it was challenging to advocate for evidence-informed policy in this new environment where elected local council members had considerably greater power than senior public health employees:
There have been lots of conversations about how, in the light of that, do you package, present, shape research, to influence those who actually now take the decisions. And the culture of the NHS is basically sort of built on expertise and understands it. It doesn’t always follow it but it understands evidence-based or evidence-informed practice. [Now it’s] ‘Oh where was that done? That would never work here. Not with my constituents. I’ve lived here for years, and I know what would work.’
24
There were also issues for public health initiatives around matched funding for knowledge translation projects: it was now local authorities, rather than health authorities, that would need to provide matched funding, and some were unable to do so.
It was not just the service context that made knowledge mobilisation challenging but also the university context. One interviewee described how the different histories and missions of ‘old’ and ‘new’ universities made it difficult to form knowledge mobilisation collaborations across different institutions:
Some of the universities are about excellence in research and research outputs and grant and income generation, and others are more about being in and of their community and responsive to their community needs. So they have very different missions and visions.
24
The university infrastructures and reward systems also formed a barrier to translational research: ‘. . . what researchers and academics are rewarded for is not the kind of work you might engage in if you wanted to do knowledge exchange, translational research’ (24).
In contrast, a more optimistic vision was given by another organisation:
I think people are really deeply sort of values-driven and very committed. Some of them I think are beyond redemption in terms of just absolutely wanting to squirrel themselves away and just research for the sake of research and just talk to their peers, and they don’t give a monkey’s about anything else. But I think there’s a much greater sort of proportion, probably working with younger researchers who are just so creative and enthusiastic about how they can be an academic but be really connected to the world as well.
26
In addition to talking about their own sectors and about universities, interviewees made comments about the state of knowledge mobilisation as a field. Some commented on positive developments, observing that there was increasing recognition from policy-makers and from research funders that producing good evidence and facilitating its implementation required time and adequate resources. Others felt that there was still much progress to be made and some suggested that the knowledge mobilisation field itself had stalled:
There’s still a sort of disseminate and hope attitude. It’s surprising how many funders are really unimaginative about this. They’ve picked up that you need an executive summary and you’ll probably want to put it on a website and you might want to give a seminar or two. But thinking more creatively about [it] . . .
21
Some sensed a creeping disillusionment among some policy-makers and practitioners because researchers and universities did not provide ready answers. One interviewee suggested that it was important to learn from other contexts, such as international development or community development, that faced similar challenges. Respondents often seemed concerned to keep learning and changing as a way of keeping up momentum for their knowledge mobilisation ‘mission’.
Another strong theme from practical learning concerned issues around people and relationships. Several interviewees emphasised the importance of using the ‘right people in the right role’ for knowledge mobilisation activities. Some went further in suggesting that it was most productive not to ‘waste time’ with researchers or practitioners who had no appetite or aptitude for such work. Rather than forcing people into knowledge mobilisation activities or assuming that practitioners with ‘likely’ job titles would be the most engaged, it was felt to be more realistic and fruitful to assess who had the skills, personality, experience and enthusiasm for this challenging work:
You must have the right people in place: people who can cope with ambiguity. People who can work across professional boundaries . . . If the person requires a set protocol of how to get things done, they aren’t the right person for this work.
4
I think that this kind of work is really tricky, I think it’s the kind of work that needs to be done by people who are already pretty experienced, pretty knowledgeable, pretty confident, about their own professional skills, because you’re always working right on the edge of your comfort zone . . . and you just can’t expect very young and inexperienced people to be able to engage confidently with the kind of challenges practitioners or senior policy-makers have.
21
Respondents had also learned that it was important to value different kinds of roles and expertise that people could bring to engagement work, and to recognise that some of these people might not label themselves as knowledge mobilisers:
That’s my incredibly simplistic conclusion that people who are interested in how other people make meaning and how you negotiate meaning and who have real emotional intelligence, but also have a really strategic brain, they do amazing work, and they’ve never thought of themselves as knowledge mobilisers.
26
It was also seen as important for long-term sustainability of initiatives to have people in paid posts and not to rely solely on goodwill:
You can only get so much for free really, I mean you can only get so much from pro bono and voluntary inputs and the movement, the real momentum, has come when we’ve got some money to be able to employ people to get more money to get these projects off the ground.
2
Interviewees emphasised the importance of engaging potential research users. This required being led by the agenda of those users and recognising that the research evidence that you wanted to share was only ever one part of the knowledge landscape for those users:
You’re always the broker, you’re never the leader. So if you’re a knowledge mobiliser you’re always enabling rather than shoving stuff to people.
48
One interviewee from an organisation that produced evidence-based guidelines endorsed this position: it was necessary to start from the end point of what managers and practitioners were trying to accomplish, and then to use guidelines and associated social networks to help them achieve that. Making the guidelines an end in themselves would not work.
Congruent with both the formal evaluations done by agencies and much of the literature, several interviewees commented on the benefits of co-producing research with policy-makers, practitioners or service users. For example, in one agency, meeting service users and parents regularly on the research consultation panels had proved a valuable way to learn about other issues that they were facing that might not otherwise have come to the researchers’ attention: this process helped researchers to identify possible new areas for future research. However, interviewees also emphasised the considerable challenges that co-production posed. These approaches required commitment, persistence and sufficient resources:
If there’s one thing I’ve learnt over the last couple of years, it’s the vital importance of doing this alongside the people who are using the information, and that is expensive and it’s hard work and it’s time-consuming to do properly rather than tokenistically . . . so there’s some messages in there for funders about the real costs, and that you can’t do this half-heartedly. It’s pointless unless you’re really prepared to do it properly.
21
Interviewees emphasised that it was not sufficient to bring a researcher and policy-maker together and expect that to produce changes in how health research evidence was used; considerable hands-on support was needed to make these connections fruitful and to address the challenges: ‘. . . there is way more at stake, there’s personal relationships, there’s how do you work as a team, there’s the reality of each other’s world and so on’ (34).
Nurturing networks thus also emerged as an important consideration, one given attention by many agencies. In particular, agencies saw the importance of having sufficient funds to allow development and evaluation over a realistic time period:
You need to build it and have activity before you open the door, you can’t launch something that’s got tumbleweed drifting [laughs]. And it’s like a restaurant, you’re more likely to go in if there are already diners.
28
A further piece of advice that interviewees gave was the importance, where possible, of building up long-term relationships with policy-makers and practitioners. Although this was difficult to achieve in political and service contexts with rapid staff turnover, it did bring considerable advantages in enabling trust to develop and making broader, more meaningful and frank discussions possible:
I’m still an outsider of course, but because I’ve known them for such a long time now . . . there’s much, much more honesty going on, so there’s a trust situation being established, and so we learn now much more about the background to a particular policy question than they probably would have shared about 6 years ago. And it’s actually quite useful to know that . . . because then it informs how you shape your work essentially.
3
I think building those relationships around, you know, multiyear research projects is probably important, but not because of how research from those partnerships instrumentally informs decision-making, but how, over time, relationships are built up that allow policy-makers and stakeholders to efficiently access researchers who can then open up a whole lot of research to them on other issues.
36
Finally, so much of knowledge mobilisation work involves working with and through others that getting buy-in was seen as essential. This needed to be real commitment, often evidenced by cash investment:
We truly believe that the organisations need to have some skin in the game, so to speak: that they should be contributing. Our philosophy is that everybody has to contribute financially. These are collaborations, both in kind but also in real dollar terms.
5
Taken together, these observations suggest that agencies have clearly benefited from extensive experiential learning and now have much tacit knowledge that can, given prompting, be made explicit. Given this, it seems unfortunate that there is so little capitalising on the potential for interagency learning as agencies develop their options.
Drawing conclusions on agency learning
Agencies demonstrate a keen awareness of the need for learning from diverse sources, drawing on their own experiential learning and (nascently) on more formal evaluation work:
Find the point on the horizon, the North Star or whatever, depending on how much you want to crane your neck, and where do you wanna go . . . And then think about how that translates to building the road to get there and what the bricks need to be, and also what the course might look like, because it’s not gonna be a straight line.
9
While formal evaluation work has often proved difficult to mount and sustain, some clear messages are emerging that chime with those from the more conceptual literature. In addition, agencies have accumulated rich formative experience that has the capacity to shape new initiatives and to help them avoid pitfalls when developing new strategies.
Concluding remarks
The interview data provide a rich account of the dynamics of knowledge mobilisation strategies in the agencies that we consulted. As these agencies were explicitly chosen because they were thought to contain work of unusual scale or innovation, we can suppose that other smaller agencies, perhaps less advanced or less ambitious, will have at least as many challenges, tensions and uncertainties as those reported here. What emerges is a complex picture of muddling through. Some key messages from the literature (such as the importance of situating knowledge; the need for interactivity; and the requirement to facilitate uptake) do appear to have left the academy and to be having wider impact. Other work, such as the creation of new models, theories and frameworks, and the detailed rationales underpinning these, has made less of an impact, in part because many of the models and frameworks lack detail and are not easy to operationalise. Agencies have a strong streak of pragmatism in how they create strategies around knowledge mobilisation.
This work also highlighted some significant missed opportunities. While many informants highlighted the importance of evaluation, most also noted the extreme difficulties involved and the lack of any real accumulation of robust evidence to support knowledge mobilisation interventions. Learning from informal experience was more often seen as the better guide, but converting tacit knowledge into explicit knowledge remained a challenge, as did building and sharing cumulative expertise. The very limited sharing of expertise across the sector – that is, learning from others – was also perhaps surprising given the field’s own concern with knowledge sharing, although we note the recent development of the cross-sectoral UK Knowledge Mobilisation Forum, which held its first meeting in London in February 2014. Finally, capitalising on the skills, energy and commitment of the public and service users remains on the fringes of agency thinking.
Chapter 5 Findings from the web survey
Introduction
The aim of the web survey was to provide a broader assessment from a fuller range of agencies that built on the emerging findings from the literature and the interviews. Further details of the survey development and administration are reported in Chapter 2. The final version of the survey instrument is available from the authors on request. The survey was sent to 186 organisations and, after two rounds of follow-up, we received 106 responses, a response rate of 57%.
This chapter sets out the findings using the same structuring as used in describing the survey tool in Chapter 2. That is, we open with some brief descriptive data on the location and nature of the agencies responding, and this is then followed by data on six aspects of knowledge mobilisation:
- terminology used around knowledge mobilisation
- knowledge mobilisation activities used by the agencies
- models and frameworks used by the agencies in developing their work
- propositions for effective knowledge mobilisation
- key factors underpinning agencies’ knowledge mobilisation plans
- evaluating knowledge mobilisation activities and impact.
Data are mostly presented as percentages selecting from a pre-set list, a Likert-type scale (five-point: from ‘strongly agree’ to ‘strongly disagree’) or another indicator of importance (e.g. ‘often’/‘sometimes’/‘never’; or ‘very important’/‘fairly important’/‘not that important’). Data in tables are ordered so that rows nearer the top reflect the higher frequency with which those items were selected. This ordering is usually based on summing the percentages for the first two response options (e.g. ‘often’ plus ‘sometimes’; or ‘strongly agree’ plus ‘agree’). Percentages in the tables may not sum to 100 because of rounding.
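This ordering convention is simple enough to express programmatically. The following minimal sketch (in Python, using three rows from Table 7 purely for illustration; the variable names are ours, not part of the survey instrument) shows how rows are ranked by the summed percentage of the first two response options:

```python
# Rank table rows by the summed percentage of the first two response
# options ('often' + 'sometimes'), as described above. The three example
# rows are taken from Table 7; each tuple is (item, often %, sometimes %).
rows = [
    ("Producing publications, other written materials or tools", 78, 19),
    ("Creating debate using social media", 29, 38),
    ("Using the arts to communicate research findings", 5, 24),
]

# Sort in descending order of 'often' + 'sometimes', so the most
# frequently reported activities appear nearest the top of the table.
ordered = sorted(rows, key=lambda r: r[1] + r[2], reverse=True)

for item, often, sometimes in ordered:
    print(f"{item}: {often + sometimes}% responded 'often' or 'sometimes'")
```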
Location and nature of the agencies
Around two-thirds of respondents were based in the UK (68%). Almost all the remainder of the respondents (around one-third) also came from English-speaking countries, including the USA (11%), Canada (11%), Australia (7%) and Ireland (2%); 1% of respondents came from continental Europe.
Just over half of respondents were active in the health sector (55%), with around one-fifth being in education (18%) and 5% in social care. Fifteen per cent of respondents identified themselves as cross-sector organisations.
The survey asked respondents to choose one term out of three that most closely fitted their organisation: research producer, research funder or research intermediary. Approximately two-fifths of respondents identified their organisation primarily as a research producer (42%), with a similar proportion identifying their organisation most closely as a research intermediary (39%). The remaining one-fifth self-identified as a research funder (19%). Not all respondents were comfortable selecting just one of these labels, and around one-fifth of respondents used the free-text comment box to elaborate further. Several respondents commented that their organisation spanned two or more of these categories, for example ‘we produce research, we act as an intermediary and we disseminate research’, or offered fuller descriptions of activities that could (in our terms) be classed in the ‘research intermediary’ category:
We work with local authorities to support them to use evidence informed practice, so supporting them to make best use of evidence.
[We are] more of a program/tool kit developer and catalyst for change (based on sound research and different forms of knowledge and evidence).
Terminology around knowledge mobilisation
The survey section on terminology explained that the survey was ‘using the term knowledge mobilisation to cover activities aimed at sharing research-based knowledge’ and asked respondents to indicate all other related terms commonly used in that organisation. The list of terms was drawn from the key reviews.
With the exception of the term ‘knowledge interaction’ (just 2%), the majority of the terms listed were selected as ‘commonly used’ in their organisation by at least one-quarter of respondents (Table 6). Most commonly used were the terms ‘evidence-based policy/practice’ (79% of respondents) or ‘getting evidence into practice’ (75%). Other commonly used terms included ‘knowledge exchange’ (61%) and ‘knowledge transfer’ (61%). Around one-fifth of respondents (22%) suggested other terms (other than those that were listed) that were in common use in their organisation. These included a range of terms using the word ‘research’ (e.g. research translation, research into practice, research implementation, research utilisation or research uptake) or the word ‘knowledge’ (e.g. knowledge management or knowledge integration). Respondents also referred to implementation science and quality improvement and some gave lists of the multiple terms used in their organisation:
Sharing, learning, networking, co-production, action research . . . and also reference to ‘ideas’ and ‘solutions’ and ‘expertise’ rather than only knowledge or evidence.
Term | Respondents, % |
---|---|
Evidence-based policy/practice | 79 |
Getting evidence into practice | 75 |
Evidence-informed policy/practice | 65 |
Knowledge exchange | 61 |
Knowledge transfer | 61 |
Knowledge translation | 45 |
Research use | 37 |
Knowledge sharing | 34 |
Knowledge into action | 28 |
Knowledge mobilisation | 28 |
Knowledge utilisation | 19 |
Knowledge interaction | 2 |
Other (please state) | 22 |
As will be seen later (see Table 14), the diversity of terms in use is not universally seen to be a problem, and, in many cases, the debate engendered may be healthy: ‘we tend to talk about evidence rather than knowledge, although we have lots of debates internally about that’. An academic preoccupation with terminology 1,3 is sometimes seen as unhelpful: ‘I’m coming to the view that it’s all a bit jargony . . .’
Knowledge mobilisation activities
A major section of the survey asked respondents about the knowledge mobilisation activities carried out in their organisation and the frequency with which these were used. The list of activities was compiled from the literature and from the interviews, and was spread over three questions to improve the visual presentation of the survey on each screen and to reduce respondent fatigue (a total of 37 items). In this summary of results, we combine the findings from those three questions and group the knowledge mobilisation activities into six broad categories based on the long-standing ‘push, pull, linkage and exchange’ framework: 10,53
- push activities: creating and disseminating research products (nine items)
- pull activities: encouraging local demand for research evidence; building local capacity for research use; facilitating local research implementation (seven items)
- linkage and exchange activities: knowledge brokerage; linking across different environments (eight items)
- other activities involving practitioners or policy-makers (four items)
- activities involving patients, service users or members of the public (five items)
- advocating and advancing knowledge mobilisation (four items).
‘Push’ activities
Of the nine different types of ‘push’ activities shown in Table 7, agencies in the survey were most likely to produce publications, other written materials and tools, digested research summaries or guidelines, or to provide ‘rapid response’ research synthesis services (at least 80% responded ‘often’ or ‘sometimes’ to these items). Fewer than one-third of respondents often used social media to create debate, although over one-third sometimes did. One-quarter of respondents often used social marketing approaches to communicate research findings, change ideas or promote evidence-based change. Around half of respondents provided research-based commentary in response to issues in the news, although only about 10% did so often. Around two-thirds of respondents provided webinars for practitioners and policy-makers or planned to do so in future. Using the arts (e.g. drama, music, narrative or visual arts) to communicate research findings was rare, although around one-third of respondents sometimes produced videos or animations.
Response item | Often | Sometimes | Planned for the future | Never/does not apply |
---|---|---|---|---|
Producing publications, other written materials or tools aimed at practitioners or policy-makers (n = 99) | 78% | 19% | 1% | 2% |
Creating digested research summaries and/or guidelines (e.g. mythbusters, fact sheets) (n = 99) | 59% | 28% | 4% | 9% |
Providing ‘rapid response’ research synthesis services to policy makers or practitioners (n = 98) | 38% | 42% | 9% | 11% |
Creating debate using social media (n = 100) | 29% | 38% | 15% | 18% |
Using social marketing approaches to communicate research findings, change ideas or promote evidence-based change (n = 95) | 25% | 35% | 11% | 29% |
Providing research-based commentary on issues in the news (n = 97) | 12% | 42% | 12% | 33% |
Producing videos or animations to communicate research findings (n = 97) | 15% | 36% | 22% | 27% |
Providing live and archived webinars for practitioners and policy-makers (n = 97) | 20% | 26% | 23% | 32% |
Using the arts (e.g. drama, music, narrative, visual arts) to communicate research findings (n = 96) | 5% | 24% | 7% | 64% |
‘Pull’ activities
Facilitating the implementation of research findings in practice or policy settings was the most common ‘pull’ activity identified by agencies in the survey (93% sometimes or often did this), along with developing local collaborations for innovation and improvement (78% sometimes or often) (Table 8). Providing training to build research awareness or critical appraisal skills among practitioners or policy-makers was more common than providing input into pre- or post-registration training for practitioners (77% vs. 44%). Over half of the respondent agencies (58%) used participatory research methods (e.g. action research), although fewer than one-quarter did so often (22%). It was more common to provide local consultancy services on policy or practice issues (64%), although one-third of agencies never did so.
Response item | Often | Sometimes | Planned for the future | Never/does not apply |
---|---|---|---|---|
Facilitating the implementation of research findings in practice or policy settings (n = 101) | 55% | 38% | 2% | 5% |
Developing local collaborations for innovation and improvement (n = 97) | 39% | 39% | 3% | 19% |
Providing training for practitioners or policy-makers to build research awareness or critical appraisal skills (n = 98) | 28% | 49% | 7% | 16% |
Publicising impact stories on successful knowledge mobilisation initiatives (n = 97) | 29% | 42% | 14% | 14% |
Providing local consultancy services (e.g. rapid review, research, data analysis, change management) on policy or practice issues (n = 98) | 31% | 34% | 3% | 33% |
Using participatory research methods, including action research or facilitated implementation (n = 98) | 22% | 36% | 13% | 29% |
Providing input into pre- and post-registration training for practitioners (n = 96) | 13% | 31% | 8% | 48% |
Linkage and exchange
Of the eight activities we have categorised in this analysis as ‘linkage and exchange’, those performed most often were organising events that bring researchers together with policy-makers and practitioners (97% often or sometimes); brokering relationships between these groups (92%); and facilitating mixed networks of researchers, practitioners and policy-makers (89%) (Table 9). It was less common to broker relationships between researchers and journalists: one-third of organisations never did this and only 15% did so often. It was also less common to arrange secondments out of or into the organisation, with 51% and 35% of respondents, respectively, saying that this never happened. Around two-thirds of organisations employed staff in dedicated intermediary roles (e.g. as knowledge brokers) either often (41%) or sometimes (26%). Fostering formal partnerships between university departments and non-university organisations was part of knowledge mobilisation activity for around four-fifths of organisations, with around one-third of organisations often using this approach.
Response item | Often | Sometimes | Planned for the future | Never/does not apply |
---|---|---|---|---|
Organising events that bring researchers together with policy-makers and practitioners (n = 101) | 59% | 38% | 0% | 3% |
Brokering relationships between practitioners, policy-makers and researchers (n = 97) | 55% | 37% | 2% | 6% |
Facilitating mixed networks of researchers, practitioners and policy-makers (n = 96) | 57% | 32% | 4% | 6% |
Fostering formal partnerships between university departments and non-university organisations (n = 96) | 36% | 45% | 6% | 13% |
Employing staff in dedicated intermediary roles (e.g. knowledge brokers) (n = 98) | 41% | 26% | 6% | 28% |
Brokering connections between researchers and journalists (n = 98) | 15% | 45% | 7% | 33% |
Arranging secondments of staff from other organisations into your organisation (n = 100) | 9% | 44% | 12% | 35% |
Arranging secondments of staff from your organisation into other organisations (n = 99) | 6% | 35% | 8% | 51% |
Other activities involving practitioners or policy-makers
It was most common for agencies to involve practitioners and policy-makers at each stage of the research process: almost half of agencies reported that they did so often (49%) and almost as many again reported doing so sometimes (46%). Similarly, high proportions of respondents reported involving practitioners and policy-makers in collaborative research and/or communicating research findings (Table 10). It was less common for agencies to facilitate or fund peer networks or communities of practice among practitioners and policy-makers, and around one-quarter of respondents indicated that their organisation never did this (24%).
Response item | Often | Sometimes | Planned for the future | Never/does not apply |
---|---|---|---|---|
Involving practitioners or policy-makers in problem-definition and in prioritising research areas (n = 99) | 49% | 46% | 1% | 3% |
Involving practitioners or policy-makers in interpreting and communicating research findings (n = 99) | 52% | 39% | 5% | 4% |
Involving practitioners or policy-makers in collaborative research or co-production (n = 99) | 45% | 39% | 5% | 10% |
Facilitating or funding peer networks or communities of practice among practitioners and policy-makers (n = 96) | 27% | 41% | 8% | 24% |
Activities involving patients, service users or members of the public
Knowledge mobilisation activities involving patients, service users or members of the public were reportedly much less common than activities involving practitioners or policy-makers (Table 11). Only one of the five types of activities in this category was done ‘often’ by more than one-third of respondents: producing publications, other written materials or tools aimed at lay audiences. The more usual pattern was for agencies to report that they only sometimes did activities with these groups. Indeed, responses to a later question in the survey (see Activities and focus for effective knowledge mobilisation) showed that over 80% of respondents believed that the role of service users/patients in knowledge mobilisation was currently underdeveloped, suggesting that many respondents believed that the current situation could be improved. Hosting ‘Cafe Scientifique’ or similar public debates was particularly rare: nearly two-thirds of respondents stated that their organisation never did so.
Response item | Often | Sometimes | Planned for the future | Never/does not apply |
---|---|---|---|---|
Producing publications, other written materials or tools aimed at lay audiences (e.g. online resources, articles in consumer magazines or newspapers, etc.) (n = 100) | 37% | 47% | 6% | 10% |
Involving patients or service users in problem-definition and in prioritising research areas (n = 98) | 24% | 55% | 4% | 16% |
Involving patients or service users in interpreting and communicating research findings (n = 98) | 22% | 43% | 7% | 28% |
Involving patients or service users in collaborative research or co-production (n = 96) | 20% | 45% | 8% | 27% |
Hosting ‘Cafe Scientifique’ or similar public debates (n = 97) | 9% | 20% | 7% | 64% |
Advocating and advancing knowledge mobilisation
Almost all respondent agencies stated that they often (52%) or sometimes (42%) actively made the case for the value of research-based knowledge in policy and practice (Table 12). Over half sometimes or often funded or conducted projects to advance the science of the knowledge mobilisation field (64%). However, the majority of organisations did not provide post-project funding for knowledge mobilisation activities (56%) and only around 10% of agencies often did so; this may in part reflect the relatively small proportion of respondent agencies that identified themselves as research funders (around one-fifth).
Response item | Often | Sometimes | Planned for the future | Never/does not apply |
---|---|---|---|---|
Advocating for knowledge mobilisation by actively making the case for the value of research-based knowledge in policy and practice (n = 95) | 52% | 42% | 2% | 4% |
Including non-academic members on research project advisory boards (n = 96) | 44% | 33% | 3% | 20% |
Funding or conducting projects to advance the science of knowledge mobilisation (n = 96) | 25% | 39% | 9% | 27% |
Providing post-project funding for knowledge mobilisation activities (n = 97) | 11% | 30% | 3% | 56% |
The survey findings across all these categories show that a wide range of activities is already in place across these agencies, with a sizeable additional proportion of agencies sometimes identifying specific activities as targets for future action (e.g. webinars were under consideration for the future by nearly one-quarter of agencies). Other innovative areas were of less universal interest: for example, using the arts, providing local consultancy, providing post-project funding for knowledge mobilisation, arranging secondments and working with journalists were each ruled out by one-third or more of the agencies. The involvement of patients, service users and lay people in knowledge mobilisation activities was varied, with a greater proportion of agencies reporting engagement ‘upstream’ (e.g. in prioritising research) than ‘downstream’ (e.g. in interpreting and communicating findings).
Models and frameworks used in knowledge mobilisation
From our knowledge of the literature, we had listed around 25 models and frameworks that have been published and (to some extent) applied. Respondents were asked to indicate which of these models and frameworks their organisation had drawn on in developing knowledge mobilisation activities, and were able to tick as many as applied (Table 13). It is interesting to note that around one-third of respondents (31%) did not answer this question. All but three of the models and frameworks listed were identified by at least 5% of respondents as having been used by their organisation in developing knowledge mobilisation activities, but few were cited by more than 20% of respondents. This latter group consisted of PDSA cycles (44% of respondents); the KTA cycle 1 (38%); the Greenhalgh model for considering the diffusion of innovations in health service organisations 31 (36%); push, pull, linkage and exchange 10,53 (33%); the Institute for Healthcare Improvement (IHI) Model for Improvement 49 (32%); Lavis et al.’s framework for knowledge transfer 55 (27%); and the PARIHS Framework 52 (25%).
Model or framework | Respondents, % |
---|---|
PDSA cycles | 44 |
The Knowledge to Action (KTA) Cycle (Graham et al. 20061) | 38 |
The Greenhalgh model for considering the diffusion of innovations in health service organisations (Greenhalgh et al. 200631) | 36 |
Push, pull, linkage and exchange (Lomas 2000;53 Lavis et al. 200610) | 33 |
The Institute for Healthcare Improvement (IHI) Model for Improvement (Langley 199649) | 32 |
Lavis et al.’s framework for knowledge transfer (five questions about the research, four potential audiences) (Lavis et al. 200355) | 27 |
The PARIHS Framework (Kitson et al. 199852) | 25 |
The Levin model of research knowledge mobilisation (Levin 200459) | 19 |
Normalization Process Theory (May et al. 200967) | 18 |
Mindlines (Gabbay and le May 200456) | 16 |
School Improvement Model (Education Endowment Foundation71) | 14 |
The Knowledge Integration model (Best et al. 200863) | 12 |
OMRU (Logan and Graham 199851) | 12 |
The Knowledge Exchange Framework (Contandriopoulos et al. 201021) | 11 |
The Consolidated Framework for Implementation Research (CFIR) (Damschroder et al. 200965) | 10 |
The three generations framework (Best et al. 200863) | 10 |
Walter et al.’s three models of research use (Walter et al. 200460) | 10 |
Ward et al.’s conceptual framework of the knowledge transfer process (Ward et al. 200919) | 10 |
Collaborative knowledge translation model (Baumbusch et al. 200861) | 7 |
The Interactive Systems Framework for Dissemination and Implementation (Wandersman et al. 200862) | 7 |
Participatory Action Knowledge Translation model (McWilliam et al. 200968) | 7 |
Knowledge Dissemination and Utilisation Framework (Farkas et al. 200354) | 5 |
Ward et al.’s revised knowledge exchange framework (Ward et al. 201237) | 5 |
The Critical Realism and the Art Research Utilization Model (CRARUM) (Kontos and Poland 200966) | 4 |
The knowledge translation self-assessment tool for research institute (SATORI) (Gholami et al. 201170) | 3 |
The National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) Knowledge to Action Framework (Wilson et al. 201169) | 3 |
Other (please state) | 23 |
Around one-third of the 73 respondents to this question also suggested additional models and frameworks and/or made other comments. Several respondents made references to models and frameworks developed in-house or referred to work by particular authors that had informed their organisation’s knowledge mobilisation activities (e.g. Weiss, Sebba, Nutley, van de Ven, Lomas, Kingdon, Sabatier). Some respondents referred to specific fields (e.g. implementation science) or concepts (e.g. communities of practice) that had informed their organisation’s work. Others commented that their organisation had not applied any specific models, or conversely that they took an eclectic approach and drew on many of them without formal attribution.
The range and diversity of the models and frameworks ‘name-checked’ by these agencies, and the free-text comments added, suggest considerable scope for conceptual consolidation in the field, and for further refinement (particularly around operationalisation) to ensure that models meet the needs of agencies.
Propositions for effective knowledge mobilisation
From our interview data and from the literature, we had identified a number of propositions about the nature of the knowledge mobilisation field and about what is needed to advance knowledge mobilisation practice. Respondents were invited to indicate their level of agreement with each, using a five-point Likert scale. In presenting the findings from this section of the survey we have grouped the propositions into four pragmatic categories based on their underlying themes:
- terminology in use (four items)
- theories and frameworks and their utility (three items)
- activities and focus for effective knowledge mobilisation (five items)
- the relationship between literature and practice in knowledge mobilisation (three items).
Terminology in use
There was general agreement across respondents (70%) that it is important to secure broad agreement on key terms before starting knowledge mobilisation activities (Table 14). Somewhat paradoxically, however, there was also broad agreement from nearly 60% of respondents that a plethora of terms around knowledge mobilisation is unavoidable, although around one-quarter of respondents neither agreed nor disagreed with this proposition. There was less consensus around the proposition that ‘knowledge mobilisation activities are distinct from quality improvement work’: while around half of respondents agreed with this statement, just over a quarter disagreed and one-fifth did not feel strongly either way. Similarly, although around half of respondents agreed that ‘knowledge mobilisation is distinct from implementation science’, around one-quarter disagreed and a further quarter did not feel strongly either way.
Response item | Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree |
---|---|---|---|---|---|
Securing broad agreement on key terms is an important starting point in knowledge mobilisation activities (n = 92) | 16% | 54% | 21% | 8% | 1% |
A plethora of terms around knowledge mobilisation is unavoidable (n = 92) | 8% | 51% | 26% | 14% | 1% |
Knowledge mobilisation activities are distinct from quality improvement work (n = 90) | 22% | 31% | 20% | 26% | 1% |
Knowledge mobilisation is distinct from implementation science (n = 92) | 18% | 33% | 25% | 20% | 4% |
Theories and frameworks and their utility
There was strong consensus that ‘organisations need to use a range of knowledge mobilisation frameworks rather than just one’, with over 80% of respondents either agreeing or strongly agreeing with this proposition (Table 15). However, there was no consensus on whether or not existing frameworks are hard to operationalise: while almost half of respondents (47%) agreed that they were, the rest either disagreed with this proposition (4%) or had no strong view either way (48%). Opinions were also mixed on whether the lack of commonly accepted knowledge mobilisation frameworks hindered the development of knowledge mobilisation strategies: the proportion of respondents who agreed with this proposition (37%) matched the proportion with no strong view on it, while around one-quarter actively disagreed, indicating that they did not see the lack of commonly accepted frameworks as a hindrance.
Response item | Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree |
---|---|---|---|---|---|
Organisations need to use a range of knowledge mobilisation models and frameworks rather than just one (n = 91) | 33% | 49% | 14% | 2% | 1% |
Many of the existing knowledge mobilisation frameworks are hard to operationalise (n = 89) | 8% | 39% | 48% | 3% | 1% |
The lack of commonly accepted knowledge mobilisation frameworks hinders the development of knowledge mobilisation strategies (n = 90) | 6% | 31% | 37% | 21% | 6% |
Activities and focus for effective knowledge mobilisation
There was strong consensus that effective knowledge mobilisation would be enhanced by developing the role of service-users/patients (87%), a stronger focus on more supportive organisational environments (85%) and a stronger emphasis on the active promotion of knowledge products rather than on their production alone (85%) (Table 16). The proposition that ‘Knowledge mobilisation activities need to be carefully targeted at particular bodies of knowledge’ received a mixed response, with over half of respondents in agreement but around one-third indicating that they neither agreed nor disagreed and almost 10% actively disagreeing. There were also mixed views about whether more emphasis should be placed on knowledge mobilisation at the organisation or multiorganisation level rather than at the practitioner level: although half of respondents agreed with this proposition, over 10% disagreed and around one-third did not feel strongly.
Response item | Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree |
---|---|---|---|---|---|
The role of service users/patients in knowledge mobilisation is currently underdeveloped (n = 92) | 36% | 51% | 9% | 4% | 0% |
For effective and sustainable knowledge mobilisation we need to focus more on creating supportive organisational environments (n = 91) | 32% | 53% | 12% | 3% | 0% |
Effective knowledge mobilisation needs a stronger emphasis on the active promotion of knowledge products rather than on their production alone (n = 90) | 37% | 48% | 10% | 6% | 0% |
Knowledge mobilisation activities need to be carefully targeted at particular bodies of knowledge (n = 91) | 23% | 36% | 32% | 9% | 0% |
There is currently too much emphasis on knowledge mobilisation at the practitioner level and not enough at the organisation or multiorganisation level (n = 92) | 10% | 40% | 36% | 12% | 2% |
The relationship between literature and practice in knowledge mobilisation
There was broad agreement from around two-thirds of respondents that ‘the theory on knowledge mobilisation as set out in the literature is more advanced than the practice in organisations’ (69%), although around 10% of respondents actively disagreed with this statement (Table 17). One respondent added a comment to expand on their answer:
‘The theory is more advanced than the practice’ is a hard one to answer. I much agree that what we need is more practice, not more theory, which would suggest a response of Agree. But I don’t really agree that the theory is ‘advanced’. Far to the contrary, I find much of it to be hot air. What we need is better theory grounded in more and better practice.
Response item | Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree |
---|---|---|---|---|---|
The theory on knowledge mobilisation as set out in the literature is more advanced than the practice in organisations (n = 91) | 23% | 46% | 21% | 7% | 3% |
The lack of evidence on the impact of knowledge mobilisation approaches is hindering development (n = 90) | 21% | 38% | 31% | 9% | 1% |
Organisations are still trying to make ‘linkage and exchange’ work but the literature has moved on to newer approaches (n = 90) | 10% | 27% | 57% | 6% | 1% |
Opinions were split over the question of whether or not the lack of evidence on the impact of knowledge mobilisation approaches was hindering development: over half of respondents agreed that it was (59%), but almost one-third (31%) had no strong view and 10% disagreed (see Table 17). The proposition that ‘Organisations are still trying to make “linkage and exchange” work but the literature has moved on to newer approaches’ was met with a largely neutral response: although just over one-third agreed that this was the case, a majority of respondents (64%) neither agreed nor disagreed (57%) or actively disagreed (7%).
Several respondents made comments to the effect that it was difficult to choose one option for some propositions because ‘it depends’:
As always it is difficult to answer ‘in general’ as it depends what knowledge is being mobilised, where, between whom and for what purpose . . . A case-by-case approach is often needed and the willingness to be flexible and respond iteratively.
I think some people get too caught up in the jargon and theory rather than the practicalities and the ‘doing’. KT is about communication, relationship building, change management, political manoeuvring, etc. It is also going to be different depending on who has the knowledge (or the need) and who they are trying to make [an] ‘exchange’ with. There are many potential interactions at multiple levels.
Some respondents had more fundamental objections to this section of the survey and commented that it was privileging a particular and ‘academic’ perspective on knowledge mobilisation:
As one who works in an organisation that constantly seeks to generate research evidence for policy and practice, and with a strong focus on dissemination, debate and exchange, it is hard to focus on this academic analysis of ‘the problem’. There is so much that researchers could do to engage with policy and practice, drawing on experience of the many academic units and foundations who do this every day. Beware intellectualising the issues – this of itself puts off policy makers and practitioners!
Several of the propositions in this section concerned factors that might be hindering development of the knowledge mobilisation field (e.g. the lack of commonly accepted frameworks or the lack of evidence of impact of knowledge mobilisation approaches). One respondent added a comment to qualify their answer, saying that it was not ‘lack of evidence on the impact of knowledge mobilisation approaches’ that was the problem but ‘lack of a richer, more sophisticated understanding of the mechanisms of action and mediators/moderators of the approaches’. Other respondents added suggestions of other factors that were hindering knowledge mobilisation:
What’s missing is marketing the impact that using research evidence can have for practitioners.
It’s hard to get people to buy into knowledge mobilization when the knowledge itself shifts, grows, and is contradicted. As this shift is inescapable, the root challenge is to help laypersons feel comfortable with some degree of ambiguity in the knowledge base, and to give them the tools to make some degree of assessment of new findings and conclusions for themselves, or to provide them with a reliable source of analysis of such new findings and conclusions.
Overall, with some exceptions, there was a good degree of consensus (fewer than 10% disagreeing) over the propositions presented (although large numbers often opted for the neutral stance of ‘neither agree nor disagree’). Areas where there was more marked disagreement were around the use of terminology and the availability and operationalisability of theories and frameworks. This reinforces the point made earlier on the need for conceptual consolidation, translation and communication (with due reference to the need to apply reflexively the lessons learnt from this project).
Key factors underpinning agencies’ knowledge mobilisation plans
This section of the survey gave respondents the following instruction:
This section looks at some aspects that your organisation might want to consider in developing knowledge mobilisation activities. For each aspect listed below, please indicate how important you think it is. When choosing an approach, what matters?
Respondents were presented with a list of 13 factors and asked to indicate how important each factor was. In presenting the findings from this section of the survey, we have grouped the 13 factors into three pragmatic categories:
- factors relating to evidence or experience (four items)
- factors relating to customisation (four items)
- factors relating to fit with context (five items).
Few respondents used the ‘Do not know/does not apply’ option in this section (always fewer than 5%): this suggested that respondents had a degree of certainty about the factors that were important in developing a knowledge mobilisation strategy and about their relative importance. However, just over 10% of respondents did not answer this question at all.
Factors relating to evidence or experience
An overwhelming majority thought that it was important to have evidence supporting the knowledge mobilisation approach chosen (94%) and to be able to evaluate the approach in use (93%) (Table 18). One respondent commented that it was necessary to take a pluralistic approach to the definition of supporting ‘evidence’. Prior positive experience with a particular approach in that organisation was important to around three-quarters of respondents. However, there was less interest in whether or not similar organisations were using a particular approach: over one-third of respondents did not think that that was an important factor.
Response item | Very important | Fairly important | Not that important | Do not know/does not apply |
---|---|---|---|---|
That there is evidence to support this approach (n = 92) | 57% | 37% | 5% | 1% |
It is feasible to evaluate the approach in use (n = 92) | 38% | 55% | 5% | 1% |
We have used this approach previously with good results (n = 92) | 30% | 44% | 22% | 4% |
Similar organisations are using this approach (n = 92) | 11% | 50% | 37% | 2% |
Several respondents added comments about the risks of waiting for evidence to support an approach or of waiting for other organisations to adopt an approach first:
I worry that by waiting to see if other organisations are also doing the work . . . or that everything is tailored and well accepted . . . it will be too late! Facilities should start looking for key signals that are appropriate for them so they move more quickly. Part of our problem with adoption of best practices is everyone waiting around for double-blind studies to support every component – we have to move faster. These are all nice to have, but aren’t requirements to have.
The tension between an evidence-backed approach and one which is well-received by end-users can be difficult to manage, and is a major potential obstacle in effective knowledge mobilization. Some degree of compromise is always necessary, but from the perspective of an individual business, perfect evidence-backing must inevitably give way to practicality.
Factors relating to customisation
The four factors relating to customisation were seen as uncontroversial (Table 19), with all but a handful of respondents rating as important that the knowledge mobilisation approach was customised for the target audience(s), that it was tailored to the type of knowledge to be mobilised, that it took full account of the users’ organisational context/s and that it engaged the end-users of the research.
Response item | Very important | Fairly important | Not that important | Do not know/does not apply |
---|---|---|---|---|
The approach is customised for the target audience(s) (n = 91) | 63% | 33% | 1% | 3% |
The approach is tailored to the type of knowledge to be mobilised (n = 92) | 47% | 48% | 2% | 3% |
The approach takes full account of the users’ organisational context/s (n = 92) | 51% | 43% | 3% | 2% |
The approach engages the end-users of the research (n = 92) | 64% | 29% | 2% | 4% |
Factors relating to fit with context
Factors relating to fit with context were also seen by an overwhelming majority of respondents to be important (Table 20). In particular, over half of respondents (60%) thought that it was ‘very important’ that the approach was understood and accepted by key people in the organisation that was developing the knowledge mobilisation strategy. Views were slightly more mixed on the issue of whether or not the chosen approach tapped into formal and informal networks, with around 10% of respondents indicating that this factor was not that important.
Response item | Very important | Fairly important | Not that important | Do not know/does not apply |
---|---|---|---|---|
The approach is understood and accepted by key people in our organisation (n = 92) | 60% | 35% | 3% | 2% |
The approach is appropriate for our organisation’s context (n = 91) | 57% | 38% | 3% | 1% |
The approach makes good use of the perspectives of patients/service users (n = 92) | 45% | 46% | 5% | 4% |
The approach makes effective use of communication technologies (n = 92) | 36% | 53% | 5% | 5% |
The approach taps into existing formal and informal networks (n = 92) | 42% | 42% | 11% | 4% |
Most of the factors identified here were confirmed as being either ‘very’ or ‘fairly’ important by a large majority of respondents in choosing knowledge mobilisation activities. Only two stood out as being more controversial: prior experience with previously good results, and whether or not similar organisations were already using the approach. This suggests that the factors identified may form a useful checklist for agencies as they develop knowledge mobilisation innovation.
Evaluating knowledge mobilisation activities and impact
The final section of the online survey explored agencies’ experiences with evaluation of their knowledge mobilisation activities. Around one-quarter of respondents stated that ‘there is currently little or no formal evaluation of the organisation’s knowledge mobilisation activities’, while 60% of respondents stated that there was ‘some’ evaluation, and just 15% indicated that their organisation had ‘a comprehensive approach’ to evaluating their knowledge mobilisation activities. Some respondents used the free-text comments box to provide additional clarification, for example commenting on the difficulty of assessing impact, or emphasising that while their organisation did have extensive evaluation initiatives, these did not cover all of the relevant activities. One respondent commented on the need to augment the funder’s minimal reporting requirements:
Our UK research-agency funders currently expect only ‘process/throughput’ reporting on our KTE activities; we try to augment that with ‘thicker’ narratives about our experiences, but there is limited interest in reading those – so we tend to tell those stories in our presentations to varied national and international audiences.
Respondents were also invited to consider some of the types of impact that the organisation might want to consider in developing a formal evaluation of their knowledge mobilisation activities (Table 21). Most of the types of impact listed were reported by respondents to be either ‘very important’ or ‘fairly important’. ‘Identifiable evidence-informed policy or service change’ was the least controversial (99% support), with ‘impact on outcomes for service users’ attracting more mixed views: 14% of respondents thought that this was ‘not that important’ or chose the ‘do not know/does not apply’ option.
Response item | Very important | Fairly important | Not that important | Do not know/does not apply |
---|---|---|---|---|
Identifiable evidence-informed policy or service change (n = 91) | 70% | 29% | 1% | 0% |
Changes in research users’ behaviour or normal practice (n = 91) | 69% | 25% | 3% | 2% |
Changes in research users’ attitudes and intentions (n = 92) | 59% | 33% | 5% | 3% |
Process measures of research user engagement, e.g. website visits, downloads, attendance at events, etc. (n = 92) | 45% | 46% | 9% | 1% |
Increased awareness of the research evidence among potential research users (n = 92) | 51% | 38% | 9% | 2% |
Impact on outcomes for service users (n = 92) | 64% | 23% | 7% | 7% |
Additional comments by respondents revealed a range of concerns around evaluation (‘if only we could achieve these lofty aims!’), including the challenges of measuring impact, particularly over shorter time periods, and of isolating the influence of particular initiatives, activities or evidence on outcomes:
Re impact on outcomes for users: this is the most important – but we know it is too far beyond our own sphere of control to measure hence does not apply. Process measures are the easiest for us to capture and we do this as matter of course – but recognise that they don’t tell us the full or even the most important story. We also know from experience that measuring the impacts we feel are very important is very hard to do – and not only because of attribution issues – but these are questions we do seek answers to when we evaluate our programmes.
While outcomes for service users are the key element of practice development, it is often very difficult to establish the link between evidence and improvements in outcome. Too much focus on evidence that is easily linked to direct changes in outcomes can skew the evidence base and shut off potentially important avenues that have a more indirect or longer term impact on outcomes.
So, again, there appears to be a good deal of consensus that the various aspects of impact listed are (very or fairly) important. The challenge will be turning such clarity about goals into practical and effective projects that can uncover those impacts.
Concluding remarks
The data from our web-based survey provide important extensions to the interview data. They paint a picture of diverse terminology and fragmented theoretical thinking. Many of the models, theories and frameworks in the literature were seeing only limited application by agencies, and there was considerable doubt in the field as to whether convergence was possible or desirable. Notwithstanding this theoretical diversity, there was a very wide array of activities already in play, and some emerging consensus around many of the propositions that had emerged from the interview work. There were also areas of agreement on many of the factors that underpin successful knowledge mobilisation. These data, then, provide an important foundation on which agencies can build: exploring divergence and examining the implications of emerging consensus. They may also help to bring about better articulation between theoretical concerns and practical application. The following chapter attempts some of this bridging work by exploring patterns of agency practice in the light of the theoretical thinking uncovered by our review work.
Chapter 6 Archetypes of practice in knowledge mobilisation
Introduction
In Chapter 3 we read across 71 reviews of knowledge mobilisation, and explored the growing set of models and frameworks that have emerged, to create a conceptual map of the issues and challenges facing agencies developing knowledge mobilisation strategies. Chapter 4 provided in-depth accounts from these agencies of the strategies that they have developed, the reasons underlying these strategies, and the learning from evaluation work. Finally, Chapter 5 took a number of observations and propositions from the earlier phases of the work and tested these for wider consensus across a broader sample.
Building and integrating across the preceding three chapters, we now present some integrative observations of the knowledge mobilisation work that we have seen in our agencies and the thinking that has underpinned that work. Working inductively from our various sources of data (website reviews, formal literature, grey literature, depth interviews and web survey), we have derived a number of key archetypes (eight in total) that can be seen to underpin the practices of the agencies with which we engaged.
This chapter presents an account of these archetypes, the reasoning behind them, and the potential uses to which they may be put. Workshop discussion with agencies and other interested parties late in the project suggested that these accounts may provide powerful tools to clarify and communicate agency strategies, activities and reasoning.
The nature of archetypes
Archetypes may be thought of as idealised types or configurations of agencies (i.e. not necessarily actual or real). They provide accounts of an idealised agency that can be used as interpretive heuristics, allowing us to assemble and interpret observations. ‘Idealised’ here contains no normative intent: it draws attention to the potential for creating basic building blocks from which the strategies of actual agencies may be assembled or be seen to be composed.
The focus of this study was on understanding the nature of the innovative activities in the field of knowledge mobilisation as carried out by three major types of agency: large research producers, intermediary agencies and major funders. We were not concerned to map the numbers and types of these agencies across sectors (e.g. health care or social care). Nor were we primarily interested in assessing the extent (depth and breadth) of agency activities across that landscape. To the extent that we did either of these two things it was as a means of exploring deeper processes. Our aim instead was to understand how and why innovation in knowledge mobilisation was created, underpinned and sustained.
It is important to be clear, therefore, that in creating archetypes we were not attempting to develop either a taxonomy or a classification of agencies: these are different tasks that have, in part, been addressed by others. 11,25,130 In this project, we were looking deeper to try to understand the basic architecture from which agencies and their portfolio of knowledge mobilisation activities are assembled. It is to this end that we began to see in our data a number of patterns of practice – bundles of assumptions, actions, configurations and rationales – that recurred in the data. We called these repeated patterns ‘archetypes’.
The literature review work set out in Chapter 3 gave us obvious reference points for mapping agency activities and rationales. The six domains of the conceptual map, and the articulations of key debates, choices and tensions within these, provided anchor points for describing what we saw in agencies in the form of archetypes.
Eight emergent archetypes
Eight archetypes emerged inductively from our data, through repeated reading of the interview transcripts and assessment of these in the light of the structured literature reviewing we had undertaken. In discussions across the research team, with our advisory board and in the participatory workshops, we began to flesh out and name these archetypes.
As would be expected from archetypes, there is considerable overlap between them on many of the six domains of the conceptual map (this contrasts with, for example, a multidimensional approach to classification which would have sought discrete and mutually exclusive accounts). Indeed, some of the archetypes are so similar, differing on perhaps just one key aspect, that they form natural pairs (these are identified below):
- archetype A: producing knowledge (product push)
- archetypes B and C: brokering and intermediation (own research; wider research)
- archetype D: advocating evidence (proselytisers for an evidence-informed world)
- archetypes E and F: researching practice (research into practice; research in practice)
- archetype G: fostering networks (building on existing networks; developing new ones)
- archetype H: advancing knowledge mobilisation (building knowledge about knowledge and knowing).
Each of these archetypes is summarised in Table 22 using the six conceptual map domains; and each is now elaborated a little further. In reading these archetype accounts it should be borne in mind that we are not asserting that any given agency would necessarily match the archetype described: all actual agencies are likely to be a complex and shifting mix of these archetype underpinnings.
Table 22 The eight archetypes and their relationship to the six conceptual map domains

Archetype A | Producing knowledge
Knowledge types | Explicit knowledge; codified knowledge; theoretical knowledge; empirical knowledge
Actions and resources | Production; dissemination
Purpose and goals | Knowledge-driven; problem-solving; often instrumental, but some conceptual, political or tactical uses may be seen (although not the primary purpose)
Connections and configurations | Mostly linear models (push and pull)
People and roles | Materials aimed at practitioners, managers and policy-makers
Context | Attention to context is limited but focus would be on external context

Archetype B | Brokering own research to policy-makers/practitioners
Knowledge types | Explicit knowledge, produced externally to point of use
Actions and resources | Dissemination; training and education; interaction
Purpose and goals | Knowledge-driven; problem-solving; sometimes interactive; sometimes enlightenment and/or conceptual use. Political use may be seen (but is not the primary purpose)
Connections and configurations | Linear (mostly push) and some relationship models
People and roles | Researchers and policy-makers or practitioners seen as central; some intermediary roles
Context | Most emphasis on external context

Archetype C | Brokering wider research to policy-makers/practitioners
Knowledge types | Explicit knowledge; diverse sources and kinds
Actions and resources | Dissemination; training and education; interaction
Purpose and goals | Knowledge-driven; problem-solving; sometimes interactive; sometimes enlightenment and/or conceptual use. Political use may be seen (but is not the primary purpose)
Connections and configurations | Some linear (push), more emphasis on relationship models (linkage and exchange)
People and roles | Researchers and policy-makers or practitioners seen as central; some intermediary roles
Context | Most emphasis on external context

Archetype D | Advocating for the use of evidence
Knowledge types | Theoretical and empirical knowledge
Actions and resources | Training and education; interaction; social influence; incentives and reinforcements
Purpose and goals | Emphasises the problem-solving model and enlightenment and conceptual use
Connections and configurations | Relationship models; some interest in systems models
People and roles | Researchers, intermediaries and policy-makers seen as central
Context | Main focus is on external context

Archetype E | Facilitating implementation of instrumental evidence by helping organisations with the change management process
Knowledge types | Emphasis on explicit knowledge and on research knowledge produced outside the organisation, although will include other types and local knowledge
Actions and resources | Dissemination; interaction; social influence; facilitation; incentives and reinforcements
Purpose and goals | Direct change through project implementation (focus on process measures); some problem-solving and interactivity
Connections and configurations | Relationship models and some emphasis on systems models
People and roles | Researchers, managers and practitioners (and possibly public and service users) seen as central
Context | Pays particular attention to internal context of service delivery organisations

Archetype F | Research and implementation combined; a focus on local learning, co-production and bringing together all stakeholders
Knowledge types | Broad, inclusive range of types; includes research knowledge produced locally
Actions and resources | All mechanisms in use, especially interaction; social influence; facilitation; incentives and reinforcements
Purpose and goals | Knowledge-driven; problem-solving; interactive use. Aims at shaping a wide range of outcomes
Connections and configurations | Relationship models; systems models
People and roles | Researchers, practitioners and managers seen as central; possibly public and service users
Context | Pays attention to internal and external context

Archetype G | Facilitating collaborations and networks around research evidence
Knowledge types | Knowledge produced externally and within the organisation; multiple types of knowledge with emphasis on explicit and actionable knowledge, but attention also paid to tacit knowledge
Actions and resources | Production; dissemination; training and education; interaction; social influence; facilitation
Purpose and goals | Knowledge-driven; problem-solving; interactive use
Connections and configurations | Relationship models
People and roles | Researchers, practitioners and policy-makers seen as central
Context | Emphasis on external and internal contexts

Archetype H | Advancing the field of knowledge mobilisation
Knowledge types | Theoretical; empirical
Actions and resources | Production; dissemination; incentives and reinforcements
Purpose and goals | Enlightenment; conceptual use
Connections and configurations | In decreasing emphasis: linear, relational and systems models
People and roles | Researchers seen as central
Context | Main emphasis on external context
Archetype A: producing knowledge
As Chapter 4 demonstrated, many of the agencies we engaged with emphasised the production of research-based knowledge ‘products’. These included systematic reviews, research summaries, ‘mythbusters’, web portals and the like. Even when agencies saw the importance of more interactive and socially situated approaches, they frequently experienced a pull back to knowledge production, collation and synthesis activities. Such approaches emphasise the value of explicit and codified knowledge, and naturally lead to distinct sets of activities and investments that enable the production process (see Table 22). Thus, the idea of archetype A (produce and share) emerged naturally from our data and could be seen to underpin – explicitly or implicitly – many agency strategies.
Archetypes B and C: brokering and intermediation (own research; wider research)
The rise of more interactive and relational models for research use has been mirrored by changes in emphasis in how agencies seek to share research. Many have sought to go beyond ‘produce and share’ to create more interactive spaces where different kinds of knowledge and expertise can interact, be exchanged, be integrated and/or be transformed. Such approaches may retain some inevitable ‘push’ but also seek to create ‘pull’, and are clearly informed by the ideas of ‘linkage and exchange’ (see Table 22).
Two different kinds of emphasis were seen here in our data. First, there were activities that sought to broker (primarily) new and local research to a wide variety of stakeholders (often with an emphasis on policy-makers); such activities thus often focused on the flow of new research and its promulgation. Second, there were activities that sought to embrace the wider bodies of research available on any given issue; this work focused on the stocks of existing research-based data, whether locally produced or distal in time and place. These observations gave rise to a pair of archetypes, B and C, reflecting brokering activities focused either on the new and the local (archetype B) or cast much more widely (archetype C).
Archetype D: advocating evidence (proselytisers for an evidence-informed world)
As linkage and exchange have become mainstream, and as knowledge application and use have come to be seen as heavily contextual, problem-driven and socially situated, so systems thinking has come to permeate some of the discourses underpinning knowledge mobilisation. Evidence advocates proselytise for a greater role for research-based knowledge, and seek the necessary infrastructural, organisational and cultural changes that might facilitate that role. In this way of thinking, interaction is central, use of research may be as much conceptual (‘enlightenment’) as instrumental, and shaping the wider context becomes a central focus (see Table 22). Archetype D, therefore, represents this aspect of agency work: creating the right knowledge context, in a way that is properly cognisant of the social and organisational complexity of the arena where influence is sought.
Archetypes E and F: researching practice (research into practice; research in practice)
Many of the agencies with which we engaged emphasised the need to ‘roll up the sleeves’ and provide hands-on support for local implementation, developing networks and building local absorptive capacities. Such work emphasises different aspects of the conceptual map (see Table 22) and ‘resolves’ key tensions in particular ways. Again, here we could discern two rather different emphases, which make up archetypes E and F. The first is an emphasis on improving practice through the application of research knowledge produced outside the organisation where change is being sought (archetype E). Often, such approaches emphasise more explicit knowledge and ideas of knowledge transfer, but the adaptation and adoption of that knowledge may still include consideration of local understandings and contingencies. The second emphasis (archetype F) highlights local learning and absorptive capacity development, as well as locally produced, often co-produced, research knowledge. Both of these archetypes blur the distinction between the roles of a knowledge mobilising agency and the normal functions of a service delivery organisation, such as organisation development, service improvement and organisational learning.
Archetype G: fostering networks (existing or new)
As our review of reviews showed, understanding has grown of the socially situated way in which research contributes to knowledge, is melded with existing tacit knowledge, is moulded by an understanding of local preoccupations and contingencies, and is actioned in complex and ‘political’ environments. In response to these perspectives on situated knowing, agencies have often sought to create, develop or mould collaborations and networks that shape and share expertise, and to increase the role that research-based knowledge can play in these networks. Archetype G, therefore, highlights the assumptions and preoccupations that go into this kind of work (see Table 22).
Archetype H: advancing knowledge mobilisation (building knowledge about knowledge and knowing)
There is an irony in the fact that much of the practice in knowledge mobilisation is not yet underpinned by either a coherent body of theorising or extensive empirical evaluation, and some agency work is aimed at addressing these deficiencies and lacunae. Archetype H, therefore, captures those aspects of knowledge mobilisation practice that are about refining the field, building shared understanding and committing to further empirical study. This archetype reflects a reflexive application of ‘knowledge about knowing’ to the knowledge mobilisation field.
Discussion
The eight archetypes identified and outlined above and in Table 22 are presented as one way of understanding the preoccupations and emphases that we saw across different agencies in the field. We do not intend these accounts to be seen as definitive or – worse – as classification categories into which actual agencies can be slotted. Our aim was to show the breadth of knowledge mobilisation approaches observed, not to map individual agencies on to particular archetypes. Thus, our archetypes are idealised types that (may) help to explain the diversity and dynamics of knowledge mobilisation in practice. Real organisations rarely display all of the features of ideal types like our archetypes: instead, they are much more likely to show different features to varying degrees. 196
In producing these accounts we neither make nor imply any normative intent. We would resist attempts to create hierarchies of these accounts, or to suggest which are best or better, even if such suggestions were framed as heavily contingent. The patterns of practice contained in the archetypes present different challenges, have different strengths and are likely to be appropriate in different contexts. 196 As constructed, the archetypes are intended as descriptive and interpretive heuristics. In common with the recent use of archetypes to illuminate knowledge translation practices in the nine first-round CLAHRCs,196 their merit lies in their capacity to aid transparency, and to reveal deeper strands of thinking that, consciously or otherwise, underpin agency activities. 197 We also hope that these archetypes will stimulate communication and debate around agency orientations, design, strategies and priorities. It is our belief that finding ways to make explicit what is often tacit about knowledge mobilisation will aid both theory and practice.
A degree of respondent validation of these archetypes is both possible and desirable, but was beyond the scope of this project. For example, there are risks that we may have been too swayed by a theoretical literature that was, to some extent, sidelined by many of the agencies we spoke with. Nonetheless, as currently presented, the archetypes do have the merits of being both theoretically informed and strongly empirically driven. 197
The archetypes can be used to explore the existing mix of activities in any agency, or indeed across a mix of agencies. Such an analysis could be extended longitudinally, to examine changes over time and the reasons for these. In addition, it would be possible to explore with agencies the degree of coherence or incongruence across the archetypes, and the implications of these for agency activities, future strategies and stakeholder perspectives. Participants at the second workshop suggested that none of the archetypes were inherently in conflict with each other but that there might be resource issues and challenges in reconciling competing priorities (e.g. if some archetypes/activities were seen as higher status).
This project had a particular and determined focus on agencies, partly because of the historical neglect of action at this level, and partly because of our view that it is (for now) the energies and ingenuity of agency activity that will drive better sharing of research. However, this agency perspective also has limitations, in that it addresses the interests and concerns of only one set of actors in the system (agencies of the types that we reviewed: producers, intermediaries and funders). This leaves unaddressed an analysis of the system-wide risks and missed opportunities. An emphasis on agencies, for example, can obscure the need for an interagency and system-wide focus. It may underplay the importance and dynamics of service delivery organisation design, staffing, training, regulation, etc. It also leaves somewhat sidelined any analysis of the research infrastructure and its fitness for purpose. 198 In addition, broader political concerns and the role of the media are similarly not fully visible in the archetypes.
Notwithstanding these concerns, it is our belief that these archetypes are underpinned by an empirical reality as described in earlier chapters. As a result they have an inherent validity and can help us to make sense of the complex and diverse agency accounts set out in Chapter 4. Further elaboration is desirable, with one key area to be addressed being the nature of ‘success’ within the terms of each of the archetypes described.
Finally, in developing the archetypes as communicative or analytic tools for agencies, some translational work may be required to aid intelligibility and acceptability. A recurrent theme from our fieldwork was the somewhat limited connection between a burgeoning and increasingly sophisticated literature and the growing pragmatism and experientially led nature of agency strategies and actions. We have no wish to contribute to this gulf but rather seek ways to bridge it. Careful reframing and rephrasing of the archetypes may be required before they can become effective tools for agencies.
Concluding remarks
The emergence of archetypes from across our data gathering provided a new and unexpected perspective on the role and work of knowledge mobilising agencies. These archetypes help to bridge the theoretical concerns of Chapter 3 and the empirical realities exposed in Chapters 4 and 5. The following, concluding chapter of this report aims to further integrate the findings from the project as a whole, and to draw out some of their implications.
Chapter 7 Discussion
Introduction
This project set out to explore the linkages between an increasingly sophisticated literature on knowledge and knowing, and the practices of knowledge mobilisation. Our focus throughout has been on the knowledge mobilisation practices of three different kinds of ‘agency’: major research producers, intermediary agencies and large research funders. Sometimes these agencies had (and have) a distinct organisational identity (e.g. not-for-profit ‘think tanks’ or specific research funders), while at other times the ‘agency’ was either a subunit of a larger organisation (such as part of a university or health-care organisation) or virtual (e.g. a network of collaborators, albeit one with a distinct identity). While we focused on agencies (not individuals, and not the system), we used this agency focus primarily to explore the activities, innovations and rationales for knowledge mobilisation within those agencies. That is, the agencies per se were not our key interest: it was what those agencies were doing and how they were thinking that were central.
Throughout we have used the term ‘knowledge mobilisation’ to encompass the broadest possible set of activities that involve sharing research. This was a pragmatic choice and was not intended to short-circuit the real, necessary and pervasive debates over terminology. For the most part, the knowledge-literate study participants with whom we engaged acknowledged this as a reasonable pragmatic choice.
Reflections on method
The multimethod, multiphase approach adopted provided us with rich and complementary data. A thorough review of agency websites, together with consultation with sector experts, enabled us to select for more detailed investigation those agencies showing the greatest innovation and scale of activity. The review of reviews led to a conceptual map that helped to shape and organise our thinking. In-depth interviews across 51 agencies provided full and nuanced accounts not just of activities within those agencies but also of the important thinking that lay behind such developments. The web-based survey added further breadth and insight. Throughout, we were guided by an international and expert advisory board, and we also, at times, drew on our own wider networks for guidance (e.g. in ensuring that we were not missing relevant agencies).
Despite the care taken to create a multifaceted view of knowledge mobilisation work, there are still some limitations to what we have done.

First, we focused primarily on health-care-related agencies (appropriately, given the NIHR funding source), and although our scope was international we had only limited access to non-English-speaking agencies (because of our own language limitations). This health-care focus was augmented by investigations in social care and education (the latter with a specific focus on school-age education), but here we deliberately limited ourselves to UK agencies for reasons of practicability and manageability.

Second, in engaging with agencies, we usually spoke in depth to only a single, albeit key, individual, so questions must arise as to the extent to which that individual could encompass the richness of thinking and history that underpinned significant agency activity. Nonetheless, in general, we did not discern any significant difficulty here: almost always, participants seemed able to offer detailed and nuanced accounts. This in turn raises the question of the ‘social desirability’ of the accounts that we were given. Here, we were pleasantly surprised by the willingness and candour of participants in sharing their uncertainties, but of course we can make no claim that these accounts are either complete or ‘accurate’. Nonetheless, the accounts given do provide insights into the discourse of a highly relevant and deeply engaged group of knowledge mobilisation practitioners, and in that sense our informants were capable of shedding significant light on these aspects of the knowledge-sharing world.

Third, the response rate to the web survey (57%) fell short of the rate of over 70% that we had hoped for, an expectation based on the team’s previous experience of conducting surveys. However, a recent meta-analysis of surveys of health professionals199 suggests that survey response rates are typically in the region of 50% and can be much lower; we are, therefore, satisfied that the response rate for the current survey is in line with the norm and that the survey is unlikely to have suffered unduly from non-response bias.
We set out the key research objectives and specific RQs of the project at the end of Chapter 1. In brief, these were mapping the knowledge mobilisation landscape; understanding the theoretical and other underpinnings of the strategies and approaches in use by agencies; and learning from the success, or otherwise, of agency activities. While the preceding chapters have already offered a degree of discussion of the data collected, here we return to these guiding research aims to provide some integrative commentary on the findings.
Mapping the knowledge mobilisation landscape
We found a wide range of knowledge mobilisation practices in our agencies, ranging from the standard and commonplace to the highly unusual and innovative. The distinctive patterns of practice within agencies created significant dissonances, tensions and trade-offs.
Innovative practice in knowledge mobilisation
We described in detail in Chapter 4 (interview data) the knowledge mobilisation strategies developed by the agencies in our study. Although we had purposefully selected the interview agencies on the basis of the scale and degree of innovation of their knowledge mobilisation activities, we found considerable variation between agencies in the extent to which knowledge mobilisation was central to their activities. A recent study of the websites of health research funding agencies in six countries25 found considerable variation in the prominence given to knowledge mobilisation work in the organisations’ structures. In our study, some agencies had at their core the need to encourage and facilitate the use of all of the research that they produced or funded, while for others knowledge mobilisation activities were part of only some of their research projects. When we asked interviewees to describe how they saw their organisation’s role in relation to knowledge mobilisation, the resulting descriptions could be clustered into three broad categories: roles that emphasised research products, roles that emphasised brokering and roles that emphasised fostering implementation of research findings. The role categories were not mutually exclusive and did overlap within individual agencies.
We found that agencies were generally reluctant to claim the label ‘innovative’ and that there was some debate about how innovation should be defined in the knowledge mobilisation field. Nevertheless, many agencies were seeking to develop ways of sharing research that they had not used before. For those roles that emphasised research-based ‘products’, a range of ‘push’ activities was described together with, in some cases, activities aimed at encouraging research user ‘pull’. In developing these products, many agencies worked with their target audiences (e.g. managers, practitioners or policy-makers) to ensure that the format, language and blend of research evidence with other forms of knowledge were appropriate for the potential users. Several interviewees described how such discussions were often an important end in themselves: in enabling policy-makers or practitioners to reach a better understanding of their underlying concerns or questions, in fostering greater understanding of the research process and the limitations of the research base, and in developing relationships of trust between researchers, policy-makers and practitioners. Although some agencies were continuing to use more traditional methods such as producing guidelines or reports, interviewees also described a range of more interactive and creative approaches to sharing research findings, including videos, animations, drama and ‘artist in residence’ posts.
Our survey data augmented these accounts by showing that almost all agencies were producing publications, other written materials or tools for research users. Webinars for policy-makers or practitioners were also increasingly being used, whereas using the arts (e.g. drama, narrative or visual arts) to communicate research findings was rare. Fewer than one-third of survey respondents often used social media to create debate, although over one-third sometimes did.
For some agencies that we interviewed, brokering was the main focus of their approach to knowledge mobilisation. In some cases it was the organisation itself that acted as the knowledge broker; other agencies employed individuals in knowledge brokering and boundary-spanning roles. Agencies were seeking to facilitate and support networks of practitioners, managers and researchers, and to develop a range of events aimed at particular audiences (e.g. government ministers, members of the public or practitioners) to create new dialogues around research. Substantial initiatives included the creation of a media centre to improve media reporting by bringing journalists and researchers together, and the development of a brokered ‘rapid response’ advisory service so that practitioners and managers could ask questions of the research centre. However, the survey data suggested that this was an unusual approach: only 15% of the survey agencies often brokered connections between researchers and journalists, and one-third of organisations never did so. In contrast, almost all agencies in the survey organised events and facilitated networks to bring researchers, policy-makers and practitioners together, and around four-fifths of agencies fostered formal partnerships between universities and non-university organisations. Around two-thirds of agencies employed staff in dedicated intermediary roles (e.g. as knowledge brokers), but fewer agencies arranged secondments of their staff into other organisations or vice versa.
The third cluster of activities we observed was around the implementation of research evidence: agencies worked alongside practitioners in health care, social care or education to enable them to review their services and to use a range of tools and approaches from quality improvement and implementation science to adopt and evaluate new practices in that setting. Such roles typically blended capacity building (e.g. training, secondments or fellowships) with research co-production, the provision of practical tools for research implementation and activities to develop local champions and trainers. Here, the survey data suggest that such activities were relatively common: 55% of respondent agencies indicated that they often carried out activities to support implementation, with a further 38% indicating that the organisation sometimes did so. Moreover, around one-quarter of the respondent agencies stated that they often used participatory research methods (including action research or facilitated implementation) and around 40% of agencies indicated that developing local collaborations for innovation and improvement was a common activity for them.
Our literature work (see Chapter 3) provided a conceptual map that helped us to identify points of similarity and difference between agencies in their approaches to knowledge mobilisation. While there were some areas of emergent consensus (e.g. the need for greater interactivity; awareness of audience needs; the importance of attending to context; and the importance of evaluating knowledge mobilisation activities), there were also evident differences between agencies (and even within agencies) in the approaches taken. These differences were not always apparent to the agencies themselves. There was a fair degree of ignorance about what other agencies in the same field were doing in knowledge mobilisation, and several interviewees and workshop participants expressed regret that there were no established mechanisms for learning from similar agencies. Moreover, at the heart of many agencies’ portfolios of knowledge mobilisation work lie unresolved tensions, trade-offs and ambiguities. As such, while some portfolios of work within agencies appeared coherent, with complementary (and often synergistic) activities, others contained some inherent contradictions (e.g. in whether or not ‘knowledge’ was treated as situated or portable). At times, our informants were aware of and were wrestling with these contradictions; at other times, it seemed that such awareness was limited.
Tensions and trade-offs in knowledge mobilisation
Despite widespread understanding of the ideas of linkage and exchange, many agencies experience a strong pull back to the creation of knowledge products, with a strong emphasis on the rigour of the underlying research base and the credibility of the evidence sources. This rigour–relevance tension lies largely unresolved at the heart of many debates within agencies, and many agencies seem to struggle to break free from ‘push’-dominated activities. Moreover, even as the literature moves on from ‘relational’ models of knowledge mobilisation to embrace ideas of ‘systems’ perspectives,20,27 agencies still struggle to realise and sustain effective ‘linkage and exchange’. 25
Even within ‘push’ activities, tensions are apparent around the extent to which knowledge is treated as situated or portable. This is reflected in how the lessons from research are couched, the degree to which these messages are tailored in collaboration with different potential audiences and the extent to which any bespoke follow-on translation services are offered. There are also tensions and trade-offs when agencies facilitate the mobilisation of practitioner-based knowledge alongside research-based knowledge, for example by providing practitioners with a platform for sharing lessons from practice. These tensions include debates around the standards of evidence that should be employed for different types of advice.
The lessons of more engaged and facilitative work to support research use have been well taken by many agencies, for example supporting implementation in practice or working alongside policy-makers in hybrid roles. However, in moving into this terrain it becomes ever more difficult to see where knowledge mobilisation work ends and ‘normal’ organisational change or policy work begins. Partnership working, collaboratives, secondments and joint-funded initiatives are all used to try to build a shared view and new capacities, but tensions inevitably remain. Sustaining such work through turbulent change in partner organisations, and maintaining commitment to this resource-intensive work in the absence of hard evidence of impact, remain challenges.
We observed another tension in the field around the potential role of service users, members of the public and other ‘lay’ people (e.g. carers or parents) in knowledge mobilisation. An overwhelming majority of survey respondents (87%) agreed that the role of service users or patients in knowledge mobilisation is currently underdeveloped. However, the survey data also show that knowledge mobilisation activities actually involving service users, patients or members of the public were much less common than activities involving practitioners or policy-makers. Moreover, involving service users or members of the public was not a primary focus for the majority of organisations we interviewed, with some interviewees expressing concern at the tensions within their organisation in balancing the different target audiences. This suggests some conflict between what individuals believe is desirable and current practice in many organisations.
Archetypes of agency practice
Exploring agency activities, strategic thinking and the drivers of innovation led us to posit that a number of ‘archetypes’ underpin the diverse landscape of knowledge mobilisation (see Chapter 6). By relating these archetypes to our structured review of the literature, we can see more clearly the tensions and trade-offs in portfolios of work. Our hope is that, with further development work, these archetypes may begin to provide agencies with insight and tools to help them shape future work.
The archetypes have been derived inductively but link explicitly to key literature-based debates. Although we cannot say from this work which of the archetypes are most promising in terms of effective knowledge mobilisation (and in any case, they are likely to be contingently effective), we hope that they may prove a useful tool for future agency development and evaluative work.
The archetypes may also be useful in assessing the completeness of, and complementarities within, the broader field of knowledge mobilisation. That is, while there is no need for any one agency to ‘cover all bases’ in mobilising knowledge, a policy goal might be to encourage an appropriate ecosystem that balances a full portfolio of work across the archetypes.
Parallel debates in education and social care in the UK
In studying knowledge mobilisation approaches we also looked outside the health sector to see what lessons could be learned from the knowledge mobilisation approaches being used in cognate sectors. We chose to include education and social care in the UK in the study because these sectors share a similar political context to UK health care (e.g. they also face extensive media scrutiny and are subject to multiple and successive policy initiatives and reorganisations) and they share similar challenges of applying both instrumental and non-instrumental kinds of research to practice and policy. They are also interesting comparators because unlike health care their approaches to sharing research and encouraging its use have developed somewhat outside the ‘long shadow’ of evidence-based medicine200 and the infrastructure surrounding the implementation of clinical research knowledge. We limited our study of these sectors to the UK context for reasons of manageability within the study time frame.
We found that some interviewees from those sectors perceived that knowledge mobilisation was at a relatively early stage in the social care and education sectors compared with health care, though some did not share this view. It was interesting that for the most part it was interviewees from these sectors (or from cross-sector organisations) who expressed the view that there were lessons that could be learnt from approaches to knowledge mobilisation in other sectors; this was generally not something that was raised by interviewees from the health sector. Moreover, the policy initiatives in the UK to align health and social care organisations more closely did not appear to be having a major impact on agencies’ knowledge mobilisation approaches at the time of our study (e.g. in aligning the work of research agencies across these sectors). Perhaps this was because implementation of these initiatives was not yet far advanced, although interviewees did refer to the considerable organisational turbulence in these sectors and to the new potential audiences for research that would result from these changes.
Our data suggest that the social care and education sectors share some common knowledge mobilisation challenges with the health sector. Although we looked only at education agencies in the UK, the Evidence Informed Policymaking in Education in Europe (EIPEE) project, which mapped activities linking research evidence to policy across Europe, found a similar picture: much national activity but little evaluation and little cross-Europe collaboration. 201 Our data also suggest that the social care sector faces additional challenges because of the diverse range of organisations providing social care services (organisations of different sizes and structures, spread across the public, private and voluntary sectors) and the diverse workforce (with a greater proportion of staff outside professional education programmes). One interviewee described it as a field of interacting circles and no straight lines. Thus, the potential range of target audiences for knowledge mobilisation activities in social care is considerable. We noted that one research agency in social care had begun to focus more on local authority organisations rather than on the diverse voluntary sector organisations, which had struggled to identify their knowledge gaps. We also observed sensitivities in the social care sector around terms such as ‘evidence-based practice’; several agencies had deliberately chosen to use terms such as ‘evidence-informed practice’ instead, in order to give due recognition to practitioner knowledge and experience.
In terms of the knowledge mobilisation approaches that had been developed in the social care and education sectors in the UK, a striking feature was that, compared with the health research organisations in our study (many of which were large, well-funded, long-standing organisations of international renown whose reach spanned large geographical areas), many of the social care and education research organisations were both smaller and younger, yet were punching well above their weight in terms of recognition and influence. We noted a range of interesting initiatives that had been developed, including a media centre to connect researchers and journalists and a parallel centre to connect researchers and practitioners; the development of panels to involve diverse service users in research (e.g. school pupils with communication difficulties) and the spread of co-production approaches; high-profile involvement in national policy initiatives around evidence use; creative use of a ‘route map’ approach to tailor evidence to the needs of practitioners; the use of drama to present research findings; and a range of initiatives to bring together diverse groups who would not otherwise have been able to exchange views on policy and practice issues. Several interviewees expressed the view that developments in research use in these sectors had accelerated rapidly in recent years because of a range of policy developments, and we gained some sense that the field might be moving more rapidly than in health care, which had seen a ‘slow burn’ over several decades.
Understanding the theoretical and other underpinnings of enacted strategies (i.e. the strategies and approaches being used by agencies)
The diversity of terminology and the richness of theoretical development in the academic literature have not always been fully embraced by the agency actors with whom we engaged. While some were very well versed in that literature, others saw it as somewhat disconnected from their day-to-day realities, as ‘jargonistic’ or as ‘too academic’. Data from interviews and the web survey both highlighted the diversity of terminology in use and the hesitancy about its usefulness. In some ways, this diversity and disputation reflects the conclusions from some of the literature: that convergence and agreement on terminology and underlying theory are unlikely to occur. 24,72,202
Even though theoretical considerations often failed to gain much explicit purchase, many agency actors appeared to have been indirectly influenced by some of the ideas in the knowledge mobilisation literature, such as ideas around linkage and exchange. However, in developing their knowledge mobilisation strategies it was clear that agencies were heavily influenced by a wide range of strategic and pragmatic considerations, such as stakeholder needs and funder requirements. In contrast, there were some notable absences in our discussions about key influences: we did not uncover a major role for the public and service users in shaping or delivering knowledge mobilisation; and there was only a limited role played by learning from evaluation work. Despite the diverse influences on agencies’ knowledge mobilisation strategies, we did discover an emerging consensus around a number of propositions about the underpinnings of effective strategies.
Connect and disconnect with the theoretical literature
Central to this project was creating an understanding of the ways in which the burgeoning academic and conceptual literature influenced or underpinned practices in knowledge mobilisation. Both the interview and the web survey data show that many of the models, theories and frameworks were being used by agencies to some extent but usually in eclectic ways. We often had to prompt interviewees to name any theories, models or frameworks in use. In the web survey, 7 of the 26 models listed were reported as being used by 25% or more of the respondents, and all but three were said to have been used by at least 5% of respondents. Some agencies were drawing explicitly on models and frameworks from the knowledge mobilisation and/or other literatures in developing their activities. Others had developed their own models and tools either from scratch or as adaptations of published work. Use of several models in combination was common, and in the survey there was a strong consensus (over 80% of respondents) that ‘organisations need to use a range of knowledge mobilisation frameworks rather than just one’. There was some evidence of ‘retrofitting’ where agencies developed a mix of activities and then drew on the literature later to provide support for these activities and enable further refinement.
In general, the influence of the models, theories and frameworks, where reported, tended to be at a conceptual rather than at a more practical level. It was not uncommon for agency actors to report that they drew more generally on the work of specific authors and key ideas from the literature and that this had shaped their thinking and influenced their activities in indirect and diffuse ways. To some extent this was because it was difficult to use the literature in more instrumental ways. In the interviews we observed a fair degree of frustration with the limitations of the existing models, theories and frameworks, which many interviewees perceived as overly complex and hard to operationalise. There was less consensus in the web survey about whether or not existing frameworks are hard to operationalise: almost half of the respondents agreed that this was the case, but a similar proportion did not have a strong view either way. Similarly, opinions were mixed on whether or not the lack of commonly accepted knowledge mobilisation frameworks hindered the development of knowledge mobilisation strategies. The survey data would, therefore, suggest a degree of acceptance of the need to use multiple frameworks in an eclectic way and would, on the whole, suggest low demand for (or low expectations of) better or more practical frameworks in the future.
Some interviewees made further comments on the limitations of the knowledge mobilisation literature. They highlighted that empirical support for many models, theories and frameworks is rather thin. Together with the participants in our two workshops, they also noted that there are ideas and evidence in other literatures that have not yet fed through to the knowledge mobilisation literature that might be helpful. This includes developments in cognitive psychology (e.g. ‘thinking fast and slow’) and in behavioural economics. A recent review shows that social constructivist learning theories have not yet been applied much in the knowledge mobilisation field. 203
Other shapers of agency strategy
Given the limitations of the existing literature, it was not surprising to find that some agencies had developed their knowledge mobilisation approaches somewhat independently of the published literature, taking a fairly pragmatic approach. Looking across the agencies included in our interviews, funders and other powerful stakeholders were significant drivers of agency strategy; this influence resulted from the remit agencies were given by their funder or governing body. Agencies also reported being influenced strongly by high-profile policy initiatives. Changing conditions in the sector(s) in which an agency operated were also significant shapers of its strategy (e.g. new challenges arising from economic constraints or political developments). Such sector changes had prompted some agencies to change the focus or content of their activities and sometimes to widen their target audiences to embrace new stakeholders.
The interests, experience and enthusiasm of key individuals had also often played a strong role in shaping an agency’s approach to knowledge mobilisation. Agencies built on local experience and tacit knowledge, exploited opportunities as they arose, and responded to personal inclinations and local capabilities. Interviewees referred to ‘ad hoc’ initiatives, and knowledge mobilisation strategies tended to be more emergent than planned, displaying many of the features associated with ‘muddling through’. 204
Notable absences
There were some notable absences in our informant data: things that might have been expected to be influential, but were not. Three potential influences stood out: involving service users or members of the public, learning from formal evaluations and learning from others.
As noted earlier, both the interviewees and the survey respondents acknowledged that the role of service users and the public in knowledge mobilisation activity is currently underdeveloped. There was little reported evidence that these concerns had shaped the knowledge mobilisation strategies of the majority of agencies included in our study. While sympathetic to the principle of involvement, most agencies struggled to develop such activities in the face of competing priorities, the absence of individuals with specific skills and experience in this area, and the desire to avoid tokenism.
Interviewees reported that some agencies had made changes in their knowledge mobilisation approaches in the light of formal evaluations of their knowledge mobilisation activity, but (as we discuss in the next section) such evaluations were relatively rare. While many organisations drew on earlier experience, and sometimes on more formal evaluations, in developing their knowledge mobilisation activities, few of the agencies we interviewed (from any sector) had mechanisms for routinely learning from other organisations. This was the case even for those agencies that actively sought, as part of their knowledge mobilisation activities, to spread ‘promising practices’ more widely across their geographical or sectoral jurisdictions: these agencies did not often look externally for inspiration or learning to inform their own knowledge mobilisation activities. A similar pattern emerged from the survey: when asked to consider factors that might be important in developing knowledge mobilisation approaches, over one-third of respondents did not think it important to consider whether other organisations were using a particular approach, and only around 10% thought that this was very important.
Emerging consensus
From our interview data we developed a number of propositions about the underpinnings of effective knowledge mobilisation, which we then sought to test in the survey. Many of these factors (relating to the role of evidence and experience, the importance of customisation and the need to fit activities to context) received high or very high levels of agreement (see Tables 16–18). Taken together, these propositions start to provide a blueprint for successful knowledge mobilisation, at least in the eyes of our survey respondents (independent empirical verification of their impacts is much harder to come by).
Learning from the success or otherwise of agency activities
A key goal of the project was to see how agencies’ knowledge mobilisation work was being evaluated (formally and informally) and to extract the learning from these evaluative efforts.
An absence of evidence
Both the knowledge mobilisation literature we reviewed and our data from this study show that there is a paucity of evaluative data on knowledge mobilisation approaches. As we discuss in detail in Chapter 3, there are many aspects of knowledge mobilisation of interest to agencies that currently have a limited evidence base. One such activity, which has nevertheless been promoted in the literature, is the use of knowledge brokers and other ‘mediator’ roles. Another is the use of portals, websites and online interactive spaces. In addition to the considerable gaps in relation to many individual knowledge mobilisation activities, there are significant gaps in the knowledge base around what agencies at this level (i.e. major research funders, producers and intermediaries) should do; whole-systems and integrated approaches to knowledge mobilisation; informatics interventions to support research use; the impact of research knowledge infrastructures; sustainability and scale-up of interventions; and the potential role of service users, patients and members of the public in knowledge mobilisation.
There is also an absence of explicit data about the configurations, actions or resources needed to underpin knowledge mobilisation; with a few exceptions, the well-known knowledge mobilisation models have been subject to only limited empirical testing, validation or verification. Linear approaches have been evaluated more often than complex ones, even though the latter are increasingly recognised as more likely to depict knowledge mobilisation processes usefully. All of this means that much of the knowledge mobilisation activity seen at present may owe more to face validity, acceptability to stakeholders and the availability of local expertise or other resources than to a coherent evidence base. 11
The challenges of evaluation
Our data suggest that this picture is unlikely to change in the short to medium term. Many agencies struggle with the very real challenges around effective evaluation of their knowledge mobilisation activities: how to discern and measure impact in a proportionate way that does not impose undue additional burdens on stakeholders; how to disentangle the contribution of particular programmes or initiatives from other contemporaneous influences; over what time scales impact should be assessed; and how to resource evaluation adequately, given that it is often ad hoc, retrospective or done ‘off the side of the desk’.
A survey of 265 directors of applied health or economic/social research organisations in Canada conducted in 2001 found that only around 1 in 10 organisations did any kind of evaluation of their knowledge translation work. 55 Over a decade later, the levels of evaluative activity shown in our study in a range of countries would suggest some improvement in this situation, but with continuing challenges. The survey data show that few organisations are able to carry out much evaluation of their knowledge mobilisation activities. Although 60% of respondents said that some evaluation was carried out, around one-quarter of organisations said that there was currently little or no evaluation of their knowledge mobilisation activities, and only 15% of respondent organisations said that they had a comprehensive programme of evaluation. It was interesting, however, that survey respondents were split over the question of whether or not the lack of evidence on the impact of knowledge mobilisation approaches was hindering development: although over half (59%) agreed that it was, 10% disagreed and almost one-third had no strong view.
Some promising avenues
The substantial challenges faced by organisations in conducting robust evaluations were reflected in the finding, described in detail in Chapter 4, that substantive published evaluations of knowledge mobilisation activities in the agencies in our study were relatively few in number. Our conclusions here are, therefore, drawn from 18 studies across seven agencies.
These evaluations suggest that the most promising approaches to knowledge mobilisation tend to tailor research-based resources to the preferences and access needs of different audiences and that interactive approaches are generally more effective than passive dissemination, although there remains a role for traditional approaches (e.g. some agencies found that service organisations did use their evidence briefings without any interaction with the agency). Ongoing networking activities between researchers, policy-makers and practitioners also seemed promising. Partnerships between practitioners and researchers, working together to facilitate evidence use (e.g. using action planning and service improvement methods), and co-production approaches (involving potential users in all stages of the research process) seem fruitful, but require substantial and sustained resources, clear processes and efforts to ensure that the partnerships are meaningful to all parties. Other key findings from the evaluations include the need to work with other research producers, funders and intermediaries in the sector to avoid duplication of knowledge mobilisation activities. This is likely to be difficult to achieve given the limited interaction between agencies that interviewees described and given the lack of clear evidence about the most effective composition and configuration of the wider knowledge mobilisation ‘eco-system’.
The limitations of the current evidence base and the challenges faced by agencies in evaluating their knowledge mobilisation activities have important implications. There is a pressing need for considered, logic-informed evaluations of knowledge mobilisation activities to be built into these activities as standard in order to build up an evidence base over time. Given the crucial role played by context–mechanism interactions (as discussed in Chapter 3), it may be unrealistic and inappropriate for agencies to wait for generic guidance about successful knowledge mobilisation approaches to emerge from the literature. It may be more useful for agencies to conduct and simultaneously evaluate (partially) evidence-informed knowledge mobilisation activities in their own setting and to be less reticent about spreading some of the informal learning within their own sector and to other sectors. This will require considerable support for those agencies which are currently carrying out only limited evaluation of their activities and will require concerted effort on a range of fronts to encourage funders and research commissioners not only to provide adequate resources for knowledge mobilisation activities but also to embed robust evaluations alongside the knowledge mobilisation activities that they fund. There are resources that agencies can use in designing evaluations (see Table 2), but not all of these are well known. As part of our ongoing work with agencies, we are planning to discuss with them ways of sharing the learning from some of the knowledge mobilisation initiatives described to us and ways of providing support for those agencies wanting to do more and better evaluations.
Formative and experiential learning
Although the number of robust substantive evaluations of knowledge mobilisation activities is relatively low, there is a rich seam of formative learning from practical experience, as we have described in Chapters 4 and 5. Key themes from the formative learning that we heard about in the survey and in the interviews included the importance of, and the challenges of, attending to context: the need for great flexibility in working effectively in rapidly changing political environments in which the landscape of key players (both individuals and organisations) and organisational priorities frequently shifted. Allied to this theme was interviewees’ hard-won experience about the degree of persistence and stamina required to make a positive and persuasive case for the value of implementing research in policy and practice, particularly in the face of what some interviewees perceived as growing disillusionment on the part of some policy-makers and practitioners about the lack of a definitive or stable evidence base around key service challenges.
Interviewees commented on the wide range of skills and experience required for individuals to be successful and to thrive in this challenging environment: extensive interpersonal skills around active listening, reflection, interpretation and negotiation; tolerance of ambiguity and uncertainty; a willingness to experiment and to be flexible in changing direction and trying new approaches. This was not a field for novices and it was not a field for conscripts. Many interviewees had learnt that it was more effective to work with those researchers and practitioners who had an aptitude and enthusiasm for activities around sharing research, rather than trying to coerce those whose skills and attitudes were more suited to ‘backroom’ or more traditional research or practice roles. Indeed, there had been some unforeseen consequences from the movement towards greater engagement of academic researchers with the worlds of policy and practice. We heard examples of a ‘backlash’ against these developments, with some researchers showing hostility towards those academics actively engaged in knowledge mobilisation and some reinforcement of the status of traditional research career pathways, potentially jeopardising the career prospects of researchers working in research application roles. It may be most fruitful in academia to promote a common acknowledgement of the importance and value of sharing and applying appropriate bodies of research in policy and practice settings, but not to expect that all researchers will have the skills or desire to actively engage in knowledge mobilisation activities. Similarly, there may be value in expecting a basic level of engagement with research knowledge as a core component of being a practitioner in the health, education or social care sectors but reserving more technical or active engagement with research (e.g. in knowledge synthesis or in taking leading roles in co-production approaches) for those practitioners with particular skills and aspirations in this area.
We also observed some sensitivities in interviews and in survey responses around the extent to which knowledge mobilisation should be seen as an academic field as opposed to having as its starting point the practicalities of the ‘doing’ in policy and service contexts. A theme of many comments was the need to avoid ‘over-intellectualising’ the issues: while theory was important, an analysis that appeared too academic or the use of specialist terms and arcane jargon around knowledge mobilisation could be off-putting to policy-makers and practitioners and could hinder fruitful working around sharing research.
In a similar vein, survey and interview responses underlined the importance of balancing the desire for robust evidence and academic rigour with pragmatism and flexibility. Compromise was often necessary to balance the evidence with what was acceptable to end-users, and it was often necessary to take action in the absence of a strong evidence base in the interests of making some progress on pressing service or policy issues in the short to medium term.
A key lesson for many of the agencies in our study had been the importance of ‘user pull’: that in the majority of cases, the agenda had to be led by, or at least endorsed by, the potential users and not solely by the research producers. This required a degree of humility on the part of research funders and research producers: a willingness to accept that decisions in policy and practice settings emerged in response to a range of contextual factors and a range of different kinds of knowledge, of which research-based knowledge was only ever one part (and often a relatively small part). Co-produced research knowledge was more likely to be used in policy or practice, but this was not guaranteed, and effective co-production demanded commitment, persistence and sufficient resources.
There is a wealth of practical experience and rich learning that organisations have gained from their knowledge mobilisation work. The challenge is that much of this learning is currently ‘locked up’ within agencies and not widely shared for a range of reasons, including the additional resources required by busy organisations to collate, distil and share this learning. As part of our ongoing work with agencies, we hope to facilitate activities that will enable greater sharing of this experience.
Implications for effective knowledge mobilisation
The data gathered in this study provide a rich mapping of current approaches to knowledge mobilisation among three different kinds of agency: major research producers, intermediary agencies and large research funders. Our analysis of the theoretical and other underpinnings of the strategies in use, and of the learning gained from the success or otherwise of agency activities, suggests some important implications for the development of effective knowledge mobilisation.
- There is merit in looking across sectors when developing knowledge mobilisation approaches. Agencies in different sectors share some common knowledge mobilisation challenges and there are many parallels in the approaches that have been adopted to date. Nevertheless, there are also some important differences in approach, and a range of interesting agency-specific initiatives have been identified in this study from which others could learn. For reasons of practicality and manageability, we restricted our cross-sector mapping of approaches to the health, social care and education sectors, but the potential for cross-sector learning is not restricted to these, as other sectors and policy domains (such as international development and environmental research) are also grappling with knowledge mobilisation challenges.
- Cross-sector, and indeed interagency, learning is at present limited and there is scope to facilitate this. Agencies in the health sector appear on the whole to have less awareness of the potential for cross-sector learning than those operating in the education and social care sectors. There are potential benefits to be gained from developing mechanisms to enable agencies to (a) collate and share their formative learning and (b) engage with and learn more from each other.
- There are insights and potential benefits to be gained from reflecting conceptually on current knowledge mobilisation activities. We have found it revealing to explore agencies’ knowledge mobilisation activities through the lens of a number of archetypes of agency practice. There was a great deal of interest in these archetypes when they were shared with those who participated in our second workshop. We intend to build on this interest and develop a reflective tool for agencies based on an extended and refined analysis of the archetypes. We are hopeful that such a tool will prove useful as agencies reflect on the future development of their knowledge mobilisation activities. The archetypes and an adapted version of the reflective tool are also likely to be useful in assessing the completeness and complementarities in the broader field of knowledge mobilisation. Such reflection might encourage an appropriate ecosystem that balances a full portfolio of knowledge mobilisation work across the archetypes.
- There is scope for some constructive dialogue around terminology and theoretical development in the academic literature. The diversity of terminology and the richness of theoretical development in the academic literature have not always been fully embraced by knowledge mobilisation agency actors. This is understandable given the complexity of some of the models, theories and frameworks, and the difficulties of operationalising many of them. Nevertheless, many of the agency actors we have engaged with are interested in ways of thinking about knowledge mobilisation issues, and there is a need to develop a way of talking about these issues that enables people to share experiences and learning. We see potential in a user-friendly version of our conceptual map of the literature (see Chapter 3) as a resource to be used in developing an interactive dialogue around the implications of the main ideas and debates encountered within and across each of the six domains of this map.
- There is a need for sustained attention and support for the evaluation of knowledge mobilisation activities. Agencies are struggling with the evaluation of their knowledge mobilisation activities, and without sustained attention and support this will continue to be underdeveloped, to the detriment of agency learning and of the development of knowledge mobilisation activities more broadly. To start to address these concerns, agencies that are not doing much evaluation of their knowledge mobilisation activities at present need help with conceptualising this and with applying appropriate evaluation approaches and tools. Sharing and learning from those agencies that have developed good evaluation frameworks and have experience of applying these successfully will be an important part of this. Evaluation is not a cost-free activity and it is not something that can be done ‘off the side of the desk’. Funders and commissioners need to be encouraged not only to fund knowledge mobilisation activities (with appropriate resources and time scales) but also to embed and provide resources for robust evaluations alongside the activities that they fund.
Implications for future research on knowledge mobilisation
The literature on knowledge mobilisation has grown considerably over the last two decades, but there are still many gaps in our knowledge and areas of research that are underdeveloped. There is a need to build on existing research, and we would echo the plea of Greenhalgh et al. 58 that the relatively scarce resources for research in this area are not squandered on projects that are unlikely to add much to the existing knowledge base. For example, how much will be added by another study of the barriers to research use? Our study has identified a number of areas where further research should pay dividends.
- Drawing out knowledge mobilisation lessons from a wider range of emergent literatures. Although (as noted in Chapter 3) the knowledge mobilisation literature already draws on diverse disciplinary underpinnings and assumptions, there is a need to learn from a wider range of emergent literatures that show potential to enrich our understanding. Interviewees, workshop participants and members of our advisory board suggested several candidate literatures, including cognitive psychology (e.g. ‘thinking fast and slow’) and behavioural economics.
- More evaluation of knowledge mobilisation approaches. We have commented many times on the paucity of evaluations of many knowledge mobilisation activities and approaches, particularly evaluations that would inform what major research funders, producers and intermediaries do in relation to knowledge mobilisation. We have highlighted the need for evaluations of the effectiveness of knowledge brokers and other mediator roles, and of the use of portals, websites and online interactive spaces. Careful consideration needs to be given to the design of such evaluations, and we see a pressing need for logic-informed evaluations that pay due attention to context–mechanism–outcome (CMO) interactions.
- Research on scaling up and sustaining knowledge mobilisation activities and approaches. Many knowledge mobilisation activities tend to be project based, relatively short term and somewhat disconnected from other activities. There is a need for research on how best to scale up and sustain these activities and approaches.
- Further evaluation of the existing approaches for assessing research use and impact. Driven by the demands of research funders and the performance management regime in UK higher education, there has been a rapid growth in the assessment of research use and impact. There is a need to investigate the impact of the existing approaches to assessing research use and impact. What impact (positive and negative) do they have on various stakeholders? What impact do they have on incentives to engage in knowledge mobilisation activities? What impact do they have on the shape of these activities?
- More research on applying systems theory to knowledge mobilisation. In Chapters 3 and 4 we noted increasing support for the idea that knowledge mobilisation approaches and activities should be guided by systems thinking, but that practical tools and strategies have yet to emerge. There is a need for further research in this area that is focused on developing operational models and tools, and convincing case examples.
- Further research on knowledge mobilisation archetypes and what combinations and configurations of archetypes work well. Alongside engaging agencies in a reflective discussion about different archetypes of knowledge mobilisation practice, there is a need to refine these further. There is also a need to consider what combinations and configurations of archetypes work well: what constitutes an effective eco-system of agencies at this level, and how can they work in complementary ways?
Concluding remarks
Having explored the linkages between the literature on knowledge and knowing, and the practices of different kinds of agency (major research producers, intermediaries and large research funders), we have identified connections and disconnections, and tensions and trade-offs. We have also developed an enhanced awareness of the many factors that shape knowledge mobilisation practices. Some of our data suggest that knowledge mobilisation theory and practice are proceeding largely in parallel, with relatively few points of connection. This disconnection occurs partly because theory is not providing many evidence-informed models and tools that can be readily used to develop practical activities. Practice is also rarely being captured in formal evaluations that feed back into theory development. The relative lack of evaluative work leads to the irony (noted by some of our interviewees and in the knowledge mobilisation literature) that knowledge mobilisation activities aimed at improving the use of research-based knowledge are themselves not informed by a good evidence base. There is a danger, however, that this is too bleak a picture of current theory–practice linkages, and it also underplays opportunities for building on current connections and learning from these. Even though theoretical models, theories and frameworks have often failed to gain much explicit purchase, many of our agencies reported being influenced in more diffuse ways by the ideas and debates in the knowledge mobilisation literature. In addition, despite the very modest levels of current evaluative work, agencies have developed a wealth of formative and experiential learning, which can be collated, distilled and shared. Although we have identified a wide range of knowledge mobilisation practices (often quite diverse and sometimes highly innovative) in our agencies, and three broad clusters of how agencies saw their roles, we have also noted an emerging consensus from agencies about what is needed to advance knowledge mobilisation practice.
Our findings reveal opportunities for further developing knowledge mobilisation practice and research, and it is our plan to continue to work with others to capitalise on these opportunities. The potential benefits of interagency and cross-sector learning are not currently being exploited, and there is scope to facilitate this. We see benefits in facilitating agencies to reflect conceptually on their knowledge mobilisation strategies and practice, and we plan to develop tailored versions of both the conceptual map and the archetypes which can be used as reflective tools and act as a focus for constructive dialogue. Agencies would also benefit from sustained support for the evaluation of their knowledge mobilisation activities. This support encompasses not only the sharing and development of evaluation expertise but also enhanced funding for such evaluations. Finally, we have identified five other priority areas for further funding and research: drawing out knowledge mobilisation lessons from a wider range of emergent literatures; research on scaling up and sustaining knowledge mobilisation activities and approaches; further evaluation of existing approaches for assessing research use and impact; research on applying systems theory to knowledge mobilisation; and further study of knowledge mobilisation archetypes and what combinations and configurations of archetypes work well. If we, and others, can make headway on at least some of these fronts, we are hopeful that the linkages between research effort, knowledge enhancement and informed action will begin to work to better effect.
Acknowledgements
We thank several colleagues for their contributions: Ms Tricia Tooman (Research Assistant, Social Dimensions of Health Institute, Universities of Dundee and St Andrews) conducted some of the interviews and assisted with administration for the second workshop; Ms Rosanne Bell (Administrative Officer, Social Dimensions of Health Institute) and Dr Fred Comerford (Institute Manager, Social Dimensions of Health Institute) provided administration and financial administration support, respectively; Ms Jennifer Kerr (IT Support Officer, School of Management, University of St Andrews) set up the project website; and Dr Joanne Coyle (Research Fellow, Social Dimensions of Health Institute) provided administrative support with the online survey.
We are very grateful to the advisory board for their expert guidance throughout the study: Dr Allan Best, Dr Jean-Louis Denis, Dr Jonathan Lomas, Dr Jacomine Ravensbergen, Professor Sally Redman, Professor Thomas Rundall, Ms Jacqueline Tetroe and Dr Vicky Ward.
At an early stage in the study we received expert advice on key organisations from many contacts in the education, social care and health-care sectors. We thank the Social Care Institute for Excellence for their assistance with this task.
We thank Deena Maggs (The King’s Fund Information and Library Service) and Janice Tripney (EPPI-Centre, Institute of Education, University of London) for the literature searches. We also thank Professor Vikki Entwistle, Chair in Health Services Research and Ethics, University of Aberdeen, for comments on public involvement in research.
Finally, we are very grateful to all those who took part in the interviews, in the two stakeholder workshops and in the online survey, and to all those who responded to our JISCMail requests for further information.
Contributions of authors
Professor Huw TO Davies (Professor of Health Care Policy and Management) led the project and was involved in all stages of the project and in the preparation of the report.
Dr Alison E Powell (Research Fellow) was involved in all stages of the project and in the preparation of the report.
Professor Sandra M Nutley (Professor of Public Policy and Management) was involved in all stages of the project and in the preparation of the report.
Disclaimers
This report presents independent research funded by the National Institute for Health Research (NIHR). The views and opinions expressed by authors in this publication are those of the authors and do not necessarily reflect those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health. If there are verbatim quotations included in this publication the views and opinions expressed by the interviewees are those of the interviewees and do not necessarily reflect those of the authors, those of the NHS, the NIHR, NETSCC, the HS&DR programme or the Department of Health.
References
- Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006;26:13-24. http://dx.doi.org/10.1002/chp.47.
- Woolf SH. The meaning of translational research and why it matters. J Am Med Assoc 2008;299:211-13. http://dx.doi.org/10.1001/jama.2007.26.
- Davies HTO, Nutley S, Walter I. Why ‘knowledge transfer’ is misconceived for applied social research. J Health Serv Res Policy 2008;13:188-90. http://dx.doi.org/10.1258/jhsrp.2008.008055.
- Ferlie E, Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q 2001;79:281-315. http://dx.doi.org/10.1111/1468-0009.00206.
- Ramanujam R, Rousseau DM. The challenges are organizational not just clinical. J Organ Behav 2006;27:811-27. http://dx.doi.org/10.1002/job.411.
- Marshall M, Pronovost P, Dixon-Woods M. Promotion of improvement as a science. Lancet 2013;381:419-21. http://dx.doi.org/10.1016/S0140-6736(12)61850-9.
- Boaden R, Harvey G, Moxham C, Proudlove N. Quality Improvement: Theory and Practice in Healthcare. Coventry: University of Manchester/NHS Institute for Innovation and Improvement; 2008.
- Powell AE, Rushmer RK, Davies HTO. A Systematic Narrative Review of Quality Improvement Models in Health Care. Edinburgh: NHS Quality Improvement Scotland; 2009.
- Lomas J, Fulop N, Gagnon D, Allen P. On being a good listener: setting priorities for applied health services research. Milbank Q 2003;81:363-88. http://dx.doi.org/10.1111/1468-0009.t01-1-00060.
- Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ 2006;84:620-8. http://dx.doi.org/10.2471/BLT.06.030312.
- Tetroe J, Graham ID, Foy R, Robinson N, Eccles MP, Wensing M, et al. Health research funding agencies’ support and promotion of knowledge translation: an international study. Milbank Q 2008;86:125-55. http://dx.doi.org/10.1111/j.1468-0009.2007.00515.x.
- Nutley S, Morton S, Jung T, Boaz A. Evidence and policy in six European countries: diverse approaches and common challenges. Evid Policy 2010;6:131-44. http://dx.doi.org/10.1332/174426410X502275.
- Straus SE, Tetroe J, Graham ID. Knowledge Translation in Health Care: Moving from Evidence to Practice. Chichester: John Wiley; 2013.
- Axford N, Morpeth L. Evidence-based programs in children’s services: a critical appraisal. Child Youth Serv Rev 2013;35:268-77. http://dx.doi.org/10.1016/j.childyouth.2012.10.017.
- Nutley SM, Walter I, Davies HTO. Using Evidence: How Research Can Inform Public Services. Bristol: Policy Press; 2007.
- Ferlie E, Crilly T, Jashapara A, Peckham A. Knowledge mobilisation in healthcare: a critical review of health sector and generic management literature. Soc Sci Med 2012;74:1297-304. http://dx.doi.org/10.1016/j.socscimed.2011.11.042.
- Kitto SC, Sargeant J, Reeves S, Silver I. Towards a sociology of knowledge translation: the importance of being dis-interested in knowledge translation. Adv in Health Sci Educ 2012;17:289-99. http://dx.doi.org/10.1007/s10459-011-9303-6.
- Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q 2007;85:729-68. http://dx.doi.org/10.1111/j.1468-0009.2007.00506.x.
- Ward V, House A, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy 2009;14:156-64. http://dx.doi.org/10.1258/jhsrp.2009.008120.
- Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy 2010;6:145-59. http://dx.doi.org/10.1332/174426410X502284.
- Contandriopoulos D, Lemire M, Denis J-L, Tremblay É. Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature. Milbank Q 2010;88:444-83. http://dx.doi.org/10.1111/j.1468-0009.2010.00608.x.
- McKibbon KA, Lokker C, Wilczynski NL, Ciliska D, Dobbins M, Davis DA, et al. A cross-sectional study of the number and frequency of terms used to refer to knowledge translation in a body of health literature in 2006: a Tower of Babel? Implement Sci 2010;5. http://dx.doi.org/10.1186/1748-5908-5-16.
- Colquhoun H, Leeman J, Michie S, Lokker C, Bragge P, Hempel S, et al. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems and policies. Implement Sci 2014;9. http://dx.doi.org/10.1186/1748-5908-9-51.
- Greenhalgh T, Wieringa S. Is it time to drop the ‘knowledge translation’ metaphor? A critical literature review. J R Soc Med 2011;104:501-9. http://dx.doi.org/10.1258/jrsm.2011.110285.
- Smits P, Denis J-L. How research funding agencies support science integration into policy and practice: an international overview. Implement Sci 2014;9. http://dx.doi.org/10.1186/1748-5908-9-28.
- Lemay MA, Sá C. Complexity sciences: towards an alternative approach to understanding the use of academic research. Evid Policy 2012;8:473-94. http://dx.doi.org/10.1332/174426412X660133.
- Willis CD, Best A, Riley B, Herbert CP, Millar J, Howland D. Systems thinking for transformational change in health. Evid Policy 2014;10:113-26. http://dx.doi.org/10.1332/174426413X662815.
- Fox DM. History matters for understanding knowledge exchange. Milbank Q 2010;88:484-91. http://dx.doi.org/10.1111/j.1468-0009.2010.00609.x.
- Brown C. The policy agora: how power inequalities affect the interaction between researchers and policy makers. Evid Policy 2014;10:421-38. http://dx.doi.org/10.1332/174426514X672353.
- Oborn E, Barrett M, Racko G. Knowledge translation in healthcare: incorporating theories of learning and knowledge from the management literature. J Health Organ Manag 2013;27:412-31. http://dx.doi.org/10.1108/JHOM-01-2012-0004.
- Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581-629. http://dx.doi.org/10.1111/j.0887-378X.2004.00325.x.
- Oborn E, Barrett M, Racko G. Knowledge Translation in Healthcare: A Review of the Literature. Cambridge: Judge Business School; 2010.
- Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231); 2005.
- Meyers DC, Durlak JA, Wandersman A. The Quality Implementation Framework: a synthesis of critical steps in the implementation process. Am J Community Psychol 2012;50:462-80. http://dx.doi.org/10.1007/s10464-012-9522-x.
- Davies B, Edwards N, Straus SE, Tetroe J, Graham ID. Knowledge Translation in Health Care: Moving from Evidence to Practice. Chichester: John Wiley; 2013.
- Straus S, Tetroe JM, Graham ID. Knowledge translation is the use of knowledge in health care decision making. J Clin Epidemiol 2011;64:6-10. http://dx.doi.org/10.1016/j.jclinepi.2009.08.016.
- Ward V, Smith S, House A, Hamer S. Exploring knowledge exchange: a useful framework for practice and policy. Soc Sci Med 2012;74:297-304. http://dx.doi.org/10.1016/j.socscimed.2011.09.021.
- Kilburn R, Frearson M. The Time Is Ripe for Evidence. 2013. www.alliance4usefulevidence.org/the-time-is-ripe-for-evidence/ (accessed June 2014).
- Davies HTO, Nutley SM, Smith PC. What Works? Evidence-Based Policy and Practice in Public Services. Bristol: Policy Press; 2000.
- Shah A, Jacobs DO, Martins H, Harker M, Menezes A, McCready M, et al. DADOS-Survey: an open source application for CHERRIES-compliant Web surveys. BMC Med Inform Decis Making 2006;6. http://dx.doi.org/10.1186/1472-6947-6-34.
- Chizawsky LLK, Estabrooks CA, Sales AE. The feasibility of web-based surveys as a data collection tool: a process evaluation. Appl Nurs Res 2011;24:37-44. http://dx.doi.org/10.1016/j.apnr.2009.03.006.
- Matteson K, Anderson BL, Pinto SB, Lopes V, Schulkin J, Clark MA. Surveying ourselves: examining the use of a web-based approach for a physician survey. Eval Health Prof 2011;34:448-63. http://dx.doi.org/10.1177/0163278710391086.
- Millar MM, Dillman DA. Improving response to web and mixed-mode surveys. Public Opin Q 2011;75:249-69. http://dx.doi.org/10.1093/poq/nfr003.
- Dillman DA, Smyth JD. Design effects in the transition to web-based surveys. Am J Prev Med 2007;32:S90-6. http://dx.doi.org/10.1016/j.amepre.2007.03.008.
- Burns KEA, Duffett M, Kho ME, Meade MO, Adhikari NKJ, Sinuff T, et al. A guide for the design and conduct of self-administered surveys of clinicians. Can Med Assoc J 2008;179:245-52. http://dx.doi.org/10.1503/cmaj.080372.
- Kho ME, Rawski E, Makarski J, Brouwers MC. Recruitment of multiple stakeholders to health services research: lessons from the front lines. BMC Health Serv Res 2010;10. http://dx.doi.org/10.1186/1472-6963-10-123.
- Sánchez-Fernández J, Muñoz-Leiva F, Montoro-Rios FJ. Improving retention rate and response quality in web-based surveys. Comput Hum Behav 2012;28:507-14. http://dx.doi.org/10.1016/j.chb.2011.10.023.
- Sauermann H, Roach M. Increasing web survey response rates in innovation research: an experimental study of static and dynamic contact design features. Res Policy 2013;42:273-86. http://dx.doi.org/10.1016/j.respol.2012.05.003.
- Langley GJ, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide. San Francisco, CA: Jossey-Bass; 1996.
- Kilo CM. A framework for collaborative improvement: lessons from the Institute for Healthcare Improvement’s Breakthrough Series. Qual Manag Health Care 1998;6:1-13. http://dx.doi.org/10.1097/00019514-199806040-00001.
- Logan J, Graham ID. Towards a comprehensive model of health care research use. Sci Commun 1998;20:227-46. http://dx.doi.org/10.1177/1075547098020002004.
- Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care 1998;7:149-58. http://dx.doi.org/10.1136/qshc.7.3.149.
- Lomas J. Using linkage and exchange to move research into policy at a Canadian foundation. Health Affairs 2000;19:236-40. http://dx.doi.org/10.1377/hlthaff.19.3.236.
- Farkas M, Jette A, Tennstedt S, Haley S, Quinn V. Knowledge dissemination and utilization in gerontology: an organizing framework. Gerontologist 2003;43:47-56.
- Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J, Knowledge Transfer Study Group. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q 2003;81:221-48. http://dx.doi.org/10.1111/1468-0009.t01-1-00052.
- Gabbay J, le May A. Evidence-based guidelines or collectively constructed ‘mindlines’? Ethnographic study of knowledge management in primary care. BMJ 2004;329:1013-17. http://dx.doi.org/10.1136/bmj.329.7473.1013.
- Gabbay J, le May A. Practice-Based Evidence for Healthcare: Clinical Mindlines. Abingdon: Routledge; 2011.
- Greenhalgh T, Robert G, Bate P, Kyriakidou O, Macfarlane F, Peacock R. How to Spread Good Ideas: A Systematic Review of the Literature on Diffusion, Dissemination and Sustainability of Innovations in Health Service Delivery and Organisation. London: NCCSDO; 2004.
- Levin B. Making research matter more. Educ Policy Anal Arch 2004;12:1-22. http://dx.doi.org/10.14507/epaa.v12n56.2004.
- Walter I, Nutley SM, Percy-Smith J, McNeish D, Frost S. Improving the Use of Research in Social Care: Knowledge Review 7. Social Care Institute for Excellence/Policy Press; 2004.
- Baumbusch J, Reimer Kirkham S, Basu Khan K, McDonald H, Semeniuk P, Tan E, et al. Pursuing common agendas: a collaborative model for knowledge translation between research and practice in clinical settings. Res Nurs Health 2008;31:131-40. http://dx.doi.org/10.1002/nur.20242.
- Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: the Interactive Systems Framework for Dissemination and Implementation. Am J Commun Psychol 2008;41:171-81. http://dx.doi.org/10.1007/s10464-008-9174-z.
- Best A, Hiatt RA, Norman CD. Knowledge integration: conceptualizing communications in cancer control systems. Patient Educ Counsel 2008;71:319-27. http://dx.doi.org/10.1016/j.pec.2008.02.013.
- Best A, Terpstra J, Moor G, Riley B, Norman C, Glasgow R. Building knowledge integration systems for evidence-informed decisions. J Health Organ Manag 2009;23:627-41. http://dx.doi.org/10.1108/14777260911001644.
- Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4. http://dx.doi.org/10.1186/1748-5908-4-50.
- Kontos P, Poland B. Mapping new theoretical and methodological terrain for knowledge translation: contributions from critical realism and the arts. Implement Sci 2009;4.
- May C, Finch T. Implementing, embedding and integrating practices: an outline of normalization process theory. Sociology 2009;43:535-54. http://dx.doi.org/10.1177/0038038509103208.
- McWilliam C, Kothari A, Ward-Griffin C, Forbes D, Leipert B, South West Community Care Access Centre Home Care Collaboration. Evolving the theory and praxis of knowledge translation through social interaction: a social phenomenological study. Implement Sci 2009;4. http://dx.doi.org/10.1186/1748-5908-4-26.
- Wilson KM, Brady TJ, Lesesne C, NCCDPHP Work Group on Translation. An organizing framework for translation in public health: the Knowledge to Action Framework. Prev Chronic Dis 2011;8.
- Gholami J, Majdzadeh R, Nedjat S, Nedjat S, Maleki K, Ashoorkhani M, et al. How should we assess knowledge translation in research organizations: designing a knowledge translation self-assessment tool for research institutes (SATORI). Health Res Policy Syst 2011;9. http://dx.doi.org/10.1186/1478-4505-9-10.
- Education Endowment Foundation. School Improvement Model. n.d. http://educationendowmentfoundation.org.uk/projects/research-leads-improving-students-education/ (accessed 14 July 2014).
- Estabrooks CA, Thompson DS, Lovely JE, Hofmeyer A. A guide to knowledge translation theory. J Contin Educ Health Prof 2006;26:25-36. http://dx.doi.org/10.1002/chp.48.
- Graham ID, Tetroe J, KT Theories Research Group. Some theoretical underpinnings of knowledge translation. Acad Emerg Med 2007;14:936-41. http://dx.doi.org/10.1111/j.1553-2712.2007.tb02369.x.
- Davies HTO, Powell A, Ward V, Smith S. Supporting NHS Scotland in Developing a New Knowledge-to-Action Model. Edinburgh: NHS Education for Scotland; 2011.
- Levin B. Mobilising research knowledge in education. Lond Rev Educ 2011;9:15-26. http://dx.doi.org/10.1080/14748460.2011.550431.
- Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci 2012;7. http://dx.doi.org/10.1186/1748-5908-7-50.
- Powell BJ, McMillen JC, Proctor EK, Carpenter C, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev 2012;69:123-57. http://dx.doi.org/10.1177/1077558711430690.
- Holmes BJ, Finegood DT, Riley BL, Best A, Brownson R, Colditz G, et al. Dissemination and Implementation Research in Health: Translating Science To Practice. Oxford: Oxford University Press; 2012.
- Levin B, Cooper A, Fenwick T, Farrell L. Knowledge Mobilization and Educational Research – Politics, Languages and Responsibilities. London: Routledge; 2012.
- Pentland D, Forsyth K, Maciver D, Walsh M, Murray R, Irvine L, et al. Key characteristics of knowledge transfer and exchange in healthcare: integrative literature review. J Adv Nurs 2011;67:1408-25. http://dx.doi.org/10.1111/j.1365-2648.2011.05631.x.
- Helfrich CD, Damschroder LJ, Hagedorn HJ, Daggett G, Sahay A, Ritchie M, et al. A critical synthesis of literature on the promoting action on research implementation in health services (PARIHS) framework. Implement Sci 2010;5. http://dx.doi.org/10.1186/1748-5908-5-82.
- Stetler CB, Damschroder LJ, Helfrich CD, Hagedorn HJ. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-99.
- Rycroft-Malone J, Seers K, Chandler J, Hawkes CA, Crichton N, Allen C, et al. The role of evidence, context and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci 2013;8. http://dx.doi.org/10.1186/1748-5908-8-28.
- Goldman J, Meuser J, Lawrie L, Rogers J, Reeves S. Interprofessional primary care protocols: a strategy to promote an evidence-based approach to teamwork and the delivery of care. J Interprof Care 2010;24:653-66. http://dx.doi.org/10.3109/13561820903550697.
- Campbell B. Applying knowledge to generate action: a community-based knowledge translation framework. J Contin Educ Health Prof 2010;30:65-71. http://dx.doi.org/10.1002/chp.20058.
- Straus SE, Graham ID, Taylor M, Lockyer J. Development of a mentorship strategy: a knowledge translation case study. J Contin Educ Health Prof 2008;28:117-22. http://dx.doi.org/10.1002/chp.179.
- Scott SD, Osmond M, O’Leary K, Graham I, Grimshaw J, Klassen T, et al. Barriers and supports to implementation of MDI/spacer use in nine Canadian pediatric emergency departments: a qualitative study. Implement Sci 2009;4. http://dx.doi.org/10.1186/1748-5908-4-65.
- Ilott I, Gerrish K, Booth A, Field B. Testing the Consolidated Framework for Implementation Research on health care innovations from South Yorkshire. J Eval Clin Pract 2013;19:915-24.
- Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst 2011;9. http://dx.doi.org/10.1186/1478-4505-9-26.
- Sudsawad P. Knowledge Translation: Introduction to Models, Strategies and Measures. Austin, TX: National Center for the Dissemination of Disability Research; 2007.
- Levin B. Thinking about Knowledge Mobilization. n.d.
- Oborn E. Facilitating implementation of the translational research pipeline in neurological rehabilitation. Curr Opin Neurol 2012;25:676-81. http://dx.doi.org/10.1097/WCO.0b013e32835a35f2.
- Walter I, Nutley S, Davies HTO. What works to promote evidence-based practice? A cross-sector review. Evid Policy 2005;1:335-63. http://dx.doi.org/10.1332/1744264054851612.
- Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximising the impact of systematic reviews in healthcare decision making: a systematic scoping review of knowledge-translation resources. Milbank Q 2011;89:131-56. http://dx.doi.org/10.1111/j.1468-0009.2011.00622.x.
- Perrier L, Mrklas K, Lavis JN, Straus SE. Interventions encouraging the use of systematic reviews by health policymakers and managers: a systematic review. Implement Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-43.
- Boyko JA, Lavis JN, Abelson J, Dobbins M, Carter N. Deliberative dialogues as a mechanism for knowledge translation and exchange in health systems decision-making. Soc Sci Med 2012;75:1938-45. http://dx.doi.org/10.1016/j.socscimed.2012.06.016.
- Murphy K, Fafard P, O’Campo P, Dunn JR. Rethinking Social Epidemiology: Towards A Science of Change. Dordrecht: Springer; 2012.
- Pitchforth E, Nolte E, Miani C, Winpenny E. Options for Effective Mechanisms to Support Evidence-Informed Policymaking in RMNCH in Asia and the Pacific. Cambridge: RAND Europe; 2013.
- Bhattacharyya O, Hayden L, Zwarenstein M, Straus SE, Tetroe J, Graham ID. Knowledge Translation in Health Care: Moving from Evidence to Practice. Chichester: John Wiley; 2013.
- Bhattacharyya O, Reeves S, Zwarenstein M. What is implementation research? Rationale, concepts and practices. Res Soc Work Pract 2009;19:491-502. http://dx.doi.org/10.1177/1049731509335528.
- Davies H, Nutley S. Learning More about How Research-Based Knowledge gets Used – Guidance in the Development of New Empirical Research. Discussion paper for the William T. Grant Foundation. New York, NY: William T. Grant Foundation; n.d.
- Popay J, Collins M. The Public Involvement Impact Assessment Framework Guidance. Universities of Lancaster, Liverpool and Exeter; 2014.
- Knowledge into Action for NHS Scotland: Methods, Strategic National Projects and an Evaluation Framework: Report to NHS Education for Scotland and Healthcare Improvement Scotland. Edinburgh: Centre for Research on Families and Relationships; 2013.
- Research Uptake: A Guide for DFID-Funded Research Programmes. London: DFID; 2013.
- Guthrie S, Wamae W, Diepeveen S, Wooding S, Grant J. Measuring Research: A Guide To Research Evaluation Frameworks and Tools. Cambridge: RAND Europe; 2013.
- Kok M, Schuit A. Contribution mapping: a method for mapping the contribution of research to enhance its impact. Health Res Policy Syst 2012;10. http://dx.doi.org/10.1186/1478-4505-10-21.
- Bhattacharyya OK, Estey EA, Zwarenstein M. Methodologies to evaluate the effectiveness of knowledge translation interventions: a primer for researchers and health care managers. J Clin Epidemiol 2011;64:32-40. http://dx.doi.org/10.1016/j.jclinepi.2010.02.022.
- InSource Resource Group. The CAPTURE Project: Reviewing KTE Indicators and Data Collection Tools. 2010.
- Straus SE, Tetroe J, Graham ID, Zwarenstein M, Bhattacharyya O, Shepperd S. Monitoring use of knowledge and evaluating outcomes. CMAJ 2010;182:E94-8. http://dx.doi.org/10.1503/cmaj.081335.
- Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Ottawa, ON: Canadian Academy of Health Sciences; 2009.
- Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv Res 2006;6. http://dx.doi.org/10.1186/1472-6963-6-134.
- Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst 2003;1. http://dx.doi.org/10.1186/1478-4505-1-2.
- Estabrooks CA, Squires JE, Strandberg E, Nilsson-Kajermo K, Scott SD, Profetto-McGrath J, et al. Towards better measures of research utilization: a collaborative study in Canada and Sweden. J Adv Nurs 2011;67:1705-18. http://dx.doi.org/10.1111/j.1365-2648.2011.05610.x.
- Weiss CH. The many meanings of research utilization. Public Admin Rev 1979;39:426-31. http://dx.doi.org/10.2307/3109916.
- Scott SD, Albrecht L, O’Leary K, Ball GD, Hartling L, Hofmeyer A, et al. Systematic review of knowledge translation strategies in the allied health professions. Implement Sci 2012;7. http://dx.doi.org/10.1186/1748-5908-7-70.
- Weiss CH. The haphazard connection: social science and public policy. Int J Educ Res 1995;23:137-50. http://dx.doi.org/10.1016/0883-0355(95)91498-6.
- Nutley SM, Walter I, Davies HTO. Promoting evidence-based practice: models and mechanisms from cross-sector review. Res Soc Work Pract 2009;19:552-9. http://dx.doi.org/10.1177/1049731509335496.
- Denis J-L, Lehoux P, Straus SE, Tetroe J, Graham ID. Knowledge Translation in Health Care: Moving from Evidence to Practice. Chichester: John Wiley; 2013.
- Nonaka I. A dynamic theory of organizational knowledge creation. Organ Sci 1994;5:14-37. http://dx.doi.org/10.1287/orsc.5.1.14.
- Greenhalgh T. What is this knowledge that we seek to ‘exchange’? Milbank Q 2010;88:492-9. http://dx.doi.org/10.1111/j.1468-0009.2010.00610.x.
- Nutley SM, Powell AE, Davies HTO. What Counts as Good Evidence?. London: Alliance for Useful Evidence; 2013.
- Petticrew M, Roberts H. Evidence, hierarchies and typologies: horses for courses. J Epidemiol Commun Health 2003;57:527-9. http://dx.doi.org/10.1136/jech.57.7.527.
- Bagshaw S, Bellomo R. The need to reform our assessment of evidence from clinical trials: a commentary. Philos Ethics Humanit Med 2008;3. http://dx.doi.org/10.1186/1747-5341-3-23.
- Wilson MG, Lavis JN, Travers R, Rourke SB. Community-based knowledge transfer and exchange: helping community-based organizations link research to action. Implement Sci 2010;5. http://dx.doi.org/10.1186/1748-5908-5-33.
- Fazekas M, Burns T. Exploring the Complex Interaction between Governance and Knowledge in Education. OECD Publishing; 2012.
- Holmes B, Scarrow G, Schellenberg M. Translating evidence into practice: the role of health research funders. Implement Sci 2012;7. http://dx.doi.org/10.1186/1748-5908-7-39.
- Cordingley P. Research and evidence-informed practice: focusing on practice and practitioners. Cambridge J Education 2008;38:37-52. http://dx.doi.org/10.1080/03057640801889964.
- Brown C. The ‘policy preferences model’: a new perspective on how researchers can facilitate the take-up of evidence by educational policy makers. Evid Policy 2012;8:455-72. http://dx.doi.org/10.1332/174426412X660106.
- Ovretveit J, Hempel S, Magnabosco JL, Mittman BS, Rubenstein LV, Ganz DA. Guidance for research–practice partnerships (R-PPs) and collaborative research. J Health Organ Manag 2014;28:115-26. http://dx.doi.org/10.1108/JHOM-08-2013-0164.
- Cooper A. Knowledge mobilisation in education across Canada: a cross case analysis of 44 research brokering organisations. Evid Policy 2014;10:29-59. http://dx.doi.org/10.1332/174426413X662806.
- Honig MI, Venkateswaran N. School-central office relationships in evidence use: understanding evidence use as a systems problem. Am J Educ 2012;118:199-222. http://dx.doi.org/10.1086/663282.
- Ettelt S, Mays N, Nolte E. Policy-research linkage: what we have learned from providing a rapid response facility for international healthcare comparisons to the Department of Health in England. Evid Policy 2013;9:245-54. http://dx.doi.org/10.1332/174426413X662608.
- Riley BL. Knowledge integration in public health: a rapid review using systems thinking. Evid Policy 2012;8:417-32. http://dx.doi.org/10.1332/174426412X660089.
- Willis CD, Mitton C, Gordon J, Best A. System tools for system change. BMJ Qual Saf 2012;21:250-62. http://dx.doi.org/10.1136/bmjqs-2011-000482.
- Sebba J. An exploratory review of the role of research mediators in social science. Evid Policy 2013;9:391-408. http://dx.doi.org/10.1332/174426413X662743.
- Cooper A. Knowledge Mobilization Intermediaries in Education. n.d.
- Cameron D, Russell DJ, Rivard L, Darrah J, Palisano R. Knowledge brokering in children’s rehabilitation organizations: perspectives from administrators. J Contin Educ Health Prof 2011;31:28-33. http://dx.doi.org/10.1002/chp.20098.
- Chew S, Armstrong N, Martin G. Institutionalising knowledge brokering as a sustainable knowledge translation solution in healthcare: how can it work in practice? Evid Policy 2013;9:335-51. http://dx.doi.org/10.1332/174426413X662734.
- Lightowler C, Knight C. Sustaining knowledge exchange and research impact in the social sciences and humanities: investing in knowledge broker roles in UK universities. Evid Policy 2013;9:317-34. http://dx.doi.org/10.1332/174426413X662644.
- Meagher L. The invisible made visible: using impact evaluations to illuminate and inform the role of knowledge intermediaries. Evid Policy 2013;9:409-18.
- Phipps D, Morton S. Qualities of knowledge brokers: reflections from practice. Evid Policy 2013;9:255-65. http://dx.doi.org/10.1332/174426413X667784.
- Ellen ME, Leon G, Bouchard G, Lavis JN, Ouimet M, Grimshaw JM. What supports do health system organizations have in place to facilitate evidence-informed decision-making? A qualitative study. Implement Sci 2013;8. http://dx.doi.org/10.1186/1748-5908-8-84.
- Thompson GN, Estabrooks CA, Degner LF. Clarifying the concepts in knowledge transfer: a literature review. J Adv Nurs 2006;53:691-701. http://dx.doi.org/10.1111/j.1365-2648.2006.03775.x.
- Stacey D, Hill S. In: Straus SE, Tetroe J, Graham ID, editors. Knowledge Translation in Health Care: Moving from Evidence to Practice. Chichester: John Wiley; 2013.
- Walter I, Nutley S, Davies H. Research Impact: A Cross Sector Review. St Andrews: Research Unit for Research Utilisation, University of St Andrews; 2003.
- Wilson PM, Petticrew M, Calnan MW, Nazareth I. Disseminating research findings: what should researchers do? A systematic scoping review of conceptual frameworks. Implement Sci 2010;5. http://dx.doi.org/10.1186/1748-5908-5-91.
- Tabak R, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337-50. http://dx.doi.org/10.1016/j.amepre.2012.05.024.
- Boaz A, Baeza J, Fraser A, European Implementation Score Collaborative Group (EIS). Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes 2011;4. http://dx.doi.org/10.1186/1756-0500-4-212.
- LaRocca R, Yost J, Dobbins M, Ciliska D, Butt M. The effectiveness of knowledge translation strategies used in public health: a systematic review. BMC Public Health 2012;12. http://dx.doi.org/10.1186/1471-2458-12-751.
- Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev 2012;69:123-57. http://dx.doi.org/10.1177/1077558711430690.
- Perspectives On Context. London: The Health Foundation; 2014.
- Pettigrew AM. The Awakening Giant: Continuity and Change in Imperial Chemical Industries. Oxford: Basil Blackwell; 1985.
- Pettigrew A, McKee L, Ferlie E. Understanding change in the NHS. Public Admin 1988;66:297-317. http://dx.doi.org/10.1111/j.1467-9299.1988.tb00696.x.
- Armenakis AA, Bedeian AG. Organizational change: a review of theory and research in the 1990s (Yearly Review of Management). J Manag 1999;25:293-307. http://dx.doi.org/10.1177/014920639902500303.
- Pawson R, Tilley N. Realistic Evaluation. London: Sage; 1997.
- Moat KA, Lavis JN, Abelson J. How contexts and issues influence the use of policy-relevant research syntheses: a critical interpretive synthesis. Milbank Q 2013;91:604-48. http://dx.doi.org/10.1111/1468-0009.12026.
- Dopson S. Debate: why does knowledge stick? What we can learn from the case of evidence-based health care. Public Money Manag 2006;26:85-6. http://dx.doi.org/10.1111/j.1467-9302.2006.00505.x.
- Nicolini D, Powell J, Conville P, Martinez-Solano L. Managing knowledge in the healthcare sector: a review. Int J Manag Rev 2008;10:245-63. http://dx.doi.org/10.1111/j.1468-2370.2007.00219.x.
- Bowen S, Graham ID. In: Straus SE, Tetroe J, Graham ID, editors. Knowledge Translation in Health Care: Moving from Evidence to Practice. Chichester: John Wiley; 2013.
- Evans D. Patient and public involvement in research in the English NHS: a documentary analysis of the complex interplay of evidence and policy. Evid Policy 2013;10:361-77. http://dx.doi.org/10.1332/174426413X662770.
- Snape D, Kirkham J, Preston J, Popay J, Britten N, Collins M, et al. Exploring areas of consensus and conflict around values underpinning public involvement in health and social care research: a modified Delphi study. BMJ Open 2014;4. http://dx.doi.org/10.1136/bmjopen-2013-004217.
- Mathie E, Wilson P, Poland F, McNeilly E, Howe A, Staniszewska S, et al. Consumer involvement in health research: a UK scoping and survey. Int J Consum Stud 2014;38:35-44. http://dx.doi.org/10.1111/ijcs.12072.
- Abelson J, Blacksher EA, Li KK, Boesveld SE, Goold SD. Public deliberation in health policy and bioethics: mapping an emerging, interdisciplinary field. J Public Deliberation 2013;9:1-35.
- Oxman AD, Lewin S, Lavis JN, Fretheim A. SUPPORT Tools for evidence-informed health Policymaking (STP) 15: engaging the public in evidence-informed policymaking. Health Res Policy Syst 2009;7. http://dx.doi.org/10.1186/1478-4505-7-S1-S15.
- Boivin A, Lehoux P, Burgers J, Grol R. What are the key ingredients for effective public involvement in health care improvement and policy decisions? A randomized trial process evaluation. Milbank Q 2014;92:319-50. http://dx.doi.org/10.1111/1468-0009.12060.
- Brett J, Staniszewska S, Mockford C, Seers K, Herron-Marx S, Bayliss H. The PIRICOM Study: A Systematic Review of the Conceptualisation, Measurement, Impact and Outcomes of Patients and Public Involvement in Health and Social Care Research. London: UKCRC; 2010.
- NHS Bradford and Airedale. Translating Research into Practice in Leeds and Bradford (TRiP-LaB) n.d. www.york.ac.uk/res/triplab/documents/outline.pdf (accessed 14 July 2014).
- Bray K, Laker S, Ilott I, Gerrish K. After Action Review: An Evaluation Tool. Sheffield: CLAHRC for South Yorkshire; 2013.
- Ovretveit J. Widespread focused improvement: lessons from international health for spreading specific improvements to health services in high-income countries. Int J Qual Health Care 2011;23:239-46. http://dx.doi.org/10.1093/intqhc/mzr018.
- Robinson V. Student-Centered Leadership. San Francisco, CA: Jossey-Bass; 2011.
- Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. Int J Evid Based Healthcare 2005;3:207-15. http://dx.doi.org/10.1111/j.1479-6988.2005.00026.x.
- Buxton M, Hanney S. How can payback from health research be assessed? J Health Serv Res Policy 1996;1:35-43.
- US Department of Veterans Affairs. QUERI – Quality Enhancement Research Initiative n.d. www.queri.research.va.gov/about (accessed 14 July 2014).
- Argyris C, Schon D. Theory In Practice: Increasing Professional Effectiveness. San Francisco, CA: Jossey-Bass; 1974.
- McLean R, Tucker J. Evaluation of CIHR’s Knowledge Translation Funding Program. Ottawa, ON: Canadian Institutes of Health Research; 2013.
- McLean RKD, Graham ID, Bosompra K, Choudhry Y, Coen SE, McLeod M, et al. Understanding the performance and impact of public knowledge translation funding interventions: protocol for an evaluation of Canadian Institutes of Health Research knowledge translation funding programs. Implement Sci 2012;7. http://dx.doi.org/10.1186/1748-5908-7-57.
- McMurray A. Centre for Effective Services: Review of Outcomes and Impact 2008–2011. Dublin: Centre for Effective Services; 2012.
- Morton S. Knowledge Exchange At CRFR: Past, Present, Future. Edinburgh: Centre for Research on Families and Relationships; 2011.
- Growing Up in Scotland: Assessing Contribution and Impact. Edinburgh: Centre for Research on Families and Relationships; 2012.
- About Families: Project Report. Edinburgh: Centre for Research on Families and Relationships; 2013.
- About Families: What Have We Learned About Evidence To Action?. Edinburgh: Centre for Research on Families and Relationships; 2013.
- Morton S. Assessing Research Impact: A Case Study of Participatory Research. Edinburgh: Centre for Research on Families and Relationships; 2013.
- Taking Stock: A Summary of ESRC’s Work to Evaluate the Impact of Research on Policy and Practice. London: ESRC; 2009.
- Branching Out: New Directions in Impact Evaluation from the ESRC’s Evaluation Committee. London: ESRC; 2011.
- Cultivating Connections: Innovation and Consolidation in the ESRC’s Impact Evaluation Programme. London: ESRC; 2013.
- Michael Smith Foundation for Health Research Health of Population Networks: A Strategy for Knowledge Translation and Exchange. Final Report to the KTE Working Group. Waterloo: Knowledge Impact Strategies Consulting Ltd; 2009.
- Edwards A. SCIE’s Profile and Impact: Summary of Findings and SCIE’s Response. London: Social Care Institute for Excellence; 2007.
- Goldman R. Research into the Impact of SCIE. London: Social Care Institute for Excellence; 2007.
- Towards Co-Production: Taking Participation to the Next Level. London: Social Care Institute for Excellence; 2012.
- Wensing M, Bal R, Friele R. Knowledge implementation in healthcare practice: a view from the Netherlands. BMJ Qual Saf 2012;21:439-42. http://dx.doi.org/10.1136/bmjqs-2011-000540.
- Øvretveit J, Klazinga N. Linking research to practice: the organisation and implementation of the Netherlands health and social care improvement programmes. Health Policy 2013;109:175-86. http://dx.doi.org/10.1016/j.healthpol.2012.11.005.
- Øvretveit J, Klazinga N. Learning from large-scale quality improvement through comparisons. Int J Qual Health Care 2012;24:463-9. http://dx.doi.org/10.1093/intqhc/mzs046.
- National Institute for Health Research. Evaluation, Trials and Studies: HS&DR Research Theme – CLAHRCs n.d. www.nets.nihr.ac.uk/projects/browse/?collection=netscc&browsetype=hs_dr_theme&browse_view=CLAHRCs (accessed 14 July 2014).
- Stetler CB, Mittman BS, Francis J. Overview of the VA Quality Enhancement Research Initiative (QUERI) and QUERI theme articles: QUERI series. Implement Sci 2008;3. http://dx.doi.org/10.1186/1748-5908-3-8.
- Smith J, Walshe K, Hunter DJ. The ‘redisorganisation’ of the NHS. BMJ 2001;323:1262-3. http://dx.doi.org/10.1136/bmj.323.7324.1262.
- Oborn E, Barrett M, Prince K, Racko G. Balancing exploration and exploitation in transferring research into practice: a comparison of five knowledge translation entity archetypes. Implement Sci 2013;8. http://dx.doi.org/10.1186/1748-5908-8-104.
- Greenwood R, Hinings CR. Organizational design types, tracks and the dynamics of strategic change. Organ Stud 1988;9:293-316. http://dx.doi.org/10.1177/017084068800900301.
- Walshe K, Davies HTO. Health research, development and innovation in England from 1988 to 2013: from research production to knowledge mobilization. J Health Serv Res Policy 2013;18:1-12. http://dx.doi.org/10.1177/1355819613502011.
- Cho YI, Johnson TP, VanGeest JB. Enhancing surveys of health care professionals: a meta-analysis of techniques to improve response. Eval Health Prof 2013;36. http://dx.doi.org/10.1177/0163278713496425.
- Sackett DL, Rosenberg WMC, Muir Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71-2. http://dx.doi.org/10.1136/bmj.312.7023.71.
- Gough D, Tripney J, Kenny C, Buk-Berge E. Evidence Informed Policymaking in Education in Europe: EIPEE Final Project Report. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2011.
- Crilly T, Jashapara A, Ferlie E. Research Utilisation and Knowledge Mobilisation: A Scoping Review of the Literature: Report for the National Institute for Health Research Service Delivery and Organisation Programme. London: HMSO; 2010.
- Thomas A, Menon A, Boruff J, Rodriguez AM, Ahmed S. Applications of social constructivist learning theories in knowledge translation for healthcare professionals: a scoping review. Implement Sci 2014;9. http://dx.doi.org/10.1186/1748-5908-9-54.
- Lindblom C. The science of muddling through. Public Admin Rev 1959;19:79-88. http://dx.doi.org/10.2307/973677.
- Oliver K, Innvaer S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res 2014;14. http://dx.doi.org/10.1186/1472-6963-14-2.
- Caswill C, Lyall C. Knowledge brokers, entrepreneurs and markets. Evid Policy 2013;9:353-69. http://dx.doi.org/10.1332/174426413X662671.
- Barwick MA, Schachter HM, Bennett LM, McGowan J, Ly M, Wilson A, et al. Knowledge translation efforts in child and youth mental health: a systematic review. J Evidence-Based Social Work 2012;9:369-95. http://dx.doi.org/10.1080/15433714.2012.663667.
- Ellen ME, Lavis JN, Ouimet M, Grimshaw J, Bedard P-O. Determining research knowledge infrastructure for healthcare systems: a qualitative study. Implement Sci 2011;6. http://dx.doi.org/10.1186/1748-5908-6-60.
- Cooper A, Levin B, Campbell C. The growing (but still limited) importance of evidence in education policy and practice. J Educ Change 2009;10:159-71. http://dx.doi.org/10.1007/s10833-009-9107-0.
- Grimshaw JM, Eccles MP. Is evidence-based implementation of evidence-based care possible? Med J Aust 2004;180:S50-1.
- Hemsley-Brown J. Facilitating research utilisation: a cross-sector review of research evidence. Int J Public Sector Manag 2004;17:534-52. http://dx.doi.org/10.1108/09513550410554805.
Appendix 1 Literature searches
Knowledge mobilisation: search of health and social care databases
PubMed
Covering clinical medicine, biomedical sciences, nursing, dentistry, preclinical sciences and health-care systems, PubMed has over 20 million citations from scholarly journals dating back to 1950. Citations are provided from MEDLINE, life science journals and online books, and links to full-text articles are provided where possible.
British Nursing Index
The British Nursing Index is a database of journal articles, most of which come from 250 UK nursing and midwifery titles with only a small number coming from non-UK specialist journals. The articles date from 1994, with abstracts included from 2004 onwards, and cover areas such as accident and emergency nursing, breast cancer, evidence-based practice, learning disabilities, midwifery, nurse practitioners, orthopaedic nursing, perinatal and neonatal mortality, psychiatric nursing, reflective practice, student nurses, theatre nursing and wounds.
Cumulative Index to Nursing and Allied Health Literature
Cumulative Index to Nursing and Allied Health Literature aims to provide information for all allied health professionals by offering complete coverage of English-language nursing journals and publications from the National League for Nursing and the American Nurses’ Association. As well as journal articles, some books, book chapters, dissertations and conference proceedings are offered. The database goes back to 1982 and also offers some technology journals, as well as articles on consumer health, health promotion and legal issues within health care.
EMBASE
Covering journal articles from around 7000 journals in about 70 countries, EMBASE is a major biomedical and pharmaceutical database with a focus on western European sources. Over 40% of the content is drug-related, contained within areas such as drug research, pharmacology, pharmacy, pharmacoeconomics, pharmaceutics, toxicology, human medicine (both clinical and experimental), basic biological research, health policy and management, public, occupational and environmental health, drug dependence and abuse, psychiatry, forensic science and biomedical engineering. The database contains resources dating back to 1974.
Health Business™ Elite
Health Business™ Elite contains full-text content from more than 480 journals, with content dating back to 1922. The journals included centre on health-care administration and other non-clinical aspects of health-care institution management, such as hospital management, hospital administration, marketing, human resources, computer technology, facilities management and insurance.
Health Management Information Consortium
This database combines resources from the Library and Information Services of both the Department of Health and The King’s Fund.
The Department of Health Library and Information Services database
With a focus on the NHS and health service quality, and including data from 1983 onwards, the Department of Health database covers areas within health service policy, management and administration. It also holds information on the planning, design, construction and maintenance of health service buildings, as well as occupational health, control and regulation of medicines, medical equipment and supplies, and social care and personal social services.
The King’s Fund Information and Library Services database
Also with a UK focus, this database covers health management and services, social care, service development and NHS organisation and administration. Resources include journal articles, books, reports and pamphlets, and cover the years from 1979 onwards.
ScienceDirect
ScienceDirect is a full-text scientific database offering journal articles and book chapters from more than 2500 peer-reviewed journals and more than 11,000 books. There are currently more than 11 million articles/chapters, a content base that is growing at a rate of almost 0.5 million additions per year.
Social Care Online
Provided by the SCIE, Social Care Online makes use of journal articles, websites, research reviews, government documents and legislation, and service user knowledge in order to provide information on all aspects of social care. Its content dates back to 1960 and is widely used by academics, researchers, information professionals, practitioners, service users and carers, social care managers, policy-makers and students.
Search strategies
su: dissemination of research and su: (systematic reviews or literature reviews) or (su: (evidence based medicine or evidence based practice or evidence based policy or evidence based management or research) and su: implementation and su: (systematic reviews or literature reviews)) or (su: (change management) and su: (systematic reviews or literature reviews)) 2000>
‘Knowledge Management’[MeSH, Majr] AND (literature review OR systematic review OR meta analysis) [Freetext] Filters: Publication date from 2011/01/01 to 2013/12/31
‘Organizational Innovation’[MeSH, Majr:NoExp] AND (literature review OR systematic review OR meta analysis) [Freetext] Filters: Publication date from 2011/01/01 to 2013/12/31
(‘Diffusion of Innovation’[Majr:noexp] OR ‘Technology Transfer’[Majr]) AND (literature review OR systematic review OR meta analysis) [Freetext] Filters: Publication date from 2011/01/01 to 2013/12/31
(implement* OR management OR transfer OR utilisation OR utilization OR diffusion OR dissemination OR uptake) [Title/Abstract] AND (systematic review* OR literature review* OR meta analysis) [Title/Abstract] AND (knowledge OR innovation* OR technology OR guideline* OR research) [Title]
Knowledge Management[MeSH, Majr] OR Organizational Innovation [MeSH, Majr:NoExp] OR Diffusion of Innovation [Majr] OR Technology Transfer [MeSH, Majr] OR Information Dissemination [MeSH, Majr] AND (literature review OR systematic review OR meta analysis) [Freetext]
(su: (dissemination of research or dissemination of information or research implementation) and su: (systematic reviews or literature reviews or meta analysis)) OR ((su: (evidence based medicine or evidence based practice or evidence based policy or evidence based management or research) and su: implementation and su: (systematic reviews or literature reviews or meta analysis)) OR (su: (change management) and su: (systematic reviews or literature reviews or meta analysis)) 2000>
Information dissemination and ‘health care’ or ‘healthcare’
Topic: (research implementation or research dissemination or knowledge management) and Topic: (systematic reviews or literature reviews or research review) 2000>
‘knowledge mobil*’ and health or social
‘knowledge translation’ and health
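These strategies were run through each database’s own search interface. As a rough guide to re-running or adapting one of the PubMed strands programmatically, a minimal sketch using NCBI’s E-utilities (via the Biopython Entrez module) is given below; the query reproduces the ‘Knowledge Management’ strand above, but the contact e-mail address, the result cap and the choice of Biopython itself are illustrative assumptions rather than part of the original search protocol.

    from Bio import Entrez

    # NCBI asks for a contact address with E-utilities requests (placeholder).
    Entrez.email = "searcher@example.org"

    # The 'Knowledge Management' strand above: major MeSH topic combined with
    # review-type free text, limited to publication dates 2011-2013.
    query = (
        '"Knowledge Management"[MAJR] '
        'AND (literature review OR systematic review OR meta analysis) '
        'AND ("2011/01/01"[PDAT] : "2013/12/31"[PDAT])'
    )

    # esearch returns the matching PubMed IDs; retmax caps how many come back.
    handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"], "matching records; first IDs:", record["IdList"][:5])

The same pattern extends to the other PubMed strands by substituting the relevant query string; the ProQuest and Web of Knowledge strands use platform-specific syntax and would need to be re-run through those interfaces.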
Overview
The literature search on knowledge mobilisation includes:
- 131 journal articles
- 10 reports and briefings
- one textbook.
The search covered the period from 2000 onwards (although a few older references were included as they appeared relevant) and a variety of key terms were used to locate the most appropriate sources (see Search strategies).
The term ‘knowledge mobilisation’ brought back limited results across all databases. The most common terms used were ‘knowledge transfer’ and ‘knowledge diffusion’.
The main themes of the search centred on ‘knowledge transfer’ and ‘diffusion of knowledge’ to improve patient care and patient outcomes from research; however, some of the examples related to improved clinical effectiveness and the implementation of clinical guidelines.
Poor knowledge transfer appears to lead to poor quality and safety. Some papers reported that disseminating innovation led to better patient outcomes in areas such as chronic obstructive pulmonary disease, cancer and mental health.
A number of references discussed ‘technology transfer’, but these referred mainly to IT issues within organisations and so were omitted from the final literature search. Additionally, ‘knowledge management’ had different meanings: results that referred to the information architecture of an organisation were removed from the search results, as these did not match the initial brief.
Two key reports to highlight:
- Building and aligning energy for change: a review of published and grey literature, initial concept testing and development (Land 2013).
- Spreading and sustaining innovations in health service delivery and organisation (NCCSDO 2004).
Knowledge mobilisation: search of education/social science databases (February 2013)
Search strategy
Sources
The following six bibliographic databases were searched in March 2013:
- Education Resources Information Center (ERIC) (ProQuest)
- Australian Education Index (AEI) (ProQuest)
- British Education Index (BEI) (ProQuest)
- Applied Social Sciences Index and Abstracts (ASSIA) (ProQuest)
- International Bibliography of the Social Sciences (IBSS) (ProQuest)
- Social Sciences Citation Index (SSCI) (Web of Knowledge).
A date filter was used for each database (set at 1 January 2000 to present date).
Search queries
Education Resources Information Center (ERIC)
((SU.EXACT(‘Research Utilisation’) OR SU.EXACT(‘Information Utilisation’) OR SU.EXACT(‘Information Dissemination’) OR SU.EXACT(‘Knowledge Management’) OR SU.EXACT(‘Information Transfer’)) OR (TI,AB(‘knowledge management’ OR ‘knowledge transfer’ OR ‘knowledge sharing’ OR ‘knowledge capture’ OR ‘knowledge utili*’ OR ‘evidence utili*’ OR ‘research utili*’ OR ‘knowledge implement*’ OR ‘evidence implement*’ OR ‘research implement*’ OR ‘knowledge mobil*’ OR ‘knowledge exchange’ OR ‘knowledge transmission’ OR ‘knowledge translation’ OR ‘knowledge diffusion’ OR ‘knowledge broker*’ OR ‘knowledge creation’ OR ‘knowledge dissemination’ OR ‘knowledge generation’ OR ‘knowledge uptake’)))
AND
((SU.EXACT(‘Literature Reviews’) OR SU.EXACT(‘Meta Analysis’)) OR (TI,AB(‘systematic review*’ OR meta-analy* OR ‘meta analy*’) OR TI,AB(literature NEAR/3 review*)))
Australian Education Index (AEI)
(SU.EXACT(‘Research Utilisation’) OR SU.EXACT(‘Information Utilisation’) OR SU.EXACT(‘Information Dissemination’) OR SU.EXACT(‘Knowledge Management’) OR SU.EXACT(‘Information Transfer’))
AND
(SU.EXACT(‘Literature Reviews’) OR SU.EXACT(‘Meta Analysis’))
British Education Index (BEI)
(SU.EXACT(‘Research Utilisation’) OR SU.EXACT(‘Information Utilisation’) OR SU.EXACT(‘Information Dissemination’) OR SU.EXACT(‘Knowledge Management’) OR SU.EXACT(‘Information Transfer’))
AND
(SU.EXACT(‘Literature Reviews’) OR SU.EXACT(‘Meta Analysis’))
Applied Social Sciences Index and Abstracts (ASSIA)
(SU.EXACT(‘Research transfer’) OR SU.EXACT(‘Dissemination’) OR SU.EXACT(‘Information transfer’))
AND
(SU.EXACT(‘Literature reviews’) OR SU.EXACT(‘Review articles’) OR SU.EXACT(‘Systematic reviews’) OR SU.EXACT(‘Meta-analysis’))
International Bibliography of the Social Sciences (IBSS)
(SU.EXACT(‘Knowledge management’) OR SU.EXACT(‘Transmission of knowledge’) OR SU.EXACT(‘Information exchange’) OR SU.EXACT(‘Knowledge transfer’) OR SU.EXACT(‘Information dissemination’))
AND
SU.EXACT(‘Review articles’)
Social Sciences Citation Index (SSCI)
Title = (‘knowledge management’ OR ‘knowledge transfer’ OR ‘knowledge sharing’ OR ‘knowledge capture’ OR ‘knowledge utili*’ OR ‘evidence utili*’ OR ‘research utili*’ OR ‘knowledge implement*’ OR ‘evidence implement*’ OR ‘research implement*’ OR ‘knowledge mobil*’ OR ‘knowledge exchange’ OR ‘knowledge transmission’ OR ‘knowledge translation’ OR ‘knowledge diffusion’ OR ‘knowledge broker’ OR ‘knowledge creation’ OR ‘knowledge dissemination’ OR ‘knowledge generation’ OR ‘knowledge uptake’) AND Title = (‘literature review*’ OR ‘systematic review*’ OR ‘meta analy*’)
Results
The results of the individual searches were as follows:
- ERIC – 251 items
- AEI – 26 items
- BEI – six items
- ASSIA – seven items
- IBSS – 44 items
- SSCI – 35 items.
With duplicates removed, a total of 362 items were identified across all databases.
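The pooling and de-duplication step is straightforward to reproduce. Below is a minimal sketch assuming each database export is reduced to (title, year) pairs; matching on a case-folded, whitespace-normalised title plus year is an assumed heuristic for illustration, since the report does not state how duplicates were identified.

    # De-duplicate records pooled from several bibliographic databases.
    # The matching key (normalised title + year) is an assumed heuristic.
    def dedupe(records):
        seen = set()
        unique = []
        for title, year in records:
            key = (" ".join(title.lower().split()), year)
            if key not in seen:
                seen.add(key)
                unique.append((title, year))
        return unique

    pooled = [
        ("Mobilising research knowledge in education", 2011),
        ("Mobilising Research Knowledge  in Education", 2011),  # duplicate
        ("Knowledge integration in public health", 2012),
    ]
    print(len(dedupe(pooled)))  # prints 2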
Appendix 2 The key reviews
2014
Oliver K, Innvaer S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res 2014;14.
2013
Caswill C, Lyall C. Knowledge brokers, entrepreneurs and markets. Evid Policy 2013;9:353–69.
Ellen ME, Leon G, Bouchard G, Lavis JN, Ouimet M, Grimshaw JM. What supports do health system organizations have in place to facilitate evidence-informed decision-making? A qualitative study. Implement Sci 2013;8:84.
Long JC, Cunningham FC, Braithwaite J. Bridges, brokers and boundary spanners in collaborative networks: a systematic review. BMC Health Serv Res 2013;13:158.
Moat KA, Lavis JN, Abelson J. How contexts and issues influence the use of policy-relevant research syntheses: a critical interpretive synthesis. Milbank Q 2013;91:604–48.
Oborn E, Barrett M, Racko G. Knowledge translation in healthcare: Incorporating theories of learning and knowledge from the management literature. J Health Organ Manag 2013;27:412–31.
Pitchforth E, Nolte E, Miani C, Winpenny E. Options for Effective Mechanisms to Support Evidence-Informed Policymaking in RMNCH in Asia and the Pacific. Cambridge: RAND Europe; 2013.
Sebba J. An exploratory review of the role of research mediators in social science. Evid Policy 2013;9:391–408.
2012
Barwick MA, Schachter HM, Bennett LM, McGowan J, Ly M, Wilson A, et al. Knowledge translation efforts in child and youth mental health: a systematic review. J Evidence-Based Social Work 2012;9:369–95.
Boyko JA, Lavis JN, Abelson J, Dobbins M, Carter N. Deliberative dialogues as a mechanism for knowledge translation and exchange in health systems decision-making. Soc Sci Med 2012;75:1938–45.
Brown C. The ‘policy preferences model’: a new perspective on how researchers can facilitate the take-up of evidence by educational policy makers. Evid Policy 2012;8:455–72.
Facer K, Manners P, Agusita E. Towards A Knowledge Base for University-Public Engagement: Sharing Knowledge, Building Insight, Taking Action. Bristol: NCCPE; 2012.
Fazekas M, Burns T. Exploring the Complex Interaction between Governance and Knowledge in Education. OECD Education Working Papers 67. OECD Publishing; 2012.
Ferlie E, Crilly T, Jashapara A, Peckham A. Knowledge mobilisation in healthcare: a critical review of health sector and generic management literature. Soc Sci Med 2012;74:1297–304.
Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci 2012;7:50.
Holmes BJ, Finegood DT, Riley BL, Best A. Systems Thinking in Dissemination and Implementation Research. In Brownson R, Colditz G, Proctor E, editors. Dissemination and Implementation Research in Health: Translating Science To Practice. Oxford: Oxford University Press; 2012.
Holmes B, Scarrow G, Schellenberg M. Translating evidence into practice: the role of health research funders. Implement Sci 2012;7:39.
Honig MI, Venkateswaran N. School-central office relationships in evidence use: understanding evidence use as a systems problem. Am J Educ 2012;118:199–222.
LaRocca R, Yost J, Dobbins M, Ciliska D, Butt M. The effectiveness of knowledge translation strategies used in public health: a systematic review. BMC Public Health 2012;12:751.
Levin B, Cooper A. Theory, Research and Practice in Mobilizing Research Knowledge in Education. In Fenwick T, Farrell L, editors. Knowledge Mobilization and Educational Research – Politics, Languages and Responsibilities. London: Routledge; 2012. pp. 17–29.
Murphy K, Fafard P. Knowledge Translation and Social Epidemiology: Taking Power, Politics and Values Seriously. In O’Campo P, Dunn JR, editors. Rethinking Social Epidemiology: Towards A Science of Change. Dordrecht: Springer; 2012. pp. 267–84.
Oborn E. Facilitating implementation of the translational research pipeline in neurological rehabilitation. Curr Opin Neurol 2012;25:676–81.
Panisset U, Koehlmoos TP, Alkhatib AH, Pantoja T, Singh P, Kengey-Kayondo J, et al. Implementation research evidence uptake and use for policy-making. Health Res Policy Syst 2012;10:20.
Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev 2012;69:123–57.
Riley BL. Knowledge integration in public health: a rapid review using systems thinking. Evid Policy 2012;8:417–32.
Scott SD, Albrecht L, O’Leary K, Ball GD, Hartling L, Hofmeyer A, et al. Systematic review of knowledge translation strategies in the allied health professions. Implement Sci 2012;7:70.
2011
Banzi R, Moja L, Pistotti V, Fachini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst 2011;9.
Boaz A, Baeza J, Fraser A, for the European Implementation Score Collaborative Group (EIS). Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes 2011;4.
Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in healthcare decision making: a systematic scoping review of knowledge-translation resources. Milbank Q 2011;89:131–56.
Ellen ME, Lavis JN, Ouimet M, Grimshaw J, Bedard P-O. Determining research knowledge infrastructure for healthcare systems: a qualitative study. Implement Sci 2011;6.
Greenhalgh T, Wieringa S. Is it time to drop the ‘knowledge translation’ metaphor? A critical literature review. J R Soc Med 2011;104:501–9.
Levin B. Mobilising research knowledge in education. Lond Rev Educ 2011;9:15–26.
Pentland D, Forsyth K, Maciver D, Walsh M, Murray R, Irvine L, et al. Key characteristics of knowledge transfer and exchange in healthcare: integrative literature review. J Adv Nurs 2011;67:1408–25.
Straus S, Tetroe JM, Graham ID. Knowledge translation is the use of knowledge in health care decision making. J Clin Epidemiol 2011;64:6–10.
2010
Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy 2010;6:145–59.
Contandriopoulos D, Lemire M, Denis J-L, Tremblay E. Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature. Milbank Q 2010;88:444–83.
Oborn E, Barrett M, Racko G. Knowledge Translation in Healthcare: A Review of the Literature. Working Paper Series 5/2010. Cambridge: Judge Business School; 2010.
Pollard A, Oancea A. Unlocking Learning? Towards Evidence-informed Policy and Practice in Education. Report of the UK Strategic Forum for Research in Education, 2008–2010. London: SFRE; 2010.
Wilson PM, Petticrew M, Calnan MW, Nazareth I. Disseminating research findings: what should researchers do? A systematic scoping review of conceptual frameworks. Implement Sci 2010;5:91.
2009
Best A, Terpstra J, Moor G, Riley B, Norman C, Glasgow R. Building knowledge integration systems for evidence-informed decisions. J Health Organ Manag 2009;23:627–41.
Bhattacharyya O, Reeves S, Zwarenstein M. What is implementation research? Rationale, concepts and practices. Res Soc Work Pract 2009;19:491–502.
Cooper A, Levin B, Campbell C. The growing (but still limited) importance of evidence in education policy and practice. J Educ Change 2009;10:159–71.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50.
Gold M. Pathways to the use of health services research in policy. Health Serv Res 2009;44:1111–36.
Nutley SM, Walter I, Davies HTO. Promoting evidence-based practice: models and mechanisms from cross-sector review. Res Soc Work Pract 2009;19:552–9.
Ward V, House A, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy 2009;14:156–64.
2008
Best A, Hiatt RA, Norman CD. Knowledge integration: conceptualizing communications in cancer control systems. Patient Educ Couns 2008;71:319–27.
Cordingley P. Research and evidence-informed practice: focusing on practice and practitioners. Cambridge J Educ 2008;38:37–52.
Estabrooks CA, Derksen L, Winther C, Lavis JN, Scott SD, Wallin L, et al. The intellectual structure and substance of the knowledge utilization field: a longitudinal author co-citation analysis, 1945 to 2004. Implement Sci 2008;3.
Levin B. Thinking about knowledge mobilization. Paper prepared for an invitational symposium sponsored by the Canadian Council on Learning and the Social Sciences and Humanities Research Council of Canada, May 2008, Vancouver, BC, Canada.
Nicolini D, Powell J, Conville P, Martinez-Solano L. Managing knowledge in the healthcare sector: a review. Int J Manag Rev 2008;10:245–63.
Tetroe J, Graham ID, Foy R, Robinson N, Eccles MP, Wensing M, et al. Health research funding agencies’ support and promotion of knowledge translation: an international study. Milbank Q 2008;86:125–55.
2007
Graham ID, Tetroe J, the KT Theories Research Group. Some theoretical underpinnings of knowledge translation. Acad Emerg Med 2007;14:936–41.
Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q 2007;85:729–68.
Sudsawad P. Knowledge Translation: Introduction to Models, Strategies and Measures. Austin, TX: National Center for the Dissemination of Disability Research; 2007.
2006
Dopson S. Debate: Why does knowledge stick? What we can learn from the case of evidence-based health care. Public Money Manag 2006;26:85–6.
Estabrooks CA, Thompson DS, Lovely JE, Hofmeyer A. A guide to knowledge translation theory. J Contin Educ Health Prof 2006;26:25–36.
Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006;26:13–24.
Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ 2006;84:620–8.
Thompson GN, Estabrooks CA, Degner LF. Clarifying the concepts in knowledge transfer: a literature review. J Adv Nurs 2006;53:691–701.
Van de Ven AH, Johnson PE. Knowledge for theory and practice. Acad Manag Rev 2006;31:802–21.
2005
Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231); 2005.
Walter I, Nutley S, Davies HTO. What works to promote evidence-based practice? A cross-sector review. Evid Policy 2005;1:335–63.
2004
Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581–629.
Grimshaw JM, Eccles MP. Is evidence-based implementation of evidence-based care possible? Med J Aust 2004;180(Suppl.):S50–1.
Hemsley-Brown J. Facilitating research utilisation: a cross-sector review of research evidence. Int J Public Sector Manag 2004;17:534–52.
Walter I, Nutley SM, Percy-Smith J, McNeish D, Frost S. Improving the Use of Research in Social Care: Knowledge Review 7. Social Care Institute for Excellence/Policy Press; 2004.
2003
Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst 2003;1:2.
Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J, The Knowledge Transfer Study Group. How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q 2003;81:221–48.
Walter I, Nutley S, Davies H. Research Impact: A Cross Sector Review. St Andrews: Research Unit for Research Utilisation, University of St Andrews; 2003.
2001
Rynes SL, Bartunek JM, Daft RL. Across the great divide: knowledge creation and transfer between practitioners and academics. Acad Manag J 2001;44:340–55.
Appendix 3 List of organisations included in the study
Health care: UK
BMJ Evidence Centre.
Cancer Research UK.
Centre for Evidence-Based Medicine at the University of Oxford.
Chief Scientist Office.
Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer).
Department of Health Policy Research Programme.
Fuse.
Health Foundation.
Health Services Research Network (HSRN).
Healthcare Improvement Scotland.
Improvement Science London.
The King’s Fund.
Managers in Partnership.
Medical Research Council (MRC).
National Institute for Social Care and Health Research, Wales.
NHS Confederation.
NHS Education for Scotland (NES).
NHS Health Scotland (Evidence for Action team).
NHS Improving Quality.
NICE.
NIHR CLAHRC East Midlands.
NIHR CLAHRC East of England.
NIHR CLAHRC Greater Manchester.
NIHR CLAHRC North Thames.
NIHR CLAHRC North West Coast.
NIHR CLAHRC North West London.
NIHR CLAHRC Oxford.
NIHR CLAHRC South London.
NIHR CLAHRC South West Peninsula.
NIHR CLAHRC Wessex.
NIHR CLAHRC West.
NIHR CLAHRC West Midlands.
NIHR CLAHRC Yorkshire and Humber.
NIHR Comprehensive Clinical Research Network (CCRN).
NIHR Health Services and Delivery Research Programme.
NIHR HTA programme.
NIHR INVOLVE.
NIHR Programme Grants for Applied Research programme.
NIHR Public Health Research programme.
NIHR Research for Patient Benefit programme.
NIHR Trainees Coordinating Centre.
Nuffield Trust.
RAND Europe.
Scottish Collaboration for Public Health Research and Policy.
Scottish Public Health Network.
Society for Academic Primary Care.
TRiP-LaB (Translating Research into Practice in Leeds and Bradford).
Warwick Evidence.
Wellcome Trust.
Wolfson Centre, Durham University.
Health care: Europe excluding the UK
Agency for Quality in Medicine, Berlin.
European Health Management Association.
European Observatory on Health Systems and Policies.
Health Research Board, Ireland.
INSERM (Institut National de la Santé et de la Recherche Médicale).
Karolinska Institute.
NIVEL – the Netherlands Institute for Health Services Research.
Norwegian Knowledge Centre for the Health Services (NOKC).
World Health Organization (WHO) Europe Health Evidence Network.
ZonMW.
Health care: Australia and New Zealand
Australian and New Zealand Association for Health Professional Educators (ANZAHPE).
Australian Centre for Health Services Innovation.
Australian Primary Health Care Research Institute.
Centre of Excellence in Intervention and Prevention Science (CEIPS).
Drug Policy Modelling Program at the National Drug and Alcohol Research Centre.
George Institute.
Health Research Council of New Zealand.
Health Services Research Association of Australia and New Zealand.
Joanna Briggs Institute.
National Health and Medical Research Council (NHMRC).
Primary Health Care Research and Information Service.
Productivity Commission.
Public Health Association of Australia (PHAA).
Sax Institute, Australia.
South Australian Health and Medical Research Institute (SAHMRI).
Health care: Canada
Alberta Innovates – Health Solutions.
Canadian Agency for Drugs and Technologies in Health (CADTH).
Canadian Association for Health Services and Policy Research (CAHSPR).
Canadian Cochrane Centre.
Canadian Foundation for Healthcare Improvement (CFHI).
Canadian Institute for Health Information.
Canadian Institutes of Health Research (CIHR).
Canadian Partnership against Cancer.
Canadian Patient Safety Institute.
Centre for Evidence-Based Medicine (Toronto).
Evidence Exchange Network.
Evidencenetwork.ca.
Health Information Research Unit, McMaster University.
Health Quality Council.
Health Systems Evidence at McMaster University.
InSource.
Institut national de santé publique du Quebec (INSPQ).
Institute for Work and Health (IWH).
Knowledge Impact Strategies.
Mental Health Commission of Canada.
Michael Smith Foundation for Health Research (MSFHR).
Ministry of Health and Long-Term Care, Ontario.
National Collaborating Centre for Methods and Tools.
Public Health Agency Canada.
Quebec Population Health Research Network (QPHRN).
Research Impact Canada.
Seniors Health Knowledge Network.
University of Waterloo’s Propel Centre for Population Health Impact.
Health care: USA
Academy Health.
Agency for Healthcare Research and Quality (AHRQ).
Center for Evaluation and Innovation, Kaiser Permanente Care Management Institute.
Center on Health Care Effectiveness.
Commonwealth Fund.
Hilltop Institute.
IHI.
Institute of Medicine.
Kaiser Family Foundation.
NASHP (National Academy for State Health Policy).
National Center for the Dissemination of Disability Research (NCDDR).
National Health Policy Forum.
RAND.
Urban Institute.
Department of Veterans Affairs (VA)/Veterans Health Administration (VHA).
Education: UK
British Educational Research Association.
Centre for Evaluation and Modelling, Durham University.
Centre for the Use of Research and Evidence in Education (CUREE).
CfBT Educational Trust.
Coalition for Evidence-Based Education (CEBE).
Department for Education.
Department of Education, University of Oxford.
Department of Education, University of York.
Education Analytical Services Division of the Scottish Government.
EEF.
Educational Evidence Portal.
EPPI-Centre.
Faculty of Education, University of Cambridge.
Graduate School of Education, University of Bristol.
Institute for Effective Education, University of York.
The Key.
Mirandanet.
National Foundation for Educational Research.
Northern Ireland Education Research Forum (NIERF).
Scottish Educational Research Association.
Sutton Trust.
Teacher Development Trust.
Universities Council for the Education of Teachers (UCET).
Social care: UK
ADASS (Association of Directors of Adult Social Services – England).
ADCS (Association of Directors of Children’s Services – England).
Association of Directors of Social Work – Scotland.
Centre for Effective Services.
Centre for Excellence for Looked After Children in Scotland (CELCIS).
College of Occupational Therapists.
College of Social Work.
Dementia Services Development Centre, University of Stirling.
Institute for Research and Innovation in Social Services (IRISS).
Institute of Child Care Research (Queen’s University Belfast).
Making Research Count.
Personal Social Services Research Unit (PSSRU – Kent).
Personal Social Services Research Unit (PSSRU – London School of Economics).
Personal Social Services Research Unit (PSSRU – Manchester).
Research in Practice.
Research in Practice for Adults.
Scottish Consortium for Learning Disability.
Social Care Institute for Excellence (SCIE).
Social Services Research Group (SSRG).
STRADA (Scottish Training on Drugs and Alcohol).
WithScotland.
Cross-sector (UK)
Alliance for Useful Evidence.
Arts and Humanities Research Council (AHRC).
British Academy.
Campbell Collaboration.
Centre for Research on Families and Relationships.
Centre for Reviews and Dissemination (CRD) (NIHR).
Colebrooke Centre.
Community University Partnership Programme (CUPP), University of Brighton.
Economic and Social Research Council (ESRC).
Esmee Fairbairn Foundation.
Health and Social Care R&D, Northern Ireland.
Higher Education Funding Council for England (HEFCE).
Institute for Voluntary Action Research (IVAR).
Joseph Rowntree Foundation.
National Children’s Bureau and C4EO.
National Co-ordinating Centre for Public Engagement (NCCPE).
NESTA.
Nuffield Foundation.
Scottish Funding Council (SFC).
Social Policy Research Unit, University of York.
Social Research Unit, Dartington.
Third Sector Research Centre.
Young Foundation.
Universities UK.
Organisations in bold type participated in the interviews.
The three CLAHRCs selected to participate in the interviews were a convenience sample from the nine ‘first wave’ CLAHRCs (funded 2008–13).
Appendix 4 Participant information sheet and consent form
Appendix 5 Interview topic guide
Themes:
- Introduction
- Explore how the organisation sees its role in relation to knowledge mobilisation
- Explore the main knowledge mobilisation activities at this organisation
- Explore the thinking behind these approaches being used in the organisation
- Explore the ‘target’ audience/users
- Explore formal or informal evaluation of the organisation’s knowledge mobilisation activities
- Explore formative learning and practical experience
- Other issues
- Closing remarks
Appendix 6 Participant organisations at the two workshops
Workshop 1: 25 June 2013
Bangor University.
Cancer Research UK.
Chelsea and Westminster Hospital.
Economic and Social Research Council.
Health and Social Care R&D Division (Northern Ireland).
Healthcare Improvement Scotland.
InSource Research Group.
Institute for Research and Innovation in Social Services.
JL and Associates.
King’s College London.
NIHR INVOLVE.
NIHR Trainees Coordinating Centre.
National Institute for Social Care and Health Research (NISCHR), Welsh Assembly Government.
NHS Education for Scotland.
NIHR Health Services and Delivery Research Programme.
Sheffield Teaching Hospitals NHS Trust.
Social Care Institute for Excellence.
Technology Development Group.
The Health Foundation.
University College London.
University of Oxford.
Workshop 2: 30 April 2014
Amsterdam University of Applied Sciences.
Bangor University.
Durham University.
Economic and Social Research Council.
Education Endowment Foundation.
Health and Social Care R&D Division (Northern Ireland).
Imperial College London.
Improvement Science London.
InSource Research Group.
Institute for Research and Innovation in Social Services.
JL and Associates.
King’s College London.
Manchester Business School.
NIHR Evaluation, Trials and Studies Coordinating Centre.
NIHR INVOLVE.
NIHR Trainees Coordinating Centre.
National Institute for Social Care and Health Research Academic Health Science Collaboration.
National Institute for Social Care and Health Research, Welsh Assembly Government.
North West Coast Academic Health Science Network.
Nuffield Trust.
Research in Practice.
St George’s, University of London and Kingston University.
Technology Development Group.
Teesside University.
The Health Foundation.
Universities UK.
University College London.
University of Aberdeen.
University of Leeds.
University of Oxford.
University of Southampton.
Appendix 7 The international advisory board
Dr Allan Best, Managing Director, Insource; Associate Scientist, Centre for Clinical Epidemiology and Evaluation, Vancouver Coastal Research Institute and Clinical Professor, School of Population and Public Health, University of British Columbia, BC, Canada.
Dr Jean-Louis Denis, Principal Investigator of the Canadian Institutes of Health Research Team Grant in Reconfiguration of Health Care Organizations and Systems, holder of the Canada Research Chair in Governance and Transformation of Health Organizations and Systems (GETOSS), and Full Professor, École nationale d’administration publique (ENAP), Montreal, QC, Canada.
Dr Jonathan Lomas (former CEO, Canadian Health Services Research Foundation).
Ms Jacqueline Tetroe (former Senior Advisor, Knowledge Translation, Canadian Institutes of Health Research).
Dr Jacomine Ravensbergen, Dean, Amsterdam University of Applied Sciences, the Netherlands.
Professor Sally Redman, CEO, Sax Institute, NSW, Australia.
Professor Thomas Rundall, Henry J. Kaiser Chair of Organised Health Systems; Director, Centre for Health Management Research, School of Public Health, University of California, Berkeley, CA, USA.
Dr Vicky Ward, Lecturer in Primary Care, University of Leeds, UK.
Appendix 8 Key observations on evaluating knowledge mobilisation approaches from the major reviews
Review | Key observations on evaluating knowledge mobilisation approaches |
---|---|
Oliver et al. 2014205 | There is still a lack of reliable empirical evidence about the actual processes and impacts of research use in policy-making |
Caswill and Lyall 2013206 | Knowledge brokerage is being promoted in contemporary policy but there is limited evidence that such roles are being widely embraced |
Ellen et al. 2013142 | Evaluation efforts are rarely reported in the literature and most evaluations examine clinical outcomes rather than the impact of evidence use on managerial decision-making or the processes of using evidence |
Oborn et al. 201330 | Models evaluating the success of KT programmes continue to focus on more linear and quantitative approaches rather than on approaches which emphasise collaboration and reciprocal exchange, despite the latter becoming increasingly common in the literature |
Pitchforth et al. 201398 | There is currently little empirical evidence on the effectiveness of different evidence-response mechanisms; evaluation is challenging but does need to be built in from the outset |
Barwick et al. 2012207 | This literature search for studies that evaluated the effectiveness of KT interventions in child and youth mental health services found that the existing studies were largely of poor quality |
Boyko et al. 201296 | Much of the existing literature on deliberative dialogues is theoretical or focuses on evaluating their procedural aspects; little is yet known about the effects of deliberative dialogues as a strategy in KTE |
Fazekas and Burns 2012125 | There is little empirical evidence on the effectiveness or impact of knowledge mediation in education |
Grimshaw et al. 201276 | The evidence on the likely effectiveness of different strategies to overcome specific barriers remains incomplete. The evidence base on the effectiveness of KT strategies focusing on policy makers and senior managers is very limited |
Holmes et al. 201278 | Systems thinking approaches to knowledge mobilisation will need to be evaluated using methods like natural experiments and case studies rather than RCTs |
Holmes et al. 2012126 | A review of 377 publications (Fixsen et al. 2005) concluded that information dissemination is not effective for implementation |
Murphy and Fafard 201297 | There is limited evidence of the comparative effectiveness of different KT strategies in clinical and health services although there are examples where approaches have led to increased use of evidence in large health care organisations or better use of guidelines |
Scott et al. 2012115 | There are few robust studies on knowledge translation with allied health professionals |
Chambers et al. 201294 | This review of the literature aimed to identify and describe existing products and approaches that systematic review producers use to bring their findings to policy-makers. Few evaluations were available. Most of these reported on perceived usefulness rather than actual use in decision-making and none assessed cost-effectiveness |
Ellen et al. 2011208 | The reviewers were unable to identify any studies evaluating the effects of a full RKI on the use of evidence by health system managers and policy makers; however, the scoping review did uncover 25 qualitative studies and one randomised control trial that addressed different components of the RKI framework |
Greenhalgh and Wieringa 201124 | There is very little research on how knowledge intermediation might be productively facilitated and supported. Research is needed to address approaches to facilitating macro-level partnerships between researchers, practitioners and commercial interests |
Pentland et al. 201180 | There is a shortage of empirical evidence on KT initiatives and no valid method for measuring the effects of KT or KE has been established. The most commonly-used approach is to develop local, individual and non-standardised measures |
Contandriopoulos et al. 201021 | The reviewers suggest that context-independent evidence on effective knowledge exchange strategies is unlikely to be found |
Bhattacharyya et al. 2009100 | The reviewers note that a Cochrane review in 2005 concluded that there is currently insufficient evidence to assess the impact of the strategy of identifying barriers and tailoring interventions Although systematic reviews of various implementation strategies provide some indication of their impact, so much depends on the interaction of provider and context that it is hard to assess the applicability of studies conducted in one context to another context One approach in the field has been to develop a series of testable hypotheses and theories to describe these interactions; however, the authors criticise this approach on the grounds that there are already multiple overlapping frameworks and theories with limited ability to predict the complex interactions in the implementation process. They argue that there is currently insufficient empirical evidence on which to build those theories and suggest that a more fruitful alternative would be to continue to build a series of detailed cases from which theoretical frameworks could eventually evolve |
Cooper et al. 2009209 | Much of the knowledge mobilisation empirical literature is based on surveys and interviews. There is a pressing need for other methods and also for research that focuses on cultures and on organisational practices rather than just on individuals; many of the initiatives described in this paper remain unevaluated |
Ward et al. 200919 | Recent reviews (Mitton et al.; Graham and Tetroe 2007) have identified 63 different theories or models of KT across fields including health care, social care and management; many of these models remain largely unrefined and untested so it is unclear how suitable they are for planning and evaluating KT strategies Graham et al.’s KTA framework has been tested and evaluated as a model for planning and evaluating KT strategies but the model was developed from planned action theories and its adequacy as an explanation of the KT process is largely unknown. In addition, it has not yet been refined or developed further following its use in practice |
Levin 200891 | The evidence base in this area is poor. For example, a review of 81 robust papers in KTE research in health (Mitton et al. 2007) found only 18 empirical studies of the effects of KTE practices; the remaining studies were analyses of barriers and constraints. The use of audit and feedback with physicians has been studied; these do have an impact but the effect is not large and consistent enough to make them mandatory The author comments that too many studies construct new frameworks instead of building on the work of others and that few studies try to measure the impact of knowledge mobilisation; many do so only in general ways rather than looking at specific research or decisions |
Nicolini et al. 2008158 | The knowledge management literature pays little attention to the question of evidence for the effectiveness of knowledge management tools |
Tetroe et al. 200811 | The evaluation of the effectiveness of KT strategies remains a methodological challenge There has been relatively little empirical research on the actual or potential KT roles, responsibilities and activities of the different actors There is a limited evidence base for KT, compounded by the problem of agreeing outcomes for evaluation and the methodological challenges of designing rigorous studies to test KT strategies ‘emphasis on evaluation will be needed to discover which KT strategies are effective for both agencies and researchers . . . The agencies may also benefit from opportunities to examine what other agencies are doing in this important area‘ (pp. 151–2) |
Graham and Tetroe 200773 | This review found 31 planned action theories (models/frameworks) published in the period 1983–2006. Most (19/31) of the theories or frameworks had not yet been tested empirically |
Mitton et al. 200718 | The review examined and summarised current evidence for KTE in health policy; it found that only about 20% of the studies reported on a real-world application of a KTE strategy, and fewer still had been formally evaluated. The authors concluded that more formal and rigorous research is needed to assess and evaluate the success of KTE strategies in different contexts and suggested that it may be more beneficial to evaluate whether and how policy was informed rather than simply the extent to which research was used |
Sudsawad 200790 | Few studies on the effectiveness of KT strategies report strategies involving knowledge creation (as opposed to dissemination and implementation of existing knowledge). There are some key reviews of strategies aimed at changing health professionals’ behaviour. The author comments that, as knowledge use is not a single discrete event, evaluating knowledge use requires a multidimensional and systematic approach |
Estabrooks et al. 200672 | The PARIHS Framework is both intuitively appealing and flexible. However, with the exception of the facilitation component, it lacks detail and, like other models and frameworks, it has not been comprehensively tested |
Lavis et al. 200610 | Many of the interventions that are intended to influence the use of research have been promoted but have not yet been evaluated |
Fixsen et al. 200533 | The best available evidence shows that information dissemination alone and training by itself are both ineffective implementation methods, yet these two have been the most widely used ways of attempting to implement policies and programmes. What is needed is a longer-term, multilevel approach using a range of mechanisms including practitioner selection, coaching and skill-based training. There is little evidence on the impact of organisational and system influences on implementation or on their mechanisms of influence; more data are needed, especially as those involved in large-scale implementation report that these factors are very influential. There is a gap in the literature about how the different factors interact and about their relative influences over time |
Walter et al. 200593 | This cross-sector review of 93 empirical articles from 1990 onwards and four large-scale multisite initiatives found that the articles rarely addressed or theorised what was meant by research use and that a variety of forms of research use were measured using a range of subjective and objective methods. The gaps in the evidence base include non-instrumental research use; key features of successful initiatives; evidence from outside health care; details of the format, content and implementation of interventions; and evaluations of the vast majority of initiatives being used to promote research use |
Greenhalgh et al. 200431 | The literature on diffusion of innovations is problematic: there is little evidence to support the widely cited concept of ‘adopter traits’; much of the literature concentrates on product-based innovations, from which the lessons are not readily transferable to complex processes in service settings; most studies ignore the issue of the sustainability of innovations; and most studies focus on formal innovations disseminated from the centre rather than informal, emergent innovations. ‘The multiple (and often unpredictable) interactions that arise in particular contexts and settings are precisely what determine the success or failure of a dissemination initiative’ (p. 615) |
Grimshaw and Eccles 2004210 | The paper summarises findings from a systematic review by Grimshaw et al. of 235 rigorous evaluations of different guideline dissemination and implementation strategies published up to 1998. The median effect size across all studies showed an absolute improvement of about 10% in process-of-care indicators; the authors comment that this is a modest improvement but that it could be important across whole populations. They note that multifaceted interventions did not appear to be more effective than single interventions and that few of the studies were based on a theoretical model to guide the choice of intervention |
Hemsley-Brown 2004211 | There is little empirical research evidence to show which strategies are effective in increasing research use by public sector managers or practitioners |
Walter et al. 200460 | The review found that there is little evidence of the effectiveness of the three models of research use in social care (i.e. research-based practitioner, embedded research and organisational excellence) and little evidence about potential barriers to and enablers of their development. It is not possible to endorse one model to the exclusion of the others; further work is required to develop a whole-systems approach. There are few robust studies of what works in promoting research use in social care, and the majority of studies focus solely on the professionally qualified workforce |
Lavis et al. 200355 | The authors advise that the type of research use (e.g. instrumental, conceptual or symbolic) should be considered when measuring outcomes and that performance measures for knowledge translation should be appropriate to the target audience and objectives. They suggest that a reasonable ambition for research organisations may be to know whether or not the research they produce is having an impact on decision-making; more detailed assessment of improved performance or changes in health outcomes may be best left to standalone research projects. The authors report that their 2001 survey of 265 directors of applied research organisations in Canada (134 in health research, 131 in economic/social research) found that only around 1 in 10 organisations undertook any kind of evaluation of their knowledge translation activities (e.g. impact on awareness, knowledge, attitudes, or reported or actual behaviour) |
Walter et al. 2003145 | There is limited evidence on what makes for effective research impact, although the literature does point to a number of practices that seem to be useful. Literature from the health-care sector predominates and there is an emphasis on changing behaviour in practice settings rather than on more conceptual uses of research or on evidence use in policy settings. There may be scope for cross-sectoral learning, as different sectors share common barriers to research use |
List of abbreviations
- AEI: Australian Education Index
- ASSIA: Applied Social Sciences Index and Abstracts
- BEI: British Education Index
- CES: Centre for Effective Services
- CFIR: Consolidated Framework for Implementation Research
- CIHR: Canadian Institutes of Health Research
- CLAHRC: Collaboration for Leadership in Applied Health Research and Care
- CMO: context–mechanism–outcome
- CRARUM: Critical Realism and the Arts Research Utilization Model
- CRFR: Centre for Research on Families and Relationships
- EEF: Education Endowment Foundation
- EPOC: Effective Practice and Organisation of Care
- EPPI-Centre: Evidence for Policy and Practice Information and Co-ordinating Centre
- ERIC: Education Resources Information Center
- ESRC: Economic and Social Research Council
- HSDR: Health Services and Delivery Research
- IBSS: International Bibliography of the Social Sciences
- IHI: Institute for Healthcare Improvement
- IT: information technology
- KTA: Knowledge to Action
- MSFHR: Michael Smith Foundation for Health Research
- NCCDPHP: National Center for Chronic Disease Prevention and Health Promotion
- NCCSC: National Institute for Health and Care Excellence Collaborating Centre for Social Care
- NICE: National Institute for Health and Care Excellence
- NIHR: National Institute for Health Research
- OMRU: Ottawa Model of Research Use
- PARIHS: Promoting Action on Research Implementation in Health Services
- PDSA: Plan-Do-Study-Act
- PiiAF: Public Involvement Impact Assessment Framework
- PPI: patient and public involvement
- RE-AIM: Reach, Effectiveness, Adoption, Implementation, Maintenance
- RQ: research question
- SATORI: knowledge translation self-assessment tool for research institutes
- SCIE: Social Care Institute for Excellence
- SSCI: Social Sciences Citation Index
- ZonMw: The Netherlands Organisation for Health Research and Development