949 results for average of mutual information (AMI)


Relevance:

100.00%

Publisher:

Abstract:

Introduction The medicines use review (MUR), a new community pharmacy ‘service’, was launched in England and Wales to improve patients’ knowledge and use of medicines through a private, patient–pharmacist appointment. After 18 months, only 30% of pharmacies are providing MURs, at an average of 120 per annum (maximum 400 allowed).1 One reason linked to low delivery is patient recruitment.2 Our aim was to examine how the MUR is symbolised and given meaning via printed patient information, and the potential implications. Method The language of 10 MUR patient leaflets, including the NHS booklet,3 and leaflets from multiples and wholesalers was evaluated by discourse analysis. Results and Discussion Before experiencing MURs, patients conceivably ‘categorise’ relationships with pharmacists based on traditional interactions.4 Yet none of the leaflets explicitly describes the MUR as ‘new’, and all presuppose that patients will become involved in activities outside their pre-existing relationship with pharmacists, such as appointments, self-completion of charts, and pharmacy action plans. The MUR process is described inconsistently, with interchangeable use of formal (‘review meeting’) and informal (‘friendly’) terminology, the latter presumably to portray an intended ‘negotiation model’ of interaction.5 Assumptions exist about the attitudes (‘not understanding’; ‘problems’) that might lead patients to an appointment. However, research has identified a multitude of reasons why patients choose (or not) to consult practitioners,6 and the marketing of MURs should also consider other barriers. For example, it may be prudent to remove time limits, to avoid implying that patients might not be listened to fully during what is, for them, an additional practitioner consultation.

Relevance:

100.00%

Publisher:

Abstract:

More data will be produced in the next five years than in the entire history of humankind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed, which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report, which can be found at http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy and which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive, and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond with agility to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy.
The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06), since emulated internationally, pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals. To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders.
A coherent strategy is essential in order to establish and sustain the UK as an international leader of well-curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening of access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation, plus the skills developed, will launch significant advances in research, in business, in professional practice and in government, with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.

Relevance:

100.00%

Publisher:

Abstract:

The paper develops a measure of the consumer welfare losses associated with withholding information about a possible link between BSE and vCJD. The Cost of Ignorance (COI) is measured by comparing the utility of the informed choice with the utility of the uninformed choice, under conditions of improved information. Unlike previous work that is largely based on a single-equation demand model, the measure is obtained by retrieving a cost function from a dynamic Almost Ideal Demand System. The estimated perceived loss for Italian consumers due to delayed information ranges from 12 percent to 54 percent of total meat expenditure, depending on the month assumed to embody correct beliefs about the safety level of beef.
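The core idea behind the Cost of Ignorance, valuing the utility of the uninformed choice with the informed (true) preferences and comparing the expenditure needed to reach it against the actual budget, can be illustrated with a toy Cobb-Douglas consumer. This is a drastic simplification of the paper's dynamic Almost Ideal Demand System: all functional forms, function names and numbers below are illustrative, not the authors' specification.

```python
from math import prod

def cobb_douglas_utility(q, a):
    """u(q) = prod_i q_i^a_i, with budget shares a summing to one."""
    return prod(qi ** ai for qi, ai in zip(q, a))

def expenditure(p, u, a):
    """Cobb-Douglas expenditure (cost) function: e(p, u) = u * prod_i (p_i / a_i)^a_i."""
    return u * prod((pi / ai) ** ai for pi, ai in zip(p, a))

def demand(p, m, a):
    """Marshallian demand under Cobb-Douglas preferences: q_i = a_i * m / p_i."""
    return [ai * m / pi for ai, pi in zip(a, p)]

def cost_of_ignorance(p, m, a_true, a_believed):
    """Share of the budget m lost by choosing quantities under mistaken
    shares a_believed, with the loss valued at the true preferences a_true."""
    q_chosen = demand(p, m, a_believed)          # what the uninformed consumer buys
    u_attained = cobb_douglas_utility(q_chosen, a_true)
    # Cheapest way to reach that utility at true preferences, versus the budget spent.
    return 1.0 - expenditure(p, u_attained, a_true) / m
```

With correct beliefs the loss is zero; with distorted beliefs it is a positive fraction of expenditure, mirroring how the paper's estimate varies with the month assumed to embody correct beliefs.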

Relevance:

100.00%

Publisher:

Abstract:

A survey was carried out on 55 commercial dairy farms located in the south of Chile during 1995-97. A questionnaire was developed to obtain informed estimates of dairy effluent management on those farms. Information was analysed on an annual basis using a computer spreadsheet linking all the parameters surveyed. In addition, slurry samples were taken for analysis of dry matter (DM) content. Herd size varied between 50 and 800 cows per farm. A large proportion of the total volume of effluent produced came from rainfall (46%); dirty water accounted for 29%, with only 25% from cows' faeces and urine. The large volume of effluent produced resulted in reduced storage capacity (2 months on average) and in more frequent, higher application rates to the field. Only 37% of the farmers knew their manure application rates, and there was a wide range in the quantity used per year (12 m³/ha to 300 m³/ha). Dairy effluents were applied mainly to grass (71%) throughout the year, but mostly concentrated during winter and spring, using only surface irrigation systems. The total solids content of the effluents was very low, with 62% of the samples being <4% DM; this reflected the large volumes of clean water that the storage tanks received. The information collected has identified problems in effluent management on Chilean dairy farms, where research and technology transfer will be necessary to avoid pollution problems.
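The spreadsheet linkage described, an annual effluent budget summing rainfall capture, dirty water and excreta, from which months of storage follow, can be sketched as below. Every coefficient and input value is illustrative, not taken from the survey data.

```python
def annual_effluent_m3(yard_area_m2, rainfall_m_per_yr,
                       dirty_water_m3_per_yr, n_cows, excreta_m3_per_cow_yr):
    """Annual effluent budget (m3/yr): rain falling on open collecting yards,
    plus dirty (wash) water, plus faeces and urine from the herd."""
    rain = yard_area_m2 * rainfall_m_per_yr        # 1 m of rain on 1 m2 = 1 m3
    excreta = n_cows * excreta_m3_per_cow_yr
    return rain + dirty_water_m3_per_yr + excreta

def storage_months(capacity_m3, annual_volume_m3):
    """Months of storage a tank provides, assuming uniform production."""
    return capacity_m3 / (annual_volume_m3 / 12.0)
```

For example, a hypothetical 2000 m² of yard under 1.5 m of annual rain, 1800 m³ of wash water and 150 cows at 11 m³ each gives 6450 m³/yr, with rainfall contributing roughly the dominant share the survey reports, and a 1075 m³ tank would hold about two months' production.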

Relevance:

100.00%

Publisher:

Abstract:

Background: Pseudomonas fluorescens are common soil bacteria that can improve plant health through nutrient cycling, pathogen antagonism and induction of plant defenses. The genome sequences of strains SBW25 and Pf0-1 were determined and compared to each other and with P. fluorescens Pf-5. A functional genomic in vivo expression technology (IVET) screen provided insight into genes used by P. fluorescens in its natural environment and an improved understanding of the ecological significance of diversity within this species. Results: Comparisons of three P. fluorescens genomes (SBW25, Pf0-1, Pf-5) revealed considerable divergence: 61% of genes are shared, the majority located near the replication origin. Phylogenetic and average amino acid identity analyses showed a low overall relationship. A functional screen of SBW25 defined 125 plant-induced genes, including a range of functions specific to the plant environment. Orthologues of 83 of these exist in Pf0-1 and Pf-5, with 73 shared by both strains. The P. fluorescens genomes carry numerous complex repetitive DNA sequences, some resembling Miniature Inverted-repeat Transposable Elements (MITEs). In SBW25, repeat density and distribution revealed 'repeat deserts' lacking repeats, covering approximately 40% of the genome. Conclusions: P. fluorescens genomes are highly diverse. Strain-specific regions around the replication terminus suggest genome compartmentalization. The genomic heterogeneity among the three strains is reminiscent of a species complex rather than a single species. That 42% of plant-inducible genes were not shared by all strains reinforces this conclusion and shows that ecological success requires specialized and core functions. The diversity also indicates the significant size of genetic information within the Pseudomonas pan-genome.

Relevance:

100.00%

Publisher:

Abstract:

Objective: To examine the effects of providing two different types of written information about medicine benefits in a patient information leaflet (PIL). Setting: Participants were 358 adult volunteers from the general population, recruited at a London railway station and in central Reading. Method: The study used a controlled empirical methodology in which people were given a hypothetical, but realistic, scenario about visiting their doctor and being prescribed medication. They then read an information leaflet about the medicine that contained neither, one, or both benefit statements, and finally completed a number of Likert rating scales. Outcome measures included perceived satisfaction and helpfulness of the information, effectiveness and appropriateness of the medicine, benefit and risk to health, and intention to comply. Key findings: Both types of benefit information led to significantly higher ratings on all of the measures taken. Conclusions: Provision of a relatively short ‘benefit’ statement can significantly improve people’s judgements and intention to take a medicine. The findings are important and timely, as the European Union is currently considering reviewing its regulations to allow the inclusion of limited non-promotional benefit information in PILs.

Relevance:

100.00%

Publisher:

Abstract:

Three experiments examined the effects of adding information about medication benefits to a short written explanation about a medicine. Participants were presented with a fictitious scenario about visiting the doctor, being prescribed an antibiotic and being given information about the medicine. They were asked to make various judgements relating to the information, the medicine and their intention to take it. Experiment 1 found that information about benefits enhanced the judgements but did not influence the intention to comply. Experiment 2 compared the relative effectiveness of two different forms of the benefit statement, and found that both were effective in improving judgements but had no effect on intention to comply. Experiment 3 compared the effectiveness of the two forms of benefit information when participants were also told that the medicine was associated with four named side effects. Both types of statement improved ratings of the intention to comply, as well as ratings on the other measures. The experiments provide fairly consistent support for the inclusion of benefit information in medicine information leaflets, particularly to balance concerns about side effects.

Relevance:

100.00%

Publisher:

Abstract:

Two experiments, using a controlled empirical methodology, investigated the effects of presenting information about medicines using a more personalised style of expression. In both studies, members of the general public were given a hypothetical scenario about visiting the doctor, being diagnosed with a particular illness, and being prescribed a medication. They were also given a written explanation about the medicine and were asked to provide ratings on a number of measures, including satisfaction, perceived risk to health, and intention to comply. In Experiment 1 the explanation focused only on possible side effects of the medicine, whereas in Experiment 2 a fuller explanation was provided, which included information about the illness, prescribed drug, its dosage and contraindications as well as its side effects. In both studies, use of a more personalised style resulted in significantly higher ratings of satisfaction and significantly lower ratings of likelihood of side effects occurring and of perceived risk to health. In Experiment 2 it also led to significantly improved recall for the written information.

Relevance:

100.00%

Publisher:

Abstract:

Objective: To assess the effectiveness of absolute risk, relative risk, and number needed to harm (NNH) formats for communicating medicine side effects, with and without the provision of baseline risk information. Methods: A two-factor, risk increase format (relative, absolute and NNH) × baseline (present/absent), between-participants design was used. A sample of 268 women was given a scenario about the increase in side effect risk with third-generation oral contraceptives, and answered written questions assessing their understanding, satisfaction, and likelihood of continuing to take the drug. Results: Provision of baseline information significantly improved risk estimates and increased satisfaction, although the estimates were still considerably higher than the actual risk. No differences between presentation formats were observed when baseline information was presented. Without baseline information, absolute risk led to the most accurate performance. Conclusion: The findings support the importance of informing people about the baseline level of risk when describing risk increases. In contrast, they offer no support for using number needed to harm. Practice implications: Health professionals should provide baseline risk information when presenting information about risk increases or decreases. More research is needed before numbers needed to harm (or treat) should be given to members of the general population. © 2005 Elsevier Ireland Ltd. All rights reserved.
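The three formats compared in the study are arithmetic transformations of the same pair of numbers, which is why the baseline matters so much to how an increase is perceived. A minimal sketch (the figures are illustrative, not the study's):

```python
def risk_formats(baseline_risk, new_risk):
    """Express the same risk increase three ways: absolute risk increase (ARI),
    relative risk (RR), and number needed to harm (NNH = 1 / ARI)."""
    ari = new_risk - baseline_risk
    rr = new_risk / baseline_risk
    nnh = 1.0 / ari
    return ari, rr, nnh

# Illustrative figures only: a baseline risk of 1 in 10,000 doubling to 2 in 10,000.
ari, rr, nnh = risk_formats(1 / 10_000, 2 / 10_000)
# The same change reads as "doubles the risk" (RR = 2), "one extra case per
# 10,000 users" (ARI), or "10,000 users for one extra case" (NNH).
```

Without the baseline, "RR = 2" sounds alarming even though the absolute increase is one case in ten thousand, which is the distortion the baseline-information manipulation addresses.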

Relevance:

100.00%

Publisher:

Abstract:

We agree with Duckrow and Albano [Phys. Rev. E 67, 063901 (2003)] and Quian Quiroga et al. [Phys. Rev. E 67, 063902 (2003)] that mutual information (MI) is a useful measure of dependence for electroencephalogram (EEG) data, but we show that the improvement seen in the performance of MI in extracting dependence trends from EEG depends more on the type of MI estimator than on any embedding technique used. In an independent study, conducted in search of an optimal MI estimator for EEG applications in particular, we examined the performance of a number of MI estimators on the data set used by Quian Quiroga et al. in their original study of the performance of different dependence measures on real data [Phys. Rev. E 65, 041903 (2002)]. We show that for EEG applications the best performance among the investigated estimators is achieved by k-nearest neighbors, which supports the conjecture by Quian Quiroga et al. in Phys. Rev. E 67, 063902 (2003) that the nearest neighbor estimator is the most precise method for estimating MI.
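A k-nearest-neighbor MI estimator of the kind favored here can be sketched with the standard Kraskov-Stögbauer-Grassberger (KSG) construction: find each point's k-th neighbor distance in the joint space under the max-norm, count marginal neighbors within that radius, and combine digamma terms. This is a minimal NumPy sketch (function names are illustrative; the cited studies do not publish this code, and the abstract does not specify which kNN variant was used):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def _digamma_int(m):
    """Digamma at a positive integer: psi(m) = -gamma + sum_{j=1}^{m-1} 1/j."""
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, int(m)))

def ksg_mi(x, y, k=4):
    """KSG k-nearest-neighbor estimate of I(X;Y) in nats (Kraskov et al. 2004)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = len(x)
    # Pairwise Chebyshev (max-norm) distances in each marginal and the joint space.
    dx = np.abs(x[:, None, :] - x[None, :, :]).max(axis=2)
    dy = np.abs(y[:, None, :] - y[None, :, :]).max(axis=2)
    dz = np.maximum(dx, dy)
    np.fill_diagonal(dz, np.inf)                 # exclude self-distances
    eps = np.sort(dz, axis=1)[:, k - 1]          # distance to each point's k-th neighbor
    nx = (dx < eps[:, None]).sum(axis=1) - 1     # marginal counts inside eps (minus self)
    ny = (dy < eps[:, None]).sum(axis=1) - 1
    avg = np.mean([_digamma_int(a + 1) + _digamma_int(b + 1) for a, b in zip(nx, ny)])
    return _digamma_int(k) + _digamma_int(n) - avg
```

On correlated Gaussian samples this approximately recovers the analytic value of minus one-half the log of (1 minus rho squared) nats, up to sampling error, while on independent samples it hovers near zero; production code would replace the O(n²) distance matrices with a k-d tree.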


Relevance:

100.00%

Publisher:

Abstract:

The knowledge economy offers a broad and diverse community of information systems users the opportunity to efficiently gain information and know-how, improving their qualifications and enhancing their productivity in the workplace. Such demand will continue, and users will increasingly require optimised and personalised information content. The advancement of information technology and the wide dissemination of information support individual users in constructing new knowledge from their experience in a real-world context. However, designing personalised information provision is challenging because users’ requirements and information provision specifications are complex to represent. Existing methods are not able to support this analysis process effectively. This paper presents a mechanism which can holistically facilitate the customisation of information provision based on an individual user’s goals, level of knowledge and cognitive style preferences. An ontology model with embedded norms represents the domain knowledge of information provision in a specific context, in which users’ needs can be articulated and represented in a user profile. These formal requirements can then be transformed into information provision specifications, which are used to discover suitable information content from repositories and to organise the selected content pedagogically to meet the users’ needs. The method is adaptable, enabling an appropriate response to changes in users’ requirements during the process of acquiring knowledge and skills.
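The profile-to-content matching at the heart of such a mechanism can be sketched as a filter-and-rank step over a repository. This is a drastically simplified stand-in for the ontology model and its embedded norms; every class, field name and media category below is illustrative rather than taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    goal: str            # topic the user wants to learn
    level: int           # 1 = novice ... 3 = advanced
    style: str           # cognitive style preference, e.g. "visual" or "verbal"

@dataclass
class ContentItem:
    topic: str
    level: int
    media: str           # "diagram", "video", "text", "audio", ...

# Crude stand-in for the ontology's style-to-media norms.
STYLE_MEDIA = {"visual": {"diagram", "video"}, "verbal": {"text", "audio"}}

def select_content(profile, repository):
    """Keep items matching the user's goal at or below their level, then rank
    preferred media first and more advanced items earlier within each group."""
    matches = [c for c in repository
               if c.topic == profile.goal and c.level <= profile.level]
    preferred = STYLE_MEDIA.get(profile.style, set())
    return sorted(matches, key=lambda c: (c.media not in preferred, -c.level))
```

Adaptability then amounts to re-running the selection whenever the profile's goal, level or style changes, rather than fixing the provision up front.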

Relevance:

100.00%

Publisher:

Abstract:

Although the use of climate scenarios for impact assessment has grown steadily since the 1990s, uptake of such information for adaptation is lagging by nearly a decade in terms of scientific output. Nonetheless, integration of climate risk information in development planning is now a priority for donor agencies because of the need to prepare for climate change impacts across different sectors and countries. This urgency stems from concerns that progress made against Millennium Development Goals (MDGs) could be threatened by anthropogenic climate change beyond 2015. Up to this time the human signal, though detectable and growing, will be a relatively small component of climate variability and change. This implies the need for a twin-track approach: on the one hand, vulnerability assessments of social and economic strategies for coping with present climate extremes and variability, and, on the other hand, development of climate forecast tools and scenarios to evaluate sector-specific, incremental changes in risk over the next few decades. This review starts by describing the climate outlook for the next couple of decades and the implications for adaptation assessments. We then review ways in which climate risk information is already being used in adaptation assessments and evaluate the strengths and weaknesses of three groups of techniques. Next we identify knowledge gaps and opportunities for improving the production and uptake of climate risk information for the 2020s. We assert that climate change scenarios can meet some, but not all, of the needs of adaptation planning. Even then, the choice of scenario technique must be matched to the intended application, taking into account local constraints of time, resources, human capacity and supporting infrastructure. We also show that much greater attention should be given to improving and critiquing models used for climate impact assessment, as standard practice. 
Finally, we highlight the over-arching need for the scientific community to provide more information and guidance on adapting to the risks of climate variability and change over nearer time horizons (i.e. the 2020s). Although the focus of the review is on information provision and uptake in developing regions, it is clear that many developed countries are facing the same challenges. Copyright © 2009 Royal Meteorological Society

Relevance:

100.00%

Publisher:

Abstract:

The volume–volatility relationship during the dissemination stages of information flow is examined by analyzing various theories relating volume and volatility as complementary rather than competing models. The mixture of distributions hypothesis, the sequential arrival of information hypothesis, the dispersion of beliefs hypothesis, and the noise trader hypothesis all add to the understanding of how volume and volatility interact for different types of futures traders. An integrated picture of the volume–volatility relationship is provided by investigating the dynamic linear and nonlinear associations between volatility and the volume of informed (institutional) and uninformed (the general public) traders. In particular, the trading behavior explanation for the persistence of futures volatility, the effect of the timing of private information arrival, and the response of institutional traders to excess noise trading risk are examined.