979 results for Information finding
Not published, not indexed: issues in generating and finding hospice and palliative care literature.
Abstract:
INTRODUCTION: Accessing new knowledge as the evidence base for hospice and palliative care grows poses specific challenges for the discipline. This study aimed to describe conversion rates of palliative and hospice care conference abstracts to journal articles and to highlight that some palliative care literature may not be retrievable because it is not indexed in bibliographic databases. METHODS: Substudy A tracked the journal publication of conference abstracts selected for inclusion in a gray literature database on www.caresearch.com.au. Abstracts were included in the gray literature database following handsearching of the proceedings of over 100 Australian conferences, held between 1980 and 1999, that were likely to have some hospice or palliative care content. Substudy B looked at indexing, from first publication until 2001, of three international hospice and palliative care journals in four widely available bibliographic databases, through systematic tracing of all original papers in the journals. RESULTS: Substudy A showed that, of the 1338 abstracts identified, only 15.9% were published (compared with an average of 45% across health research). Published abstracts were found in 78 different journals. Multiauthor abstracts and oral presentations had higher rates of conversion. Substudy B demonstrated a lag time between first publication and bibliographic indexing. Even after listing, idiosyncratic noninclusions were identified. DISCUSSION: There are limitations to retrieving all possible literature through electronic searching of bibliographic databases. Encouraging publication in indexed journals of studies presented at conferences, promoting selection of palliative care journals for database indexing, and searching more than one bibliographic database will improve the accessibility of existing and new knowledge in hospice and palliative care.
Abstract:
Gemstone Team FASTR (Finding Alternative Specialized Travel Routes)
Abstract:
While technologies for genetic sequencing have increased the promise of personalized medicine, they simultaneously pose threats to personal privacy. The public’s desire to protect itself from unauthorized access to information may limit the uses of this valuable resource. To date, there is limited understanding of the public’s attitudes toward the regulation and sharing of such information. We sought to understand the drivers of individuals’ decisions to disclose genetic information to a third party in a setting where disclosure potentially creates both private and social benefits, but also carries the risk of misuse of private information. We conducted two separate but related studies. First, we administered surveys to college students and parents to determine individual attitudes toward, and inter-generational influences on, the disclosure decision. Second, we conducted a game-theory-based experiment that assessed how participants’ decisions to disclose genetic information are influenced by societal and health factors. Key survey findings indicate that concerns about genetic information privacy negatively affect the likelihood of disclosure, while the perceived benefits of disclosure and trust in the institution receiving the information have a positive influence. The experiment results also show that the risk of discrimination negatively affects the likelihood of disclosure, while the positive impact that disclosure has on the probability of finding a cure, together with the presence of a monetary incentive to disclose, increases the likelihood. We also study the determinants of individuals’ decisions to be informed of findings about their health, and how information about health status is used for financial decisions.
Abstract:
We study information rates of time-varying flat-fading channels (FFC) modeled as finite-state Markov channels (FSMC). FSMCs have two main applications for FFCs: modeling channel error bursts and decoding at the receiver. Our main finding in the first application is that receiver observation noise can more adversely affect higher-order FSMCs than lower-order FSMCs, resulting in lower capacities. This is despite the fact that the underlying higher-order FFC and its corresponding FSMC are more predictable. Numerical analysis shows that at low to medium SNR conditions (SNR ≲ 12 dB) and at medium to fast normalized fading rates (0.01 ≲ fDT ≲ 0.10), FSMC information rates are non-increasing functions of memory order. We conclude that BERs obtained by low-order FSMC modeling can provide optimistic results. To explain the capacity behavior, we present a methodology that enables analytical comparison of FSMC capacities with different memory orders. We establish sufficient conditions that predict higher/lower capacity of a reduced-order FSMC, compared to its original high-order FSMC counterpart. Finally, we investigate the achievable information rates in FSMC-based receivers for FFCs. We observe that high-order FSMC modeling at the receiver side results in a negligible information rate increase for normalized fading rates fDT ≲ 0.01.
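The capacity comparison across memory orders cannot be reproduced from the abstract alone, but the flavor of the computation can be illustrated on the simplest FSMC, a two-state Gilbert-Elliott channel whose states act as binary symmetric channels. The sketch below uses hypothetical parameters and assumes the state is known at the receiver, a simpler setting than the information rates studied in the paper.

```python
import numpy as np

def binary_entropy(p):
    """Binary entropy H(p) in bits, with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Hypothetical two-state Gilbert-Elliott FSMC: a "good" and a "bad" state,
# each acting as a binary symmetric channel with its own crossover probability.
P = np.array([[0.98, 0.02],   # transitions from the good state
              [0.10, 0.90]])  # transitions from the bad state
crossover = np.array([0.001, 0.1])  # BSC error probability in each state

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# With the state known at the receiver, capacity is the state-averaged BSC capacity.
capacity = sum(pi[s] * (1 - binary_entropy(crossover[s])) for s in range(2))
print(f"stationary distribution: {pi}, capacity ≈ {capacity:.3f} bits/use")
```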
Abstract:
Background: Results from clinical trials are usually summarized in the form of sampling distributions. When full information (mean, SEM) about these distributions is given, performing meta-analysis is straightforward. However, when some of the sampling distributions have only mean values, a challenging issue is deciding how to use such distributions in meta-analysis. Currently, the most common approaches are either to ignore such trials or, for each trial with a missing SEM, to find a similar trial and take its SEM value as the missing SEM. Both approaches have drawbacks. As an alternative, this paper develops and tests two new methods, the prognostic method and the interval method, for estimating missing SEMs from a set of sampling distributions with full information. A merging method is also proposed to handle clinical trials with partial information in order to simulate meta-analysis.
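The abstract does not spell out the prognostic or interval methods, so the sketch below only illustrates the general problem setup with a much simpler baseline: imputing a missing SEM from the pooled coefficient of variation of the trials that report full information. The trial data and the `impute_sem` helper are hypothetical and are not the paper's methods.

```python
# Hypothetical trials: (mean, SEM or None when the SEM is missing, sample size).
trials = [
    (12.4, 1.1, 40),
    (10.9, 0.9, 55),
    (11.7, None, 48),   # trial reporting only a mean
    (13.2, 1.4, 30),
]

def impute_sem(trials):
    """Fill in missing SEMs using the pooled coefficient of variation (CV) of
    complete trials. This is a simple baseline, not the prognostic or interval
    method proposed in the paper."""
    complete = [(m, se, n) for m, se, n in trials if se is not None]
    # CV based on the standard deviation recovered from the SEM: sd = SEM * sqrt(n).
    pooled_cv = sum((se * n**0.5) / m for m, se, n in complete) / len(complete)
    filled = []
    for m, se, n in trials:
        if se is None:
            se = pooled_cv * m / n**0.5   # back out the SEM from the pooled CV
        filled.append((m, se, n))
    return filled

for mean, sem, n in impute_sem(trials):
    print(f"mean={mean:5.1f}  SEM={sem:4.2f}  n={n}")
```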
Abstract:
Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.
This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the hypothesis that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach based on whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, this observation also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach discovers higher degrees of parallelism, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
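As a rough illustration of the kind of coarse-grain pipeline parallelism the tool targets (not the tool itself), the sketch below splits an outer loop into two stages connected by a queue, so the second stage of iteration i can overlap with the first stage of iteration i+1. The stage functions and data are hypothetical.

```python
import threading
import queue

def stage1(item):
    # Hypothetical first pipeline stage (e.g. parsing or decompressing an item).
    return item * 2

def stage2(item):
    # Hypothetical second pipeline stage (e.g. analysing or writing out the item).
    return item + 1

def pipeline(items):
    """Run stage1 and stage2 as a two-stage pipeline over the outer loop."""
    q = queue.Queue(maxsize=4)
    results = []

    def producer():
        for item in items:
            q.put(stage1(item))
        q.put(None)  # sentinel: no more work

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            results.append(stage2(item))

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(pipeline(range(8)))
```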
Abstract:
This paper examines the occurrence and fragility of information cascades in two laboratory experiments. One group of low-informed participants sequentially guesses which of two states has been randomly chosen. In a matched-pairs design, another group of high-informed participants makes similar guesses after having observed the guesses of the low-informed participants. In the second experiment, participants' beliefs about the chosen state are elicited. In equilibrium, low-informed players who observe an established pattern of identical guesses herd without regard to their private information, whereas high-informed players always guess according to their private information. Equilibrium behavior implies that information cascades emerge in the group of low-informed participants, that the belief based solely on cascade guesses is stationary, and that information cascades are systematically broken by high-informed participants endowed with private information contradicting the cascade guesses. Experimental results show that the behavior of low-informed participants is qualitatively in line with the equilibrium prediction. Information cascades often emerge in our experiments, and the tendency of low-informed participants to engage in cascade behavior increases with the number of identical guesses. Our main finding is that information cascades are not fragile. The behavior of high-informed participants differs markedly from the equilibrium prediction: only one third of laboratory cascades are broken by high-informed participants endowed with private information contradicting the cascade guesses, and the relative frequency of cascade breaks is 15% in situations where five or more identical guesses are observed. Participants' elicited beliefs are strongly consistent with their own behavior and show that, unlike in equilibrium, the more cascade guesses participants observe, the more they believe in the state favored by those guesses.
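The cascade logic behind the equilibrium prediction can be made concrete with the standard signal-counting calculation (a textbook simplification, not the authors' exact design): with symmetric binary signals of accuracy q and equal priors, a low-informed player effectively compares counts of signals favoring each state, so after two or more identical guesses a single contrary private signal can no longer overturn the public evidence.

```python
def posterior_for_A(net_public_signals_for_A, private_signal_for_A, q=2/3):
    """Posterior probability of state A under equal priors and conditionally
    i.i.d. binary signals of accuracy q (simplified signal-counting model)."""
    net = net_public_signals_for_A + (1 if private_signal_for_A else -1)
    likelihood_ratio = (q / (1 - q)) ** net      # P(signals | A) / P(signals | B)
    return likelihood_ratio / (1 + likelihood_ratio)

# A player who can infer two signals for A from public guesses but privately
# receives a B signal:
p = posterior_for_A(net_public_signals_for_A=2, private_signal_for_A=False)
print(f"posterior for A: {p:.2f}")   # > 0.5, so the player herds on A
```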
Abstract:
This paper reports laboratory experiments designed to study the impact of public information about past departure rates on congestion levels and travel costs. Our design is based on a discrete version of Arnott et al.'s (1990) bottleneck model. In all treatments, congestion occurs and the observed travel costs are quite similar to the predicted ones. Subjects' capacity to coordinate is not affected by the availability of public information on past departure rates, by the number of drivers, or by the relative cost of delay. This apparent absence of treatment effects is confirmed by our finding that a parameter-free reinforcement learning model best characterises individual behaviour.
Abstract:
Purpose
– Traditionally, most studies focus on institutionalized, management-driven actors to understand technology management innovation. The purpose of this paper is to argue that there is a need for research studying the nature and role of dissident non-institutionalized actors (i.e. outsourced web designers and rapid application software developers). The authors propose that, through online social knowledge sharing, non-institutionalized actors’ solution-finding tensions enable technology management innovation.
Design/methodology/approach
– A synthesis of the literature and an analysis of the data (21 interviews) provided insights into three areas of solution-finding tensions enabling management innovation. The authors frame the analysis on peripherally deviant work and on the ways in which dissident non-institutionalized actors deviate from their clients’ (understood as the firm) original contracted objectives.
Findings
– The findings provide insights into the productive role of solution-finding tensions in enabling opportunities for management service innovation. Furthermore, deviant practices that leverage non-institutionalized actors’ online social knowledge to fulfill customers’ requirements are not interpreted negatively, but as a positive willingness to proactively explore alternative paths.
Research limitations/implications
– The findings demonstrate the importance of dissident non-institutionalized actors in technology management innovation. However, this work is based on a single country (USA) and additional research is needed to validate and generalize the findings in other cultural and institutional settings.
Originality/value
– This paper provides new insights into the perceptions of dissident non-institutionalized actors in the practice of IT managerial decision making. The work departs from, but also extends, the previous literature, demonstrating that peripherally deviant work in solution-finding practice creates tensions, enabling management innovation between IT providers and users.
Abstract:
Many graph datasets are labelled with discrete and numeric attributes. Most frequent substructure discovery algorithms ignore numeric attributes; in this paper we show how they can be used to improve search performance and discrimination. Our thesis is that the most descriptive substructures are those which are normative both in terms of their structure and in terms of their numeric values. We explore the relationship between graph structure and the distribution of attribute values and propose an outlier-detection step, which is used as a constraint during substructure discovery. By pruning anomalous vertices and edges, more weight is given to the most descriptive substructures. Our method is applicable to multi-dimensional numeric attributes; we outline how it can be extended for high-dimensional data. We support our findings with experiments on transaction graphs and single large graphs from the domains of physical building security and digital forensics, measuring the effect on runtime, memory requirements and coverage of discovered patterns, relative to the unconstrained approach.
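A minimal sketch of the pruning idea follows, using a generic median/MAD outlier rule on a single numeric vertex attribute, which is only one possible instantiation of the outlier-detection step described above: anomalous vertices are removed before frequent-substructure mining so that normative structures carry more weight. The graph and the attribute name are hypothetical.

```python
import statistics
import networkx as nx

def prune_numeric_outliers(graph, attr="value", threshold=3.5):
    """Remove vertices whose numeric attribute is anomalous according to a
    robust median/MAD rule, as a constraint before substructure discovery."""
    values = [data[attr] for _, data in graph.nodes(data=True)]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return graph
    outliers = [n for n, data in graph.nodes(data=True)
                if abs(data[attr] - med) / mad > threshold]
    graph.remove_nodes_from(outliers)
    return graph

# Hypothetical transaction graph with one anomalous vertex.
G = nx.Graph()
for node, value in [("a", 1.0), ("b", 1.2), ("c", 0.9), ("d", 1.1), ("e", 50.0)]:
    G.add_node(node, value=value)
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")])
prune_numeric_outliers(G)
print(sorted(G.nodes()))   # the anomalous vertex "e" is pruned
```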
Abstract:
When a user of a microblogging site authors or browses a microblog post, it provides cues about the topic she is interested in at that point in time. Example-based search, which retrieves similar tweets given one exemplary tweet such as the one just authored, can help provide the user with relevant content. We investigate various components of microblog posts, such as the associated timestamp, the author's social network, and the content of the post, and develop approaches that harness such factors in finding relevant tweets given a query tweet. An empirical analysis of such techniques on real-world Twitter data is then presented to quantify the utility of the various factors in assessing tweet relevance. We observe that content-wise similar tweets that also contain extra information not already present in the query are perceived as useful. We then develop a composite technique that combines the various approaches by scoring tweets using a dynamic query-specific linear combination of the separate techniques. An empirical evaluation establishes the effectiveness of the composite technique and shows that it outperforms each of its constituents.
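A rough sketch of such a composite scorer, assuming hypothetical feature functions and fixed example weights in place of the dynamic query-specific weighting described above: each tweet is scored by a linear combination of content similarity, temporal closeness, and social proximity to the query tweet's author.

```python
import math

def content_similarity(query_tokens, tweet_tokens):
    """Jaccard overlap between token sets (a simple stand-in for content similarity)."""
    q, t = set(query_tokens), set(tweet_tokens)
    return len(q & t) / len(q | t) if q | t else 0.0

def temporal_closeness(query_time, tweet_time, scale=3600.0):
    """Decays with the time gap in seconds (scale is a hypothetical constant)."""
    return math.exp(-abs(query_time - tweet_time) / scale)

def social_proximity(query_author, tweet_author, followees):
    """1.0 for the same author, 0.5 for a followed author, 0 otherwise (hypothetical)."""
    if tweet_author == query_author:
        return 1.0
    return 0.5 if tweet_author in followees.get(query_author, set()) else 0.0

def composite_score(query, tweet, followees, weights=(0.6, 0.25, 0.15)):
    """Linear combination of the three factors; here the weights are fixed,
    whereas the paper learns a query-specific combination."""
    w_content, w_time, w_social = weights
    return (w_content * content_similarity(query["tokens"], tweet["tokens"])
            + w_time * temporal_closeness(query["time"], tweet["time"])
            + w_social * social_proximity(query["author"], tweet["author"], followees))

# Hypothetical usage:
followees = {"alice": {"bob"}}
query = {"tokens": ["marathon", "training", "plan"], "time": 0, "author": "alice"}
tweet = {"tokens": ["marathon", "training", "nutrition", "tips"], "time": 1800, "author": "bob"}
print(round(composite_score(query, tweet, followees), 3))
```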
Abstract:
There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
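For reference, Dempster's original rule of combination (the default strategy that the context-dependent approach falls back on when sources are sufficiently consistent) can be implemented directly over mass functions on a small frame of discernment, as in the sketch below; the frame and the two mass functions are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    {frozenset: mass} dictionaries over the same frame of discernment."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("Sources are totally conflicting; Dempster's rule is undefined.")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical sensor readings over the frame {fault, no_fault}.
m1 = {frozenset({"fault"}): 0.7, frozenset({"fault", "no_fault"}): 0.3}
m2 = {frozenset({"fault"}): 0.6, frozenset({"no_fault"}): 0.1,
      frozenset({"fault", "no_fault"}): 0.3}
print(dempster_combine(m1, m2))
```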
Abstract:
LaFond and Watts (2008) provide evidence that information asymmetry may be a determinant of accounting conservatism. One implication of their paper is that regulators trying to reduce information asymmetry by lowering the level of accounting conservatism might be mistaken. However, there is a trend toward moving away from conservative accounting; the typical example is IFRS adoption. Therefore, this paper studies information asymmetry and accounting conservatism under IFRS adoption. The results show that the level of accounting conservatism decreases after mandatory IFRS adoption, but the adoption of IFRS is likely to weaken the relationship between information asymmetry and accounting conservatism. Moreover, this paper investigates how the change in accounting conservatism under IFRS is related to the change in the information environment. The findings show that accounting conservatism improves the information environment, supporting the idea that, by providing comparatively credible information, conservative accounting is beneficial to the information environment.
Abstract:
Feature selection plays an important role in knowledge discovery and data mining. In traditional rough set theory, feature selection using a reduct, the minimal discerning set of attributes, is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research proposed taking into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following the work mentioned above, a new approach to generating bireducts using a multi-objective genetic algorithm is proposed. Although genetic algorithms have been used to calculate reducts in some previous works, we did not find any work in which genetic algorithms were adopted to calculate bireducts. Compared to earlier work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm estimates the quality of each bireduct by the values of two objective functions as evolution progresses, so that a set of bireducts with optimized values of these objectives is obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied and the prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used to perform a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine significant differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts needed to obtain good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was also analyzed. It was shown that the prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and in some cases outperform them.
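As a minimal illustration of what the objective functions evaluate (not the proposed algorithm itself), the sketch below checks a candidate bireduct, i.e. a pair (attribute subset, object subset), by verifying that the chosen attributes discern every pair of retained objects with different decisions; the decision table is hypothetical.

```python
from itertools import combinations

# Hypothetical decision table: object -> (attribute values, decision class).
table = {
    "o1": ({"a1": 1, "a2": 0, "a3": 1}, "yes"),
    "o2": ({"a1": 1, "a2": 1, "a3": 0}, "no"),
    "o3": ({"a1": 0, "a2": 1, "a3": 1}, "yes"),
    "o4": ({"a1": 1, "a2": 1, "a3": 1}, "no"),
}

def is_consistent_bireduct_pair(attributes, objects, table):
    """Check the consistency part of the bireduct definition: the selected
    attributes must discern every pair of selected objects whose decisions differ.
    (A full bireduct additionally requires minimality of the attribute set and
    maximality of the object set, which this sketch does not verify.)"""
    for x, y in combinations(objects, 2):
        vals_x, dec_x = table[x]
        vals_y, dec_y = table[y]
        if dec_x != dec_y and all(vals_x[a] == vals_y[a] for a in attributes):
            return False   # an undiscerned pair with different decisions
    return True

print(is_consistent_bireduct_pair({"a2"}, {"o1", "o2", "o4"}, table))        # True
print(is_consistent_bireduct_pair({"a2"}, {"o1", "o2", "o3", "o4"}, table))  # False: o2 and o3 collide
```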
Abstract:
Search engines are part of our daily lives. Currently, more than a third of the world's population uses the Internet, and search engines allow them to find quickly the information or products they want. Information retrieval (IR) is the foundation of modern search engines. Traditional IR approaches assume that index terms are independent. However, terms that appear in the same context are often dependent, and ignoring these dependencies is one cause of noise in the results (non-relevant results). Some studies have proposed integrating certain types of dependency, such as proximity, co-occurrence, adjacency and grammatical dependency. In most cases, the dependency models are built separately and then combined with the traditional word-based model with a constant weight. Consequently, they cannot correctly capture variable dependencies and their strength. For example, the dependency between the adjacent words "Black Friday" is stronger than that between the words "road constructions". In this thesis, we study different approaches for capturing term relations and their dependency strength, and propose the following methods. (1) We re-examine the combination approach using different indexing units for Chinese monolingual IR and English-Chinese cross-language IR. In addition to words, we study the possibility of using bigrams and unigrams as translation units for Chinese. Several translation models are built to translate English words into Chinese unigrams, bigrams and words using a parallel corpus. An English query is then translated in several ways, and a ranking score is produced for each translation; the final ranking score combines all these types of translation. (2) We take the dependency between terms into account using Dempster-Shafer evidence theory. An occurrence of a text fragment (of several words) in a document is considered to represent the set of all its constituent terms, and a probability mass is assigned to this set of terms rather than to each individual term. At query evaluation time, this mass is redistributed to the query terms when they differ. This approach allows us to integrate dependency relations between terms. (3) We propose a discriminative model to integrate the different types of dependency according to their strength and their usefulness for IR. In particular, we consider adjacency and co-occurrence dependencies at different distances, that is, bigrams and pairs of terms within windows of 2, 4, 8 and 16 words. The weight of a bigram or of a pair of dependent terms is determined from a set of features using SVM regression. All the proposed methods are evaluated on several English and/or Chinese collections, and the experimental results show that they yield substantial improvements over the state of the art.
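A toy sketch of the last idea, assuming made-up per-bigram weights in place of the SVM-regression weights learned in the thesis: a document score combines unigram matches with weighted matches of adjacent query-term pairs, so strong units such as "Black Friday" contribute more than weak ones.

```python
def bigrams(tokens):
    """Adjacent word pairs of a token sequence."""
    return list(zip(tokens, tokens[1:]))

def score(query_tokens, doc_tokens, bigram_weights, unigram_weight=1.0):
    """Score a document by unigram matches plus weighted adjacent-pair matches.
    The per-bigram weights stand in for the SVM-regression weights of the thesis."""
    doc_unigrams = set(doc_tokens)
    doc_bigrams = set(bigrams(doc_tokens))
    s = unigram_weight * sum(t in doc_unigrams for t in query_tokens)
    s += sum(bigram_weights.get(b, 0.0) for b in bigrams(query_tokens) if b in doc_bigrams)
    return s

# Hypothetical weights: "black friday" is a strong unit, "road constructions" a weak one.
weights = {("black", "friday"): 2.0, ("road", "constructions"): 0.2}
doc = "sales start on black friday despite road constructions downtown".split()
print(score(["black", "friday"], doc, weights))          # 2 unigram hits + 2.0
print(score(["road", "constructions"], doc, weights))    # 2 unigram hits + 0.2
```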