917 results for ambiguous zeroes


Relevance: 10.00%

Abstract:

Polymorbid patients, diverse diagnostic and therapeutic options, more complex hospital structures, financial incentives, benchmarking, as well as perceptional and societal changes put pressure on medical doctors, particularly when medical errors surface. This is especially true in the emergency department, where patients face delayed or erroneous initial diagnostic and therapeutic measures and costly hospital stays due to sub-optimal triage. A "biomarker" is any laboratory tool with the potential to better detect and characterise disease, to simplify complex clinical algorithms, and to improve clinical problem solving in routine care. Biomarkers must be embedded in clinical algorithms to complement, not replace, basic medical skills. Unselected ordering of laboratory tests and shortcomings in test performance and interpretation contribute to diagnostic errors. Test results may be ambiguous, with false positive or false negative results generating unnecessary harm and costs. Laboratory tests should only be ordered if the results have clinical consequences. In studies, we must move beyond observational reporting and the meta-analysis of diagnostic accuracies for biomarkers. Instead, specific cut-off ranges should be proposed and intervention studies conducted to prove outcome-relevant impacts on patient care. The focus of this review is to exemplify the appropriate use of selected laboratory tests in the emergency setting for which randomised controlled intervention studies have proven clinical benefit. Herein, we focus on initial patient triage and the allocation of treatment opportunities in patients with cardiorespiratory diseases in the emergency department. The following six biomarkers will be discussed: proadrenomedullin for prognostic triage assessment and site-of-care decisions, cardiac troponin for acute myocardial infarction, natriuretic peptides for acute heart failure, D-dimers for venous thromboembolism, C-reactive protein as a marker of inflammation, and procalcitonin for antibiotic stewardship in infections of the respiratory tract and sepsis. For these markers we provide an overview of pathophysiology, the historical evolution of the evidence, and strengths and limitations for rational implementation into clinical algorithms. We critically discuss results from the key intervention trials that led to their use in clinical routine, and potential future indications. The rationale for the use of all these biomarkers is to tackle, first, diagnostic ambiguity and the defensive medicine that follows from it; second, delayed and sub-optimal therapeutic decisions; and third, prognostic uncertainty with misguided triage and site-of-care decisions, all of which contribute to the waste of our limited health care resources. A multifaceted approach to more targeted management of medical patients from emergency admission to discharge, including biomarkers, should translate into better resource use, shorter hospital stays, reduced overall costs, and improved patient satisfaction and outcomes in terms of mortality and re-hospitalisation. Hopefully, the concepts outlined in this review will help readers improve their diagnostic skills and become more parsimonious requesters of laboratory tests.
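A note on why ambiguous results are unavoidable: by Bayes' theorem, even a sensitive and reasonably specific test produces mostly false positives when disease prevalence is low. A minimal sketch with illustrative numbers (not figures from the review):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a binary test via Bayes' theorem."""
    tp = sensitivity * prevalence                # true positives
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    fn = (1 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative D-dimer-like figures: high sensitivity, modest specificity.
ppv, npv = predictive_values(sensitivity=0.97, specificity=0.40, prevalence=0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV ~ 0.15, NPV ~ 0.99
```

This asymmetry is why a test such as D-dimer is most useful for ruling out venous thromboembolism in low-risk patients rather than for ruling it in.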

Relevance: 10.00%

Abstract:

The history of human genetics was long a neglected area of research in the history of medicine and science. Only recently have historical studies appeared that address this field of medical research and practice. One important research question concerns the relationship between human genetics and eugenics. This contribution takes up that question and, using a Swiss case study on the heredity of goitre, shows that close but also contradictory relationships existed between human genetics and eugenics in the twentieth century: results from heredity studies frequently contradicted eugenic postulates, yet the very same human-genetic investigations could fuel visions of hereditary-biological surveillance of the population.

Relevance: 10.00%

Abstract:

BACKGROUND AND OBJECTIVES: The biased interpretation of ambiguous social situations is considered a maintaining factor of Social Anxiety Disorder (SAD). Studies on the modification of interpretation bias have shown promising results in laboratory settings. The present study aimed to pilot-test an Internet-based training that targets interpretation and judgmental bias. METHOD: Thirty-nine individuals meeting diagnostic criteria for SAD participated in an 8-week, unguided program. Participants were presented with ambiguous social situations, asked to choose between neutral, positive, and negative interpretations, and required to evaluate the costs of potential negative outcomes. Participants received elaborate automated feedback on their interpretations and judgments. RESULTS: There was a pre-to-post reduction in the targeted cognitive processing biases (d = 0.57-0.77) and in social anxiety symptoms (d = 0.87). Furthermore, results showed changes in depression and general psychopathology (d = 0.47-0.75). Decreases in cognitive biases and symptom changes did not correlate. The results remained stable when accounting for drop-outs (26%) and over a 6-week follow-up period. Forty-five percent of the completer sample showed clinically significant change, and almost half of the participants (48%) no longer met diagnostic criteria for SAD. LIMITATIONS: As the study lacked a control group, the results lend only preliminary support to the efficacy of the intervention. Furthermore, the mechanism of change remains unclear. CONCLUSION: These first results suggest a beneficial effect of the program for SAD patients. The treatment proved feasible and acceptable. Future research should evaluate the intervention in a randomized controlled setting.
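The pre-to-post effect sizes reported above can in principle be computed as the mean change divided by a pooled standard deviation; several conventions exist (for example, dividing by the standard deviation of change scores), and the abstract does not say which the authors used. A minimal sketch with invented scores:

```python
import numpy as np

def cohens_d_prepost(pre, post):
    """Pre-post Cohen's d: mean change / pooled SD of the two time points."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    return (pre.mean() - post.mean()) / pooled_sd

# Invented symptom scores (higher = more socially anxious).
pre = [42, 38, 51, 45, 40, 47]
post = [30, 31, 44, 36, 33, 41]
print(f"d = {cohens_d_prepost(pre, post):.2f}")
```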

Relevance: 10.00%

Abstract:

Cognitive processes are influenced by underlying affective states, and tests of cognitive bias have recently been developed to assess the valence of affective states in animals. These tests are based on the finding that individuals in a negative affective state interpret ambiguous stimuli more pessimistically than individuals in a more positive state. Using two strains of mice, we explored whether unpredictable chronic mild stress (UCMS) can induce a negative judgement bias and whether variation in the expression of stereotypic behaviour is associated with variation in judgement bias. Sixteen female CD-1 and 16 female C57BL/6 mice were trained on a tactile conditional discrimination test with grade of sandpaper as a cue for differential food rewards. Once they had learned the discrimination, half of the mice were subjected to UCMS for three weeks to induce a negative affective state. Although UCMS induced a reduced preference for the higher-value reward in the judgement bias test, it did not affect saccharin preference or hypothalamic–pituitary–adrenal (HPA) activity. However, UCMS affected responses to ambiguous (intermediate) cues in the judgement bias test: while control mice showed a graded response to ambiguous cues, UCMS mice of both strains did not discriminate between ambiguous cues and tended to show shorter latencies to the ambiguous cues and the negative reference cue. UCMS also increased bar-mouthing in CD-1, but not in C57BL/6, mice. Furthermore, mice with higher levels of stereotypic behaviour made more optimistic choices in the judgement bias test. However, no such relationship was found for stereotypic bar-mouthing, highlighting the importance of investigating different types of stereotypic behaviour separately.
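Judgement bias data of this kind are often summarized with an index that locates responses to intermediate cues between the two reference cues. The scoring sketch below is hypothetical, not the authors' analysis code, and the numbers are invented:

```python
import numpy as np

# Proportion of reward-seeking ("go") responses at each cue, from the
# positive reference (P) through intermediates (M1-M3) to negative (N).
cues = ["P", "M1", "M2", "M3", "N"]

def bias_index(go_rates):
    """Mean response to intermediate cues, rescaled so that
    0 = responds like the negative reference, 1 = like the positive."""
    go = np.asarray(go_rates, float)
    p, n = go[0], go[-1]
    return float(np.mean((go[1:-1] - n) / (p - n)))

control = [0.95, 0.80, 0.55, 0.30, 0.10]   # graded response across cues
ucms    = [0.90, 0.35, 0.20, 0.15, 0.10]   # intermediates treated like N
print(bias_index(control), bias_index(ucms))  # ~0.53 vs ~0.17
```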

Relevance: 10.00%

Abstract:

Behavioural tests to assess affective states are widely used in human research and have recently been extended to animals. These tests assume that affective state influences cognitive processing, and that animals in a negative affective state interpret ambiguous information pessimistically, expecting a negative outcome (a negative cognitive bias). Most of these tests, however, require long discrimination training. The aim of this study was to validate an exploration-based cognitive bias test, using two different handling methods, as previous studies have shown that standard tail handling of mice increases physiological and behavioural measures of anxiety compared to cupped handling. We therefore hypothesised that tail-handled mice would display a negative cognitive bias. We handled 28 female CD-1 mice for 16 weeks using either tail handling or cupped handling. The mice were then trained in an eight-arm radial maze, where two adjacent arms predicted a positive outcome (darkness and food) and the two opposite arms predicted a negative outcome (no food, white noise and light). After six days of training, the mice were also given access to the four previously unavailable, ambiguous intermediate arms of the radial maze and tested for cognitive bias. We were unable to validate this test, as mice from both handling groups displayed a similar pattern of exploration. Furthermore, we examined whether maze exploration is affected by the expression of stereotypic behaviour in the home cage. Mice with higher levels of stereotypic behaviour spent more time in positive arms and avoided ambiguous arms, displaying a negative cognitive bias. While this test needs further validation, our results indicate that it may allow the assessment of affective state in mice with minimal training, a major confound in current cognitive bias paradigms.

Relevance: 10.00%

Abstract:

Most monetary models combine the quantity theory of money with a Phillips curve. This implies a strong correlation between money growth and output in the short run (with little or no correlation between money and prices) and a strong correlation between money growth and inflation in the long run (with little or no correlation between money growth and output). The empirical evidence linking money to inflation is very robust, but the long-run money/output relationship is ambiguous at best. This paper attempts to explain this by looking at the impact of money growth on firm financing.
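The quantity-theory logic behind this can be written in growth rates: taking logs and time derivatives of the identity $MV = PY$ gives

```latex
\frac{\dot M}{M} + \frac{\dot V}{V} = \frac{\dot P}{P} + \frac{\dot Y}{Y},
```

so with roughly stable velocity, long-run money growth must show up in inflation, in output growth, or in both; the robust money/inflation link and the ambiguous money/output link described above are two sides of this decomposition.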

Relevance: 10.00%

Abstract:

Under the Clean Air Act, Congress granted discretionary decision-making authority to the Administrator of the Environmental Protection Agency (EPA). This discretionary authority involves setting standards to protect the public's health with an "adequate margin of safety" based on current scientific knowledge. The Administrator of the EPA is usually not a scientist, and for the National Ambient Air Quality Standard (NAAQS) for particulate matter (PM), the Administrator faced the task of revising a standard when several scientific factors were ambiguous. These factors included: (1) no identifiable threshold below which health effects are not manifested, (2) no biological basis to explain the reported associations between particulate matter and adverse health effects, and (3) no consensus among the members of the Clean Air Scientific Advisory Committee (CASAC) as to what an appropriate PM indicator, averaging period, or value would be for the revised standard. This project recommends and demonstrates a tool, integrated assessment (IA), to aid the Administrator in making a public health policy decision in the face of ambiguous scientific factors. IA is an interdisciplinary approach to decision making that has been used to deal with complex issues involving many uncertainties, particularly climate change analyses. Two IA approaches are presented: a rough set analysis, by which the expertise of CASAC members can be better utilized, and a flag model for incorporating the views of stakeholders into the standard-setting process. The rough set analysis can describe minimal and maximal conditions in the current science pertaining to PM and health effects. Similarly, the flag model can evaluate agreement, or lack of agreement, by various stakeholder groups with the proposed standard in the PM review process. The use of these IA tools will enable the Administrator to (1) complete the NAAQS review in closer compliance with the Clean Air Act, (2) expand the input from CASAC, (3) take the views of stakeholders into consideration, and (4) retain discretionary decision-making authority.
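A flag model of this kind is, in essence, a thresholding scheme: each stakeholder group's agreement with the proposed standard is scored and flagged for the Administrator. A purely illustrative sketch, with invented groups and thresholds:

```python
def flag(score, green=0.7, red=0.4):
    """Map an agreement score in [0, 1] to a traffic-light flag."""
    if score >= green:
        return "green"   # group supports the proposed standard
    if score <= red:
        return "red"     # group opposes it
    return "amber"       # ambiguous or mixed position

stakeholders = {"industry": 0.35, "public health": 0.85, "states": 0.60}
print({group: flag(s) for group, s in stakeholders.items()})
```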

Relevance: 10.00%

Abstract:

In the last several decades, traditional community health indicators have become ambiguous and lost some of their relevance. During this same period, national and international health agencies adopted new, expanded definitions of health that include underlying social determinants. These two influences are responsible for a proliferation of new health indicators, many of which are constructed from a combination of older mortality measures and available information on morbidity. Problems inherent in combining these sources of information have produced a situation in which some indicators are difficult to calculate at the national level and may not function at all for small communities. What is needed is a relevant measure of the burden of ill health that is appropriate for smaller populations and accessible to local health planners. Death records are still the best available population health information. In Europe the burden of health problems is often portrayed using 'premature' death, while health agencies in the United States have moved to adopt Years of Potential Life Lost. Both regions are also developing systems of 'avoidable' or 'preventable' death as health indicators. This research proposes a method combining these methodologies to produce a relevant indicator portraying the burden of ill health in communities.
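Years of Potential Life Lost is computable from death records alone, which is what makes it attractive for small communities: each death contributes the years by which it falls short of a reference age. A minimal sketch assuming the common reference age of 75 (a convention, not necessarily the one adopted here):

```python
def ypll(ages_at_death, reference_age=75):
    """Years of Potential Life Lost: sum of years by which each death
    falls short of the reference age; deaths past it contribute zero."""
    return sum(max(reference_age - age, 0) for age in ages_at_death)

deaths = [2, 45, 67, 71, 83, 90]   # illustrative ages at death
print(ypll(deaths))                # 73 + 30 + 8 + 4 = 115
```

Agencies usually report the result as a rate per 1,000 or 100,000 population so that communities of different sizes can be compared.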

Relevance: 10.00%

Abstract:

Although the processes involved in rational patient targeting may be obvious for certain services, for others both the appropriate sub-populations to receive services and the procedures for identifying them may be unclear. This project was designed to address several research questions that arise in the attempt to deliver appropriate services to specific populations. The related difficulties are particularly evident for interventions about which findings regarding effectiveness conflict. When an intervention clearly is not beneficial (or is dangerous) to a large, diverse population, consensus on withholding the intervention from dissemination can easily be reached; when findings are ambiguous, however, such conclusions may be impossible. When the characteristics of patients likely to benefit from an intervention are not obvious, and when the intervention is not significantly invasive or dangerous, the strategy proposed herein may be used to identify specific characteristics of sub-populations that may benefit from the intervention. The identification of these populations may be used both to further inform decisions regarding distribution of the intervention and to plan its implementation by identifying specific target populations for service delivery. This project explores a method for identifying such sub-populations through the use of related datasets generated from clinical trials conducted to test the effectiveness of an intervention. The method is specified in detail and tested using the example intervention of case management for outpatient treatment of populations with chronic mental illness. These analyses were applied in order to identify any characteristics that distinguish specific sub-populations more likely to benefit from case management services, despite conflicting findings regarding its effectiveness for the aggregate population, as reported in the body of related research. In addition to a limited set of characteristics associated with benefit, however, the findings generated a larger set of characteristics of patients likely to experience greater improvement without the intervention.

Relevance: 10.00%

Abstract:

Multiple dietary deficiencies and high rates of infectious illness are major health problems leading to malnutrition and limitation of growth in children in developing countries. Longitudinal studies that provide information on illness incidence and growth velocity are needed in order to untangle the complex interrelationship between nutrition, illness and growth. From 1967 to 1973, researchers led by Dr. Bacon Chow of the Johns Hopkins University School of Hygiene undertook a quasi-experimental prospective study in Suilin Township, Taiwan, to determine the effects of a nutritional supplement to the diets of pregnant and lactating women on the growth, development and resistance to disease of their offspring. This dissertation presents results from the analysis of infant morbidity and postnatal growth. Maternal nutritional supplementation has no apparent effect on the postnatal growth or morbidity of infants. Significant sex differences exist in growth response to illness and in illness susceptibility: male infants have more diarrhea and upper respiratory illness. Respiratory illness is positively associated with growth rate in weight in the first semester of life, while diarrhea is significantly negatively associated with growth in length in the second semester. Small-for-date infants are more susceptible to illness in general and have a different pattern of growth response than large-for-date infants. Principal components analysis of illness data is shown to be an effective technique for making more precise use of ambiguous morbidity data, and multiple regression with component scores is an accurate method for estimating the variance in growth rate predicted by independent illness variables. A model is advanced in which initial postnatal growth rate determines subsequent susceptibility to nutritional stress and infection. Initial growth rate is a function of prenatal nutrition, but is not significantly affected by maternal supplementation during gestation or lactation. A critical evaluation is made of nutritional supplementation programs that do not afford disease control.
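The analytic strategy described above, principal components extracted from correlated illness counts followed by regression of growth rate on the component scores, can be sketched in a few lines. All variable names and data here are hypothetical stand-ins, not the study's:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical per-infant episode counts (diarrhea, URI, fever); the
# columns of such morbidity data are correlated, which motivates PCA.
illness = rng.poisson(lam=[3, 5, 2], size=(100, 3)).astype(float)
growth = 10 - 0.4 * illness[:, 0] + rng.normal(0, 1, 100)  # synthetic g/day

scores = PCA(n_components=2).fit_transform(illness)  # orthogonal components
model = LinearRegression().fit(scores, growth)
print(model.coef_, model.score(scores, growth))      # score() returns R^2
```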

Relevance: 10.00%

Abstract:

Errors in the administration of medication represent a significant loss of medical resources and pose life-altering or life-threatening risks to patients. This paper considers the question: what impact do Computerized Physician Order Entry (CPOE) systems have on medication errors in the hospital inpatient environment? Previous reviews have examined evidence of the impact of CPOE on medication errors but have come to ambiguous conclusions about the effects of CPOE and decision support systems (DSS). Forty-three papers were identified. Thirty-one demonstrated a significant reduction in prescribing error rates for all or some drug types; decreases in minor errors were most often reported. Several studies reported increases in the rate of duplicate orders and failures to remove contraindicated drugs, often attributed to inappropriate design or to an inability to operate the system properly. The evidence that CPOE reduces errors in medication administration is compelling, though it is limited by modest study sample sizes and designs.

Relevance: 10.00%

Abstract:

Left ventricular outflow tract (LVOT) defects are an important group of congenital heart defects (CHDs) because of their associated mortality and long-term complications. LVOT defects include aortic valve stenosis (AVS), coarctation of the aorta (CoA), and hypoplastic left heart syndrome (HLHS). Despite their clinical significance, their etiology is not completely understood. Even though the individual component phenotypes (AVS, CoA, and HLHS) may have different etiologies, they are often "lumped" together in epidemiological studies. Though "lumping" component phenotypes may improve the power to detect associations, it may also lead to ambiguous findings if these defects are etiologically distinct, owing to the potential for effect heterogeneity across component phenotypes. This study had two aims: (1) to identify associations between various risk factors and both the component (i.e., split) and composite (i.e., lumped) LVOT phenotypes, and (2) to assess the effect heterogeneity of risk factors across component phenotypes of LVOT defects. The study was a secondary data analysis; primary data were obtained from the Texas Birth Defects Registry (TBDR), which uses an active surveillance method to ascertain birth defects in Texas. All cases of non-complex LVOT defects that met our inclusion criteria during the period 2002–2008 were included in the study. The comparison group included all unaffected live births for the same period. Data from vital statistics were used to evaluate associations. Statistical associations between selected risk factors and LVOT defects were determined by calculating crude and adjusted prevalence ratios using Poisson regression; effect heterogeneity was evaluated using polytomous logistic regression. There were a total of 2,353 cases of LVOT defects among 2,730,035 live births during the study period, and a total of 1,311 definite cases of non-complex LVOT defects remained for analysis after excluding "complex" cardiac cases and cases associated with syndromes (n=168). Among infant characteristics, males were at significantly higher risk of LVOT defects than females. Among maternal characteristics, significant associations were seen with maternal age > 40 years (compared to maternal age 20–24 years) and maternal residence on the Texas-Mexico border (compared to non-border residence). Among birth characteristics, significant associations with LVOT defects were seen for preterm birth and small-for-gestational-age birth. When evaluating effect heterogeneity, the following variables had significantly different effects across the component LVOT defect phenotypes: infant sex, plurality, maternal age, maternal race/ethnicity, and Texas-Mexico border residence. This study found significant associations between various demographic factors and LVOT defects. While many findings are consistent with results from previous studies, we also identified new factors associated with LVOT defects. Additionally, this study was the first to assess effect heterogeneity across LVOT defect component phenotypes. These findings contribute to a growing body of literature on characteristics associated with LVOT defects.
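Prevalence ratios of the kind reported here are commonly estimated with Poisson regression and robust standard errors applied to binary outcome data, since the log link makes exponentiated coefficients directly interpretable as prevalence ratios. A minimal sketch on synthetic data, not the registry's:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "defect": rng.binomial(1, 0.01, n),        # 1 = LVOT defect (synthetic)
    "male": rng.binomial(1, 0.5, n),
    "mat_age_40plus": rng.binomial(1, 0.05, n),
})

# Log link + robust (HC1) errors: exp(coefficient) is a prevalence ratio.
fit = smf.glm("defect ~ male + mat_age_40plus", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))   # adjusted prevalence ratios
```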

Relevance: 10.00%

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis. We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to data processing method. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers, and error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis. Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work, and we posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data. Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses, yet the factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess their perceptions of these factors. The Delphi process identified 9 factors not found in the literature, differed from the literature on 5 factors in the top 25%, and refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate the content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms. Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates, yet distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support provided by a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working-memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.

Relevance: 10.00%

Abstract:

Background: The failure rate of health information systems is high, partly due to fragmented, incomplete, or incorrect identification and description of specific and critical domain requirements. In order to systematically transform the requirements of work into a real information system, an explicit conceptual framework is essential to summarize the work requirements and guide system design. Recently, Butler, Zhang, and colleagues proposed a conceptual framework called Work Domain Ontology (WDO) to formally represent users' work. This WDO approach has been successfully demonstrated in a real-world design project on aircraft scheduling. However, as a top-level conceptual framework, WDO has not defined an explicit, well-specified schema (WDOS), and it lacks a generalizable, operationalized procedure that can easily be applied to develop a WDO. Moreover, a WDO has not been developed for any concrete healthcare domain. These limitations hinder the utility of WDO in real-world information systems in general and in health information systems in particular. Objective: The objective of this research is to formalize the WDOS, operationalize a procedure to develop WDO, and evaluate the WDO approach using the Self-Nutrition Management (SNM) work domain. Method: Concept analysis was used to formalize the WDOS. Focus group interviews were conducted to capture concepts in the SNM work domain. Ontology engineering methods were adopted to model the SNM WDO. Some of the concepts under the primary goal "staying healthy" for SNM were selected and transformed into a semi-structured survey to evaluate the acceptance, explicitness, completeness, consistency, and experience dependency of the SNM WDO. Result: Four concepts, "goal, operation, object and constraint", were identified and formally modeled in the WDOS with definitions and attributes. 72 SNM WDO concepts under the primary goal were selected and transformed into semi-structured survey questions. The evaluation indicated that the major concepts of the SNM WDO were accepted by 41 overweight subjects. The SNM WDO is generally independent of users' domain experience but partially dependent on SNM application experience. 23 of 41 paired concepts had significant correlations. Two concepts were identified as ambiguous, and 8 extra concepts were recommended toward the completeness of the SNM WDO. Conclusion: The preliminary WDOS is ready, with an operationalized procedure. The SNM WDO has been developed to guide future SNM application design. This research is an essential step towards Work-Centered Design (WCD).
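The four WDOS primitives named in the results (goal, operation, object, constraint) suggest a small schema. The sketch below is speculative: the attribute set is invented for illustration, and the actual WDOS definitions are those given in the dissertation:

```python
from dataclasses import dataclass, field

@dataclass
class Object:          # a thing in the work domain, e.g. "meal", "nutrient"
    name: str

@dataclass
class Constraint:      # a condition that bounds an operation or goal
    description: str

@dataclass
class Operation:       # an action performed on objects, under constraints
    name: str
    objects: list[Object] = field(default_factory=list)
    constraints: list[Constraint] = field(default_factory=list)

@dataclass
class Goal:            # what the user ultimately wants to achieve
    name: str
    operations: list[Operation] = field(default_factory=list)

# Hypothetical SNM fragment under the primary goal "staying healthy".
goal = Goal("staying healthy", [
    Operation("plan meals",
              objects=[Object("meal"), Object("nutrient")],
              constraints=[Constraint("daily calorie budget")]),
])
```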

Relevance: 10.00%

Abstract:

A chronology called EDML1 has been developed for the EPICA ice core from Dronning Maud Land (EDML). EDML1 is closely interlinked with EDC3, the new chronology for the EPICA ice core from Dome-C (EDC) through a stratigraphic match between EDML and EDC that consists of 322 volcanic match points over the last 128 ka. The EDC3 chronology comprises a glaciological model at EDC, which is constrained and later selectively tuned using primary dating information from EDC as well as from EDML, the latter being transferred using the tight stratigraphic link between the two cores. Finally, EDML1 was built by exporting EDC3 to EDML. For ages younger than 41 ka BP the new synchronized time scale EDML1/EDC3 is based on dated volcanic events and on a match to the Greenlandic ice core chronology GICC05 via 10Be and methane. The internal consistency between EDML1 and EDC3 is estimated to be typically ~6 years and always less than 450 years over the last 128 ka (always less than 130 years over the last 60 ka), which reflects an unprecedented synchrony of time scales. EDML1 ends at 150 ka BP (2417 m depth) because the match between EDML and EDC becomes ambiguous further down. This hints at a complex ice flow history for the deepest 350 m of the EDML ice core.
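Exporting one core's chronology to another via volcanic match points amounts to piecewise interpolation: an EDML depth is mapped to the corresponding EDC depth between bracketing tie points, and the EDC3 age at that depth is adopted. A schematic sketch with invented tie points (the real 322-point match is not reproduced here):

```python
import numpy as np

# Invented tie points: depths (m) of the same volcanic horizons in each core.
edml_depth = np.array([100.0, 500.0, 1200.0, 2000.0])
edc_depth  = np.array([ 80.0, 420.0, 1050.0, 1800.0])

# Invented EDC3 depth-age relation for the EDC core (age in years BP).
edc3_depth = np.array([0.0, 500.0, 1000.0, 2000.0])
edc3_age   = np.array([0.0, 12000.0, 30000.0, 90000.0])

def edml1_age(depth_edml):
    """Transfer an EDML depth to an EDC depth via the volcanic match,
    then read the age off the EDC3 chronology."""
    depth_edc = np.interp(depth_edml, edml_depth, edc_depth)
    return np.interp(depth_edc, edc3_depth, edc3_age)

print(edml1_age(800.0))   # age at 800 m depth in the EDML core
```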