931 results for CHD Prediction, Blood Serum Data Chemometrics Methods
Abstract:
BACKGROUND Acetabular fractures and the surgical interventions used to treat them can result in nerve injuries. To date, only small case studies have explored the frequency of nerve injuries and their association with patient and treatment characteristics. High-quality data on the risk of traumatic and iatrogenic nerve lesions and their epidemiology in relation to different fracture types and surgical approaches are lacking. QUESTIONS/PURPOSES The purpose of this study was to determine (1) the proportion of patients who develop nerve injuries after acetabular fracture; (2) which fracture type(s) are associated with increased nerve injury risk; and (3) which surgical approach was associated with the highest proportion of patients developing nerve injuries, using data from the German Pelvic Trauma Registry. Two secondary aims were (4) to assess the relationship between hospital volume and nerve injury; and (5) to assess internal data validity. METHODS Between March 2001 and June 2012, 2236 patients with acetabular fractures were entered into a prospectively maintained registry from 29 hospitals; of those, 2073 (92.7%) had complete records on the endpoints of interest in this retrospective study and were analyzed. The neurological status of these patients was captured at admission and at discharge. A total of 1395 of 2073 (67%) patients underwent surgery, and the proportions of intervention-related and other hospital-acquired nerve injuries were obtained. Overall proportions of patients developing nerve injuries, risk by fracture type, and risk by surgical approach were analyzed. RESULTS The proportion of patients diagnosed with nerve injuries was 4% (76 of 2073) at hospital admission and 7% (134 of 2073) at discharge. Patients with fractures of the "posterior wall" (relative risk [RR], 2.0; 95% confidence interval [CI], 1.4-2.8; p=0.001), "posterior column and posterior wall" (RR, 2.9; CI, 1.6-5.0; p=0.002), and "transverse + posterior wall" (RR, 2.1; CI, 1.3-3.5; p=0.010) types were more likely to have nerve injuries at hospital discharge. The proportions of patients with intervention-related nerve injuries and with other hospital-acquired nerve injuries were both 2% (24 of 1395 and 46 of 2073, respectively). Both were associated with the Kocher-Langenbeck approach (RR, 3.0; CI, 1.4-6.2; p=0.006; and RR, 2.4; CI, 1.4-4.3; p=0.004, respectively). CONCLUSIONS Acetabular fractures involving the posterior wall were most commonly accompanied by nerve injuries. The data also suggest that the Kocher-Langenbeck approach is associated with a higher risk of perioperative nerve injuries. Trauma surgeons should be aware of common nerve injuries, particularly in posterior wall fractures. The results of this study should help provide patients with more precise information on the risk of perioperative nerve injuries in acetabular fractures. LEVEL OF EVIDENCE Level III, therapeutic study. See Guidelines for Authors for a complete description of levels of evidence.
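As a back-of-the-envelope illustration (not part of the study), the relative risks and Wald confidence intervals reported above can be computed from a 2x2 table as in the following Python sketch; the cell counts are hypothetical, not the registry's actual data.

```python
import math

# Hypothetical 2x2 counts (NOT the registry's actual cell counts):
# exposure = posterior-wall fracture, outcome = nerve injury at discharge
a, b = 40, 360    # injuries / no injuries among posterior-wall fractures
c, d = 94, 1579   # injuries / no injuries among all other fracture types

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# Wald 95% CI computed on the log scale
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.1f} (95% CI {lower:.1f}-{upper:.1f})")
```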
Abstract:
Software developers are often unsure of the exact name of the method they need to use to invoke the desired behavior in a given context. This results in a search for the correct method name in the documentation, which can be lengthy and distracting to the developer. We can decrease method search time by enhancing the documentation of a class with its most frequently used methods. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem, i.e., projects written in the same language and sharing dependencies. We implemented a proof of concept of the approach for Pharo Smalltalk and Java. In Pharo Smalltalk, methods are commonly searched for using a code browser tool called "Nautilus"; in Java, using a web browser displaying HTML-based documentation (Javadoc). We developed plugins for both browsers and gathered method usage data from open source projects, in order to increase developer productivity by reducing method search time. A small initial evaluation showed promising results in improving developer productivity.
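A minimal sketch of the ecosystem-mining step described above, assuming a corpus of Java sources on disk; a production tool would resolve call sites through a real parser and type information, whereas the regex here only approximates method-call syntax, and all paths and names are illustrative.

```python
import re
from collections import Counter
from pathlib import Path

# Crude approximation of call-site extraction: matches ".methodName(".
# A real implementation would parse ASTs and resolve receiver types.
CALL_RE = re.compile(r"\.([a-z]\w*)\s*\(")

def method_usage_counts(corpus_root: str) -> Counter:
    """Count how often each method name is invoked across a project corpus."""
    counts: Counter = Counter()
    for src in Path(corpus_root).rglob("*.java"):
        counts.update(CALL_RE.findall(src.read_text(errors="ignore")))
    return counts

# e.g. the 20 most frequently used methods in the ecosystem:
# print(method_usage_counts("path/to/corpus").most_common(20))
```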
Abstract:
Software developers are often unsure of the exact name of the API method they need to use to invoke the desired behavior. Most state-of-the-art documentation browsers present API artefacts in alphabetical order. Albeit easy to implement, alphabetical order does not help much: if developers knew the name of the required method, they could simply have searched for it in the first place. In a context where multiple projects use the same API and their source code is available, we can improve the API presentation by ordering the elements by how likely they are to be used by the developer. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem, and this data is then used to improve tools. We present a preliminary study on the potential of this approach to improve the API presentation by reducing the time it takes to find the method that implements a given feature. We also briefly present our experience with two proof-of-concept tools implemented for Smalltalk and Java.
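Given such frequency data, reordering an alphabetical API listing is a one-liner; a small sketch with hypothetical method names and counts:

```python
from typing import Mapping, Sequence

def order_by_usage(methods: Sequence[str], counts: Mapping[str, int]) -> list[str]:
    # Most-used methods first; methods never seen in the corpus fall
    # back to alphabetical order at the end of the listing.
    return sorted(methods, key=lambda m: (-counts.get(m, 0), m))

api = ["add", "clear", "contains", "remove", "size"]   # alphabetical listing
usage = {"add": 950, "size": 400, "remove": 120}       # mined frequencies
print(order_by_usage(api, usage))
# ['add', 'size', 'remove', 'clear', 'contains']
```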
Abstract:
Background: In an artificial pancreas (AP), meals are either manually announced or detected and their size estimated from the blood glucose level. Both methods have limitations, which result in suboptimal postprandial glucose control. The GoCARB system is designed to provide the carbohydrate content of meals and is presented here within the AP framework. Method: The combined use of GoCARB with a control algorithm is assessed in a series of 12 computer simulations. The simulations are defined according to the type of control (open or closed loop), the use or non-use of GoCARB, and the patients' skill in carbohydrate estimation. Results: For poor estimators without GoCARB, the percentage of time spent in the target range (70-180 mg/dl) during the postprandial period is 22.5% and 66.2% for open and closed loop, respectively. When GoCARB is used, the corresponding percentages are 99.7% and 99.8%. In the case of open loop, the time spent in severe hypoglycemia (<50 mg/dl) is 33.6% without GoCARB and is reduced to 0.0% with GoCARB. In the case of closed loop, the corresponding percentage is 1.4% without GoCARB and 0.0% with GoCARB. Conclusion: The use of GoCARB improves the control of the postprandial response and the glucose profiles, especially in the case of open loop. However, the most effective regulation is achieved by the combined use of the control algorithm and GoCARB.
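The time-in-range percentages reported above are simple fractions of samples falling inside a glucose band; a minimal sketch on an invented postprandial trace (the values and the 5-minute sampling are illustrative, not simulator output):

```python
import numpy as np

# Invented postprandial glucose trace (mg/dl), one sample every 5 minutes
glucose = np.array([95, 120, 165, 190, 175, 150, 130, 110, 90, 60, 48, 70])

time_in_range = np.mean((glucose >= 70) & (glucose <= 180)) * 100
severe_hypo = np.mean(glucose < 50) * 100
print(f"time in target range (70-180 mg/dl): {time_in_range:.1f}%")
print(f"time in severe hypoglycemia (<50 mg/dl): {severe_hypo:.1f}%")
```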
Abstract:
The endocannabinoid system (ECS) comprises the cannabinoid receptors CB1 and CB2 and their endogenous arachidonic acid-derived agonists 2-arachidonoyl glycerol and anandamide, which play important neuromodulatory roles. Recently, a novel class of negative allosteric CB1 receptor peptide ligands, hemopressin-like peptides derived from alpha hemoglobin, has been described, with yet unknown origin and function in the CNS. Using monoclonal antibodies we now identified the localization of RVD-hemopressin (pepcan-12) and N-terminally extended peptide endocannabinoids (pepcans) in the CNS and determined their neuronal origin. Immunohistochemical analyses in rodents revealed distinctive and specific staining in major groups of noradrenergic neurons, including the locus coeruleus (LC), A1, A5 and A7 neurons, which appear to be major sites of production/release in the CNS. No staining was detected in dopaminergic neurons. Peptidergic axons were seen throughout the brain (notably hippocampus and cerebral cortex) and spinal cord, indicative of anterograde axonal transport of pepcans. Intriguingly, the chromaffin cells in the adrenal medulla were also strongly stained for pepcans. We found specific co-expression of pepcans with galanin, both in the LC and adrenal gland. Using LC-MS/MS, pepcan-12 was only detected in non-perfused brain (∼40 pmol/g), suggesting that in the CNS it is secreted and present in extracellular compartments. In adrenal glands, significantly more pepcan-12 (400-700 pmol/g) was measured in both non-perfused and perfused tissue. Thus, chromaffin cells may be a major production site of pepcan-12 found in blood. These data uncover important areas of peptide endocannabinoid occurrence with exclusive noradrenergic immunohistochemical staining, opening new doors to investigate their potential physiological function in the ECS. This article is part of a Special Issue entitled 'Fluorescent Neuro-Ligands'.
Abstract:
The currently proposed space debris remediation measures include the active removal of large objects and "just in time" collision avoidance by deviating the objects using, e.g., ground-based lasers. Both techniques require precise knowledge of the attitude state and state changes of the target objects: in the former case, to devise methods to grapple the target by a tug spacecraft; in the latter, to precisely propagate the orbits of potential collision partners, as disturbing forces like air drag and solar radiation pressure depend on the attitude of the objects. Non-resolving optical observations of the magnitude variations, so-called light curves, are a promising technique to determine rotation or tumbling rates and the orientations of the actual rotation axis of objects, as well as their temporal changes. The 1-meter telescope ZIMLAT of the Astronomical Institute of the University of Bern has been used to collect light curves of MEO and GEO objects for a considerable period of time. Recently, light curves of Low Earth Orbit (LEO) targets were acquired as well. We present different observation methods, including active tracking using a CCD subframe readout technique, and the use of a high-speed scientific CMOS camera. Technical challenges when tracking objects with poor orbit predictions, as well as different data reduction methods, are addressed. Results from a survey of abandoned rocket upper stages in LEO, examples of abandoned payloads, and observations of high area-to-mass ratio debris will be presented. Finally, first results of the analysis of these light curves are provided.
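A common first step in analyzing such light curves is a period search over unevenly sampled photometry; the sketch below uses SciPy's Lomb-Scargle periodogram on synthetic data and is not the ZIMLAT reduction pipeline (all numbers are invented):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled light curve: a tumbling object with a
# 24 s photometric period plus noise (stand-in for real observations)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 300, 400))          # observation epochs [s]
mag = 0.4 * np.sin(2 * np.pi * t / 24.0) + 0.05 * rng.standard_normal(t.size)

periods = np.linspace(5, 120, 4000)            # trial periods [s]
power = lombscargle(t, mag - mag.mean(), 2 * np.pi / periods, normalize=True)
print(f"best-fit photometric period: {periods[np.argmax(power)]:.1f} s")

# Caveat: a body reflecting twice per revolution shows two brightness
# peaks, so the rotation period may be twice the photometric period.
```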
Abstract:
BACKGROUND: Cardiovascular diseases are the leading cause of death worldwide and in Switzerland. When applied, treatment guidelines for patients with acute ST-segment elevation myocardial infarction (STEMI) improve the clinical outcome and should eliminate treatment differences by sex and age for patients whose clinical situations are identical. In Switzerland, the rate at which STEMI patients receive revascularization may vary by patient and hospital characteristics. AIMS: To examine all hospitalizations in Switzerland from 2010 to 2011 to determine whether patient or hospital characteristics affected the rate of revascularization (receiving either a percutaneous coronary intervention or coronary artery bypass grafting) in acute STEMI patients. DATA AND METHODS: We used national data sets on hospital stays, and on hospital infrastructure and operating characteristics, for the years 2010 and 2011, to identify all emergency patients admitted with the main diagnosis of acute STEMI. We then calculated the proportion of patients who were treated with revascularization. We used multivariable multilevel Poisson regression to determine whether receipt of revascularization varied by patient and hospital characteristics. RESULTS: Of the 9,696 cases we identified, 71.6% received revascularization. Patients were less likely to receive revascularization if they were female or aged 80 years or older. In the multivariable multilevel Poisson regression analysis, small-volume hospitals showed a trend toward performing fewer revascularizations that was not statistically significant, while being female (relative proportion = 0.91, 95% CI: 0.86 to 0.97) and being older than 80 years remained associated with less frequent revascularization. CONCLUSION: Female and older patients were less likely to receive revascularization. Further research needs to clarify whether this reflects differential application of treatment guidelines or limitations of this kind of routine data.
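The "relative proportions" above come from Poisson regression applied to a binary outcome; a minimal statsmodels sketch on invented patient-level data (variable names are illustrative, and the hospital-level random effect of the actual multilevel model is omitted for brevity):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented patient-level records: revasc = 1 if revascularization received
df = pd.DataFrame({
    "revasc":    [1, 1, 0, 1, 0, 1, 1, 1, 0, 1],
    "female":    [0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "age80plus": [0, 0, 1, 0, 0, 0, 1, 1, 1, 0],
})

# Poisson regression with robust (sandwich) errors on a binary outcome
# estimates relative proportions directly: exp(coef) is the RP.
model = smf.glm("revasc ~ female + age80plus", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
print(model.summary())
```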
Abstract:
BACKGROUND Canine S100 calcium-binding protein A12 (cS100A12) shows promise as a biomarker of inflammation in dogs. A previously developed cS100A12 radioimmunoassay (RIA) requires radioactive tracers and is not sensitive enough to measure fecal cS100A12 concentrations in 79% of tested healthy dogs. An ELISA may be more sensitive than the RIA and does not require radioactive tracers. OBJECTIVE The purpose of the study was to establish a sandwich ELISA for serum and fecal cS100A12, and to establish reference intervals (RI) for serum and feces from healthy dogs. METHODS Polyclonal rabbit anti-cS100A12 antibodies were generated and tested by Western blotting and immunohistochemistry. A sandwich ELISA was developed and validated, including accuracy and precision, and agreement with the cS100A12-RIA. The RI, stability, and biologic variation of fecal cS100A12, and the effect of corticosteroids on serum cS100A12, were evaluated. RESULTS Lower detection limits were 5 μg/L (serum) and 1 ng/g (feces). Intra- and inter-assay coefficients of variation were ≤4.4% and ≤10.9%, respectively. Observed-to-expected ratios for linearity and spiking recovery were 98.2 ± 9.8% (mean ± SD) and 93.0 ± 6.1%, respectively. There was a significant bias between the ELISA and the RIA. The RI was 49-320 μg/L for serum and 2-484 ng/g for fecal cS100A12. Fecal cS100A12 was stable for 7 days at 23, 4, -20, and -80°C; biologic variation was negligible, but variation within one fecal sample was significant. Corticosteroid treatment had no clinically significant effect on serum cS100A12 concentrations. CONCLUSIONS The cS100A12-ELISA is a precise and accurate assay for serum and fecal cS100A12 in dogs.
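The precision and recovery figures above are simple summary statistics; a minimal sketch with invented replicate measurements:

```python
import statistics

# Invented replicate measurements of one serum sample (μg/L)
replicates = [102.0, 98.5, 100.4, 97.8, 101.2]

# Intra-assay coefficient of variation: SD as a percentage of the mean
cv = statistics.stdev(replicates) / statistics.mean(replicates) * 100
print(f"intra-assay CV: {cv:.1f}%")

# Observed-to-expected ratio, e.g. for a spiking-recovery experiment
observed, expected = 93.0, 100.0
print(f"observed-to-expected recovery: {observed / expected * 100:.1f}%")
```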
Abstract:
Social capital, a relatively new public health concept, represents the intangible resources embedded in social relationships that facilitate collective action. Current interest in the concept stems from empirical studies linking social capital with health outcomes. However, for social capital to function as a meaningful research variable, conceptual development aimed at refining the domains, attributes, and boundaries of the concept is needed. An existing framework of social capital (Uphoff, 2000), developed from studies in India, was selected for congruence with the inductive analysis of pilot data from a community that was unsuccessful at mobilizing collective action. This framework provided the underpinnings for a formal ethnographic research study designed to examine the components of social capital in a community that had successfully mobilized collective action. The specific aim of the ethnographic study was to examine the fittingness of Uphoff's framework in the contrasting American community. A contrasting context was purposefully selected to distinguish essential attributes of social capital from those that were specific to one community. Ethnographic data collection methods included participant observation, formal interviews, and public documents. Data were initially analyzed according to codes developed from Uphoff's theoretical framework. The results from this analysis were only partially satisfactory, indicating that the theoretical framework required refinement. Refining the coding system resulted in the emergence of an explanatory theory of social capital that was tested against the data collected from formal fieldwork. Although Uphoff's framework was useful, its refinement revealed (1) trust as the dominant attribute of social capital; (2) efficacy of mutually beneficial collective action as the outcome indicator; (3) cognitive and structural domains more appropriately defined as the cultural norms of the community and group; and (4) a definition of social capital as the combination of the cognitive norms of the community and the structural norms of the group that are either constructive or destructive to the development of trust and the efficacy of mutually beneficial collective action. This explanatory framework holds increased pragmatic utility for public health practice and research.
Abstract:
Case-control and retrospective studies have identified parental substance abuse as a risk factor for physical child abuse and neglect (Dore, Doris, & Wright, 1995, May; S. R. Dube et al., 2001; Guterman & Lee, 2005, May; Walsh, MacMillan, & Jamieson, 2003). The purpose of this paper is to present the findings of a systematic review of prospective studies from 1975 through 2005 that include parental substance abuse as a risk factor for physical child abuse or neglect. Characteristics of each study, such as the research question, sample information, data collection methods, and results, including which parent was assessed and the definitions of substance abuse and of physical child abuse and neglect, are discussed. Five studies were identified that met the search criteria. Four of the five found that parental substance abuse was a significant variable in predicting physical child abuse and neglect.
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods that provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches to meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal model and a bivariate binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variance of sensitivity, the variance of specificity, and the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference Using Gibbs Sampling' (BUGS) implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent between the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated sensitivity and specificity regardless of sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be applied directly to sparse data without ad hoc correction.
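For flavor, here is a sketch in the spirit of the bivariate binomial model in PyMC, with per-study sensitivity and specificity modeled as correlated logit-scale random effects; note that it swaps the paper's uniform/inverse-Wishart priors and BUGS-based Gibbs sampling for an LKJ prior and NUTS, and the per-study counts are invented rather than taken from the cervical cancer or melanoma datasets.

```python
import numpy as np
import pymc as pm

# Invented per-study counts: true positives / diseased, true negatives / healthy
tp, n_dis = np.array([20, 35, 15, 50]), np.array([25, 40, 20, 60])
tn, n_hlt = np.array([70, 55, 80, 90]), np.array([80, 60, 90, 100])

with pm.Model():
    # Pooled mean of (logit sensitivity, logit specificity)
    mu = pm.Normal("mu", 0.0, 2.0, shape=2)
    # Between-study covariance via an LKJ Cholesky prior
    chol, corr, stds = pm.LKJCholeskyCov(
        "chol", n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0))
    theta = pm.MvNormal("theta", mu=mu, chol=chol, shape=(len(tp), 2))
    # Paired binomial likelihoods share the correlated random effects
    pm.Binomial("tp", n=n_dis, p=pm.math.invlogit(theta[:, 0]), observed=tp)
    pm.Binomial("tn", n=n_hlt, p=pm.math.invlogit(theta[:, 1]), observed=tn)
    idata = pm.sample(1000, tune=1000, chains=2)
```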
Abstract:
Patients who started HAART (highly active antiretroviral treatment) under the earlier, more aggressive DHHS guidelines (1997) underwent life-long continuous HAART, which was associated with many short-term as well as long-term complications. Many interventions have attempted to reduce those complications, including intermittent treatment, also called pulse therapy. Many studies have examined the determinants of the rate of fall in CD4 count after interruption, as these data would help guide treatment interruptions. The data set used here was part of a cohort study under way at the Johns Hopkins AIDS service since January 1984, in which data were collected both prospectively and retrospectively. The patients in this data set consisted of 47 patients receiving pulse therapy with the aim of reducing long-term complications. The aim of this project was to study the impact of virologic and immunologic factors on the rate of CD4 loss after treatment interruption. The exposure variables of interest were age, race, gender, CD4 cell count, and HIV RNA level at HAART initiation. The rate of change of CD4 cell count after treatment interruption was estimated from observed data using advanced longitudinal data analysis methods (i.e., a linear mixed model). Random effects accounted for repeated measures of CD4 per person after treatment interruption. The regression coefficient estimates from the model were then used to produce subject-specific rates of CD4 change, accounting for group trends in change. The rate of fall of CD4 count did not depend on CD4 cell count or viral load at initiation of treatment; thus these factors may not be useful in determining who has a chance of successful treatment interruption. CD4 count and viral load were further studied with t-tests and ANOVA after grouping based on medians and quartiles to detect any difference in the mean rate of CD4 fall after interruption. There was no significant difference between the groups, suggesting no association between the rate of fall of CD4 after treatment interruption and the above-mentioned exposure variables.
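A minimal sketch of the random-slope mixed model described, using statsmodels MixedLM on invented long-format CD4 data; the fixed "months" coefficient is the population-average rate of decline, and adding each patient's random slope yields the subject-specific rates used in the analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented long-format data: repeated CD4 counts after interruption
df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "months":  [0, 3, 6, 0, 3, 6, 0, 3, 6, 0, 3, 6],
    "cd4":     [620, 540, 470, 480, 460, 420, 700, 600, 510, 550, 500, 430],
})

# Random intercept and random slope for each patient
model = smf.mixedlm("cd4 ~ months", df, groups=df["patient"],
                    re_formula="~months").fit()
print(model.summary())
# Subject-specific slope = fixed slope + that patient's random slope:
# print(model.random_effects)
```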
Abstract:
Medication errors, one of the most frequent types of medical errors, are a common cause of patient harm in hospital systems today. Nurses at the bedside are positioned to encounter many of these errors, since they are present at both the start of the process (ordering/prescribing) and its end (administration). One recommendation of the IOM (Institute of Medicine) report "To Err is Human" was for organizations to identify and learn from medical errors through event-reporting systems. While many organizations have reporting systems in place, research studies report substantial underreporting by nurses. A systematic review of the literature was performed to identify contributing factors related to the reporting and non-reporting of medication errors by nurses at the bedside. Articles included in the literature review were primary or secondary studies, dated January 1, 2000 through July 2009, related to nursing medication error reporting. All 634 articles were reviewed using an algorithm developed to standardize the review process and filter out those that did not meet the study criteria. In addition, 142 article bibliographies were reviewed to find studies not captured by the original literature search. After reviewing the 634 articles and the additional 108 articles discovered in the bibliography review, 41 articles met the study criteria and were included in the systematic literature review results. Fear of punitive reactions to medication errors was a frequent barrier to error reporting: nurses fear reactions from their leadership, peers, patients and their families, nursing boards, and the media. Anonymous reporting systems and departments/organizations with a strong safety culture encouraged the reporting of medication errors by nursing staff. Many of the studies included in this literature review do not yield generalizable results; the majority took place in single institutions/organizations with limited sample sizes. Stronger studies with larger sample sizes, using validated data collection methods, are needed to establish stronger correlations between safety culture and nurse error reporting.
Abstract:
Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact these methodologies have on the data analysis. Methods: We first evaluated whether the assumption of missing completely at random (MCAR) held for this study. We then conducted a secondary data analysis using a mixed linear model, handling missing data with three methodologies: (a) complete-case analysis; (b) multiple imputation with an explicit model containing the outcome variables, time, and the time-by-treatment interaction; and (c) multiple imputation with an explicit model containing the outcome variables, time, the time-by-treatment interaction, and additional covariates (e.g., age, gender, smoking, years in school, marital status, housing, race/ethnicity, and whether participants played on an athletic team). Several comparisons were conducted, including: (1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing only group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; (2) MIF vs. FBO; and (3) MIF vs. MIO. Results: Evaluation of the patterns of missingness indicated that about 13% of participants showed monotone missing patterns and about 3.5% showed non-monotone missing patterns. We then evaluated the MCAR assumption with Little's MCAR test, in which the chi-square test statistic was 167.8 with 125 degrees of freedom and an associated p-value of p=0.006, indicating that the data could not be assumed to be missing completely at random. After that, we compared whether the three strategies reached the same results. For the comparison between MIF and AO, as well as between MIF and FBO, only multiple imputation with additional covariates under uncongenial and congenial models reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values obtained different results. Discussion: The study indicated, first, that missingness was crucial in this study, and second, that understanding the model assumptions was important, since we could not identify whether the data were missing at random or missing not at random. Future research should therefore focus on sensitivity analyses under the missing-not-at-random assumption.
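A sketch contrasting complete-case analysis with multiple imputation by chained equations, using statsmodels' MICE on invented data; the formula mirrors the time-by-treatment model described, the additional covariates are omitted, and Rubin's rules pool the per-imputation fits.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Invented trial data with ~15% of outcomes missing
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "outcome": rng.normal(10, 2, 200),
    "time":    np.tile([0.0, 1.0], 100),
    "treat":   np.repeat([0.0, 1.0], 100),
})
df.loc[rng.choice(200, 30, replace=False), "outcome"] = np.nan

# (a) complete-case analysis: drop every row with a missing outcome
cc = sm.OLS.from_formula("outcome ~ time * treat", data=df.dropna()).fit()

# (b) multiple imputation by chained equations, pooled via Rubin's rules
imputed = mice.MICEData(df)
mi = mice.MICE("outcome ~ time * treat", sm.OLS, imputed).fit(
    n_burnin=10, n_imputations=20)

print(cc.params)
print(mi.summary())
```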
Abstract:
Hunting is assuming a growing role in the current European forestry and agroforestry landscape. However, consistent statistical sources providing quantitative information for policy-making, planning, and management of game resources are often lacking. In addition, statistical information is in many instances used without sufficient evaluation or criticism. Recently, the European Commission has declared the importance of high-quality hunting statistics and the need to set up a common scheme in Europe for their collection, interpretation, and proper use. This work aims to contribute to the current debate on hunting statistics in Europe by exploring the last 35 years of Spanish hunting statistics. The analysis focuses on the three major pillars underpinning hunting activity: hunters, hunting grounds, and game animals. First, the study aims to provide a better understanding of official hunting statistics for use by researchers, game managers, and other potential users. Second, it highlights the major strengths and weaknesses of the statistical information collected. The results of the analysis indicate that official hunting statistics can be incomplete, dispersed, and not always homogeneous over long periods, an issue to be aware of when using official hunting data for scientific or technical work. To remedy the statistical deficiencies associated with hunting data in Spain, our main suggestion is the adoption of a common data collection protocol to which the different regions agree. This protocol should be consistent with future European hunting statistics and based on robust and well-informed data collection methods. It should also expand the range of biological, ecological, and economic concepts currently included, to take account of the profound transformations the hunting sector has experienced in recent years. As far as possible, any future changes in the selection of hunting statistics should allow new variables to be compared with previous ones.