841 results for Systematic analysis
Abstract:
OBJECTIVE: In order to improve the quality of our Emergency Medical Services (EMS), to raise bystander cardiopulmonary resuscitation rates, and thereby to meet what is becoming a universal standard in terms of quality of emergency services, we decided to implement systematic dispatcher-assisted or telephone-CPR (T-CPR) in our medical dispatch center, a non-Advanced Medical Priority Dispatch System. The aim of this article is to describe the implementation process, costs, and results following the introduction of this new "quality" procedure. METHODS: This was a prospective study. Over an 8-week period, our EMS dispatchers were given new procedures to provide T-CPR. We then collected data on all non-traumatic cardiac arrests within our state (Vaud, Switzerland) for the following 12 months. For each event, the dispatchers had to record in writing the reason they either ruled out cardiac arrest (CA) or did not propose T-CPR in the event they did suspect CA. All emergency call recordings were reviewed by the medical director of the EMS. The analysis of the recordings and the dispatchers' written explanations were then compared. RESULTS: During the 12-month study period, a total of 497 patients (both adults and children) were identified as having a non-traumatic cardiac arrest. Of this total, 203 cases were excluded and 294 cases were eligible for T-CPR. Of these eligible cases, dispatchers proposed T-CPR on 202 occasions (69% of eligible cases). They also erroneously proposed T-CPR on 17 occasions when a CA was wrongly identified (false positives). This represents 7.8% of all T-CPR proposals. No costs were incurred to implement our study protocol and procedures.
CONCLUSIONS: This study demonstrates that it is possible, using a brief campaign of sensitization but without any specific training, to implement systematic dispatcher-assisted cardiopulmonary resuscitation in a non-Advanced Medical Priority Dispatch System such as our EMS, which had no prior experience with systematic T-CPR. The results in terms of T-CPR delivery rate and false positives are similar to those found in previous studies. We found our results satisfying given the short time frame of this study. Our results demonstrate that it is possible to improve the quality of emergency services at moderate or even no additional cost, and this should be of interest to all EMS that do not presently benefit from T-CPR procedures. EMS that currently do not offer T-CPR should consider implementing this technique as soon as possible, and we expect our experience may provide answers to those planning to incorporate T-CPR into their daily practice.
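The delivery and false-positive rates in the abstract follow directly from its reported counts; the sketch below re-derives them (the variable names are ours, for illustration only, not from the study):

```python
# Re-deriving the rates quoted in the abstract from its reported counts.
# Variable names are illustrative, not taken from the study.
eligible_cases = 294   # cases eligible for T-CPR
tcpr_proposed = 202    # T-CPR actually proposed among eligible cases
false_positives = 17   # T-CPR proposed although no cardiac arrest occurred

delivery_rate = tcpr_proposed / eligible_cases
fp_share = false_positives / (tcpr_proposed + false_positives)
print(f"delivery rate: {delivery_rate:.0%}")    # 69% of eligible cases
print(f"false-positive share: {fp_share:.1%}")  # 7.8% of all T-CPR proposals
```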
Abstract:
Bacteria are generally difficult specimens to prepare for conventional resin section electron microscopy, and mycobacteria, with their thick and complex cell envelope layers, are especially prone to artefacts. Here we made a systematic comparison of different methods for preparing Mycobacterium smegmatis for thin section electron microscopy analysis. These methods were: (1) conventional preparation with fixatives and epoxy resins at ambient temperature; (2) Tokuyasu cryo-sectioning of chemically fixed bacteria; (3) rapid freezing followed by freeze substitution and embedding in epoxy resin at room temperature, or (4) combined with Lowicryl HM20 embedding and ultraviolet (UV) polymerization at low temperature; and (5) CEMOVIS, or cryo electron microscopy of vitreous sections. The best preservation of bacteria was obtained with the cryo electron microscopy of vitreous sections method, as expected, especially with respect to the preservation of the cell envelope and lipid bodies. By comparison with cryo electron microscopy of vitreous sections, both the conventional and Tokuyasu methods produced different, undesirable artefacts. The two freeze-substitution protocols showed variable preservation of the cell envelope but gave acceptable preservation of the cytoplasm and bacterial DNA, though not of lipid bodies. In conclusion, although cryo electron microscopy of vitreous sections must be considered the 'gold standard' among sectioning methods for electron microscopy, because it avoids solvents and stains, optimally prepared freeze substitution also offers some advantages for ultrastructural analysis of bacteria.
Abstract:
The taxonomy of Bambusoideae is in a state of flux, and phylogenetic studies are required to help resolve systematic issues. Over 60 taxa, representing all subtribes of Bambuseae and related non-bambusoid grasses, were sampled. A combined analysis of five plastid DNA regions, the trnL intron, trnL-F intergenic spacer, atpB-rbcL intergenic spacer, rps16 intron, and matK, was used to study the phylogenetic relationships among the bamboos in general and the woody bamboos in particular. Within the BEP clade (Bambusoideae s.s., Ehrhartoideae, Pooideae), Pooideae were resolved as sister to Bambusoideae s.s. Tribe Bambuseae, the woody bamboos, as currently recognized were not monophyletic because Olyreae, the herbaceous bamboos, were sister to tropical Bambuseae. Temperate Bambuseae were sister to the group consisting of tropical Bambuseae and Olyreae. Thus, the temperate Bambuseae would be better treated as their own tribe, Arundinarieae, than as a subgroup of Bambuseae. Within the tropical Bambuseae, neotropical Bambuseae were sister to the palaeotropical and Austral Bambuseae. In addition, Melocanninae were found to be sister to the remaining palaeotropical and Austral Bambuseae. We discuss phylogenetic and morphological patterns of diversification and interpret them in a biogeographic context.
Abstract:
The spatial variability of strongly weathered soils under sugarcane and soybean/wheat rotation was quantitatively assessed on 33 fields in two regions in São Paulo State, Brazil: Araras (15 fields with sugarcane) and Assis (11 fields with sugarcane and seven fields with soybean/wheat rotation). The statistical methods used were nested analysis of variance (for 11 fields), semivariance analysis, and analysis of variance within and between fields. Spatial scales from 50 m to several km were analyzed. Results are discussed with reference to a previously published study carried out in the surroundings of Passo Fundo (RS). Similar variability patterns were found for clay content, organic C content and cation exchange capacity. The fields studied are quite homogeneous with respect to these relatively stable soil characteristics. Spatial variability of other characteristics (resin-extractable P, pH, base and Al saturation, and soil colour) varies with region and/or land use and management. Soil management for sugarcane seems to have induced modifications to greater depths than for soybean/wheat rotation. Surface layers of soils under soybean/wheat present relatively little variation, apparently as a result of very intensive soil management. The major part of within-field variation occurs at short distances (< 50 m) in all study areas. Hence, little extra information would be gained by increasing sampling density from, say, 1/km² to 1/50 m². For many purposes, the soils in the study regions can be mapped with the same observation density, but residual variance will not be the same in all areas. Bulk sampling may help to reveal spatial patterns between 50 and 1000 m.
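The semivariance analysis mentioned above can be sketched as an empirical semivariogram: half the mean squared difference between observation pairs, binned by separation distance. The code below is a minimal illustration on a synthetic 50 m transect; the data are invented and only the method mirrors the abstract.

```python
# Minimal empirical semivariogram, as used in semivariance analysis of
# soil properties. All data below are synthetic, for illustration only.
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs separated by ~h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    upper = np.triu(np.ones_like(d, dtype=bool), 1)  # each pair counted once
    gammas = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol) & upper
        gammas.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)

# synthetic transect: clay-content observations every 50 m along a line
rng = np.random.default_rng(0)
coords = np.column_stack([np.arange(0, 1000, 50), np.zeros(20)])
values = 30 + rng.normal(0, 2, 20)  # clay content (%), synthetic
print(empirical_semivariogram(coords, values, lags=[50, 100, 200], tol=25))
```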
Abstract:
In recent years, several authors have revised the calibrations used to compute physical parameters (T_eff, M_V, log g, [Fe/H]) from intrinsic colours in the uvbyβ photometric system. For reddened stars, these intrinsic colours can be computed through the standard relations among colour indices for each of the regions defined by Strömgren (1966) on the HR diagram. We present a discussion of the coherence of these calibrations for main-sequence stars. Stars from open clusters are used to carry out this analysis. Assuming that individual reddening values and distances should be similar for all the members of a given open cluster, systematic differences among the calibrations used in each of the photometric regions might arise when comparing mean reddening values and distances for the members of each region. To classify the stars into Strömgren's regions, we extended the algorithm presented by Figueras et al. (1991) to a wider range of spectral types and luminosity classes. The observational ZAMS is compared with the theoretical ZAMS from stellar evolutionary models over the relevant range of effective temperatures. The discrepancies are also discussed.
Abstract:
BACKGROUND: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. METHODOLOGY/PRINCIPAL FINDINGS: In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake a meta-analysis due to the differences between studies. CONCLUSIONS: This update does not change the conclusions of the review, in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is presented. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols.
Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
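The odds ratios quoted above (range 2.2 to 4.7) compare the odds of full reporting for statistically significant versus non-significant outcomes, which come from a 2x2 table. The counts below are invented for illustration; only the arithmetic mirrors the review.

```python
# Hypothetical 2x2 table for outcome reporting bias; counts are invented.
#                    fully reported   not fully reported
# significant              80                20
# non-significant          50                50
sig_full, sig_partial = 80, 20
nonsig_full, nonsig_partial = 50, 50

# odds ratio = odds of full reporting (significant) / odds (non-significant)
odds_ratio = (sig_full / sig_partial) / (nonsig_full / nonsig_partial)
print(odds_ratio)  # 4.0, inside the 2.2-4.7 range reported by the studies
```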
Abstract:
Several population pharmacokinetic (PPK) analyses of the anticancer drug imatinib have been performed to investigate different patient populations and covariate effects. The present analysis offers a systematic qualitative and quantitative summary and comparison of those analyses. Its primary objective was to provide useful information for evaluating the expectedness of imatinib plasma concentration measurements in the context of therapeutic drug monitoring. The secondary objective was to review clinically important concentration-effect relationships to help evaluate the potential suitability of plasma concentration values. Nine PPK models describing total imatinib plasma concentration were identified. Parameter estimates were standardized to common covariate values whenever possible. Predicted median exposure (Cmin) was derived by simulations and ranged between models from 555 to 1388 ng/mL (grand median: 870 ng/mL; interquartile "reference" range: 520-1390 ng/mL). Covariates of potential clinical importance (up to a 30% change in pharmacokinetics predicted by at least 1 model) included body weight, albumin, α1 acid glycoprotein, and white blood cell count. Various other covariates were included but were not statistically significant, seemed clinically less important, or were physiologically controversial. Concentration-response relationships had more importance below the average reference range and concentration-toxicity relationships above it. Therapeutic drug monitoring-guided dosage adjustment seems justified for imatinib, but a formal predictive therapeutic range remains difficult to propose in the absence of prospective target concentration intervention trials. To evaluate the expectedness of a drug concentration measurement in practice, this review allows comparison of the measurement either to the average reference range or to a specific range accounting for individual patient characteristics.
For future research, external PPK model validation or meta-model development should be considered.
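To illustrate how a model-predicted steady-state trough (Cmin) like those above can be obtained by simulation, the sketch below superposes repeated doses of a one-compartment first-order-absorption model. Every parameter value here is a hypothetical placeholder, not an estimate from any of the nine published models.

```python
# Sketch: steady-state trough (Cmin) by dose superposition under a
# one-compartment oral model. Parameter values are invented placeholders.
import math

dose_mg, tau_h = 400, 24                 # once-daily dosing interval
F, ka, CL, V = 1.0, 0.61, 14.0, 250.0    # hypothetical PK parameters
ke = CL / V                              # elimination rate constant (1/h)

def conc_single_dose(t):
    """Concentration (mg/L) at time t hours after one oral dose."""
    return (F * dose_mg * ka / (V * (ka - ke))) * (
        math.exp(-ke * t) - math.exp(-ka * t)
    )

# superpose many past doses to approximate steady state, read the trough
n_doses = 30
cmin_mg_L = sum(conc_single_dose(tau_h * i) for i in range(1, n_doses + 1))
print(f"predicted Cmin ~ {cmin_mg_L * 1000:.0f} ng/mL")
```

With these placeholder values the predicted trough lands in the hundreds of ng/mL, i.e. the same order of magnitude as the 555-1388 ng/mL spread quoted above; a real PPK model would add covariates and between-subject variability.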
Abstract:
BACKGROUND: Synthesizing research evidence using systematic and rigorous methods has become a key feature of evidence-based medicine and knowledge translation. Systematic reviews (SRs) may or may not include a meta-analysis depending on the suitability of available data. They are often being criticised as 'secondary research' and denied the status of original research. Scientific journals play an important role in the publication process. How they appraise a given type of research influences the status of that research in the scientific community. We investigated the attitudes of editors of core clinical journals towards SRs and their value for publication. METHODS: We identified the 118 journals labelled as "core clinical journals" by the National Library of Medicine, USA in April 2009. The journals' editors were surveyed by email in 2009 and asked whether they considered SRs as original research projects; whether they published SRs; and for which section of the journal they would consider a SR manuscript. RESULTS: The editors of 65 journals (55%) responded. Most respondents considered SRs to be original research (71%) and almost all journals (93%) published SRs. Several editors regarded the use of Cochrane methodology or a meta-analysis as quality criteria; for some respondents these criteria were premises for the consideration of SRs as original research. Journals placed SRs in various sections such as "Review" or "Feature article". Characterization of non-responding journals showed that about two thirds do publish systematic reviews. DISCUSSION: Currently, the editors of most core clinical journals consider SRs original research. Our findings are limited by a non-responder rate of 45%. Individual comments suggest that this is a grey area and attitudes differ widely. A debate about the definition of 'original research' in the context of SRs is warranted.
Abstract:
Objective: Sentinel lymph node biopsy (SLNB) is a validated staging technique for breast carcinoma. Some women may undergo a second SLNB due to breast cancer recurrence or a second neoplasia (breast or other). Because of the modified anatomy, it has been claimed that previous axillary surgery represents a contra-indication to SLNB. Our objective was to analyse the literature to assess whether a second SLNB is to be recommended or not. Methods: For the present study, we performed a review of all data published during the last 10 years on patients with previous axillary surgery and a second SLNB. Results: Our analysis shows that a second SLNB is feasible in 70% of cases. The rate of extra-axillary SNs (31%) was higher after axillary lymph node dissection (ALND) (60% - 84%) than after SLNB alone (14% - 65%). Follow-up and complementary ALND following negative and positive second SLNB show that it is a reliable procedure. Conclusion: The review of the literature confirms that SLNB is feasible after previous axillary dissection. The triple technique for SN mapping is the best examination to highlight modified lymphatic anatomy and shows definitively where SLNB must be performed. Surgery may be more demanding, as patients more frequently have only extra-axillary SNs, such as internal mammary nodes. ALND can be avoided when a second SLNB harvests negative SNs. These conclusions should however be taken with caution because of the heterogeneity of publications regarding SLNB and surgical technique.
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In the last decades, computer programs have been developed to assist clinicians in this assignment. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs allow users to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top 2 software tools emerging from this benchmark are MwPharm and TCIWorks. Other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools would possibly not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurement. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. Moreover, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. Other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools would possibly not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been put into them in the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage and automated report generation.
Abstract:
OBJECTIVE: Little is known regarding health-related quality of life and its relation with physical activity level in the general population. Our primary objective was to systematically review data examining this relationship. METHODS: We systematically searched MEDLINE, EMBASE, CINAHL, and PsycINFO for health-related quality of life and physical activity related keywords in titles, abstracts, or indexing fields. RESULTS: From 1426 retrieved references, 55 citations were judged to require further evaluation. Fourteen studies were retained for data extraction and analysis; seven were cross-sectional studies, two were cohort studies, four were randomized controlled trials, and one used a combined cross-sectional and longitudinal design. Thirteen different methods of physical activity assessment were used. Most health-related quality of life instruments related to the Medical Outcome Study SF-36 questionnaire. Cross-sectional studies showed a consistently positive association between self-reported physical activity and health-related quality of life. The largest cross-sectional study reported an adjusted odds ratio of "having 14 or more unhealthy days" during the previous month to be 0.40 (95% Confidence Interval 0.36-0.45) for those meeting recommended levels of physical activity compared to inactive subjects. Cohort studies and randomized controlled trials tended to show a positive effect of physical activity on health-related quality of life but, similar to the cross-sectional studies, had methodological limitations. CONCLUSION: Cross-sectional data showed a consistently positive association between physical activity level and health-related quality of life. Limited evidence from randomized controlled trials and cohort studies precludes a definitive statement about the nature of this association.
Abstract:
Purpose: The aim of this study was to evaluate the clinical fracture rate of crowns fabricated with the pressable, leucite-reinforced ceramic IPS Empress, and to relate the results to the type of tooth restored. Materials and Methods: The database SCOPUS was searched for clinical studies involving full-coverage crowns made of IPS Empress. To assess the fracture rate of the crowns in relation to the type of restored tooth and study, Poisson regression analysis was used. Results: Seven clinical studies were identified involving 1,487 adhesively luted crowns (mean observation time: 4.5 +/- 1.7 years) and 81 crowns cemented with zinc-phosphate cement (mean observation time: 1.6 +/- 0.8 years). Fifty-seven of the adhesively luted crowns fractured (3.8%). The majority of fractures (62%) occurred between the third and sixth year after placement. There was no significant influence of test center on fracture rate, but the restored tooth type played a significant role. The hazard rate (per year) for crowns was estimated to be 5 in every 1,000 crowns for incisors, 7 in every 1,000 crowns for premolars, 12 in every 1,000 crowns for canines, and 16 in every 1,000 crowns for molars. One molar crown in the zinc-phosphate group fractured after 1.2 years. Conclusion: Adhesively luted IPS Empress crowns showed a low fracture rate for incisors and premolars and a somewhat higher rate for molars and canines. The sample size of the conventionally luted crowns was too small and the observation period too short to draw meaningful conclusions. Int J Prosthodont 2010;23:129-133.
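Under the Poisson model used in the abstract, a constant per-year hazard h converts to a cumulative fracture probability via P(t) = 1 - exp(-h * t). The sketch below applies this to the quoted hazard rates over the study's 4.5-year mean observation time; the conversion is standard, the horizon choice is ours.

```python
# Converting the quoted per-year hazard rates into cumulative fracture
# probabilities with the exponential model P(t) = 1 - exp(-h * t).
import math

hazard_per_year = {"incisor": 0.005, "premolar": 0.007,
                   "canine": 0.012, "molar": 0.016}
t_years = 4.5  # mean observation time of the adhesively luted crowns

cumulative = {tooth: 1 - math.exp(-h * t_years)
              for tooth, h in hazard_per_year.items()}
for tooth, p in cumulative.items():
    print(f"{tooth}: {p:.1%} fracture probability over {t_years} years")
```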
Abstract:
Accurate prediction of transcription factor binding sites is needed to unravel the function and regulation of genes discovered in genome sequencing projects. To evaluate current computer prediction tools, we have begun a systematic study of the sequence-specific DNA-binding of a transcription factor belonging to the CTF/NFI family. Using a systematic collection of rationally designed oligonucleotides combined with an in vitro DNA binding assay, we found that the sequence specificity of this protein cannot be represented by a simple consensus sequence or weight matrix. In particular, CTF/NFI uses a flexible DNA binding mode that allows for variations of the binding site length. From the experimental data, we derived a novel prediction method using a generalised profile as a binding site predictor. Experimental evaluation of the generalised profile indicated that it accurately predicts the binding affinity of the transcription factor to natural or synthetic DNA sequences. Furthermore, the in vitro measured binding affinities of a subset of oligonucleotides were found to correlate with their transcriptional activities in transfected cells. The combined computational-experimental approach exemplified in this work thus resulted in an accurate prediction method for CTF/NFI binding sites potentially functioning as regulatory regions in vivo.
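To make concrete why a weight matrix falls short here: a position weight matrix (PWM) scores every window of one fixed length as a sum of per-base, per-position scores, so a factor with a variable-length binding mode, as reported for CTF/NFI, cannot be captured by a single such matrix. A minimal scorer with an invented 4-bp matrix:

```python
# Minimal position-weight-matrix (PWM) scorer. The matrix is invented for
# illustration; it is NOT a CTF/NFI model. Note the rigid motif length:
# every candidate window must be exactly motif_len bases.
pwm = {  # log-odds-style scores per position of a hypothetical 4-bp motif
    "A": [1.2, -0.5, -1.0, 0.8],
    "C": [-0.7, 1.0, -0.3, -1.2],
    "G": [-0.9, -0.8, 1.4, -0.4],
    "T": [0.1, -0.2, -1.1, 0.9],
}
motif_len = 4

def best_window_score(seq):
    """Best PWM score over all fixed-length windows of seq."""
    return max(
        sum(pwm[base][i] for i, base in enumerate(seq[j:j + motif_len]))
        for j in range(len(seq) - motif_len + 1)
    )

print(best_window_score("TTACGTAA"))  # best window is ACGT, score 4.5
```

A generalised profile, by contrast, allows position-specific insertion and deletion penalties, which is what lets it accommodate the variable site lengths described above.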
Abstract:
Background and Purpose-The safety and efficacy of thrombolysis in cervical artery dissection (CAD) are controversial. The aim of this meta-analysis was to pool all individual patient data and provide a valid estimate of safety and outcome of thrombolysis in CAD. Methods-We performed a systematic literature search on intravenous and intra-arterial thrombolysis in CAD. We calculated the rates of pooled symptomatic intracranial hemorrhage and mortality and indirectly compared them with matched controls from the Safe Implementation of Thrombolysis in Stroke-International Stroke Thrombolysis Register. We applied multivariate regression models to identify predictors of excellent (modified Rankin Scale=0 to 1) and favorable (modified Rankin Scale=0 to 2) outcome. Results-We obtained individual patient data of 180 patients from 14 retrospective series and 22 case reports. Patients were predominantly female (68%), with a mean +/- SD age of 46 +/- 11 years. Most patients presented with severe stroke (median National Institutes of Health Stroke Scale score=16). Treatment was intravenous thrombolysis in 67% and intra-arterial thrombolysis in 33%. Median follow-up was 3 months. The pooled symptomatic intracranial hemorrhage rate was 3.1% (95% CI, 1.3 to 7.2). Overall mortality was 8.1% (95% CI, 4.9 to 13.2), and 41.0% (95% CI, 31.4 to 51.4) had an excellent outcome. Stroke severity was a strong predictor of outcome. Overlapping confidence intervals of end points indicated no relevant differences with matched controls from the Safe Implementation of Thrombolysis in Stroke-International Stroke Thrombolysis Register. Conclusions-Safety and outcome of thrombolysis in patients with CAD-related stroke appear similar to those for stroke from all causes. Based on our findings, thrombolysis should not be withheld in patients with CAD. (Stroke. 2011;42:2515-2520.)
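Confidence intervals for pooled proportions such as the hemorrhage rate above are commonly computed with the Wilson score method, which stays well-behaved for small event counts. The sketch below uses an illustrative reconstruction (6 events among 180 patients); the paper's exact event count and pooling method may differ.

```python
# Wilson score 95% CI for a binomial proportion; the event count below
# (6 of 180) is an illustrative reconstruction, not taken from the paper.
import math

def wilson_ci(x, n, z=1.96):
    """Return (lower, upper) Wilson score interval for x events in n."""
    p = x / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom, (center + margin) / denom

lo, hi = wilson_ci(6, 180)
print(f"{6/180:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

With these assumed counts the interval comes out close to, though not identical with, the 1.3%-7.2% reported above, which is consistent with a small-event-count pooled estimate.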