893 results for likelihood to publication
Abstract:
Documenting Neotropical amphibian diversity has become a major challenge in the face of global climate change and the pace of environmental alteration. Recent molecular phylogenetic studies have revealed not only that the actual number of species in South American tropical forests is largely underestimated, but also that many lineages are millions of years old. The genera Phyzelaphryne (1 sp.) and Adelophryne (6 spp.), which compose the subfamily Phyzelaphryninae, include poorly documented, secretive, and minute frogs with an unusual distribution pattern that encompasses the biotic disjunction between Amazonia and the Atlantic forest. We generated >5.8 kb of sequence data from six markers for all seven nominal species of the subfamily, as well as for newly discovered populations, in order to (1) test the monophyly of Phyzelaphryninae, Adelophryne and Phyzelaphryne, (2) estimate species diversity within the subfamily, and (3) investigate their historical biogeography and diversification. Phylogenetic reconstruction confirmed the monophyly of each group and revealed deep subdivisions within Adelophryne and Phyzelaphryne, with three major clades in Adelophryne located in northern Amazonia, the northern Atlantic forest and the southern Atlantic forest. Our results suggest that the actual number of species in Phyzelaphryninae is at least twice the currently recognized species diversity, with almost every geographically isolated population representing an anciently divergent candidate species. These results highlight the challenges for conservation, especially in the northern Atlantic forest, which is still being degraded at a fast pace. Molecular dating revealed that Phyzelaphryninae originated in Amazonia and dispersed to the Atlantic forest during the early Miocene. The two Atlantic forest clades of Adelophryne started to diversify at least some 7 Ma ago, while the northern Amazonian Adelophryne diversified much earlier, at least some 13 Ma ago.
This striking biogeographic pattern coincides with major events that shaped the South American continent as we know it today. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
This paper considers likelihood-based inference for the family of power distributions. Widely applicable results are presented which can be used to conduct inference for all three parameters of the general location-scale extension of the family. More specific results are given for the special case of the power normal model. The analysis of a large data set, formed from density measurements for a certain type of pollen, illustrates the application of the family and the results for likelihood-based inference. Throughout, comparisons are made with analogous results for the direct parametrisation of the skew-normal distribution.
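A likelihood-based fit of the location-scale power normal model, with CDF Phi((x - xi)/sigma)^alpha, can be sketched numerically. This is a generic illustration on simulated data with invented parameter values, not the paper's analysis of the pollen data set:

```python
import numpy as np
from scipy import optimize, stats

# Sketch: maximum likelihood for the location-scale power normal model with
# cdf Phi((x - xi)/sigma)^alpha. Data are simulated; true values are invented.
rng = np.random.default_rng(3)
xi, sigma, alpha, n = 10.0, 2.0, 3.0, 5000

# Inverse-cdf sampling: X = xi + sigma * Phi^{-1}(U^{1/alpha}).
u = rng.random(n)
x = xi + sigma * stats.norm.ppf(u ** (1.0 / alpha))

def negloglik(params):
    xi_, log_sigma, log_alpha = params        # log-parametrised for positivity
    s, a = np.exp(log_sigma), np.exp(log_alpha)
    z = (x - xi_) / s
    # log f = log a - log s + log phi(z) + (a - 1) log Phi(z)
    return -np.sum(np.log(a) - np.log(s) + stats.norm.logpdf(z)
                   + (a - 1.0) * stats.norm.logcdf(z))

res = optimize.minimize(negloglik, x0=[np.mean(x), 0.0, 0.0],
                        method="Nelder-Mead")
xi_hat, sigma_hat, alpha_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print(xi_hat, sigma_hat, alpha_hat)  # ML estimates of xi, sigma, alpha
```

The power normal likelihood surface can be quite flat in (xi, sigma, alpha), which is part of what makes inference for this family delicate; the estimates should therefore be read together with likelihood-based interval summaries rather than as point values alone.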
Abstract:
OBJECTIVE: To estimate the pretest probability of Cushing's syndrome (CS) diagnosis by a Bayesian approach using intuitive clinical judgment. MATERIALS AND METHODS: Physicians were asked, at seven endocrinology meetings, to answer three questions: "Based on your personal expertise, after obtaining clinical history and physical examination, without using laboratory tests, what is your probability of diagnosing Cushing's syndrome?"; "For how long have you been practicing endocrinology?"; and "Where do you work?". A Bayesian beta regression was fitted using the WinBUGS software. RESULTS: We obtained 294 questionnaires. The mean pretest probability of CS diagnosis was 51.6% (95% CI: 48.7-54.3). The probability was directly related to experience in endocrinology, but not to place of work. CONCLUSION: The pretest probability of CS diagnosis was estimated using a Bayesian methodology. Although pretest likelihood can be context-dependent, experience based on years of practice may help the practitioner to diagnose CS. Arq Bras Endocrinol Metab. 2012;56(9):633-7
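The beta regression underlying this kind of analysis models each stated probability y_i in (0, 1) as Beta-distributed with mean mu_i linked to covariates through a logit link. A minimal maximum-likelihood sketch follows; the paper fitted a Bayesian version in WinBUGS, and the covariate, coefficients, and data below are invented:

```python
import numpy as np
from scipy import optimize, stats

# Sketch: beta regression by maximum likelihood. y_i ~ Beta(mu_i * phi,
# (1 - mu_i) * phi) with logit(mu_i) = b0 + b1 * years_of_practice.
# All data are simulated; the true values below are invented.
rng = np.random.default_rng(4)
n = 2000
years = rng.uniform(0, 30, size=n)            # invented covariate
b0, b1, phi = -0.5, 0.04, 20.0                # invented true values
mu = 1.0 / (1.0 + np.exp(-(b0 + b1 * years)))
y = rng.beta(mu * phi, (1.0 - mu) * phi)

def negloglik(params):
    b0_, b1_, log_phi = params                # log(phi) keeps phi positive
    phi_ = np.exp(log_phi)
    m = 1.0 / (1.0 + np.exp(-(b0_ + b1_ * years)))
    return -np.sum(stats.beta.logpdf(y, m * phi_, (1.0 - m) * phi_))

res = optimize.minimize(negloglik, x0=[0.0, 0.0, np.log(10.0)],
                        method="Nelder-Mead")
b0_hat, b1_hat, phi_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(b0_hat, b1_hat, phi_hat)  # close to -0.5, 0.04, 20 for large n
```

The positive fitted slope b1 mirrors the paper's finding that stated pretest probability rises with years of practice; the precision parameter phi plays the role of the residual dispersion.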
Abstract:
Background: We aimed to establish values and parameters, using multislice reconstruction in axial computerized tomography (CT), to quantify the erosion of the glenoid cavity in cases of shoulder instability. Methods: We studied two groups using CT: Group I comprised normal subjects and Group II comprised patients with shoulder instability. We measured the vertical segment, the superior horizontal, medial and inferior segments, and calculated the ratio of the horizontal superior and inferior segments of the glenoid cavity in both normal subjects and those with shoulder instability. These variables were also recorded during arthroscopy for cases with shoulder instability. Results: In normal subjects the mean values were 40.87 mm, 17.86 mm, 26.50 mm, 22.86 mm and 0.79 for the vertical segment, the superior horizontal, medial and inferior segments, and the ratio between the horizontal superior and inferior segments of the glenoid cavity, respectively. For subjects with unstable shoulders the mean values were 37.33 mm, 20.83 mm, 23.07 mm and 0.91, respectively. Arthroscopic measurements yielded an inferior segment value of 24.48 mm with a loss of 2.39 mm (17.57%). The ratio between the superior and inferior segments of the glenoid cavity was 0.79. This value can be used as a normative value for evaluating the degree of erosion of the anterior border of the glenoid cavity. However, values found using CT should not be compared directly with values found during arthroscopy. Conclusions: Computerized tomographic measurements of the glenoid cavity yielded reliable values consistent with those in the literature.
Abstract:
An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a given region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are affected by positioning errors, and consecutive measurements are often far apart (~2 km), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a purpose-built optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
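The routing step between consecutive GPS fixes can be illustrated with a minimal A* search on a toy road graph. Node names, coordinates, and edge lengths below are invented for illustration; the thesis's actual cost model and network are not reproduced here:

```python
import heapq
import math

# Toy road graph: node -> list of (neighbour, edge length in km).
# Coordinates (x, y) in km feed the straight-line A* heuristic, which never
# overestimates the true road distance and is therefore admissible.
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
graph = {
    "A": [("B", 1.0), ("C", 1.6)],
    "B": [("A", 1.0), ("C", 1.1), ("D", 1.5)],
    "C": [("A", 1.6), ("B", 1.1), ("D", 1.0)],
    "D": [("B", 1.5), ("C", 1.0)],
}

def heuristic(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x2 - x1, y2 - y1)

def astar(start, goal):
    """Return (cost, path) of the shortest route from start to goal."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {}                       # cheapest cost found so far per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue
        best[node] = g
        for nbr, w in graph[node]:
            heapq.heappush(frontier,
                           (g + w + heuristic(nbr, goal), g + w, nbr, path + [nbr]))
    return float("inf"), []

cost, path = astar("A", "D")
print(cost, path)  # -> 2.5 ['A', 'B', 'D']
```

In a map-matching pipeline, `start` and `goal` would be the road segments that the Bayesian matching step assigned to two consecutive GPS fixes, and the recovered `path` fills in the unobserved portion of the trajectory.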
Abstract:
In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, which is ideal for estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large-scale structure (LSS) maps, as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all previous harmonic analyses in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo study. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. By contrast, we neglect the noise in temperature, since WMAP is already cosmic variance dominated on large scales. Because of a discrepancy between the estimated galaxy auto-spectrum and the theoretical model, we use two different galaxy distributions: the first with a constant bias $b$ and the second with a redshift-dependent bias $b(z)$. Finally, we use the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model by different likelihood prescriptions. Using just the cross-correlation between WMAP7 and NVSS maps at 1.8° resolution, we show that $\Omega_\Lambda$ accounts for about 70% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ confidence level.
Abstract:
This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X exceeds a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe the diffusion process X between spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined to fully specify the model. One way to address this problem is to treat x_0 and S as parameters of a statistical model and to estimate them. This thesis discusses four different cases, in which the membrane potential X between spikes is assumed to be a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we interpret as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly. Moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normal.
In the final chapter we examine the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
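For the Brownian-motion-with-drift case, the first passage time from x_0 to the threshold S > x_0 follows an inverse Gaussian (Wald) law, and the maximum likelihood estimator of the threshold distance a = S - x_0 from iid inter-spike intervals has a closed form. A minimal sketch, assuming the drift mu and diffusion sigma are known and with all numerical values invented:

```python
import numpy as np

# Sketch: ML estimation of the threshold distance a = S - x_0 for a Brownian
# motion with drift mu and diffusion coefficient sigma, from iid first passage
# times. T ~ inverse Gaussian with mean a/mu and shape a^2/sigma^2.
rng = np.random.default_rng(1)
mu, sigma, a_true, n = 0.8, 0.5, 2.0, 20000   # invented values

# numpy's Wald sampler is parametrised by (mean, scale=shape).
t = rng.wald(a_true / mu, a_true**2 / sigma**2, size=n)

# Setting d/da of the log-likelihood to zero gives a quadratic in a:
#   a^2 * sum(1/t_i) - a * n * mu - n * sigma^2 = 0,
# whose positive root is the MLE.
s1 = np.sum(1.0 / t)
a_hat = (n * mu + np.sqrt((n * mu) ** 2 + 4 * n * sigma**2 * s1)) / (2 * s1)
print(a_hat)  # close to a_true for large n
```

Only the distance a = S - x_0 enters the hitting-time law in this case, so x_0 and S are not separately identifiable from the inter-spike intervals alone; identifying both requires the additional between-spike observations of X described above.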
Abstract:
Since the publication of the book of Russell and Burch in 1959, scientific research has never stopped improving itself with regard to the important issue of animal experimentation. The European Directive 2010/63/EU "On the protection of animals used for scientific purposes" focuses mainly on animal welfare, fixing Russell and Burch's 3Rs principles as the foundations of the document. In particular, the legislator clearly states the responsibility of the scientific community to increase the number of alternative methods to animal experimentation. The swine is considered a species of particular interest for translational research and medicine owing to its biological similarities with humans. The surgical community has, in fact, recognized the swine as an excellent model of the human cardiovascular system. Several wild-type and transgenic porcine models have been produced for biomedicine and translational research; among these, cardiovascular models are the most represented. The continuous involvement of the porcine animal model in biomedical research, as well as the continuous advances achieved using swine in translational medicine, support the need for alternative methods to animal experimentation involving pigs. The main purpose of the present work was to develop and characterize novel porcine alternative methods for cardiovascular translational biology/medicine. The work was based on two models: the first consisted of an ex vivo culture of porcine aortic cylinders, and the second of an in vitro culture of porcine aortic-derived progenitor cells. Both models were characterized, and the results indicated that they could be useful for the study of vascular biology. Both models aim to reduce the use of experimental animals and to refine animal-based trials.
In conclusion, the present research aims to be a small, but significant, contribution to the important and necessary field of study of alternative methods to animal experimentation.
Abstract:
Chemotherapy-induced neutropenia is a major risk factor for infection-related morbidity and mortality and also a significant dose-limiting toxicity in cancer treatment. Patients developing severe (grade 3/4) or febrile neutropenia (FN) during chemotherapy frequently receive dose reductions and/or delays to their chemotherapy. This may impact the success of treatment, particularly when treatment intent is either curative or to prolong survival. In Europe, prophylactic treatment with granulocyte-colony stimulating factors (G-CSFs), such as filgrastim (including approved biosimilars), lenograstim or pegfilgrastim is available to reduce the risk of chemotherapy-induced neutropenia. However, the use of G-CSF prophylactic treatment varies widely in clinical practice, both in the timing of therapy and in the patients to whom it is offered. The need for generally applicable, European-focused guidelines led to the formation of a European Guidelines Working Party by the European Organisation for Research and Treatment of Cancer (EORTC) and the publication in 2006 of guidelines for the use of G-CSF in adult cancer patients at risk of chemotherapy-induced FN. A new systematic literature review has been undertaken to ensure that recommendations are current and provide guidance on clinical practice in Europe. We recommend that patient-related adverse risk factors, such as elderly age (≥65 years) and neutrophil count be evaluated in the overall assessment of FN risk before administering each cycle of chemotherapy. It is important that after a previous episode of FN, patients receive prophylactic administration of G-CSF in subsequent cycles. We provide an expanded list of common chemotherapy regimens considered to have a high (≥20%) or intermediate (10-20%) risk of FN. Prophylactic G-CSF continues to be recommended in patients receiving a chemotherapy regimen with high risk of FN. 
When using a chemotherapy regimen associated with FN in 10-20% of patients, particular attention should be given to patient-related risk factors that may increase the overall risk of FN. In situations where dose-dense or dose-intense chemotherapy strategies have survival benefits, prophylactic G-CSF support is recommended. Similarly, if reductions in chemotherapy dose intensity or density are known to be associated with a poor prognosis, primary G-CSF prophylaxis may be used to maintain chemotherapy. Clinical evidence shows that filgrastim, lenograstim and pegfilgrastim have clinical efficacy and we recommend the use of any of these agents to prevent FN and FN-related complications where indicated. Filgrastim biosimilars are also approved for use in Europe. While other forms of G-CSF, including biosimilars, are administered by a course of daily injections, pegfilgrastim allows once-per-cycle administration. Choice of formulation remains a matter for individual clinical judgement. Evidence from multiple low level studies derived from audit data and clinical practice suggests that some patients receive suboptimal daily G-CSFs; the use of pegfilgrastim may avoid this problem.
Abstract:
Increasing awareness of the importance of cardiovascular prevention is not yet matched by the resources and actions within health care systems. The recent publication of the European Commission's European Heart Health Charter in 2008 prompts a review of the contribution of cardiac rehabilitation (CR) to cardiovascular health outcomes. Secondary prevention through exercise-based CR is the intervention with the best scientific evidence of contributing to decreased morbidity and mortality in coronary artery disease, in particular after myocardial infarction, but also following cardiac interventions and in chronic stable heart failure. The present position paper aims to provide practical recommendations on the core components and goals of CR intervention in different cardiovascular conditions, to assist in the design and development of programmes, and to support healthcare providers, insurers, policy makers and consumers in recognizing the comprehensive nature of CR. Those charged with responsibility for secondary prevention of cardiovascular disease, whether at European, national or individual centre level, need to consider where and how structured CR programmes can be delivered to all eligible patients. A novel, disease-oriented document has thus been generated, in which all components of CR for cardiovascular conditions are reviewed, presenting both well-established and controversial aspects. A general table applicable to all cardiovascular conditions and specific tables for each clinical condition have been created and commented on.
Abstract:
When estimating the effect of treatment on HIV using data from observational studies, standard methods may produce biased estimates due to the presence of time-dependent confounders. Such confounding can be present when a covariate, affected by past exposure, is a predictor of both future exposure and the outcome. One example is the CD4 cell count, which is a marker of disease progression for HIV patients, but also a marker for treatment initiation, and is itself influenced by treatment. Fitting a marginal structural model (MSM) using inverse probability weights is one way to adjust appropriately for this type of confounding. In this paper we study a simple and intuitive approach to estimating similar treatment effects, using observational data to mimic several randomized controlled trials. Each 'trial' is constructed from individuals starting treatment in a certain time interval. An overall effect estimate across all such trials is found using composite likelihood inference. The method offers an alternative to inverse probability of treatment weighting, which is unstable in certain situations. The estimated parameter is not identical to that of an MSM; it is conditional on covariate values at the start of each mimicked trial. This allows the study of questions that are not as easily addressed by fitting an MSM. The analysis can be performed as a stratified weighted Cox analysis on the joint data set of all the constructed trials, where each trial is one stratum. The model is applied to data from the Swiss HIV Cohort Study.
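The trial-construction step can be sketched as follows. This is a minimal illustration with invented field names and data; the paper's actual eligibility criteria and covariate set are richer:

```python
# Sketch of building the stacked "mimicked trials" data set: for each time
# interval k, individuals still untreated at the start of k form trial k,
# with baseline covariates taken at k. Field names and data are invented.

subjects = {
    1: {"start_treatment": 2, "cd4": [350, 300, 280, 260]},
    2: {"start_treatment": None, "cd4": [500, 480, 470, 450]},  # never treated
    3: {"start_treatment": 1, "cd4": [200, 180, 170, 160]},
}

def build_trials(subjects, n_intervals):
    rows = []
    for k in range(n_intervals):
        for pid, s in subjects.items():
            start = s["start_treatment"]
            if start is not None and start < k:
                continue  # already on treatment before interval k: not eligible
            rows.append({
                "trial": k,               # stratum for the stratified Cox fit
                "id": pid,
                "treated": start == k,    # "arm" of the mimicked trial
                "baseline_cd4": s["cd4"][k],
            })
    return rows

stacked = build_trials(subjects, 4)
for row in stacked:
    print(row)
```

A stratified weighted Cox model can then be fitted on `stacked` with `trial` as the stratum variable; because the same person contributes to several trials, the composite likelihood interpretation (rather than an ordinary likelihood) is what justifies the joint fit.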
Abstract:
Bridging the gap between research and policy is of growing importance in international development. The National Centre of Competence in Research (NCCR) North-South has rich experience in collaborating beyond academic boundaries to make their research relevant to various societal actors. This publication is the first to provide an overview of the effectiveness of NCCR North-South researchers’ efforts to interact with policy, practice, and local communities with a view to effecting a change in practices. A systematic assessment of researchers’ interactions with non-academic partners is presented, based on principles of monitoring and evaluation. On this basis, tools for collective learning and widespread adaptation are proposed. The report shows with what types of societal actors NCCR North-South researchers collaborate and analyses examples of how researchers conduct dialogue beyond academic boundaries, leading to specific outcomes. It also explains the frame conditions considered decisive for successful and sustainable policy dialogue and concludes with recommendations about how the NCCR North-South can increase the effectiveness of its research for development. The publication is a valuable source of inspiration for those interested in better understanding how to generate the multiple benefits of making science relevant to society.
Abstract:
We compared revision and mortality rates of 4668 patients undergoing primary total hip and knee replacement between 1989 and 2007 at a University Hospital in New Zealand. The mean age at the time of surgery was 69 years (16 to 100). A total of 1175 patients (25%) had died at follow-up at a mean of ten years post-operatively. The mean age at the time of surgery of those who died within ten years of surgery was 74.4 years (29 to 97). No change in comorbidity score or age of the patients receiving joint replacement was noted during the study period. No association between revision or death and higher comorbidity score, grade of surgeon, or patient gender could be shown. We found that patients younger than 50 years at the time of surgery have a greater chance of requiring a revision than of dying, those around 58 years of age have a 50:50 chance of needing a revision, and in those older than 62 years the prosthesis will normally outlast the patient. Patients over 77 years old are more than 90% likely to die before requiring a revision, whereas those around 47 years are on average twice as likely to require a revision as to die. This information can be used to rationalise the need for long-term surveillance and can be of use during the informed consent process.
Abstract:
Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase the risk of severe occupant injury. Now over 25 years old, those data are no longer representative of currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether current full-scale barrier crash testing criteria indicate secondary collision risk in real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts, selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS), were analyzed. Scene diagrams and available scene photographs were used to determine roadside- and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two-thirds of crashes in which a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared with cases with no second event. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes.
Conclusions: Consistent with previous research, collisions following a barrier impact are not an infrequent event and substantially increase driver injury risk. The results suggest that using exit-angle based crash test criteria alone to assess secondary collision risk is not sufficient to predict second collision occurrence for real-world barrier crashes.
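The kind of binary logistic model described in the Methods can be sketched on synthetic data. Everything below is invented for illustration: the odds ratio is set near the study's factor-of-7 figure only to show how such a number is estimated, not to reproduce the NASS/CDS analysis:

```python
import numpy as np

# Sketch: binary logistic regression fitted by Newton-Raphson, of the kind
# used to relate second-event occurrence to serious driver injury.
# All data are synthetic; rates and coefficients are invented.
rng = np.random.default_rng(2)
n = 20000
second_event = rng.random(n) < 0.65           # ~two-thirds of crashes
b0, b1 = -3.0, np.log(7.0)                    # intercept, log odds ratio
p = 1.0 / (1.0 + np.exp(-(b0 + b1 * second_event)))
injury = rng.random(n) < p

X = np.column_stack([np.ones(n), second_event.astype(float)])
beta = np.zeros(2)
for _ in range(25):                           # Newton-Raphson iterations
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))           # fitted injury probabilities
    W = mu * (1.0 - mu)                       # IRLS weights
    grad = X.T @ (injury - mu)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])
print(odds_ratio)  # close to the simulated value of 7
```

The exponentiated coefficient on the second-event indicator is the odds ratio for serious injury, which is how a "factor of 7" statement is read off a fitted logistic model.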
Abstract:
With the publication of the quality guideline ICH Q9 "Quality Risk Management" by the International Conference on Harmonization, risk management has already become a standard requirement during the life cycle of a pharmaceutical product. Failure mode and effect analysis (FMEA) is a powerful risk analysis tool that has been used for decades in the mechanical and electrical industries. However, the adaptation of the FMEA methodology to biopharmaceutical processes brings about some difficulties. The proposal presented here is intended to serve as a brief but nevertheless comprehensive and detailed guideline on how to conduct a biopharmaceutical process FMEA. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. Such a biopharmaceutical process FMEA has widespread applications. It can be useful whenever a biopharmaceutical manufacturing process is developed or scaled up, or when it is transferred to a different manufacturing site. It may also be conducted during substantial optimization of an existing process or the development of a second-generation process. According to their resulting risk ratings, process parameters can be ranked for importance, and important variables for process development, characterization, or validation can be identified. LAY ABSTRACT: Health authorities around the world ask pharmaceutical companies to manage risk during the development and manufacturing of pharmaceuticals. The so-called failure mode and effect analysis (FMEA) is an established risk analysis tool that has been used for decades in the mechanical and electrical industries. However, the adaptation of the FMEA methodology to pharmaceutical processes that use modern biotechnology (biopharmaceutical processes) brings about some difficulties, because those biopharmaceutical processes differ from processes in the mechanical and electrical industries.
The proposal presented here explains how a biopharmaceutical process FMEA can be conducted. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. With the help of this guideline, different details of the manufacturing process can be ranked according to their potential risks, and this can help pharmaceutical companies to identify aspects with high potential risks and to react accordingly to improve the safety of medicines.
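The ranking logic behind an FMEA can be sketched in a few lines: each failure mode receives 1-to-10 scores for occurrence (O), severity (S) and detectability (D), and the risk priority number RPN = O × S × D orders the process parameters. The failure modes and scores below are invented examples, not entries from the proposed rating table:

```python
# Sketch of FMEA risk ranking: RPN = occurrence x severity x detectability,
# each scored on a 1-to-10 scale. Failure modes and scores are invented.
failure_modes = [
    {"parameter": "bioreactor pH drift",       "O": 4, "S": 8,  "D": 3},
    {"parameter": "column load overcapacity",  "O": 2, "S": 6,  "D": 5},
    {"parameter": "filter integrity breach",   "O": 1, "S": 10, "D": 2},
    {"parameter": "buffer conductivity error", "O": 5, "S": 4,  "D": 6},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]

# Rank parameters by risk; the top entries are the candidates for process
# characterization or validation effort.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in ranked:
    print(fm["parameter"], fm["RPN"])
```

Note that a frequent, hard-to-detect failure can outrank a rarer but more severe one (here the conductivity error outranks the filter breach), which is why the guideline's careful calibration of all three scales matters.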