29 results for design-based inference
at Université de Lausanne, Switzerland
Abstract:
OBJECTIVE: Accuracy studies of Patient Safety Indicators (PSIs) are critical but limited by the large samples required owing to the low occurrence of most events. We tested a sampling design based on test results (verification-biased sampling [VBS]) that minimizes the number of subjects to be verified. METHODS: We considered 3 real PSIs, whose rates were calculated using 3 years of discharge data from a university hospital, and a hypothetical screen for very rare events. Sample size estimates, based on the expected sensitivity and precision, were compared across 4 study designs: random and VBS, with and without constraints on the size of the population to be screened. RESULTS: Over sensitivities ranging from 0.3 to 0.7 and PSI prevalence levels ranging from 0.02 to 0.2, the optimal VBS strategy makes it possible to reduce sample size by up to 60% in comparison with simple random sampling. For PSI prevalence levels below 1%, the minimal sample size required was still over 5000. CONCLUSIONS: Verification-biased sampling permits substantial savings in the required sample size for PSI validation studies. However, sample sizes still need to be very large for many of the rarer PSIs.
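A rough sense of the sample sizes involved can be obtained from the standard normal-approximation formula for estimating a sensitivity with a given confidence-interval half-width under simple random sampling. The sketch below (Python, standard library only) uses that approximation with illustrative values from the ranges quoted above; it is not the authors' verification-biased optimisation.

```python
from math import ceil
from statistics import NormalDist

def n_random_sampling(sensitivity, prevalence, half_width, alpha=0.05):
    """Subjects to screen so the sensitivity estimate reaches the requested
    confidence-interval half-width under simple random sampling.
    Normal approximation: cases needed = z^2 * Se(1-Se) / d^2, scaled by 1/prevalence."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    cases_needed = z**2 * sensitivity * (1 - sensitivity) / half_width**2
    return ceil(cases_needed / prevalence)

# Illustrative values within the ranges quoted in the abstract
for se in (0.3, 0.5, 0.7):
    for p in (0.02, 0.1, 0.2):
        print(f"Se={se}, prevalence={p}: n ~ {n_random_sampling(se, p, 0.10)}")
```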
Abstract:
Context: Ovarian tumor (OT) typing is a competency expected of pathologists, with significant clinical implications. OT, however, come in numerous types, some rather rare, leaving few opportunities for practice in some departments. Aim: Our aim was to design a tool for pathologists to train in typing less common OT. Method and Results: Representative slides of 20 less common OT were scanned (NanoZoomer Digital, Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), to include: recognition of morphological pattern(s); shortlisting of differential diagnoses; proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on evaluation results; diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as being of utmost importance for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach, alternative to geostatistics, for modelling the spatial distribution of petrophysical properties in complex reservoirs. The approach is based on semi-supervised learning, which handles both "labelled" observed data and "unlabelled" data that have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies, such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and learn dependencies from them. The semi-supervised SVR model is able to balance signal and noise levels and to control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
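The history-matching step described above can be illustrated with a minimal sketch: a random-walk Metropolis sampler evaluating the posterior of a single geological parameter against a synthetic production history. The forward model, noise level and prior bounds below are invented placeholders standing in for the semi-supervised SVR geomodel and the reservoir simulator.

```python
import numpy as np

# Toy forward model standing in for geomodel + simulator: production rate
# declining with time, controlled by one illustrative parameter theta.
def forward(theta, t):
    return 100.0 * np.exp(-t / theta)

rng = np.random.default_rng(0)
t_obs = np.linspace(1, 10, 10)
history = forward(6.0, t_obs) + rng.normal(0, 2.0, t_obs.size)  # synthetic "observed" history
sigma = 2.0                                                      # assumed measurement noise

def log_posterior(theta):
    if not 1.0 < theta < 20.0:            # uniform prior on the geological parameter
        return -np.inf
    resid = history - forward(theta, t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis: multiple history-matched models drawn from the posterior
theta, samples = 5.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.5)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[1000:])
print(f"posterior mean {post.mean():.2f}, 90% interval "
      f"[{np.percentile(post, 5):.2f}, {np.percentile(post, 95):.2f}]")
```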
Abstract:
The dentate gyrus is one of only two regions of the mammalian brain where substantial neurogenesis occurs postnatally. However, detailed quantitative information about the postnatal structural maturation of the primate dentate gyrus is meager. We performed design-based, stereological studies of neuron number and size, and volume of the dentate gyrus layers in rhesus macaque monkeys (Macaca mulatta) of different postnatal ages. We found that about 40% of the total number of granule cells observed in mature 5-10-year-old macaque monkeys are added to the granule cell layer postnatally; 25% of these neurons are added within the first three postnatal months. Accordingly, cell proliferation and neurogenesis within the dentate gyrus peak within the first 3 months after birth and remain at an intermediate level between 3 months and at least 1 year of age. Although granule cell bodies undergo their largest increase in size during the first year of life, cell size and the volume of the three layers of the dentate gyrus (i.e. the molecular, granule cell and polymorphic layers) continue to increase beyond 1 year of age. Moreover, the different layers of the dentate gyrus exhibit distinct volumetric changes during postnatal development. Finally, we observe significant levels of cell proliferation, neurogenesis and cell death in the context of an overall stable number of granule cells in mature 5-10-year-old monkeys. These data identify an extended developmental period during which neurogenesis might be modulated to significantly impact the structure and function of the dentate gyrus in adulthood.
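The abstract does not give the stereological probe parameters, but a common design-based estimator of total neuron number is the optical fractionator, in which the raw count is scaled by the inverses of the section, area and thickness sampling fractions. A minimal sketch with hypothetical values:

```python
def optical_fractionator(counted_cells, ssf, asf, tsf):
    """Design-based estimate of total neuron number from counts in a systematic
    random sample of sections: N = sum(Q-) * (1/ssf) * (1/asf) * (1/tsf),
    where ssf, asf and tsf are the section, area and thickness sampling fractions."""
    return counted_cells / (ssf * asf * tsf)

# Hypothetical sampling fractions, for illustration only
print(optical_fractionator(counted_cells=450, ssf=1/10, asf=1/50, tsf=1/4))  # 900,000 granule cells
```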
Abstract:
BACKGROUND: This integrative review of the literature describes the evolution in knowledge and the paradigm shift required to move from advance directives to advance care planning. AIMS AND OBJECTIVES: It presents an analysis of concepts, trends, models and experiments that enables identification of the best treatment strategies, particularly for older people living in nursing homes. DESIGN: Based on 23 articles published between 1999 and 2012, this review distinguishes theoretical from empirical research and presents a classification of studies based on their methodological robustness (descriptive, qualitative, associative or experimental). RESULTS: It thus provides nursing professionals with evidence-based information in the form of a synthetic vision and conceptual framework to support the development of innovative care practices in the end-of-life context. While theoretical work places particular emphasis on the impact of changes in practice on the quality of care received by residents, empirical research highlights the importance of communication between the different persons involved about care preferences at the end of life and the need for agreement between them. CONCLUSIONS: The concept of quality of life, and the dimensions and factors that compose it, form the basis of advance care planning (ACP) and enable identification of the similarities and differences between the various actors. They inform professionals of the need to move beyond a purely biomedical approach and to consider the attributes prioritised by those concerned, whether patients or families, so as to improve the quality of care at the end of life. IMPLICATIONS FOR PRACTICE: It is particularly recommended that all professionals involved take into account key stakeholders' expectations concerning what is essential at the end of life, to enable enhanced communication and decision-making when faced with this difficult subject.
Abstract:
The drug discovery process has recently been profoundly changed by the adoption of computational methods that help design new drug candidates more rapidly and at lower cost. In silico drug design consists of a collection of tools that help make rational decisions at the different steps of the drug discovery process, such as the identification of a biomolecular target of therapeutic interest, the selection or design of new lead compounds, and their modification to obtain better affinities as well as pharmacokinetic and pharmacodynamic properties. Among the different tools available, particular emphasis is placed in this review on molecular docking, virtual high-throughput screening and fragment-based ligand design.
Abstract:
We have previously shown that a 28-amino acid peptide derived from the BRC4 motif of the BRCA2 tumor suppressor selectively inhibits human RAD51 recombinase (HsRad51). With the aim of designing better inhibitors for cancer treatment, we combined an in silico docking approach with in vitro biochemical testing to construct a highly efficient chimera peptide from the eight existing human BRC motifs. We built a molecular model of all BRC motifs complexed with HsRad51, based on the crystal structure of the BRC4 motif-HsRad51 complex, computed the interaction energy of each residue in each BRC motif, and selected the best amino acid residue at each binding position. This analysis enabled us to propose four amino acid substitutions in the BRC4 motif. Three of these increased the inhibitory effect in vitro, and this effect was found to be additive. We thus obtained a peptide that is about 10 times more efficient at inhibiting HsRad51-ssDNA complex formation than the original peptide.
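The per-position selection step can be sketched as choosing, at each binding position, the BRC motif whose residue gives the lowest computed interaction energy. The energies below are invented placeholders, not values from the study:

```python
# Hypothetical per-residue interaction energies (kcal/mol) at a few binding
# positions; lower is more favourable. Placeholder data for illustration only.
energies = {
    "pos1": {"BRC1": -1.2, "BRC4": -2.5, "BRC7": -0.8},
    "pos2": {"BRC2": -3.1, "BRC4": -1.9, "BRC8": -2.7},
    "pos3": {"BRC3": -0.5, "BRC4": -2.2, "BRC6": -2.4},
}

# Build the chimera by picking, at every position, the motif whose residue gives
# the most favourable (lowest) interaction energy with HsRad51.
chimera = {pos: min(per_motif, key=per_motif.get) for pos, per_motif in energies.items()}
print(chimera)  # e.g. {'pos1': 'BRC4', 'pos2': 'BRC2', 'pos3': 'BRC6'}
```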
Abstract:
Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) in the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that are outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
Abstract:
The HbpR protein is the sigma54-dependent transcription activator for 2-hydroxybiphenyl degradation in Pseudomonas azelaica. The ability of HbpR and XylR, which share 35% amino acid sequence identity, to cross-activate the PhbpC and Pu promoters was investigated by determining HbpR- or XylR-mediated luciferase expression and by DNA-binding assays. XylR measurably activated the PhbpC promoter in the presence of the effector m-xylene, both in Escherichia coli and in Pseudomonas putida. HbpR weakly stimulated the Pu promoter in E. coli but not in P. azelaica. Poor HbpR-dependent activation from Pu was caused by weak binding to the operator region. To create promoters efficiently activated by both regulators, the HbpR binding sites on PhbpC were gradually changed into the XylR binding sites of Pu by site-directed mutagenesis. Inducible luciferase expression from the mutated promoters was tested in E. coli on a two-plasmid system, and from single-copy gene fusions in P. azelaica and P. putida. Some mutants were efficiently activated by both HbpR and XylR, showing that promoters can be created that are permissive for both regulators. Others achieved higher XylR-dependent transcription than Pu itself. Mutants were also obtained that displayed a tenfold lower uninduced expression level with HbpR than the wild-type PhbpC, while keeping the same maximal induction level. On the basis of these results, a dual-responsive bioreporter strain of P. azelaica was created, containing both XylR and HbpR and activating luciferase expression from the same single promoter independently with m-xylene and 2-hydroxybiphenyl.
Abstract:
Animal dispersal in a fragmented landscape depends on the complex interaction between landscape structure and animal behavior. To better understand how individuals disperse, it is important to explicitly represent the properties of organisms and of the landscape in which they move. A common approach to modelling dispersal represents the landscape as a grid of equal-sized cells and then simulates individual movement as a correlated random walk. This approach imposes an a priori scale of resolution, which limits the representation of landscape features and of how different dispersal abilities are modelled. We develop a vector-based landscape model coupled with an object-oriented model for animal dispersal. In this spatially explicit dispersal model, landscape features are defined on the basis of their geographic and thematic properties, and dispersal is modelled through consideration of an organism's behavior, movement rules and searching strategies (such as visual cues). We present the model's underlying concepts and its ability to adequately represent landscape features, and provide simulations of dispersal according to different dispersal abilities. We demonstrate the potential of the model by simulating two virtual species in a real Swiss landscape. This illustrates the model's ability to simulate complex dispersal processes and provides information about dispersal, such as colonization probability and the spatial distribution of the organism's path.
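For comparison, the grid-based baseline mentioned above, in which movement is simulated as a correlated random walk, can be sketched in a few lines; the step length and turning-angle spread used here are hypothetical parameters:

```python
import numpy as np

def correlated_random_walk(n_steps, step_length=1.0, turning_sd=0.4, seed=0):
    """Baseline dispersal model described in the abstract: at each step the
    heading changes by a random turning angle centred on the previous heading."""
    rng = np.random.default_rng(seed)
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.normal(0, turning_sd)  # correlation: new heading stays near the old one
        pos[i + 1] = pos[i] + step_length * np.array([np.cos(heading), np.sin(heading)])
    return pos

path = correlated_random_walk(200)
print("net displacement:", np.linalg.norm(path[-1] - path[0]))
```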
Abstract:
Context: To date, the testosterone/epitestosterone (T/E) ratio has been the main marker for the detection of testosterone (T) misuse in athletes. As this marker can be influenced by a number of confounding factors, additional steroid profile parameters indicating T misuse can provide substantiating evidence of doping with endogenous steroids. The evaluation of a steroid profile is currently based upon population statistics. Since large inter-individual variations exist, a paradigm shift towards subject-based references is ongoing in doping analysis. Objective: To propose new biomarkers for the detection of testosterone misuse in sports using extensive steroid profiling and an adaptive model based upon Bayesian inference. Subjects: Six healthy male volunteers were administered testosterone undecanoate. Population statistics were performed on steroid profiles from 2014 male Caucasian athletes participating in official sport competition. Design: An extended search for new biomarkers in a comprehensive steroid profile, combined with the Bayesian inference techniques used in the Athlete Biological Passport, resulted in a selection of additional biomarkers that may improve the detection of testosterone misuse in sports. Results: Apart from T/E, 4 other steroid ratios (6α-OH-androstenedione/16α-OH-dehydroepiandrostenedione, 4-OH-androstenedione/16α-OH-androstenedione, 7α-OH-testosterone/7β-OH-dehydroepiandrostenedione and dihydrotestosterone/5β-androstane-3α,17β-diol) were identified as sensitive urinary biomarkers for T misuse. These new biomarkers were rated according to relative response, parameter stability, detection time and discriminative power. Conclusion: The newly selected biomarkers were found suitable for individual referencing within the concept of the Athlete Biological Passport. The parameters showed improved detection time and discriminative power compared with the T/E ratio. Such biomarkers can support the evidence of doping with small oral doses of testosterone.
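The subject-based referencing idea can be illustrated with a much-simplified normal-normal Bayesian update, in which an athlete's individual reference limits for a (log-transformed) steroid ratio tighten as test results accumulate. This is a sketch of the adaptive principle, not the actual Athlete Biological Passport model; all parameter values are hypothetical:

```python
import numpy as np

def update_individual_limits(values, prior_mean, prior_sd, within_sd, z=2.58):
    """Simplified normal-normal Bayesian update of an individual's expected
    (log-transformed) steroid ratio as new test results arrive; returns the
    individual reference interval for the next test after each result."""
    mean, var = prior_mean, prior_sd**2
    limits = []
    for v in values:
        post_var = 1.0 / (1.0 / var + 1.0 / within_sd**2)
        mean = post_var * (mean / var + v / within_sd**2)
        var = post_var
        pred_sd = np.sqrt(var + within_sd**2)  # predictive sd for the next test
        limits.append((mean - z * pred_sd, mean + z * pred_sd))
    return limits

# Hypothetical log-ratio values for one athlete; limits narrow as tests accumulate
print(update_individual_limits([0.10, 0.20, 0.15, 0.18],
                               prior_mean=0.0, prior_sd=0.5, within_sd=0.2))
```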
Abstract:
In recent years, Business Model Canvas design has evolved from a paper-based activity to one that involves dedicated computer-aided business model design tools. We propose a set of guidelines to help design more coherent business models. When combined with the functionalities offered by CAD tools, these guidelines show great potential to improve business model design as an ongoing activity. However, before building more complex solutions, it is necessary to compare how basic business model design tasks are performed with a CAD system and with its paper-based counterpart. To this end, we carried out an experiment to measure user perceptions of both solutions. Performance was evaluated by applying our guidelines to both solutions and then comparing the resulting business model designs. Although CAD did not outperform paper-based design, the results are very encouraging for the future of computer-aided business model design.