871 results for Rejection-sampling Algorithm
Abstract:
The atomic force microscope is not only a very convenient tool for studying the topography of different samples; it can also be used to measure specific binding forces between molecules. For this purpose, one type of molecule is attached to the tip and the other to the substrate. Bringing the tip close to the substrate allows the molecules to bind together; retracting the tip breaks the newly formed bond. The rupture of a specific bond appears in the force-distance curve as a spike from which the binding force can be deduced. In this article we present an algorithm to automatically process force-distance curves in order to obtain bond strength histograms. The algorithm is based on a fuzzy logic approach that assigns a "quality" score to every event and makes the detection procedure much faster than manual selection. The software has been applied to measure the binding strength between tubulin and microtubule-associated proteins.
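As a rough illustration of the kind of processing described, the sketch below flags sudden force jumps in a synthetic retraction curve and grades each candidate rupture with a fuzzy "quality" score. This is a hypothetical toy (the jump threshold and ramp-shaped membership function are assumptions), not the authors' implementation.

```python
def detect_events(force, min_jump=0.5):
    """Flag indices where the force jumps sharply back toward baseline
    (a candidate bond rupture) and score each with a fuzzy 'quality'."""
    events = []
    for i in range(1, len(force)):
        jump = force[i] - force[i - 1]  # a rupture is an abrupt rise toward zero
        if jump > min_jump:
            # Fuzzy membership: quality ramps from 0 at min_jump to 1 at 2*min_jump
            quality = min(1.0, (jump - min_jump) / min_jump)
            events.append((i, jump, quality))
    return events

# Synthetic retraction curve: linear loading ending in one sudden rupture.
curve = [-0.01 * i for i in range(100)] + [0.0] * 20
events = detect_events(curve)
print(events)  # a single event at the rupture index
```

In practice the quality score would let low-confidence spikes be filtered out or weighted down in the bond-strength histogram rather than accepted or rejected outright.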
Abstract:
This paper presents a framework in which samples of bowing gesture parameters are retrieved and concatenated from a database of violin performances according to an annotated input score. The resulting bowing parameter signals are then used to synthesize sound by means of both a digital waveguide violin physical model and a spectral-domain additive synthesizer.
Abstract:
In the early 1900s, the wolf (Canis lupus) was extirpated from France and Switzerland. There is growing evidence that the species is presently recolonizing these countries in the western Alps. By sequencing the mitochondrial DNA (mtDNA) control region of various samples mainly collected in the field (scats, hairs, regurgitates, blood or tissue; n = 292), we could (1) develop a non-invasive method enabling the unambiguous attribution of these samples to wolf, fox (Vulpes vulpes) or dog (Canis familiaris), among others; and (2) demonstrate that Italian, French and Swiss wolves share the same mtDNA haplotype, a haplotype that has never been found in any other wolf population world-wide. Taken together, field and genetic data collected over 10 years corroborate the scenario of a natural expansion of wolves from the Italian source population. Furthermore, such a genetic approach is of conservation significance, since it has important consequences for management decisions. This first long-term report using non-invasive sampling demonstrates that long-distance dispersers are common, supporting the hypothesis that individuals may often attempt to colonize far from their native pack, even in the absence of suitable corridors across habitats characterized by intense human activities.
Abstract:
Background: The first AO comprehensive pediatric long bone fracture classification system was established following a structured path of development and validation with experienced pediatric surgeons. Methods: A follow-up series of agreement studies was applied to specify and evaluate a grading system for displacement of pediatric supracondylar fractures. An iterative process comprising an international group of 5 experienced pediatric surgeons (Phase 1), followed by a pragmatic multicenter agreement study involving 26 raters (Phase 2), was used. The final evaluations were conducted on a consecutive collection of 154 supracondylar fractures documented by standard anteroposterior and lateral radiographs. Results: Fractures were classified according to 1 of 4 grades: I = incomplete fracture with no or minimal displacement; II = incomplete fracture with continuity of the posterior (extension fracture) or anterior cortex (flexion fracture); III = lack of bone continuity (broken cortex), but still some contact between the fracture planes; IV = complete fracture with no bone continuity (broken cortex) and no contact between the fracture planes. A diagnostic algorithm to support the practical application of the grading system in a clinical setting, as well as an aid using a circle placed over the capitellum, was proposed. The overall kappa coefficients were 0.68 and 0.61 in the Phase 1 and Phase 2 studies, respectively. In the Phase 1 study, fracture grades I, II, III, and IV were classified with median accuracies of 91%, 82%, 83%, and 99.5%, respectively. Similar median accuracies of 86% (Grade I), 73% (Grade II), 83% (Grade III), and 92% (Grade IV) were reported for the Phase 2 study. Reliability was high in distinguishing complete, unstable fractures from stable injuries [ie, kappa coefficients of 0.84 (Phase 1) and 0.83 (Phase 2)]; in Phase 2, surgeons' accuracies in classifying complete fractures were all above 85%.
Conclusions: With clear and unambiguous definitions, this new grading system for supracondylar fracture displacement proved sufficiently reliable and accurate when applied by pediatric surgeons in routine clinical practice as well as in research.
Abstract:
Transplant glomerulopathy (TG) has received much attention in recent years as a symptom of chronic humoral rejection; however, many cases lack C4d deposition and/or circulating donor-specific antibodies (DSAs). To determine the contribution of other causes, we studied 209 consecutive renal allograft indication biopsies for chronic allograft dysfunction, of which 25 met the pathological criteria of TG. Three partially overlapping etiologies accounted for 21 (84%) cases: C4d-positive (48%), hepatitis C-positive (36%), and thrombotic microangiopathy (TMA)-positive (32%) TG. The majority of patients with confirmed TMA were also hepatitis C positive, and the majority of hepatitis C-positive patients had TMA. DSAs were significantly associated with C4d-positive but not with hepatitis C-positive TG. The prevalence of hepatitis C was significantly higher in the TG group than in 29 control patients. Within the TG cohort, those who were hepatitis C-positive developed allograft failure significantly earlier than hepatitis C-negative patients. Thus, TG is not a specific diagnosis but a pattern of pathological injury involving three major overlapping pathways. It is important to distinguish these mechanisms, as they may have different prognostic and therapeutic implications.
Abstract:
BACKGROUND: In heart transplantation, antibody-mediated rejection (AMR) is diagnosed and graded on the basis of immunopathologic (C4d-CD68) and histopathologic criteria found on endomyocardial biopsies (EMB). Because some pathologic AMR (pAMR) grades may be associated with clinical AMR, and because humoral responses may be affected by the intensity of immunosuppression during the first posttransplantation year, we investigated the incidence and positive predictive values (PPV) of C4d-CD68 and pAMR grades for clinical AMR as a function of time. METHODS: All 564 EMB from 40 adult heart recipients were graded for pAMR during the first posttransplantation year. Clinical AMR was diagnosed by the simultaneous occurrence of pAMR on EMB, donor-specific antibodies, and allograft dysfunction. RESULTS: One patient demonstrated clinical AMR at postoperative day 7 and one at 6 months (1-year incidence 5%). C4d-CD68 was found on 4.7% of EMB, with a "decrescendo" pattern over time (7% during the first 4 months vs. 1.2% during the last 8 months; P < 0.05). Histopathologic criteria of AMR occurred on 10.3% of EMB with no particular time pattern. Only the infrequent (1.4%) pAMR2 grade (simultaneous histopathologic and immunopathologic markers) was predictive of clinical AMR, particularly after the initial postoperative period (first 4 months and last 8 months PPV = 33%-100%; P < 0.05). CONCLUSION: In the first posttransplantation year, AMR immunopathologic and histopathologic markers were relatively frequent, but only their simultaneous occurrence (pAMR2) was predictive of clinical AMR. Furthermore, posttransplantation time may modulate the occurrence of C4d-CD68 on EMB and thus the incidence of pAMR2 and its relevance to the diagnosis of clinical AMR.
Abstract:
Because data on rare species usually are sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. The newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies require memory proportional to the squared number of SNPs, so a genome-wide epistasis search would require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT that requires an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn’s disease. Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1,000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 2.1 GHz processors. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value below 0.05 on real-life Crohn’s disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rate. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn’s disease, MBMDR-3.0.3 identified epistasis involving regions that are well known in the field and that can be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
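The memory point above can be illustrated with a single-step maxT sketch: if only the per-permutation maximum statistic is kept, memory stays independent of the number of tests. This is a hypothetical toy in Python; the simple mean-difference statistic and the demo data are assumptions, not the MB-MDR statistic or the MBMDR-3.0.3 code.

```python
import random

def maxT_adjusted_pvalues(data, labels, n_perm=999, seed=0):
    """data: list of per-variable value lists; labels: 0/1 trait per individual.
    Statistic (illustrative): absolute mean difference between the two groups."""
    rng = random.Random(seed)

    def stat(values, lab):
        g0 = [v for v, l in zip(values, lab) if l == 0]
        g1 = [v for v, l in zip(values, lab) if l == 1]
        return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

    observed = [stat(v, labels) for v in data]
    exceed = [0] * len(data)
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)                              # permute the trait
        max_stat = max(stat(v, perm) for v in data)    # only the max is stored
        for j, obs in enumerate(observed):
            if max_stat >= obs:
                exceed[j] += 1
    # FWER-adjusted p-values (add-one correction)
    return [(c + 1) / (n_perm + 1) for c in exceed]

# Demo: the first variable separates the two groups, the second does not.
data = [[0, 0, 0, 0, 5, 5, 5, 5],
        [1, 0, 1, 0, 0, 1, 0, 1]]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
adj = maxT_adjusted_pvalues(data, labels)
print(adj)  # small adjusted p-value for the first variable, 1.0 for the second
```

Storing only `max_stat` per permutation is what keeps the footprint flat; the classical step-down maxT instead retains a full permutations-by-tests matrix, which is where the squared-number-of-SNPs memory cost comes from.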
Abstract:
The relationship between source separation and blind deconvolution is well known: if a filtered version of an unknown i.i.d. signal is observed, temporal independence between samples can be used to retrieve the original signal, in the same way that spatial independence is used for source separation. In this paper we propose the use of a Genetic Algorithm (GA) to blindly invert linear channels. The use of a GA is justified when the number of samples is small, where gradient-like methods fail because of poor estimation of the statistics.
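The idea of evolving an inverse filter can be sketched with a toy real-coded GA. Here a constant-modulus cost stands in for an independence-based fitness, and the channel, population size, and genetic operators are all assumptions for illustration, not the paper's actual method.

```python
import random

rng = random.Random(1)

# Unknown channel and a short observation record x = h * s
h = [1.0, 0.5]
s = [rng.choice((-1.0, 1.0)) for _ in range(200)]          # i.i.d. +/-1 source
x = [sum(h[k] * s[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(s))]

def cost(w):
    """Constant-modulus cost of the equalizer output y = w * x."""
    y = [sum(w[k] * x[n - k] for k in range(len(w)) if n - k >= 0)
         for n in range(len(x))]
    return sum((yi * yi - 1.0) ** 2 for yi in y) / len(y)

def evolve(pop_size=30, taps=4, gens=60):
    """Elitist GA: keep the best half, breed children by blend crossover
    plus small Gaussian mutation."""
    pop = [[rng.uniform(-1, 1) for _ in range(taps)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.05)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print(cost(best))
```

Note that the population-based search never forms a gradient of the cost, which is why this style of method is attractive when sample statistics are too noisy for gradient estimates.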
Abstract:
Helping behavior is any intentional behavior that benefits another living being or group (Hogg & Vaughan, 2010). People tend to underestimate the probability that others will comply with their direct requests for help (Flynn & Lake, 2008). This implies that when people need help, they assess the probability of getting it (De Paulo, 1982, cited in Flynn & Lake, 2008), tend to estimate a probability lower than the real chance, and may therefore not even consider it worth asking. Existing explanations attribute this phenomenon to a mistaken cost computation by the help seeker, who emphasizes the instrumental cost of saying "yes" while ignoring that the potential helper also has to weigh the social cost of saying "no". Especially in face-to-face interactions, the discomfort caused by refusing to help can be very high. In short, help seekers tend to fail to realize that refusing a help request may be more costly than accepting it. A similar effect has been observed in estimating the trustworthiness of people: Fetchenhauer and Dunning (2010) showed that people tend to underestimate it as well. This bias is reduced when, instead of asymmetric feedback (received only when deciding to trust the other person), symmetric feedback (always given) is provided. The same account could apply to help seeking, as people receive feedback only when they actually make a request. Fazio, Shook, and Eiser (2004) studied something that could reinforce these outcomes: learning asymmetries. By means of a computer game called BeanFest, they showed that people learn better about negatively valenced objects (beans, in this case) than about positively valenced ones. This learning asymmetry stemmed from "information gain being contingent on approach behavior" (p. 293), which can be identified with what Fetchenhauer and Dunning call 'asymmetric feedback', and hence also with help requests. Fazio et al. also found a generalization asymmetry in favor of negative attitudes over positive ones. They attributed it to a negativity bias that "weights resemblance to a known negative more heavily than resemblance to a positive" (p. 300). Applied to help-seeking scenarios, this means that when facing an unknown situation, people tend to generalize and infer that a negative outcome is more likely than a positive one; together with the mechanisms described above, people will thus be more inclined to expect a "no" when requesting help. Denrell and Le Mens (2011) offer a different perspective on judgment biases in general. They deviate from the classical inappropriate-information-processing account (described, among others, by Fiske & Taylor, 2007, and Tversky & Kahneman, 1974) and explain such biases in terms of 'adaptive sampling'. Adaptive sampling is a sampling mechanism in which the selection of sample items is conditioned by the previously observed values of the variable of interest (Thompson, 2011). Sampling adaptively allows individuals to safeguard themselves against experiences that once yielded negative outcomes. However, it also prevents them from giving those experiences a second chance to produce an updated outcome that might turn out positive, more positive, or simply regress to the mean. As Denrell and Le Mens (2011) explain, this makes sense: if you go to a restaurant and you do not like the food, you do not choose that restaurant again. This is what we think could be happening when asking for help: when we get a "no", we stop asking.
Here we provide a complementary explanation, based on adaptive sampling, for the underestimation of the probability that others comply with our direct help requests. First, we develop and explain a model that represents the theory. We then test it empirically by means of experiments and elaborate on the analysis of the results.
Abstract:
Context: Ovarian tumor (OT) typing is a competency expected from pathologists, with significant clinical implications. OTs, however, come in numerous different types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in typing less common OTs. Method and Results: Representative slides of 20 less common OTs were scanned (Nano Zoomer Digital Hamamatsu®), and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235): recognition of morphological pattern(s); shortlisting of differential diagnoses; proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several postgraduate training centers in Europe and Québec; improvement of its design based on the evaluation results; diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as of utmost importance for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
Abstract:
The Iowa Department of Natural Resources uses benthic macroinvertebrate and fish sampling data to assess stream biological condition and the support status of designated aquatic life uses (Wilton 2004; IDNR 2013). Stream physical habitat data assist with the interpretation of biological sampling results by quantifying important physical characteristics that influence a stream’s ability to support a healthy aquatic community (Heitke et al. 2006; Rowe et al. 2009; Sindt et al. 2012). This document describes aquatic community sampling and physical habitat assessment procedures currently followed in the Iowa stream biological assessment program. Standardized biological sampling and physical habitat assessment procedures were first established following a pilot sampling study in 1994 (IDNR 1994a, 1994b). The procedure documents were last updated in 2001 (IDNR 2001a, 2001b). The biological sampling and physical habitat assessment procedures described below are evaluated on a continual basis. Revision of this working document will occur periodically to reflect additional changes.
Abstract:
In this paper, a hybrid simulation-based algorithm is proposed for the Stochastic Flow Shop Problem. The main idea of the methodology is to transform the stochastic problem into a deterministic one and then apply simulation to the latter. To achieve this, we rely on Monte Carlo simulation and an adapted version of a deterministic heuristic. The approach aims to provide flexibility and simplicity, since it is not constrained by prior assumptions and relies on well-tested heuristics.
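The two-stage idea (a deterministic heuristic on expected processing times, then Monte Carlo evaluation of the resulting sequence) can be sketched as follows. The job data, the ±20% uniform noise model, and the simple total-time ordering rule are illustrative assumptions, not the paper's adapted heuristic.

```python
import random

rng = random.Random(42)

# Expected processing times: times[job][machine]
times = [[4, 3], [2, 5], [6, 2], [3, 4]]

def makespan(seq, t):
    """Completion time of the last job on the last machine
    (permutation flow shop recursion)."""
    m = len(t[0])
    finish = [0.0] * m
    for j in seq:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k > 0 else 0.0)
            finish[k] = start + t[j][k]
    return finish[-1]

# Deterministic stage: a simple rule (sort by total expected time, descending)
# stands in for the adapted heuristic.
seq = sorted(range(len(times)), key=lambda j: -sum(times[j]))

# Stochastic stage: Monte Carlo estimate of the expected makespan, with each
# processing time perturbed uniformly within +/-20% of its mean.
samples = []
for _ in range(1000):
    noisy = [[tjk * rng.uniform(0.8, 1.2) for tjk in row] for row in times]
    samples.append(makespan(seq, noisy))
est = sum(samples) / len(samples)
print(round(est, 2))
```

The simulation stage typically reports an expected makespan slightly above the deterministic value, since delays propagate through the schedule while early finishes are absorbed by waiting.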
Abstract:
Summary of water monitoring conducted by the City of Bondurant and Bondurant-Farrar school students of sites in and around Bondurant.