955 results for Robustness Analysis
Abstract:
Regulatory authorities, the food industry and the consumer demand reliable determination of chemical contaminants present in foods. A relatively new analytical technique that addresses this need is an immunobiosensor based on surface plasmon resonance (SPR) measurements. Although a range of tests have been developed to measure residues in milk, meat, animal bile and honey, a considerable problem has been encountered with both serum and plasma samples. The high degree of non-specific binding of some sample components can lead to loss of assay robustness, increased rates of false positives and general loss of assay sensitivity. In this paper we describe a straightforward precipitation technique to remove interfering substances from serum samples to be analysed for veterinary anthelmintics by SPR. This technique enabled development of an assay to detect a wide range of benzimidazole residues in serum samples by immunobiosensor. The limit of quantification was below 5 ng/ml and coefficients of variation were about 2%.
Abstract:
The high effectiveness and leanness of modern supply chains (SCs) increase their vulnerability, i.e. their susceptibility to disturbances, which is reflected in non-robust SC performance. Both the SC management literature and SC professionals indicate the need for the development of SC vulnerability assessment tools. In this article, a new method for vulnerability assessment, the VULA method, is presented. The VULA method helps to identify how much a company would underperform on a specific Key Performance Indicator in the case of a disturbance, how often this would happen and how long it would last. It ultimately informs the decision about whether process redesign is appropriate and what kind of redesign strategies should be used in order to increase the SC's robustness. The applicability of the VULA method is demonstrated in the context of a meat SC, using discrete-event simulation to conduct the performance analysis.
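As a rough illustration of the three questions the VULA method asks (how much, how often, how long), the sketch below summarises underperformance of a simulated KPI trace against a target; the function name, threshold logic and example figures are assumptions for illustration, not the published VULA procedure.

```python
# Illustrative sketch of VULA-style vulnerability indicators (depth, frequency
# and duration of KPI underperformance); an assumption, not the published method.
import numpy as np

def vulnerability_indicators(kpi_series, target, tolerance=0.0):
    """Summarise underperformance of a simulated KPI trace against a target.

    kpi_series : 1-D array of KPI values from a discrete-event simulation run
    target     : desired KPI level
    tolerance  : allowed shortfall before a period counts as a disturbance
    """
    kpi = np.asarray(kpi_series, dtype=float)
    shortfall = np.clip(target - kpi - tolerance, 0.0, None)
    below = shortfall > 0.0

    # Depth: how much the company underperforms when it underperforms.
    depth = shortfall[below].mean() if below.any() else 0.0

    # Frequency: how often an underperformance episode starts.
    starts = np.flatnonzero(below & ~np.r_[False, below[:-1]])
    frequency = len(starts)

    # Duration: average length (in periods) of an underperformance episode.
    if frequency:
        ends = np.flatnonzero(below & ~np.r_[below[1:], False])
        duration = float(np.mean(ends - starts + 1))
    else:
        duration = 0.0
    return {"depth": depth, "frequency": frequency, "duration": duration}

# Example: a weekly service-level trace from a meat supply chain simulation.
print(vulnerability_indicators([0.97, 0.95, 0.80, 0.78, 0.96, 0.99], target=0.95))
```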
Abstract:
Introduction: Amplicon deep-sequencing using second-generation sequencing technology is an innovative molecular diagnostic technique that enables highly sensitive detection of mutations. As an international consortium, we previously investigated the robustness, precision, and reproducibility of 454 amplicon next-generation sequencing (NGS) across 10 laboratories from 8 countries (Leukemia, 2011;25:1840-8).
Aims: In Phase II of the study, we established distinct working groups for the various hematological malignancies, i.e. acute myeloid leukemia (AML), acute lymphoblastic leukemia (ALL), chronic lymphocytic leukemia (CLL), chronic myelogenous leukemia (CML), myelodysplastic syndromes (MDS), myeloproliferative neoplasms (MPN), and multiple myeloma. Currently, 27 laboratories from 13 countries are part of this research consortium. In total, 74 gene targets were selected by the working groups and amplicons were developed for an NGS deep-sequencing assay (454 Life Sciences, Branford, CT). A data analysis pipeline was developed to standardize mutation interpretation, covering both access to raw data (Roche Amplicon Variant Analyzer, 454 Life Sciences) and variant interpretation (Sequence Pilot, JSI Medical Systems, Kippenheim, Germany).
Results: We will report on the design, standardization, quality control aspects, landscape of mutations, as well as the prognostic and predictive utility of this assay in a cohort of 8,867 cases. Overall, 1,146 primer sequences were designed and tested. In detail, for example in AML, 924 cases were screened for CEBPA mutations. RUNX1 mutations were analyzed in 1,888 cases, applying the deep-sequencing read counts to study the stability of such mutations at relapse and their utility as a biomarker to detect residual disease. Analyses of DNMT3A (n=1,041) focused on landscape investigations and on the prognostic relevance of these mutations. Additionally, this working group is focusing on TET2, ASXL1, and TP53 analyses. A novel prognostic model is being developed allowing stratification of AML into prognostic subgroups based on molecular markers only. In ALL, 1,124 pediatric and adult cases have been screened, including 763 assays for TP53 mutations both at diagnosis and relapse of ALL. Pediatric and adult leukemia expert labs developed additional content to study the mutation incidence of other B and T lineage markers such as IKZF1, JAK2, IL7R, PAX5, EP300, LEF1, CRLF2, PHF6, WT1, JAK1, PTEN, AKT1, NOTCH1, CREBBP, or FBXW7. Further, the molecular landscape of CLL is changing rapidly. As such, a separate working group focused on analyses including NOTCH1, SF3B1, MYD88, XPO1, FBXW7 and BIRC3. Currently, 922 cases have been screened to investigate the range of mutational burden of NOTCH1 mutations and its prognostic relevance. In MDS, RUNX1 mutation analyses were performed in 977 cases. The prognostic relevance of TP53 mutations in MDS was assessed in an additional 327 cases, including cases with isolated deletions of chromosome 5q. Next, content was developed targeting genes of the cellular splicing machinery, e.g. SF3B1, SRSF2, U2AF1, and ZRSR2. In BCR-ABL1-negative MPN, nine genes of interest (JAK2, MPL, TET2, CBL, KRAS, EZH2, IDH1, IDH2, ASXL1) have been analyzed in a cohort of 155 primary myelofibrosis cases, searching for novel somatic mutations and addressing their relevance for disease progression and leukemic transformation. Moreover, an assay was developed and applied to CMML cases allowing the simultaneous analysis of 25 leukemia-associated target genes in a single sequencing run using just 20 ng of starting DNA. Finally, nine laboratories are studying CML, applying ultra-deep sequencing of the BCR-ABL1 tyrosine kinase domain. Analyses were performed on 615 cases investigating the dynamics of expansion of mutated clones under various tyrosine kinase inhibitor therapies.
Conclusion: Molecular characterization of hematological malignancies today requires high diagnostic sensitivity and specificity. As part of the IRON-II study, a network of laboratories analyzed a variety of disease entities applying amplicon-based NGS assays. Importantly, the consortium not only standardized assay design for disease-specific panels, but also achieved consensus on a common data analysis pipeline for mutation interpretation. Distinct working groups have been formed to address specific scientific tasks, and in total 8,867 cases have been analyzed thus far.
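As a side note on the read-count-based analyses mentioned in the Results (e.g. following mutated clones at relapse or as residual disease), a minimal sketch of how deep-sequencing read counts can be turned into a variant allele frequency and a detection call is given below; the background error rate, threshold and binomial test are illustrative assumptions, not the consortium's standardized pipeline.

```python
# Minimal sketch: variant allele frequency (VAF) and a presence/absence call
# from amplicon deep-sequencing read counts. Error rate, threshold and test
# are illustrative assumptions, not the consortium pipeline.
from scipy.stats import binomtest

def call_variant(variant_reads, total_reads, background_error=0.01, alpha=0.01):
    """Return the VAF and whether the variant exceeds the background error."""
    vaf = variant_reads / total_reads
    # One-sided binomial test: are there more variant reads than sequencing
    # error alone would explain at this coverage?
    p_value = binomtest(variant_reads, total_reads,
                        background_error, alternative="greater").pvalue
    return vaf, p_value < alpha

# Example: 38 mutated reads out of 2,000 at a RUNX1 position followed at relapse.
vaf, detected = call_variant(38, 2000)
print(f"VAF = {vaf:.3%}, detected = {detected}")
```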
Abstract:
The study details the development of a fully validated, rapid and portable sensor-based method for the on-site analysis of microcystins in freshwater samples. The process employs a novel method for the mechanical lysis of cyanobacterial cells, using glass beads and a handheld frother, in only 10 min. The assay utilises an innovative planar waveguide device that, via an evanescent wave, excites fluorescent probes for signal amplification in a competitive immunoassay, using an anti-microcystin monoclonal antibody with cross-reactivity against the most common and toxic variants. Validation of the assay showed the limit of detection (LOD) to be 0.78 ng/mL and the CCβ to be 1 ng/mL. Robustness of the assay was demonstrated by intra- and inter-assay testing. Intra-assay analysis had %CVs between 8 and 26% and recoveries between 73 and 101%, with inter-assay analysis demonstrating %CVs between 5 and 14% and recoveries between 78 and 91%. Comparison with LC-MS/MS showed a high correlation (R=0.9954) between the calculated total microcystin concentrations of 5 different Microcystis aeruginosa cultures. Total microcystin content was ascertained by the individual measurement of free and cell-bound microcystins. Free microcystins can be measured down to 1 ng/mL, and with a 10-fold concentration step in the intracellular microcystin protocol (which brings the sample within the range of the calibration curve), intracellular pools may be determined down to 0.1 ng/mL. This allows the determination of microcystins at and below the World Health Organisation (WHO) guideline value of 1 µg/L. This sensor represents a major advancement in portable analysis capabilities and has the potential for numerous other applications.
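A hedged sketch of the calibration step implied by the competitive immunoassay described above is shown below: a four-parameter logistic (4PL) curve is fitted to standards and sample concentrations are back-calculated, including the 10-fold concentration factor used for the intracellular protocol. The standard concentrations, signals and fitted values are invented for illustration.

```python
# Sketch of a 4PL competitive-immunoassay calibration and back-calculation.
# All numbers are invented; this is not the validated assay's calibration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero analyte, d: response at infinite analyte,
    # c: IC50 (ng/mL), b: slope. Signal falls as microcystin rises.
    return d + (a - d) / (1.0 + (x / c) ** b)

standards = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])   # ng/mL
signal    = np.array([980, 900, 760, 420, 260, 110])      # fluorescence, a.u.
params, _ = curve_fit(four_pl, standards, signal, p0=[1000, 1, 2, 50])

def back_calculate(y, a, b, c, d, concentration_factor=1.0):
    """Invert the 4PL curve and correct for any sample pre-concentration."""
    x = c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
    return x / concentration_factor

# Intracellular extract measured after the 10-fold concentration step.
print(back_calculate(500, *params, concentration_factor=10.0), "ng/mL in sample")
```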
Abstract:
In this article the multibody simulation software package MADYMO for analysing and optimizing occupant safety design was used to model crash tests for Normal Containment barriers in accordance with EN 1317. The verification process was carried out by simulating a TB31 and a TB32 crash test performed on vertical portable concrete barriers and by comparing the numerical results to those obtained experimentally. The same modelling approach was applied to both tests to evaluate the predictive capacity of the modelling at two different impact speeds. A sensitivity analysis of the vehicle stiffness was also carried out. The capacity to predict all of the principal EN 1317 criteria was assessed for the first time: the acceleration severity index, the theoretical head impact velocity, the barrier working width and the vehicle exit box. Results showed a maximum error of 6% for the acceleration severity index and of 21% for the theoretical head impact velocity in the numerical simulations compared with the recorded data. The exit box position was predicted with a maximum error of 4°. For the working width, a large percentage difference was observed for test TB31 due to the small absolute value of the barrier deflection, but the results were well within the limit value from the standard for both tests. The sensitivity analysis showed the robustness of the modelling with respect to contact stiffness variations of ±20% and ±40%. This is the first multibody model of portable concrete barriers that can reproduce not only the acceleration severity index but all the test criteria of EN 1317, and it is therefore a valuable tool for new product development and for injury biomechanics research.
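For readers unfamiliar with the EN 1317 criteria discussed above, the sketch below computes the acceleration severity index from vehicle accelerations using the commonly quoted 50 ms moving average and 12 g / 9 g / 10 g limit accelerations; the sampling, filtering and signal details of an actual test evaluation are simplified.

```python
# Sketch of the acceleration severity index (ASI) computation used as an
# EN 1317 criterion; a simplified illustration, not a certified evaluation.
import numpy as np

def asi(ax, ay, az, sample_rate_hz, limits_g=(12.0, 9.0, 10.0)):
    """Return max ASI from vehicle CoG accelerations given in g."""
    window = max(1, int(round(0.05 * sample_rate_hz)))   # 50 ms average
    kernel = np.ones(window) / window
    ax_f, ay_f, az_f = (np.convolve(a, kernel, mode="same") for a in (ax, ay, az))
    asi_t = np.sqrt((ax_f / limits_g[0]) ** 2 +
                    (ay_f / limits_g[1]) ** 2 +
                    (az_f / limits_g[2]) ** 2)
    return asi_t.max()

# Example with a synthetic 0.2 s pulse sampled at 1 kHz.
t = np.linspace(0, 0.2, 200)
pulse = 8.0 * np.exp(-((t - 0.1) / 0.03) ** 2)            # lateral pulse in g
print(asi(0.3 * pulse, pulse, 0.1 * pulse, sample_rate_hz=1000))
```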
Abstract:
We study the sensitivity of a MAP configuration of a discrete probabilistic graphical model with respect to perturbations of its parameters. These perturbations are global, in the sense that simultaneous perturbations of all the parameters (or any chosen subset of them) are allowed. Our main contribution is an exact algorithm that can check whether the MAP configuration is robust with respect to given perturbations. Its complexity is essentially the same as that of obtaining the MAP configuration itself, so it can be promptly used with minimal effort. We use our algorithm to identify the largest global perturbation that does not induce a change in the MAP configuration, and we successfully apply this robustness measure in two practical scenarios: the prediction of facial action units with posed images and the classification of multiple real public data sets. A strong correlation between the proposed robustness measure and accuracy is verified in both scenarios.
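The exact algorithm in the paper checks robustness with essentially the same complexity as MAP inference itself; the brute-force sketch below only illustrates, for a toy two-variable model, the underlying question of whether the MAP configuration survives simultaneous perturbations of all parameters by up to a given epsilon.

```python
# Brute-force illustration (not the authors' algorithm) of MAP robustness:
# does the MAP of a tiny binary chain A -> B survive simultaneous parameter
# perturbations of up to +/- epsilon?
import itertools
import numpy as np

def map_config(p_a, p_b_given_a):
    """MAP assignment of (A, B) for a binary chain A -> B."""
    best, best_score = None, -1.0
    for a, b in itertools.product([0, 1], repeat=2):
        pa = p_a if a == 1 else 1.0 - p_a
        pb = p_b_given_a[a] if b == 1 else 1.0 - p_b_given_a[a]
        if pa * pb > best_score:
            best, best_score = (a, b), pa * pb
    return best

def map_is_robust(p_a, p_b_given_a, epsilon, grid=5):
    """Check MAP stability over a grid of simultaneous perturbations."""
    reference = map_config(p_a, p_b_given_a)
    deltas = np.linspace(-epsilon, epsilon, grid)
    for d0, d1, d2 in itertools.product(deltas, repeat=3):
        q_a = np.clip(p_a + d0, 1e-9, 1 - 1e-9)
        q_b = [np.clip(p_b_given_a[0] + d1, 1e-9, 1 - 1e-9),
               np.clip(p_b_given_a[1] + d2, 1e-9, 1 - 1e-9)]
        if map_config(q_a, q_b) != reference:
            return False
    return True

print(map_is_robust(0.7, [0.2, 0.9], epsilon=0.05))   # robust
print(map_is_robust(0.7, [0.2, 0.9], epsilon=0.25))   # MAP flips somewhere
```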
Abstract:
This research offers new insights into the measurement of students' authorial identity and its potential for minimising the incidence of unintentional plagiarism by providing evidence about the psychometric properties of the Student Authorship Questionnaire (SAQ). Exploratory and confirmatory factor analyses (EFA and CFA) are employed to investigate the measurement properties of the scales which comprise the SAQ, using data collected from accounting students. The results provide limited psychometric support for the factorial structure of the SAQ and raise a number of questions regarding the instrument's robustness and generalisability across disciplines. An alternative model derived from the EFA outperforms the SAQ model with regard to its psychometric properties. Explanations for these findings are proffered and avenues for future research suggested.
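To make the EFA step concrete, the sketch below runs an exploratory factor analysis on simulated Likert-type responses; scikit-learn's FactorAnalysis stands in for the software actually used, and the item count, factor count and loading threshold are assumptions.

```python
# Quick sketch of an EFA step on simulated questionnaire responses.
# Item/factor counts and the 0.4 loading threshold are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_students, n_items, n_factors = 300, 18, 3     # e.g. 18 SAQ-style items
latent = rng.standard_normal((n_students, n_factors))
loadings = rng.uniform(0.4, 0.9, (n_factors, n_items))
responses = latent @ loadings + rng.standard_normal((n_students, n_items)) * 0.5

efa = FactorAnalysis(n_components=n_factors, rotation="varimax")
efa.fit(responses)
# Items loading above 0.4 on each rotated factor suggest the scale structure.
for k, row in enumerate(efa.components_):
    print(f"factor {k}: items", np.flatnonzero(np.abs(row) > 0.4))
```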
Abstract:
We present a robust Dirichlet process for estimating survival functions from samples with right-censored data. It adopts a prior near-ignorance approach to avoid almost any assumption about the distribution of the population lifetimes, as well as the need to elicit an infinite-dimensional parameter (in case of lack of prior information), as happens with the usual Dirichlet process prior. We show how such a model can be used to derive robust inferences from right-censored lifetime data. Robustness is due to the identification of the decisions that are prior-dependent, and can be interpreted as an analysis of sensitivity with respect to the hypothetical inclusion of fictitious new samples in the data. In particular, we derive a nonparametric estimator of the survival probability and a hypothesis test about the probability that the lifetime of an individual from one population is shorter than the lifetime of an individual from another. We evaluate these ideas on simulated data and on the Australian AIDS survival dataset. The methods are publicly available through an easy-to-use R package.
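A rough reading of the sensitivity idea described above can be illustrated by bracketing a standard Kaplan-Meier estimate between the results obtained after adding a few adversarial fictitious observations; this is only an illustration of prior near-ignorance, not the authors' Dirichlet-process estimator or their R package.

```python
# Illustration only: bracket S(t) by adding `s` fictitious observations that
# either all fail immediately (pessimistic) or are all still alive (optimistic).
import numpy as np

def kaplan_meier(times, events, t):
    """Standard Kaplan-Meier estimate of S(t); events == 1 means death observed."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    surv = 1.0
    for u in np.unique(times[events == 1]):
        if u > t:
            break
        at_risk = np.sum(times >= u)
        deaths = np.sum((times == u) & (events == 1))
        surv *= 1.0 - deaths / at_risk
    return surv

def survival_bounds(times, events, t, s=2):
    """Lower/upper S(t) after adding s adversarial fictitious samples."""
    lower = kaplan_meier(np.r_[times, np.zeros(s)], np.r_[events, np.ones(s)], t)
    upper = kaplan_meier(np.r_[times, np.full(s, np.inf)], np.r_[events, np.zeros(s)], t)
    return lower, upper

times  = [3, 5, 7, 7, 11, 14, 20]          # months
events = [1, 0, 1, 1, 0, 1, 0]             # 0 = right-censored
print(survival_bounds(times, events, t=10))
```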
Abstract:
As wind power generation undergoes rapid growth, new technical challenges emerge: dynamic stability and power quality. The influence of wind speed disturbances and a pitch control malfunction on the quality of the energy injected into the electric grid is studied for variable-speed wind turbines with different power-electronic converter topologies. Additionally, a new control strategy is proposed for the variable-speed operation of wind turbines with permanent magnet synchronous generators. Disturbance attenuation and system robustness are assessed. Simulation results are presented and conclusions are duly drawn.
Abstract:
The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. The early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset. Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses than for faces, partially cancelling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent from the ICs that constitute the N170 effect, suggesting that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that have spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a constraint that is not always desirable for a topic so strongly coupled to ecological validity. Third, by unmixing the constituent processes of the EEG signals, new analysis strategies become available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: is the face effect a special relationship between low-level and high-level processes along the visual stream?
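A minimal sketch of this analysis style, not the authors' pipeline, is given below: multi-channel EEG epochs are unmixed into independent components and the face versus house averages of one component are compared over the P100/N170 window. The array shapes, random data and component index are placeholders.

```python
# Minimal sketch (not the authors' pipeline): unmix EEG epochs with ICA and
# compare face vs. house averages of one component. Shapes/data are placeholders.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 32, 150      # 150 samples ~ 0-300 ms
eeg = rng.standard_normal((n_epochs, n_channels, n_times))
is_face = np.arange(n_epochs) < 100               # first half = face trials

# Fit ICA on the concatenated epochs (samples x channels).
ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(eeg.transpose(0, 2, 1).reshape(-1, n_channels))
sources = sources.reshape(n_epochs, n_times, 10)

component = 0                                     # e.g. a candidate "icN1a"
face_mean = sources[is_face, :, component].mean(axis=0)
house_mean = sources[~is_face, :, component].mean(axis=0)
difference_wave = face_mean - house_mean          # inspect around 130 ms
print(difference_wave.shape)
```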
Abstract:
This article studies the sensitivity of the estimates of selected explanatory variables of economic growth in cross-sectional regressions across a set of countries. It applies a modified version of Leamer's (1983, 1985) sensitivity analysis. My results confirm the conclusion of Levine and Renelt (1992); however, I show that more variables are robustly correlated with economic growth. For 1990-2010, I find that eight out of twenty-five variables have significant coefficients and are robustly correlated with long-run growth, notably the shares of investment and government spending in GDP, the rule of law, and a dummy variable for sub-Saharan countries. I also find strong empirical evidence for the conditional convergence hypothesis, which is consistent with the neoclassical growth model.
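The sketch below illustrates a Leamer-style extreme-bounds check of the kind applied in the article: the growth regression is re-estimated over subsets of conditioning variables and the focus coefficient is called robust only if its two-standard-error bounds keep one sign across specifications. The data, variable names and bound rule are simplified assumptions, not the modified model used in the paper.

```python
# Hedged sketch of an extreme-bounds robustness check; data are invented.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
data = {
    "investment_share": rng.normal(0.2, 0.05, n),
    "gov_spending": rng.normal(0.15, 0.04, n),
    "rule_of_law": rng.normal(0.0, 1.0, n),
    "initial_gdp": rng.normal(8.0, 1.0, n),
}
growth = 0.5 * data["investment_share"] - 0.02 * data["initial_gdp"] \
         + rng.normal(0, 0.05, n)

focus = "investment_share"
others = [k for k in data if k != focus]
bounds = []
for size in range(len(others) + 1):
    for subset in combinations(others, size):
        X = sm.add_constant(np.column_stack(
            [data[focus]] + [data[v] for v in subset]))
        fit = sm.OLS(growth, X).fit()
        beta, se = fit.params[1], fit.bse[1]       # focus variable is column 1
        bounds.append((beta - 2 * se, beta + 2 * se))

low, high = min(b[0] for b in bounds), max(b[1] for b in bounds)
print("robust" if low > 0 or high < 0 else "fragile", (low, high))
```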
Abstract:
Gait analysis has recently emerged as one of the most important medical fields. Marker-based systems are the most widely favoured methods for human movement assessment and gait analysis; however, these systems require specific equipment and expertise and are cumbersome, costly and difficult to use. Many recent computer-vision approaches have been developed to reduce the cost of motion capture systems while ensuring highly accurate results. In this thesis, we present our new low-cost gait analysis system, composed of two monocular video cameras placed on the left and right sides of a treadmill. A 2D model of each half of the human skeleton is reconstructed from each view based on dynamic colour segmentation, and gait analysis is then performed on these two models. Validation against a state-of-the-art vision-based motion capture system (the Microsoft Kinect) and ground truth (with markers) was carried out to demonstrate the robustness and effectiveness of our system. The mean error of the human skeleton model estimate with respect to the ground truth, for our method versus the Kinect, is very promising: the joint angles of the thighs (6.29° vs. 9.68°), lower legs (7.68° vs. 11.47°) and feet (6.14° vs. 13.63°), as well as the stride length (6.14 cm vs. 13.63 cm), are better and more stable than those of the Kinect, while the system maintains accuracy quite close to the Kinect for the arms (7.29° vs. 6.12°), forearms (8.33° vs. 8.04°) and torso (8.69° vs. 6.47°). Based on the skeleton model obtained by each method, we performed a symmetry study on different joints (elbow, knee and ankle) using each method on three different subjects to determine which method distinguishes the symmetry/asymmetry characteristics of gait more effectively. In our test, our system measured a maximum knee angle of 8.97° and 13.86° for normal and asymmetric walks respectively, whereas the Kinect gave 10.58° and 11.94°. Compared with the ground truth, 7.64° and 14.34°, our system showed greater accuracy and discriminative power between the two cases.
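A small sketch of the joint-angle and left/right symmetry computations discussed above is given below for 2D keypoints from one side view; the keypoint coordinates and the symmetry index are illustrative assumptions rather than the thesis' exact implementation.

```python
# Sketch of joint-angle and gait-symmetry computations from 2-D keypoints.
# Coordinates and the symmetry index are illustrative assumptions.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def symmetry_index(left_angles, right_angles):
    """Mean absolute left/right difference of a joint angle over a gait cycle."""
    return float(np.mean(np.abs(np.asarray(left_angles) - np.asarray(right_angles))))

# Knee angle from hip, knee and ankle pixel coordinates in one frame.
hip, knee, ankle = (310, 220), (320, 330), (300, 440)
print(joint_angle(hip, knee, ankle))
print(symmetry_index([172, 168, 150, 141], [170, 165, 158, 149]))
```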
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions. However, attaining optimum values each time, even for a skilled operator, is difficult. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes.

In this thesis, after conducting an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar turn master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter. The S/N analysis yielded the optimum machining parameters from the experiments.

Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively evaluate new design solutions in the relevant search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated using published results from the literature.

Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420 material. All the above algorithms were tested for efficiency, robustness and accuracy, and it was observed that they often outperform conventional optimization methods applied to difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB. For each evolutionary algorithmic method, optimum cutting conditions are provided to achieve a better surface finish.

The computational results using SA clearly demonstrated that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations. The results show that PSO provides better results and is also more computationally efficient.

Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial population scheme, was developed to provide a faster search mechanism.
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS 420 material, arriving at optimum machining parameters (feed, cutting speed, depth of cut and tool nose radius) with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures nature uses to optimize its own systems.
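To make the S/N analysis mentioned above concrete, the sketch below computes the Taguchi smaller-the-better signal-to-noise ratio for surface roughness replicates at different cutting-speed levels; the roughness values and level layout are invented for illustration.

```python
# Sketch of the Taguchi "smaller-the-better" S/N ratio for surface roughness.
# Replicate values and the level layout are invented for illustration.
import numpy as np

def sn_smaller_the_better(replicates):
    """S/N = -10 log10(mean(y^2)); higher is better for surface roughness."""
    y = np.asarray(replicates, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Surface roughness Ra (um) replicates for three cutting-speed levels.
trials = {
    "speed_low":    [3.1, 3.3, 3.0],
    "speed_medium": [2.4, 2.6, 2.5],
    "speed_high":   [2.9, 3.4, 3.2],
}
sn = {level: sn_smaller_the_better(r) for level, r in trials.items()}
best = max(sn, key=sn.get)
print(sn, "->", best)
```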
Abstract:
The World Bank Report 2012 starts with this statement: "Gender equality matters in itself and it matters for development because, in today's globalized world, countries that use the skills and talents of their women would have an advantage over those which do not use it." Within this frame, which suggests that gender equality matters, this paper describes some policy alternatives oriented to overcoming gender disadvantages in the incorporation of urban middle-class women into the formal labor market in Colombia. On balance, the final recommendation suggests that it is desirable to adopt policy alternatives such as Community Centers, which are programs oriented to a social redistribution of domestic work as a way to encourage women's participation in the formal labor market with the social support of members of their own community.

The problem that social policy needs to address is the segregation of women in the formal labor market in Colombia. Although the evidence shows that women have overcome the educational gap by performing better in education than their male peers, women are still segregated from the labor market. The persistence of high unemployment rates among the female population, the prevalence of the informal labor market as a women's labor market, and the presence of pay differences between men and women with similar professional training are circumstances that sustain the segregation claim. These circumstances are inefficient for society because an economic analysis shows that the cost of maintaining the status quo is externalized onto the social security system, which includes the health, pension and maternity leave regimes. Therefore, the segregation of women involves a market failure.

This paper evaluates five policy alternatives, each directed at a different causal dimension of the problem: (i) quotas in the private market, (ii) flexible working hours, (iii) replacing maternity leave with family leave, (iv) increasing the Community Centers for redistributing care work, and (v) equal payment enforcement. The first alternative seeks to increase women's participation in the formal labor market. The second, third and fourth alternatives constitute a package aimed at redistributing care work by reducing women's responsibility for reproductive work in the household with the help of husbands and the local government. The fifth alternative intervenes to resolve the equal payment problem.

After a four-criteria evaluation measuring effectiveness, robustness and improbability in implementation, efficiency, and political acceptability or social opposition, the strongest alternative is the fostering of Community Centers that promote a redistribution of care work. This policy performs well in the assessment because it combines a gender focus with important indirect effects: child support and human capabilities. The policy also shows a bottom-up implementation process that overcomes the main adoption difficulties of gender-focused programs and is supported by strong evidence of success in the Colombian context; this evidence is produced both by transnational actors such as the World Bank and by local accountability reports from local institutions such as the Colombian Institute of Family Welfare (ICBF).
Abstract:
A reference model of Fallible Endgame Play has been implemented and exercised with the chess engine WILHELM. Past experiments have demonstrated the value of the model and the robustness of decisions based on it: the experimental results agree well with a Markov-model theory. Here, the reference model is exercised on the well-known endgame KBBKN.
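As an illustration of what a fallible endgame player can look like, the sketch below draws moves with a power-law preference for faster mates, controlled by a competence parameter; this choice rule is an assumption in the same spirit as, but not identical to, the reference model exercised with WILHELM.

```python
# Generic fallible endgame chooser: an illustrative assumption, not the
# published reference model. Moves mating faster are preferred; a competence
# parameter controls how often inferior moves slip in.
import numpy as np

def choose_move(depths_to_mate, competence, rng=np.random.default_rng()):
    """Pick a move index; higher competence concentrates mass on the best move."""
    d = np.asarray(depths_to_mate, dtype=float)
    weights = (1.0 / d) ** competence          # shorter mate -> larger weight
    return rng.choice(len(d), p=weights / weights.sum())

# KBBKN-style position with winning moves mating in 12, 14 or 19 plies.
picks = [choose_move([12, 14, 19], competence=5.0) for _ in range(1000)]
print(np.bincount(picks, minlength=3) / 1000)  # the fastest mate dominates
```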