991 results for Error-resilient Applications
Abstract:
Bacteria have long been the targets for genetic manipulation, but more recently they have been synthetically designed to carry out specific tasks. Among the simplest of these tasks is chemical compound and toxicity detection coupled to the production of a quantifiable reporter signal. In this Review, we describe the current design of bacterial bioreporters and their use in a range of assays to measure the presence of harmful chemicals in water, air, soil, food or biological specimens. New trends for integrating synthetic biology and microengineering into the design of bacterial bioreporter platforms are also highlighted.
Abstract:
Raman spectroscopy has become an attractive tool for the analysis of pharmaceutical solid dosage forms. In the present study it is used to confirm the identity of tablets. The two main applications of this method are the release of final products in quality control and the detection of counterfeits. Twenty-five product families of tablets have been included in the spectral library, and a non-linear classification method, Support Vector Machines (SVMs), has been employed. Two calibrations have been developed in cascade: the first identifies the product family, while the second specifies the formulation. A product family comprises different formulations that contain the same active pharmaceutical ingredient (API) but in different amounts. Once the tablets have been classified by the SVM model, API peak detection and correlation are applied to make the identification specific and, in the future, to allow counterfeits to be discriminated from genuine products. This calibration strategy enables the identification of 25 product families without error and in the absence of prior information about the sample. Raman spectroscopy coupled with chemometrics is therefore a fast and accurate tool for the identification of pharmaceutical tablets.
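To make the cascaded classification concrete, here is a minimal sketch in Python of a two-stage SVM identification (family first, then formulation). The synthetic spectra, pipeline settings and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the spectral library: 25 families x 4 formulations,
# 5 replicate "spectra" each; real inputs would be preprocessed Raman spectra.
n_fam, n_form, n_pts = 25, 4, 500
X, fam_y, form_y = [], [], []
for fam in range(n_fam):
    base = rng.normal(size=n_pts)  # family-specific spectral shape
    for form in range(n_form):
        for _ in range(5):
            X.append(base + 0.1 * form + 0.05 * rng.normal(size=n_pts))
            fam_y.append(fam)
            form_y.append(form)
X, fam_y, form_y = np.array(X), np.array(fam_y), np.array(form_y)

# Stage 1: a single SVM assigns a spectrum to one of the 25 product families.
family_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, fam_y)

# Stage 2: one SVM per family separates its formulations
# (same API, different amount).
form_clfs = {
    fam: make_pipeline(StandardScaler(), SVC(kernel="rbf"))
         .fit(X[fam_y == fam], form_y[fam_y == fam])
    for fam in range(n_fam)
}

def identify(spectrum):
    fam = family_clf.predict(spectrum[None, :])[0]
    return fam, form_clfs[fam].predict(spectrum[None, :])[0]

print(identify(X[0]))  # a training spectrum; expected (0, 0)
```

Training a separate formulation model per family keeps each decision boundary simple and mirrors the two-calibration cascade described in the abstract.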
Abstract:
This project exemplifies the point where theory and practice meet. It shows how the socio-educational work carried out at a facility providing residential care to young adults (of legal age) under an open-regime judicial measure can fit perfectly with what resilience theory argues. The "casita" resilience model is used, which draws on five possible areas of intervention. This is a study that sets out the reasons why the Pis de Joves d'Emancipació (Young People's Emancipation Flat) can be defined as a Resilient Institution.
Abstract:
Through this study, we will measure how collective MPI operations behave in virtual and physical clusters, and their impact on application performance. As stated before, we will use the Weather Research and Forecasting (WRF) simulations as a test case.
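As an illustration of the kind of measurement involved, below is a minimal sketch using mpi4py that times one collective (Allreduce) across message sizes. The sizes, repetition count and script name are assumptions, and the study itself uses full WRF runs rather than microbenchmarks:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

for n in (1 << 10, 1 << 16, 1 << 20):  # element counts per rank
    send = np.ones(n, dtype=np.float64)
    recv = np.empty_like(send)
    comm.Barrier()  # align ranks before timing
    t0 = MPI.Wtime()
    for _ in range(100):
        comm.Allreduce(send, recv, op=MPI.SUM)
    elapsed = (MPI.Wtime() - t0) / 100
    worst = comm.reduce(elapsed, op=MPI.MAX, root=0)  # slowest rank dominates
    if rank == 0:
        print(f"Allreduce {n:>8} doubles: {worst * 1e6:.1f} us")
```

Running this with, e.g., `mpirun -np 16 python bench_allreduce.py` on both the virtual and the physical cluster and comparing the reported latencies isolates the collective-communication overhead from the rest of the application.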
Abstract:
The test concept (Weiss and Sampson, 1986 [16]) is presented. Its origins in Freud's works are briefly evoked, and its place within Weiss's theory of pathogenic beliefs is outlined. We also present the remaining elements of Weiss's psychoanalytic theory: therapeutic objectives, obstacles, traumas and insight. Every step of the argument is illustrated with case examples drawn from the literature. A recent development of the test concept is presented and applied to the psychotherapy of personality disorders (Sachse, 2003 [14]). Finally, the authors give brief examples of tests that occurred in their own practice as psychotherapists and discuss the models by comparing them with one another. Conclusions are drawn concerning the usefulness of the test concept for psychotherapy practice and research.
Abstract:
BACKGROUND Missed, delayed or incorrect diagnoses are considered diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. METHODS Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). Assuming an expected diagnostic error rate of 20% and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out two specially designed questionnaires about the diagnostic process followed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the three most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physician's perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of the representativeness, availability, and anchoring-and-adjustment heuristics in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine whether the diagnostic error variables differ according to the heuristics identified. DISCUSSION This work sets out a new approach to studying diagnostic decision-making in PC, taking advantage of new technologies that allow the decision-making process to be recorded immediately.
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately three-fold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of the error adjustment is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
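Below is a minimal sketch of the two-part idea, assuming a single 24-hour recall per person, a logistic model for any consumption and a log-linear model for the consumed amount. The column names, covariates and synthetic data are illustrative, not the EPIC specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def two_part_calibration(df):
    """df columns: r24 (24h-recall intake, many zeros), ffq (error-prone
    questionnaire intake), age, sex. Returns calibrated expected intake."""
    X = sm.add_constant(df[["ffq", "age", "sex"]])

    # Part 1: probability of any consumption on the recall day.
    p_any = sm.Logit((df["r24"] > 0).astype(float), X).fit(disp=0).predict(X)

    # Part 2: expected amount among consumers, modeled on the log scale.
    pos = df["r24"] > 0
    fit = sm.OLS(np.log(df.loc[pos, "r24"]), X[pos]).fit()
    # Back-transform with the usual lognormal mean correction.
    amount = np.exp(fit.predict(X) + fit.scale / 2.0)

    return p_any * amount  # E[intake] = P(consume) * E[amount | consume]

# Toy data just to show the call; real covariate choice needs the care
# the abstract warns about (overfitting, nonlinearity).
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({"ffq": rng.gamma(2.0, 50.0, n),
                   "age": rng.integers(35, 70, n).astype(float),
                   "sex": rng.integers(0, 2, n).astype(float)})
df["r24"] = np.where(rng.random(n) < 0.4, rng.gamma(2.0, 60.0, n), 0.0)
print(two_part_calibration(df).describe())
```

The calibrated values would then replace the raw questionnaire intake in the disease model (here, a hazard model for all-cause mortality).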
Abstract:
The Atomic Shells approximation within the theory of Quantum Molecular Similarity is described. Starting solely from theoretical data, a relationship between molecular structure and biological activity has been found for several sets of molecules. The theoretical aspects of Quantum Molecular Similarity and some application examples are described.
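For general background (this is the standard Carbó similarity index used throughout quantum molecular similarity work, not necessarily the exact Atomic Shells formulation of this paper), similarity between molecules A and B is typically quantified from overlap integrals of their electron densities:

Z_{AB} = \frac{\int \rho_A(\mathbf{r})\,\rho_B(\mathbf{r})\,\mathrm{d}\mathbf{r}}{\left(\int \rho_A^2(\mathbf{r})\,\mathrm{d}\mathbf{r}\right)^{1/2}\left(\int \rho_B^2(\mathbf{r})\,\mathrm{d}\mathbf{r}\right)^{1/2}}

where \rho_A and \rho_B are the molecular electron densities; Z_{AB} lies in (0, 1] and equals 1 for proportional densities, which is what makes such indices usable as purely theoretical structure descriptors in structure-activity studies.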
Abstract:
Natural populations are of finite size and organisms carry multilocus genotypes. There are, nevertheless, few results on multilocus models in which both random genetic drift and natural selection affect the evolutionary dynamics. In this paper we describe a formalism for calculating systematic perturbation expansions of moments of allelic states around neutrality in populations of constant size. This allows us to evaluate multilocus fixation probabilities (long-term limits of the moments) under arbitrary strength of selection and gene action. We show that such fixation probabilities can be expressed in terms of selection coefficients weighted by the mean first-passage times of ancestral gene lineages within a single ancestor. These passage times extend the coalescence times that weight selection coefficients in one-locus perturbation formulas for fixation probabilities. We then apply these results to investigate the Hill-Robertson effect and the coevolution of helping and punishment. Finally, we discuss the limitations and strengths of the perturbation approach. In particular, it provides accurate approximations for fixation probabilities only in weak selection regimes (Ns ≤ 1), but it generally gives good predictions of the direction of selection under frequency-dependent selection.
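As a one-locus reference point for such weak-selection expansions (classical background, not taken from this paper), Kimura's diffusion approximation gives the fixation probability of an allele at initial frequency p with selection coefficient s in a haploid Wright-Fisher population of size N as

u(p) = \frac{1 - e^{-2Nsp}}{1 - e^{-2Ns}} \approx p + Ns\,p(1-p) \quad \text{for } |Ns| \ll 1,

and it is this kind of first-order term in Ns, with selection coefficients weighted by coalescence times (here extended to mean first-passage times of ancestral lineages), that the multilocus formalism generalizes.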
Abstract:
Non-alcoholic fatty liver disease (NAFLD) is an emerging health concern in both the developed and the developing world, encompassing conditions ranging from simple steatosis to non-alcoholic steatohepatitis (NASH), cirrhosis and liver cancer. The incidence and prevalence of this disease are increasing due to socioeconomic transitions and the shift to harmful diets. Currently, the gold standard in NAFLD diagnosis is liver biopsy, despite its complications and its lack of accuracy due to sampling error. Further, the pathogenesis of NAFLD is not fully understood, but it is well known that obesity, diabetes and metabolic derangements play a major role in disease development and progression. Besides, the gut microbiome and the host's genetic and epigenetic background could explain considerable interindividual variability. The recognition that epigenetics, heritable events not caused by changes in the DNA sequence, contributes to the development of disease has been a revolution of the last few years. Recently, evidence has been accumulating that reveals the important role of epigenetics in NAFLD pathogenesis and in the genesis of NASH. Histone modifications, changes in DNA methylation and aberrant profiles of microRNAs could drive the development of NAFLD and its transition into a clinically relevant state. The PNPLA3 GG genotype has been associated with more progressive disease, and epigenetics could modulate this effect. The impact of epigenetics on NAFLD progression deserves further exploration, with applications to therapeutic targets and to future non-invasive methods for the diagnosis and staging of NAFLD.
Abstract:
This book gives a general view of sequence analysis, the statistical study of successions of states or events. It includes innovative contributions on life course studies, transitions into and out of employment, contemporaneous and historical careers, and political trajectories. The approach presented in this book is now central to the life-course perspective and the study of social processes more generally. This volume promotes a dialogue between approaches to sequence analysis that developed separately, within traditions that contrast across regions and disciplines. It includes the latest developments in sequential concepts, coding, atypical datasets and time patterns, optimal matching and alternative algorithms, survey optimization, and visualization. Field studies include original sequential material related to parenting in 19th-century Belgium, higher education and work in Finland and Italy, family formation before and after German reunification, French Jews persecuted in occupied France, long-term trends in electoral participation, and regime democratization. Overall, the book reassesses the classical uses of sequences and promotes new ways of collecting, formatting, representing and processing them. The introduction provides basic sequential concepts and tools, as well as a history of the method. Chapters are presented in a way that is both accessible to the beginner and informative to the expert.
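To make "optimal matching" concrete, here is a minimal self-contained Python sketch of the underlying computation: an edit distance between two state sequences with substitution and indel costs. The cost values and toy trajectories are illustrative assumptions, not an example from the book:

```python
def om_distance(a, b, sub_cost=2.0, indel_cost=1.0):
    """Dynamic-programming edit distance between two state sequences."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost  # delete everything from a
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost  # insert everything from b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1] else sub_cost)
            d[i][j] = min(sub,
                          d[i - 1][j] + indel_cost,   # deletion
                          d[i][j - 1] + indel_cost)   # insertion
    return d[m][n]

# Two toy employment trajectories: S = school, E = employed, U = unemployed.
print(om_distance("SSEEEUE", "SSEEEEE"))  # -> 2.0 (one substitution)
```

In applied work, the pairwise distance matrix produced this way is typically fed into clustering to identify typical trajectories; substitution costs are often derived from observed transition rates rather than fixed constants.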
Abstract:
The fundamental objective of this work is to implement a database system that responds to the need of mobile application developers worldwide to unify and improve their users' experience when downloading their applications.