990 results for 1 sigma error
Abstract:
This paper presents a new respiratory impedance estimator that minimizes the error due to breathing. Its practical reliability was evaluated in a simulation using realistic signals. These signals were generated by superposing pressure and flow records obtained in two conditions: 1) when applying forced oscillation to a resistance-inertance-elastance (RIE) mechanical model; 2) when healthy subjects breathed through the unexcited forced oscillation generator. Impedances computed (4-32 Hz) from the simulated signals with the new estimator had a mean value that was scarcely biased by the added breathing (errors less than 1 percent in the mean R, I, and E) and a small variability (coefficients of variation of R, I, and E of 1.3, 3.5, and 9.6 percent, respectively). Our results suggest that the proposed estimator reduces the error in respiratory impedance measurement without appreciable extra computational cost.
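For orientation, a minimal Python sketch of the series RIE impedance Z(f) = R + j(2*pi*f*I - E/(2*pi*f)) over the 4-32 Hz band and of the coefficient-of-variation metric quoted above. The parameter values and the simulated estimates are invented for illustration; this is not the paper's estimator itself.

```python
import numpy as np

# Series RIE model used to generate the test signals:
# Z(f) = R + j*(2*pi*f*I - E/(2*pi*f)). Values are illustrative only.
R, I, E = 2.5, 0.01, 20.0        # resistance, inertance, elastance

f = np.arange(4, 33)             # 4-32 Hz band evaluated in the paper
w = 2 * np.pi * f
Z = R + 1j * (w * I - E / w)
print(Z[0])                      # impedance at 4 Hz

# The variability metric quoted above is the coefficient of variation
# of repeated parameter estimates, e.g. for R (hypothetical estimates):
rng = np.random.default_rng(0)
R_est = R + rng.normal(0.0, 0.03, size=100)
print(100 * R_est.std() / R_est.mean())       # CV in percent
```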
Abstract:
A method is proposed for the estimation of the absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo, and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the surface buried upon complexation, SASbur; the electrostatic interaction energy between the ligand and the protein, Eelec; and the difference of the solvation free energies of the complex and the isolated ligand and protein, deltaGsolv. The method uses the buried surface to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method were developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained, with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structure, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparameterization.
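A minimal sketch of the linear form of the model, assuming invented training data and plain least-squares fitting; the actual parameterization in the paper uses the 16 HIV-1 protease-inhibitor complexes.

```python
import numpy as np

# Hypothetical training data: one row per complex with
# [SAS_bur (A^2), E_elec (kcal/mol), dG_solv (kcal/mol)];
# y holds experimental binding free energies. All values are invented
# purely to illustrate the linear-regression form of the model.
X = np.array([
    [850.0, -45.0, 30.0],
    [910.0, -52.0, 35.0],
    [780.0, -40.0, 27.0],
    [990.0, -60.0, 41.0],
    [870.0, -48.0, 33.0],
    [820.0, -43.0, 29.0],
])
y = np.array([-11.2, -12.5, -10.1, -13.4, -11.8, -10.6])

# dG_bind ~ a*SAS_bur + b*E_elec + c*dG_solv + d
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print("unsigned mean error:", np.mean(np.abs(pred - y)))
```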
Abstract:
Real-time glycemia is a cornerstone of metabolic research, particularly when performing oral glucose tolerance tests (OGTT) or glucose clamps. From 1965 to 2009, the gold-standard device for real-time plasma glucose assessment was the Beckman glucose analyzer 2 (Beckman Instruments, Fullerton, CA), whose technology couples a glucose oxidase enzymatic assay with oxygen sensors. Since its discontinuation in 2009, today's researchers are left with few choices that utilize glucose oxidase technology. The first is the YSI 2300 (Yellow Springs Instruments Corp., Yellow Springs, OH), known to be as accurate as the Beckman(1). The YSI has been used extensively in clinical research studies and is used to validate other glucose monitoring devices(2). The major drawback of the YSI is that it is relatively slow and requires high maintenance. The Analox GM9 (Analox instruments, London), more recent and faster, is increasingly used in clinical research(3) as well as in basic sciences(4) (e.g. 23 papers in Diabetes or 21 in Diabetologia).
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed under the JPEG standard subject to a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts can be generated while avoiding an excessive number of stages and removing small stages that do not contribute a noticeable image quality improvement during decoding.
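A small numerical sketch of the quantizer/PSNR relationship the method relies on, using an invented Laplacian scale and quantization step; the paper formalizes this prediction analytically rather than by simulation.

```python
import numpy as np

# Simulate a Laplacian "DCT coefficient" source, quantize it uniformly,
# and relate the resulting MSE to PSNR. Scale and step are demo values.
rng = np.random.default_rng(1)
x = rng.laplace(loc=0.0, scale=8.0, size=200_000)

q = 12.0                               # quantization step
x_hat = q * np.round(x / q)            # uniform (mid-tread) quantizer
mse = np.mean((x - x_hat) ** 2)

# PSNR for 8-bit imagery: 10*log10(255^2 / MSE)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```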
Abstract:
Contrast enhancement is an image processing technique where the objective is to preprocess the image so that relevant information can be either seen or further processed more reliably. These techniques are typically applied when the image itself or the device used for image reproduction provides poor visibility and distinguishability of different regions of interest inthe image. In most studies, the emphasis is on the visualization of image data,but this human observer biased goal often results to images which are not optimal for automated processing. The main contribution of this study is to express the contrast enhancement as a mapping from N-channel image data to 1-channel gray-level image, and to devise a projection method which results to an image with minimal error to the correct contrast image. The projection, the minimum-error contrast image, possess the optimal contrast between the regions of interest in the image. The method is based on estimation of the probability density distributions of the region values, and it employs Bayesian inference to establish the minimum error projection.
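As a rough illustration of projecting N-channel data onto a single gray-level channel with little confusion between regions, the sketch below uses a Fisher-style linear discriminant on synthetic two-region data as a stand-in; the paper's method instead estimates the region densities and derives the minimum-error projection by Bayesian inference.

```python
import numpy as np

# Stand-in illustration (not the paper's exact estimator): project
# 3-channel pixel values from two regions of interest onto one axis.
rng = np.random.default_rng(2)
region_a = rng.normal([100, 120, 90], 10, size=(5000, 3))
region_b = rng.normal([120, 110, 130], 10, size=(5000, 3))

mu_a, mu_b = region_a.mean(0), region_b.mean(0)
Sw = np.cov(region_a, rowvar=False) + np.cov(region_b, rowvar=False)
w = np.linalg.solve(Sw, mu_b - mu_a)        # projection direction
w /= np.linalg.norm(w)

proj_a, proj_b = region_a @ w, region_b @ w
# Empirical overlap of the projected densities approximates the
# misclassification error between the two regions on the 1-channel image.
thr = 0.5 * (proj_a.mean() + proj_b.mean())
err = 0.5 * ((proj_a > thr).mean() + (proj_b < thr).mean())
print(f"empirical projection error: {err:.3f}")
```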
Abstract:
Normally either the Güntelberg or the Davies equation is used to predict activity coefficients of electrolytes in dilute solutions when no better equation is available. The validity of these equations, and additionally of the parameter-free equation used in the Bates-Guggenheim convention for activity coefficients, was tested against experimentally determined activity coefficients of LaCl3, CaCl2, SrCl2 and BaCl2 in aqueous solutions at 298.15 K. The experimental activity coefficients of these electrolytes can usually be reproduced within experimental error by means of a two-parameter equation of the Hückel type. The best Hückel equations were also determined for all electrolytes considered. The data used in the calculations of this study cover almost all reliable galvanic cell results available in the literature for the electrolytes considered. The results of the calculations reveal that the parameter-free activity coefficient equations can only be used for very dilute electrolyte solutions in thermodynamic studies.
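For reference, a small sketch of the Güntelberg and Davies expressions for the mean activity coefficient, with the usual Debye-Hückel constant A = 0.509 (kg/mol)^1/2 for water at 298.15 K; the example molality is arbitrary.

```python
import numpy as np

A = 0.509  # Debye-Huckel constant for water at 298.15 K, (kg/mol)^0.5

def guntelberg(z_plus, z_minus, I):
    # log10(gamma) = -A*|z+z-|*sqrt(I)/(1+sqrt(I))
    s = np.sqrt(I)
    return 10 ** (-A * abs(z_plus * z_minus) * s / (1 + s))

def davies(z_plus, z_minus, I):
    # Davies equation adds the +0.3*I correction term
    s = np.sqrt(I)
    return 10 ** (-A * abs(z_plus * z_minus) * (s / (1 + s) - 0.3 * I))

# Example: CaCl2 at molality m; ionic strength I = 3*m for a 2:1 salt.
m = 0.01
I = 3 * m
print(guntelberg(2, -1, I), davies(2, -1, I))
```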
Abstract:
The need to evaluate evapotranspiration at the regional scale for irrigation management has prompted innumerable attempts to apply AVHRR-NOAA imagery to the determination of the sensible heat flux. The main limitation of these methods is the estimation of the aerodynamic resistance. The critical parameter in the aerodynamic resistance expression is kB-1. Parameterizing kB-1 at the regional scale had been unsuccessful because, until now, no measurements of sensible heat flux were available at the AVHRR pixel scale over heterogeneous surfaces and throughout an entire irrigation season. The scintillometer was developed to provide this flux measurement. The first part of this work studies the spatial representativeness of scintillometer measurements. The core of this contribution is the correlation between the kB-1 parameter, NDVI and solar elevation. The good results obtained (r2 = 0.81) offer a new methodology for determining the sensible heat flux. The kB-1 estimate, the AVHRR imagery and meteorological data make it possible to compute the sensible heat flux over the whole irrigation season with errors below 20%.
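A minimal sketch of how kB-1 enters the sensible heat flux through the aerodynamic resistance, neglecting atmospheric stability corrections. All numerical values are illustrative, and the kB-1 value merely stands in for the NDVI/solar-elevation estimate described above.

```python
import numpy as np

rho_cp = 1200.0      # air volumetric heat capacity, J m-3 K-1
k = 0.41             # von Karman constant
u = 3.0              # wind speed at height z, m s-1
z, d, z0m = 10.0, 0.5, 0.1   # measurement height, displacement, roughness (m)
kB_inv = 4.0         # kB^-1, here an assumed value
Ts_minus_Ta = 5.0    # radiometric surface minus air temperature, K

# Neutral-stability aerodynamic resistance: kB^-1 = ln(z0m/z0h) shifts
# the heat-transfer roughness term.
ln_m = np.log((z - d) / z0m)
r_ah = ln_m * (ln_m + kB_inv) / (k**2 * u)   # s m-1
H = rho_cp * Ts_minus_Ta / r_ah              # sensible heat flux, W m-2
print(f"r_ah = {r_ah:.1f} s/m, H = {H:.0f} W/m2")
```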
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set, and how do we identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
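A schematic sketch of the two-stage Metropolis idea described above, with toy stand-ins for the proxy, the error model and the exact flow simulator; it only illustrates how the cheap, error-model-corrected proxy screens proposals before the expensive model is run.

```python
import numpy as np

rng = np.random.default_rng(3)

def exact_model(theta):           # expensive simulator (toy stand-in)
    return np.sin(theta) + 0.1 * theta**2

def proxy_model(theta):           # cheap approximate simulator (toy stand-in)
    return np.sin(theta)

def error_model(theta):           # learned proxy-to-exact correction (toy)
    return 0.1 * theta**2

obs, sigma = 0.9, 0.1

def log_like(pred):
    return -0.5 * ((pred - obs) / sigma) ** 2

theta, ll_exact = 0.0, log_like(exact_model(0.0))
accepted = 0
for _ in range(5000):
    prop = theta + rng.normal(0, 0.5)
    # Stage 1: screen the proposal with the corrected proxy only.
    ll_proxy_prop = log_like(proxy_model(prop) + error_model(prop))
    ll_proxy_cur = log_like(proxy_model(theta) + error_model(theta))
    if np.log(rng.uniform()) > ll_proxy_prop - ll_proxy_cur:
        continue                  # rejected cheaply, no exact-model run
    # Stage 2: run the exact model only for proposals that pass stage 1.
    ll_exact_prop = log_like(exact_model(prop))
    alpha = (ll_exact_prop - ll_exact) - (ll_proxy_prop - ll_proxy_cur)
    if np.log(rng.uniform()) < alpha:
        theta, ll_exact = prop, ll_exact_prop
        accepted += 1
print("accepted exact-model moves:", accepted)
```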
Abstract:
Objective: to assess the impact of the admission shift on in-hospital mortality of trauma patients who underwent surgery. Methods: a retrospective observational cohort study from November 2011 to March 2012, with data collected from electronic medical records. The following variables were statistically analyzed: age, gender, city of origin, marital status, risk classification at admission (based on the Manchester Protocol), degree of contamination, time/shift of admission, day of admission and hospital outcome. Results: during the study period, 563 injured patients underwent surgery, with a mean age of 35.5 years (±20.7); 422 (75%) were male, 276 (49.9%) were admitted during the night shift and 205 (36.4%) on weekends. Patients admitted at night and on weekends had higher mortality [19 (6.9%) vs. 6 (2.2%), p=0.014, and 11 (5.4%) vs. 14 (3.9%), p=0.014, respectively]. In the multivariate analysis, independent predictors of mortality were night admission (OR 3.15), red risk classification (OR 4.87), and age (OR 1.17). Conclusion: patients admitted during the night shift and on weekends were more severely injured and had a higher mortality rate. Night-shift admission was an independent predictor of surgical mortality in trauma patients, along with red risk classification and age.
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model; in other words, the manufacturing and assembly tolerances of every part are kept extremely tight so that all of the various parameters match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and to compensate the actual errors of the robot by modifying the mathematical model in the controller. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors so that the error model matches the robot as closely as possible. This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit-Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two global optimization methods, the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate the real experimental validations. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
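As an illustration of DE-based identification of parameter errors, the sketch below recovers invented link-length errors of a planar two-link arm from simulated pose measurements; it is a toy stand-in for the DH/POE error models of the hybrid robot, not the thesis's calibration procedure.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy calibration problem: nominal link lengths plus unknown errors.
L_nom = np.array([0.5, 0.4])          # nominal link lengths (m)
dL_true = np.array([0.003, -0.002])   # "true" manufacturing errors (m)

def fk(q, L):                          # planar forward kinematics
    x = L[0] * np.cos(q[:, 0]) + L[1] * np.cos(q[:, 0] + q[:, 1])
    y = L[0] * np.sin(q[:, 0]) + L[1] * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(4)
q_meas = rng.uniform(-np.pi, np.pi, size=(30, 2))   # measured joint sets
p_meas = fk(q_meas, L_nom + dL_true)                # "measured" poses

def cost(dL):                          # residual between model and measurements
    return np.sum((fk(q_meas, L_nom + dL) - p_meas) ** 2)

res = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 2, seed=0)
print("identified errors:", res.x)     # should approach dL_true
```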
Abstract:
This article deals with a contour error controller (CEC) applied to a high-speed biaxial table. It works simultaneously with the table axis controllers, assisting them. In the early stages of the investigation it was observed that the main problem of this controller is imprecision when tracking non-linear contours at high speed. The objectives of this work are to show that this problem is caused by the inexactness of the contour error mathematical model and to propose modifications to it. An additional term is included, resulting in a more accurate value of the contour error and enabling the use of this type of motion controller at higher feedrates. Response results from simulated and experimental tests are compared with those of a common PID controller and of the non-corrected CEC in order to analyse the effectiveness of this controller on the system. The main conclusions are that the proposed contour error mathematical model is simple, accurate and almost insensitive to the feedrate, and that a 20:1 reduction of the integral absolute contour error is possible.
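A small sketch of the kind of refinement discussed above: on a circular reference path, the first-order contour-error estimate (the normal component of the axis tracking errors) is augmented with a curvature-dependent term. The geometry and error values are invented, and this is not necessarily the exact additional term proposed in the article.

```python
import numpy as np

# Circular reference path of radius rho centred at the origin.
rho = 0.05                        # path radius (m)
theta = np.deg2rad(30)            # current path parameter
ref = rho * np.array([np.cos(theta), np.sin(theta)])     # reference point
act = ref + np.array([0.8e-3, -0.5e-3])                  # actual tool position

e = act - ref                                     # axis tracking errors (ex, ey)
tangent = np.array([-np.sin(theta), np.cos(theta)])
n_out = np.array([np.cos(theta), np.sin(theta)])  # outward normal

eps_linear = e @ n_out                            # first-order estimate
eps_refined = eps_linear + (e @ tangent) ** 2 / (2 * rho)   # curvature term
eps_exact = np.linalg.norm(act) - rho             # true distance to the path
print(eps_linear, eps_refined, eps_exact)
```

Running this, the refined estimate tracks the true contour error more closely than the linear one, which is the behaviour the improved model is meant to exploit at high feedrates.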
Abstract:
Pregnancy loss can be caused by several factors involved in human reproduction. Although up to 50% of cases remain unexplained, it has been postulated that the major cause of failed pregnancy is an error of embryo implantation. Transmembrane mucin-1 (MUC-1) is a glycoprotein expressed on the endometrial cell surface which acts as a barrier to implantation. The gene that codes for this molecule contains a polymorphic tandem repeat of 60 nucleotides. Our objective was to determine whether MUC-1 genetic polymorphism is associated with implantation failure in patients with a history of recurrent abortion. The study was conducted on 10 women aged 25 to 35 years with no history of successful pregnancy and with a diagnosis of infertility. The control group consisted of 32 patients aged 25 to 35 years who had delivered at least two full-term live children and who had no history of abortions or fetal losses. MUC-1 amplicons were obtained by PCR and observed on agarose and polyacrylamide gels after electrophoresis. Statistical analysis showed no significant difference in the number of MUC-1 variable number of tandem repeats between these groups (P > 0.05). Our results suggest that the polymorphic MUC-1 sequence has no effect on implantation failure. However, the data do not exclude the relevance of MUC-1 during embryo implantation, a process related to several associated factors such as the mechanisms of gene expression in the uterus, specific MUC-1 post-translational modifications and appropriate interactions with other molecules during embryo implantation.
Abstract:
Imagine the potential implications for an organization whose business and IT processes are well aligned and capable of reactively and proactively responding to external and internal changes. The Philips IT Infrastructure and Operations department (I&O) is undergoing a series of transformation activities to help the Philips business keep up with these changes. I&O would serve a critical function in any business sector; given that I&O's strategy switched from "design, build and run" to "specify, acquire and performance manage", that function is amplified. In 2013, I&O's biggest transformation programme, I&O Futures, engaged multiple interdisciplinary departments and programs in decommissioning legacy processes and restructuring new processes with respect to the Information Technology Infrastructure Library (ITIL), helping I&O to achieve a common infrastructure and operating platform (CI&OP). The author joined I&O Futures in early 2014 and contributed to CI&OP release 1, during which the designed model, Bing Box, was built and evaluated through the lens of Six Sigma's structured define-measure-analyze-improve-control (DMAIC) improvement approach. The Bing Box model was intended, firstly, to combine business and IT principles, namely Lean IT, Agile, ITIL best practices and aspect-oriented programming (AOP), into a single framework. Secondly, the author implemented the modularized optimization cycles defined by the framework in Philips' ITIL-based processes, in order to enhance business process performance and to increase the efficiency of the optimization cycles themselves. What is unique about this thesis is that the Bing Box model not only provides comprehensive optimization approaches and principles for business process performance, but also integrates and standardizes optimization modules for the optimization process itself. The research followed a design-research guideline that seeks to extend the boundaries of human and organizational capabilities by creating new and innovative artifacts. Chapter 2 first reviews the current research on Lean Six Sigma, Agile, AOP and ITIL, aiming to identify the broad conceptual bases for this study. Chapter 3 describes the process of constructing the Bing Box model. Chapter 4 describes the adoption of the Bing Box model in two implementation cases validated by stakeholders through observations and interviews. Chapter 5 contains the concluding remarks, the limitations of this research and future research areas. Chapter 6 provides the references used in this thesis.
Abstract:
In this study, a neuro-fuzzy estimator was developed for the estimation of the biomass concentration of the microalga Synechococcus nidulans from initial batch concentrations, aiming to predict daily productivity. Nine replicate experiments were performed. Growth was monitored daily through the optical density of the culture medium and was kept constant up to the end of the exponential phase. The network training followed a full 3³ factorial design in which the factors were the number of days in the entry vector (3, 5 and 7 days), the number of clusters (10, 30 and 50 clusters) and the internal weight-softening parameter Sigma (0.30, 0.45 and 0.60). These factors were evaluated against the sum of the quadratic errors in the validations. The validations comprised 24 (A) and 18 (B) days of culture growth. They demonstrated that long-term experiments (Validation A) require few clusters and a high Sigma, whereas in short-term experiments (Validation B) Sigma did not influence the result. The optimum occurred with 3 days in the entry vector, 10 clusters and a Sigma of 0.60, and the mean determination coefficient was 0.95. The neuro-fuzzy estimator proved to be a credible alternative for predicting microalgal growth.
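For clarity, a short sketch of the full 3³ factorial training grid implied by the factor levels quoted above; only the enumeration of configurations is shown, not the network training itself.

```python
from itertools import product

# Factor levels quoted in the abstract.
days_in_vector = [3, 5, 7]
n_clusters = [10, 30, 50]
sigma = [0.30, 0.45, 0.60]

# Full 3^3 factorial design: every combination of the three factors.
grid = list(product(days_in_vector, n_clusters, sigma))
print(len(grid))        # 27 training configurations
for days, clusters, s in grid[:3]:
    print(days, clusters, s)
```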