Abstract:
Despite the great importance of soybeans in Brazil, there have been few applications of soybean crop modeling under Brazilian conditions. Thus, the objective of this study was to use modified crop models to estimate the depleted and potential soybean crop yield in Brazil. The climatic variables used as input to the modified soybean crop models were temperature, insolation, and rainfall. The data set was taken from 33 counties (28 in São Paulo state and 5 in neighboring states). Modifications to the estimation of soybean leaf area were proposed, including corrections for temperature, shading, senescence, CO2, and biomass partitioning; the methods of inputting the climatic variables into the model simulations were also reconsidered. Depleted yields were estimated through a water balance, from which the depletion coefficient was obtained. It can be concluded that the adapted soybean crop growth model can be used to predict the depleted and potential yield of soybeans, and also to indicate better locations and periods for tillage.
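As a rough illustration of how a water balance translates into a depleted yield, the sketch below assumes the classic FAO yield-response form Ya/Yp = 1 − ky(1 − ETa/ETp); the function names and the ky value are illustrative, not taken from this study.

```python
# Minimal sketch of water-limited ("depleted") yield estimation from a
# simple water balance, assuming the FAO yield-response form
# Ya/Yp = 1 - ky * (1 - ETa/ETp). All names and values are illustrative.

def depleted_yield(potential_yield, et_actual, et_potential, ky=0.85):
    """Water-limited yield from actual/potential evapotranspiration (mm)."""
    if et_potential <= 0:
        raise ValueError("potential evapotranspiration must be positive")
    depletion = ky * (1.0 - et_actual / et_potential)  # relative yield loss
    return potential_yield * max(0.0, 1.0 - depletion)

# Example: 4.0 t/ha potential yield, ETa = 380 mm, ETp = 450 mm
print(depleted_yield(4.0, 380.0, 450.0))  # ~3.47 t/ha
```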
Abstract:
Schroeder's backward integration method is the most widely used method for extracting the decay curve of an acoustic impulse response and calculating the reverberation time from this curve. The limits and possible improvements of this method are widely discussed in the literature. In this work a new method is proposed for the evaluation of the energy decay curve. The new method has been implemented in a Matlab toolbox, and its performance has been tested against the most accredited method in the literature. The values of EDT and reverberation time extracted from the energy decay curves calculated with both methods have been compared, both in terms of the values themselves and in terms of their statistical representativeness. The main case study consists of nine Italian historical theatres in which acoustical measurements were performed. The comparison of the two extraction methods has also been applied to a critical case, i.e., the structural impulse responses of some building elements. The comparison shows that both methods return comparable values of T30. As the evaluation range decreases, they reveal increasing differences; in particular, the main differences appear in the first part of the decay, where the EDT is evaluated. This is a consequence of the fact that the new method returns a "locally" defined energy decay curve, whereas Schroeder's method accumulates energy from the tail to the beginning of the impulse response. Another characteristic of the new energy decay curve extraction method is its independence of the background noise estimation. Finally, a statistical analysis is performed on the T30 and EDT values calculated from the impulse response measurements in the Italian historical theatres. The aim of this evaluation is to determine whether a subset of measurements can be considered representative for a complete characterization of these opera houses.
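For reference, a minimal Python sketch of Schroeder's backward integration and a T30 line fit, i.e. the baseline the new method is compared against; the −5 to −35 dB regression range is the conventional choice, not a parameter reported here.

```python
import numpy as np

def schroeder_edc_db(ir):
    """Schroeder's backward-integrated energy decay curve in dB.

    EDC(t) = integral from t to T of p^2(tau) d tau, i.e. a reversed
    cumulative sum of the squared impulse response, normalized to 0 dB.
    """
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def t30(ir, fs):
    """Reverberation time from a line fit over the -5 to -35 dB range."""
    edc = schroeder_edc_db(ir)
    t = np.arange(len(ir)) / fs
    mask = (edc <= -5.0) & (edc >= -35.0)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)  # decay rate in dB/s
    return -60.0 / slope  # time to decay 60 dB at the fitted rate
```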
Abstract:
The relatively young discipline of astronautics represents one of the most scientifically fascinating and technologically advanced achievements of our time. Human exploration in space not only offers extraordinary research possibilities but also places high demands on people and technology. The space environment provides many attractive experimental tools for understanding fundamental mechanisms in the natural sciences. It has been shown that reduced gravity and elevated radiation, two distinctive factors in space, significantly influence the behavior of biological systems. For this reason, one of the key objectives on board an Earth-orbiting laboratory is research in the field of life sciences, covering a broad range from botany, human physiology, and crew health up to biotechnology. The Columbus Module is the only European low-gravity platform that allows researchers to perform ambitious experiments over a continuous time frame of up to several months. Biolab is part of the initial outfitting of the Columbus Laboratory; it is a multi-user facility supporting research in the field of biology, e.g., the effects of microgravity and space radiation on cell cultures, micro-organisms, small plants, and small invertebrates. The Biolab IEC projects are designed to work in the automatic part of Biolab. At present, two experiments in the TO-53 department of Airbus Defence & Space (formerly Astrium) are in phase C/D of development, and they are the subject of this thesis: CELLRAD and CYTOSKELETON. They will be launched in soft configuration, that is, packed inside a block of foam whose task is to reduce the launch loads on the payload. Until about 10 years ago, payloads launched in soft configuration were assumed to be structurally safe by themselves, and a specific structural analysis could be waived; with the opening of the launcher market to private companies (which are not under the direct control of the international space agencies), the requirements on payload verification have changed and become much more conservative. In 2012 a new random vibration environment was introduced by the new Space-X launch specification, which proves to be particularly challenging for soft-launched payloads. The latest ESA specification requires structural analysis of the payload for combined loads (random vibration, quasi-steady acceleration, and pressure). The aim of this thesis is to create FEM models able to reproduce the launch configuration, to verify that all margins of safety are positive, and to show how they change because of the new Space-X random environment. Where results are negative, improved design solutions are implemented. Based on the FEM results, a study of the joints has been carried out and, where needed, a crack growth analysis has been performed.
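As a hedged illustration of the margin-of-safety bookkeeping involved, the sketch below combines quasi-static, pressure, and 3-sigma random-vibration stresses; the combination rule, the factor of safety, and all numbers are generic placeholders rather than the ESA or Space-X specification values.

```python
def combined_stress(quasi_static, random_rms, pressure_stress, n_sigma=3.0):
    """Combine quasi-static, n-sigma random-vibration and pressure stresses.

    One common conservative practice: add quasi-static and pressure
    contributions directly and the random part at the n-sigma level.
    """
    return quasi_static + pressure_stress + n_sigma * random_rms

def margin_of_safety(allowable, applied, factor_of_safety=1.25):
    """MoS = allowable / (FoS * applied) - 1; positive means acceptable."""
    return allowable / (factor_of_safety * applied) - 1.0

# Illustrative numbers only (MPa):
sigma = combined_stress(quasi_static=40.0, random_rms=15.0, pressure_stress=5.0)
print(margin_of_safety(allowable=270.0, applied=sigma))  # 1.4 -> positive
```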
Abstract:
The research field of my PhD concerns mathematical modeling and numerical simulation applied to the analysis of cardiac electrophysiology at the single-cell level. This is possible thanks to the development of mathematical descriptions of single cellular components: ionic channels, pumps, exchangers, and subcellular compartments. Due to the difficulties of in vivo experiments on human cells, most measurements are acquired in vitro using animal models (e.g., guinea pig, dog, rabbit). Moreover, to study the cardiac action potential and all its features, it is necessary to acquire more specific knowledge about the individual ionic currents that contribute to cardiac activity. Electrophysiological models of the heart have become very accurate in recent years, giving rise to extremely complicated systems of differential equations. Although they describe the behavior of cardiac cells quite well, these models are computationally demanding for numerical simulations and very difficult to analyze from a mathematical (dynamical-systems) viewpoint. Simplified mathematical models that capture the underlying dynamics to a certain extent are therefore frequently used. The results presented in this thesis confirm that a close integration of computational modeling and experimental recordings in real myocytes, as performed by the dynamic clamp technique, is a useful tool for enhancing our understanding not only of the various components of normal cardiac electrophysiology but also of arrhythmogenic mechanisms in pathological conditions, especially when fully integrated with experimental data.
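As an example of the simplified models mentioned above, the sketch below integrates the FitzHugh-Nagumo equations, a classic two-variable reduction of excitable-cell dynamics; it stands in for, and is not, the specific myocyte models used in the thesis.

```python
from scipy.integrate import solve_ivp

# FitzHugh-Nagumo: a two-variable simplification of excitable dynamics.
# Parameter values are the textbook ones, not those of any myocyte model.
def fhn(t, y, I_ext=0.5, a=0.7, b=0.8, eps=0.08):
    v, w = y                        # fast "voltage" and slow recovery variable
    dv = v - v**3 / 3.0 - w + I_ext
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fhn, (0.0, 200.0), [-1.0, 1.0], max_step=0.5)
print(sol.y[0].max())  # peak of the action-potential-like upstroke
```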
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and their biological or toxic effects in chemical datasets is a challenging task for researchers in the field of cheminformatics. (Q)SAR model validation is therefore essential to ensure the future predictivity of a model on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving a model's use in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not help the user better understand the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, help the chemist better understand patterns and regularities and relate the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically, and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
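A minimal sketch of the two validation schemes being compared, using scikit-learn on a synthetic stand-in for a (Q)SAR dataset (rows as compounds, columns as descriptors); the model choice and split sizes are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a (Q)SAR dataset: compounds x descriptors.
X, y = make_classification(n_samples=500, n_features=30, random_state=0)

model = RandomForestClassifier(random_state=0)

# k-fold cross-validation: every compound is predicted exactly once.
cv_scores = cross_val_score(model, X, y, cv=10)
print("10-fold CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))

# External test set validation: a single held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
print("External test accuracy: %.3f" % model.fit(X_tr, y_tr).score(X_te, y_te))
```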
Abstract:
In many applications the observed data can be viewed as a censored high-dimensional full-data random variable X. By the curse of dimensionality, it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored-data model of such a high-dimensional censored data structure. We provide a general method for the construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data, and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples, and the key condition of the general theorem is verified for all examples.
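In generic notation (not the paper's own), a one-step estimator corrects an initial estimate by the empirical mean of an estimated influence curve, evaluated at the fitted censoring mechanism:

```latex
% Generic one-step update: an initial estimator \hat{\theta}_n^{0} is
% corrected by the empirical mean of an estimated influence curve
% \widehat{IC}, which depends on the fitted censoring mechanism \hat{G}_n.
\hat{\theta}_n^{\,1}
  = \hat{\theta}_n^{\,0}
  + \frac{1}{n}\sum_{i=1}^{n}
    \widehat{IC}\!\left(O_i;\, \hat{\theta}_n^{\,0},\, \hat{G}_n\right)
```

Here O_1, ..., O_n are the observed censored data; efficiency at the chosen submodel corresponds to using the efficient influence curve there.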
Abstract:
Amplifications and deletions of chromosomal DNA, as well as copy-neutral loss of heterozygosity, have been associated with disease processes. High-throughput single nucleotide polymorphism (SNP) arrays are useful for making genome-wide estimates of copy number and genotype calls. Because neighboring SNPs in high-throughput SNP arrays are likely to have dependent copy number and genotype due to the underlying haplotype structure and linkage disequilibrium, hidden Markov models (HMMs) may be useful for improving genotype calls and copy number estimates that do not incorporate information from nearby SNPs. We improve on previous approaches that use an HMM framework for inference in high-throughput SNP arrays by integrating copy number, genotype calls, and the corresponding confidence scores when available. Using simulated data, we demonstrate how confidence scores control smoothing in a probabilistic framework. Software for fitting HMMs to SNP array data is available in the R package ICE.
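As a schematic of the HMM machinery involved (not the ICE package's implementation), a compact Viterbi decoder over hidden states, e.g. copy-number classes, given log-probability inputs:

```python
import numpy as np

def viterbi(log_emission, log_transition, log_initial):
    """Most likely hidden state path; rows of log_emission index SNPs."""
    n_obs, n_states = log_emission.shape
    score = log_initial + log_emission[0]          # best score ending in each state
    back = np.zeros((n_obs, n_states), dtype=int)  # backpointers
    for t in range(1, n_obs):
        cand = score[:, None] + log_transition     # all previous-to-current moves
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emission[t]
    path = [int(score.argmax())]
    for t in range(n_obs - 1, 0, -1):              # trace back from the end
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```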
Abstract:
This paper considers statistical models in which two different types of events, such as the diagnosis of a disease and the remission of the disease, occur alternately over time and are observed subject to right censoring. We propose nonparametric estimators for the joint distribution of bivariate recurrence times and the marginal distribution of the first recurrence time. In general, the marginal distribution of the second recurrence time cannot be estimated due to an identifiability problem, but a conditional distribution of the second recurrence time can be estimated nonparametrically. In the literature, statistical methods have been developed to estimate the joint distribution of bivariate recurrence times based on data from the first pair of censored bivariate recurrence times. These methods are inefficient in the current model because recurrence times of higher order are not used. Asymptotic properties of the estimators are established. Numerical studies demonstrate that the estimators perform well with practical sample sizes. We apply the proposed method to a Danish psychiatric case register data set to illustrate the methods and theory.
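For the marginal distribution of the first recurrence time under right censoring, a minimal Kaplan-Meier sketch conveys the basic nonparametric idea; the paper's joint and conditional estimators are more involved.

```python
import numpy as np

def kaplan_meier(times, events):
    """Survival curve S(t) for right-censored data.

    times: observed times; events: 1 if the recurrence was observed,
    0 if the observation was censored.
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, surv, curve = len(times), 1.0, []
    for t, d in zip(times, events):
        if d:                              # observed event shrinks S(t)
            surv *= 1.0 - 1.0 / at_risk
        curve.append((t, surv))            # censoring only shrinks the risk set
        at_risk -= 1
    return curve
```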
Abstract:
This article presents a feasibility study investigating the potential of multi-detector computed tomography (MDCT) to estimate the bone age and sex of deceased persons. To obtain virtual skeletons, the bodies of 22 deceased persons with known age at death were scanned by MDCT using a special protocol consisting of high-resolution imaging of the skull, the shoulder girdle (including the upper half of the humeri), the symphysis pubis, and the upper halves of the femora. Bone and soft-tissue reconstructions were performed in two and three dimensions. The resulting data were investigated by three anthropologists with different levels of professional experience. Sex was determined by investigating three-dimensional models of the skull and pelvis. As a basic orientation for the age estimation, the complex method according to Nemeskéri and co-workers was applied. The final estimation was made using additional parameters, such as the state of the dentition and degeneration of the spine, which were chosen individually by the three observers according to their experience. The results of the study show that the estimation of sex and age is possible using MDCT. Virtual skeletons present an ideal collection for anthropological studies, because they are obtained in a non-invasive way and can be investigated ad infinitum.
Abstract:
The flammability zone boundaries are very important properties for preventing explosions in the process industries. Within the boundaries, a flame or explosion can occur, so it is important to understand these boundaries in order to prevent fires and explosions. Very little work has been reported in the literature on modeling the flammability zone boundaries. Two boundaries are defined and studied: the upper flammability zone boundary and the lower flammability zone boundary. Three methods are presented to predict them: (1) a linear model, (2) an extended linear model, and (3) an empirical model. The linear model is a thermodynamic model that uses the upper flammability limit (UFL) and lower flammability limit (LFL) to calculate two adiabatic flame temperatures. When the proper assumptions are applied, the linear model can be reduced to the well-known equation y_LOC = z·y_LFL for estimating the limiting oxygen concentration. The extended linear model attempts to account for the changes in the reactions along the UFL boundary. Finally, the empirical method fits the boundaries with linear equations between the UFL or LFL and the intercept with the oxygen axis. Comparison of the models with experimental data on the flammability zone shows that the best model for estimating the flammability zone boundaries is the empirical method. It fits the limiting oxygen concentration (LOC), upper oxygen limit (UOL), and lower oxygen limit (LOL) quite well. The regression coefficient values for the fits to the LOC, UOL, and LOL are 0.672, 0.968, and 0.959, respectively. This is better than the fit of the y_LOC = z·y_LFL method for the LOC, for which the regression coefficient value is 0.416.
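A tiny sketch of the reduced linear-model relation y_LOC = z·y_LFL, where z is the stoichiometric moles of oxygen per mole of fuel; the methane numbers are textbook approximations, not data from this work.

```python
def limiting_oxygen_concentration(z, lfl):
    """LOC from the reduced linear model y_LOC = z * y_LFL.

    z   : stoichiometric moles of oxygen per mole of fuel
    lfl : lower flammability limit (vol % fuel in air)
    """
    return z * lfl

# Methane: CH4 + 2 O2 -> CO2 + 2 H2O, so z = 2; LFL ~ 5.0 vol %
print(limiting_oxygen_concentration(2.0, 5.0))  # ~10 vol % oxygen
```

Measured LOC values deviate from this simple product, which is consistent with the low regression coefficient reported above and the better performance of the empirical fit.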
Abstract:
The degree of polarization of a reflected field from active laser illumination can be used for object identification and classification. The goal of this study is to investigate methods for estimating the degree of polarization of reflected fields under active laser illumination, which involves the measurement and processing of two orthogonal field components (complex amplitudes), two orthogonal intensity components, and the total field intensity. We propose to replace interferometric optical apparatuses with a computational approach that estimates the degree of polarization from two orthogonal intensity measurements and total intensity measurements. Cramér-Rao bounds for each of the three sensing modalities with various noise models are computed. Algebraic estimators and maximum-likelihood (ML) estimators are proposed. An active-set algorithm and an expectation-maximization (EM) algorithm are used to compute the ML estimates. The performances of the estimators are compared with each other and with their corresponding Cramér-Rao bounds. Estimators for four-channel polarimeter (intensity interferometer) sensing perform better than orthogonal-intensity estimators and total-intensity estimators. Processing the four intensity channels from a polarimeter, however, requires complicated optical devices, careful alignment, and four CCD detectors. Processing orthogonal intensity data or total intensity data requires only one or two detectors and a computer, and the bounds and estimator performances demonstrate that reasonable estimates can still be obtained from these data. Computational sensing is thus a promising way to estimate the degree of polarization.
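As a hedged illustration of the algebraic route, the sketch below simulates speckle intensities in two orthogonal channels and recovers the degree of polarization from the channel means, assuming the polarization coherency matrix is diagonal in the measurement basis (an idealization relative to the cases studied here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully developed speckle: each channel's intensity is exponentially
# distributed with mean set by the degree of polarization (DoP).
dop_true, mean_intensity, n = 0.6, 1.0, 10_000
i_x = rng.exponential(0.5 * mean_intensity * (1 + dop_true), n)
i_y = rng.exponential(0.5 * mean_intensity * (1 - dop_true), n)

# Algebraic estimator from the two channel means.
dop_hat = abs(i_x.mean() - i_y.mean()) / (i_x.mean() + i_y.mean())
print(dop_hat)  # close to 0.6 for large n
```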
Abstract:
The scaphoid is the most frequently fractured carpal bone. When investigating fixation stability, which may influence healing, knowledge of the forces and moments acting on the scaphoid is essential. The aim of this study was to evaluate the cartilage contact forces acting on the intact scaphoid in various functional wrist positions using finite element modeling. A novel methodology was used in an attempt to overcome some limitations of earlier studies, namely the relatively coarse imaging resolution used to assess geometry, the assumption of idealized cartilage thicknesses, and the neglect of cartilage pre-stresses in the unloaded joint. Carpal bone positions and articular cartilage geometry were obtained independently by means of high-resolution CT imaging and incorporated into finite element (FE) models of the human wrist in eight functional positions. Displacement-driven FE analyses were used to resolve the inter-penetration of cartilage layers and provided contact areas, forces, and pressure distributions for the scaphoid bone. The results were in the range reported by previous studies. The novel findings of this study were: (i) cartilage thickness was found to be heterogeneous for each bone and to vary considerably between carpal bones; (ii) this heterogeneity largely influenced the FE results; and (iii) the forces acting on the scaphoid in the unloaded wrist were found to be significant. As major limitations, the accuracy of the method was found to be relatively low, and the results could not be compared with independent experiments. The obtained results will be used in a subsequent study to evaluate existing and recently developed screws used to fix scaphoid fractures.
Abstract:
Recent attempts to detect mutations involving single base changes or small deletions that are specific to genetic diseases provide an opportunity to develop a two-tier mutation-screening program through which the incidence of rare genetic disorders and of gene carriers may be precisely estimated. A two-tier survey consists of mutation screening in a sample of patients with a specific genetic disorder and in a second sample of newborns from the same population, in which the mutation frequency is evaluated. We provide the statistical basis for evaluating the incidence of affected individuals and gene carriers in such two-tier mutation-screening surveys, from which the precision of the estimates is derived. Sample-size requirements of such two-tier mutation-screening surveys are evaluated. Considering the examples of cystic fibrosis (CF) and medium-chain acyl-CoA dehydrogenase deficiency (MCAD), the two most frequent autosomal recessive diseases in Caucasian populations, and the two most frequent mutations (delta F508 and G985) that occur on these disease allele-bearing chromosomes, we show that, with 50-100 patients and a 20-fold larger sample of newborns screened for these mutations, the incidence of such diseases and of their gene carriers in a population may be quite reliably estimated. The theory developed here is also applicable to rare autosomal dominant diseases for which disease-specific mutations are found.
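A sketch of the arithmetic behind a two-tier estimate: tier 1 gives the target mutation's share of disease chromosomes, tier 2 gives its allele frequency among newborns, and Hardy-Weinberg converts the inferred total allele frequency into incidence and carrier frequency; all counts below are made up for illustration.

```python
def two_tier_estimates(n_patient_chrom, k_patient_mut,
                       n_newborn_chrom, k_newborn_mut):
    """Incidence and carrier frequency from a two-tier screening survey.

    Tier 1: fraction of disease chromosomes carrying the target mutation.
    Tier 2: target-mutation allele frequency among newborn chromosomes.
    Hardy-Weinberg is then applied to the inferred total allele frequency.
    """
    share = k_patient_mut / n_patient_chrom   # e.g. delta F508 share of CF alleles
    q_mut = k_newborn_mut / n_newborn_chrom   # mutation allele frequency
    q = q_mut / share                         # total disease allele frequency
    return q * q, 2.0 * q * (1.0 - q)         # incidence, carrier frequency

# Illustrative counts only: 100 patients (200 chromosomes), 140 carrying the
# mutation; 4000 newborns (8000 chromosomes), 112 mutant chromosomes found.
incidence, carriers = two_tier_estimates(200, 140, 8000, 112)
print(incidence, carriers)  # ~1/2500 and ~3.9% for these made-up counts
```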