958 results for Structural Parameters
Abstract:
Modelling an environmental process involves creating a model structure and parameterising the model with appropriate values to accurately represent the process. Determining accurate parameter values for environmental systems can be challenging. Existing methods for parameter estimation typically make assumptions regarding the form of the likelihood, and will often ignore any uncertainty around estimated values. This can be problematic, however, particularly in complex problems where likelihoods may be intractable. In this paper we demonstrate an Approximate Bayesian Computation (ABC) method for the estimation of parameters of a stochastic cellular automaton (CA). As an example we use a CA constructed to simulate a range expansion such as might occur after a biological invasion, making parameter estimates using only count data such as could be gathered from field observations. We demonstrate that ABC is a highly useful method for parameter estimation, yielding accurate estimates of parameters that are important for the management of invasive species, such as the intrinsic rate of increase and the point in a landscape where a species has invaded. We also show that the method is capable of estimating the probability of long-distance dispersal, a characteristic of biological invasions that is very influential in determining spread rates but has until now proved difficult to estimate accurately.
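The ABC rejection scheme behind such estimates can be sketched in a few lines. The growth model, prior, summary statistic and tolerance below are illustrative assumptions standing in for the paper's stochastic CA, not its actual simulator:

```python
import math
import random
import statistics

def poisson(rng, lam):
    # Knuth's method; switch to a normal approximation for large rates
    # so the running product cannot underflow.
    if lam > 50:
        return max(0, int(rng.gauss(lam, math.sqrt(lam)) + 0.5))
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_counts(r, rng, n0=50, steps=8):
    # Toy stochastic growth model: each generation is Poisson-distributed
    # with mean n*(1 + r), where r is the intrinsic rate of increase.
    n = n0
    for _ in range(steps):
        n = poisson(rng, n * (1.0 + r))
    return n  # the final count serves as the summary statistic

def abc_rejection(observed, n_proposals=3000, tol=40, seed=1):
    # Draw r from a Uniform(0, 1) prior and keep draws whose simulated
    # summary falls within `tol` of the observed summary.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_proposals):
        r = rng.random()
        if abs(simulate_counts(r, rng) - observed) <= tol:
            accepted.append(r)
    return accepted
```

Accepted draws form an approximate posterior sample for r; their spread, unlike a point estimate, retains the parameter uncertainty the abstract emphasises.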
Abstract:
Twin studies offer the opportunity to determine the relative contribution of genes versus environment in traits of interest. Here, we investigate the extent to which variance in brain structure is reduced in monozygotic (MZ) twins with identical genetic make-up. We investigate whether using twins as compared to a control population reduces variability in a number of common magnetic resonance (MR) structural measures, and we investigate the location of areas under major genetic influence. This is fundamental to understanding the benefit of using twins in studies where structure is the phenotype of interest. Twenty-three pairs of healthy MZ twins were compared to matched control pairs. Volume, T2 and diffusion MR imaging were performed, as well as MR spectroscopy (MRS). Images were compared using (i) global measures of standard deviation and effect size, (ii) voxel-based analysis of similarity and (iii) intra-pair correlation. Global measures indicated a consistent increase in structural similarity in twins. The voxel-based and correlation analyses indicated a widespread pattern of increased similarity in twin pairs, particularly in frontal and temporal regions. The areas of increased similarity were most widespread for the diffusion trace and least widespread for T2. MRS showed a consistent reduction in metabolite variation that was significant for temporal lobe N-acetylaspartate (NAA). This study has shown the distribution and magnitude of reduced variability in brain volume, diffusion, T2 and metabolites in twins. The data suggest that evaluation of twins discordant for disease is indeed a valid way to attribute genetic or environmental influences to observed abnormalities in patients, since evidence is provided for the underlying assumption of decreased variability in twins.
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage using noise-polluted data, an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results with structural damage identification methods. It is therefore important to investigate a method that can perform better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures, able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the web site of the IASC-ASCE (International Association for Structural Control–American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of FRFs; calculation of damage index values using the proposed algorithm; development of the artificial neural networks; introduction of the damage indices as input parameters; and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed in order to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are of common occurrence when dealing with industrial structures.
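The PCA damage-index idea can be illustrated in miniature. The synthetic "FRFs" below are toy vectors, the neural-network stage is omitted, and the residual-based index is one plausible reading of a PCA damage index, not the paper's exact algorithm:

```python
import math

def pca_first_component(X, iters=100):
    # Leading principal component of the rows of X via power iteration
    # on the (implicit) covariance matrix, in pure Python.
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    Xc = [[x - m for x, m in zip(row, mean)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        scores = [sum(row[j] * v[j] for j in range(d)) for row in Xc]
        w = [sum(Xc[i][j] * scores[i] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v

def damage_index(frf, mean, v):
    # Residual energy of an FRF after projection onto the healthy
    # principal direction; a large value flags a change in the structure.
    c = [x - m for x, m in zip(frf, mean)]
    proj = sum(ci * vi for ci, vi in zip(c, v))
    resid = [ci - proj * vi for ci, vi in zip(c, v)]
    return math.sqrt(sum(r * r for r in resid))
```

In the paper's pipeline the resulting indices would then feed an artificial neural network for classification; here the index is used directly.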
Abstract:
We present a formalism for the analysis of sensitivity of nuclear magnetic resonance pulse sequences to variations of pulse sequence parameters, such as radiofrequency pulses, gradient pulses or evolution delays. The formalism enables the calculation of compact, analytic expressions for the derivatives of the density matrix and the observed signal with respect to the parameters varied. The analysis is based on two constructs computed in the course of modified density-matrix simulations: the error interrogation operators and error commutators. The approach presented is consequently named the Error Commutator Formalism (ECF). It is used to evaluate the sensitivity of the density matrix to parameter variation based on the simulations carried out for the ideal parameters, obviating the need for finite-difference calculations of signal errors. The ECF analysis therefore carries a computational cost comparable to a single density-matrix or product-operator simulation. Its application is illustrated using a number of examples from basic NMR spectroscopy. We show that the strength of the ECF is its ability to provide analytic insights into the propagation of errors through pulse sequences and the behaviour of signal errors under phase cycling. Furthermore, the approach is algorithmic and easily amenable to implementation in the form of a programming code. It is envisaged that it could be incorporated into standard NMR product-operator simulation packages.
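The contrast the abstract draws between finite-difference error estimation and analytic sensitivities can be seen on a deliberately trivial one-spin example (this is not the ECF machinery itself, just the idea of differentiating the observed signal with respect to a pulse parameter):

```python
import math

def signal(theta):
    # Detected transverse magnetisation after an ideal pulse of flip
    # angle theta applied to equilibrium z-magnetisation: My = sin(theta).
    return math.sin(theta)

def d_signal_analytic(theta):
    # Analytic sensitivity of the signal to the flip angle.
    return math.cos(theta)

def d_signal_fd(theta, h=1e-5):
    # Central finite difference: two extra signal evaluations per
    # parameter, which an analytic formalism such as the ECF avoids.
    return (signal(theta + h) - signal(theta - h)) / (2.0 * h)
```

For realistic pulse sequences the density matrix replaces `signal`, and the ECF's interrogation operators and error commutators play the role of the analytic derivative.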
Abstract:
Acoustic emission (AE) is the phenomenon in which stress waves are generated by the rapid release of energy within a material, caused by sources such as crack initiation or growth. The AE technique involves recording the stress waves by means of sensors and subsequently analyzing the recorded signals to gather information about the nature of the source. Though the AE technique is one of the popular non-destructive evaluation (NDE) techniques for structural health monitoring of mechanical, aerospace and civil structures, several challenges still exist in its successful application. The presence of spurious noise signals can mask genuine damage-related AE signals; hence a major challenge is finding ways to discriminate signals from different sources. Analysis of the parameters of recorded AE signals, comparison of the amplitudes of AE wave modes and investigation of the uniqueness of recorded AE signals have been mentioned as possible criteria for source differentiation. This paper reviews common approaches currently in use for source discrimination, particularly focusing on structural health monitoring of civil engineering structural components such as beams, and further investigates the applications of some of these methods by analyzing AE data from laboratory tests.
Abstract:
Rates of dehydration/rehydration are important quality parameters for dried products. Theoretically, if there are no adverse effects on the integrity of the tissue structure, the product should absorb water back to the moisture content it had before drying. The purpose of this work is to semi-automate the process of detecting cell structure boundaries as a food is dehydrated and rehydrated. This will enable food materials researchers to quantify changes to a material's structure as these processes take place. Images of potato cells as they were dehydrated and rehydrated were taken using an electron microscope. Cell boundaries were detected using an image processing algorithm. Average cell area and perimeter at each stage of dehydration were calculated and plotted versus time. The results show that the algorithm can successfully identify cell boundaries.
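Once cell boundaries have been segmented into a binary mask, the area and perimeter measurements reduce to pixel counting. The sketch below assumes a pre-segmented 0/1 mask for a single cell (the boundary-detection algorithm itself is not reproduced):

```python
def area_and_perimeter(mask):
    # mask: list of rows of 0/1 pixels for one segmented cell.
    # Area = number of set pixels; perimeter = number of pixel edges
    # bordering the background or the image edge.
    rows, cols = len(mask), len(mask[0])
    area = perimeter = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                area += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    outside = not (0 <= nr < rows and 0 <= nc < cols)
                    if outside or not mask[nr][nc]:
                        perimeter += 1
    return area, perimeter
```

Averaging these quantities over all cells at each imaging stage gives the area- and perimeter-versus-time curves described above.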
A particle-based micromechanics approach to simulate structural changes of plant cells during drying
Abstract:
This paper is concerned with applying a particle-based approach to simulate the micro-level cellular structural changes of plant cells during drying. The objective of the investigation was to relate micro-level structural properties such as cell area, diameter and perimeter to the change of moisture content of the cell. The model assumes a simplified cell consisting of two basic components: the cell wall and the cell fluid. The cell fluid is assumed to be a Newtonian fluid with a higher viscosity than water, and the cell wall is assumed to be a visco-elastic solid boundary surrounding the cell fluid. The cell fluid is modelled with the Smoothed Particle Hydrodynamics (SPH) technique, and a Discrete Element Method (DEM) is used for the cell wall. The developed model is two-dimensional, but accounts for three-dimensional physical properties of real plant cells. Drying is simulated as a reduction of fluid mass, and the model is used to predict the above-mentioned structural properties as a function of cell fluid mass. Model predictions are found to be in fairly good agreement with experimental data in the literature, and the particle-based approach is demonstrated to be suitable for numerical studies of drying-related structural deformations. A sensitivity analysis is also included to demonstrate the influence of key model parameters on model predictions.
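The SPH half of such a model rests on kernel-weighted sums over neighbouring particles. A minimal 2D density summation with the standard cubic spline kernel is sketched below; the particle masses, spacing and smoothing length are illustrative, not taken from the paper:

```python
import math

def w_cubic2d(r, h):
    # Standard 2D cubic spline SPH kernel, normalisation 10 / (7*pi*h^2),
    # with compact support of radius 2h.
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(i, positions, masses, h):
    # Summation density at particle i: rho_i = sum_j m_j * W(|r_i - r_j|, h).
    # On a uniform unit lattice with unit masses this should be close to 1.
    xi, yi = positions[i]
    rho = 0.0
    for (xj, yj), mj in zip(positions, masses):
        rho += mj * w_cubic2d(math.hypot(xi - xj, yi - yj), h)
    return rho
```

In a full cell-drying model the fluid mass assigned to the particles would be reduced step by step, and the DEM wall particles would respond to the resulting pressure change.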
Abstract:
The effects of ethanol fumigation on the inter-cycle variability of key in-cylinder pressure parameters in a modern common rail diesel engine have been investigated. Specifically, maximum rate of pressure rise, peak pressure, peak pressure timing and ignition delay were investigated. A new methodology for investigating the start of combustion was also proposed and demonstrated, which is particularly useful with noisy in-cylinder pressure data as it can have a significant effect on the calculation of an accurate net rate of heat release indicator diagram. Inter-cycle variability has traditionally been investigated using the coefficient of variation. However, deeper insight into engine operation is given by presenting the results as kernel density estimates, allowing investigation of otherwise unnoticed phenomena, including multi-modal and skewed behaviour. This study has found that operation of a common rail diesel engine with high ethanol substitutions (>20% at full load, >30% at three-quarter load) results in a significant reduction in ignition delay. This study also concluded that if the engine is operated with absolute air-to-fuel ratios (mole basis) of less than 80, the inter-cycle variability is substantially increased compared to normal operation.
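The kernel density estimates mentioned above can be formed directly from per-cycle samples. A minimal Gaussian KDE is sketched below; the bandwidth and data are illustrative:

```python
import math

def gaussian_kde(samples, h):
    # Returns a function estimating the density at x by summing Gaussian
    # kernels of bandwidth h centred on each sample.
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return density
```

Applied to, say, per-cycle ignition delays, a bimodal estimate is immediately visible, whereas the coefficient of variation collapses the same data to a single number.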
Abstract:
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density and for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
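For the affine case the abstract describes, the Gaussian transitional moments are exact, so quasi-maximum likelihood reduces to an ordinary Gaussian likelihood. A one-dimensional Ornstein-Uhlenbeck sketch (the simplest affine diffusion; all parameter values illustrative):

```python
import math
import random

def ou_moments(x, kappa, theta, sigma, dt):
    # Exact conditional mean and variance of the OU transition; for
    # affine drift and diffusion the Gaussian moments are exact.
    e = math.exp(-kappa * dt)
    mean = theta + (x - theta) * e
    var = sigma * sigma * (1.0 - e * e) / (2.0 * kappa)
    return mean, var

def simulate_ou(kappa, theta, sigma, dt, n, seed=0):
    # Simulate a path using the exact transition distribution.
    rng = random.Random(seed)
    x, path = theta, [theta]
    for _ in range(n):
        m, v = ou_moments(x, kappa, theta, sigma, dt)
        x = m + math.sqrt(v) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def neg_loglik(path, dt, kappa, theta, sigma):
    # Negative Gaussian log-likelihood of the observed transitions.
    nll = 0.0
    for x0, x1 in zip(path, path[1:]):
        m, v = ou_moments(x0, kappa, theta, sigma, dt)
        nll += 0.5 * math.log(2.0 * math.pi * v) + (x1 - m) ** 2 / (2.0 * v)
    return nll
```

Minimising `neg_loglik` over the parameters, by any optimiser or a simple grid, recovers the drift and diffusion parameters; for nonlinear models the same construction uses approximate moments instead.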
Abstract:
The deposition of small metal clusters (Cu, Au and Al) on f.c.c. metals (Cu, Au and Ni) has been studied by molecular dynamics simulation using the Finnis–Sinclair (FS) potential. The impact energy varied from 0.01 to 10 eV/atom. First, the deposition of a single cluster was simulated. We observed that, even at very low energy, a small cluster with icosahedral (Ih) symmetry was reconstructed to match the substrate structure (f.c.c.) after deposition. Next, clusters were modeled to drop, one after the other, onto the surface. A nanostructure was formed by soft landing of Au clusters on Cu with increasing coverage, where interfacial energy dominates. At relatively higher deposition energies (a few eV), an ordered f.c.c.-like structure was observed in the first adlayer of the film formed by Al clusters depositing on a Ni substrate; this characteristic is mainly attributable to ballistic collisions. Our results indicate that the surface morphology synthesized by cluster deposition can be controlled by experimental parameters, which will be helpful for the controlled design of nanostructures.
Abstract:
Background Commercially available instrumented treadmill systems that provide continuous measures of temporospatial gait parameters have recently become available for clinical gait analysis. This study evaluated the level of agreement between temporospatial gait parameters derived from a new instrumented treadmill, which incorporated a capacitance-based pressure array, and those measured by a conventional instrumented walkway (criterion standard). Methods Temporospatial gait parameters were estimated from 39 healthy adults while walking over an instrumented walkway (GAITRite®) and on an instrumented treadmill system (Zebris) at matched speed. Differences in temporospatial parameters derived from the two systems were evaluated using repeated measures ANOVA models. Pearson product-moment correlations were used to investigate relationships between variables measured by each system. Agreement was assessed by calculating the bias and 95% limits of agreement. Results All temporospatial parameters measured via the instrumented walkway were significantly different from those obtained from the instrumented treadmill (P < .01). Temporospatial parameters derived from the two systems were highly correlated (r = 0.79–0.95). The 95% limits of agreement for temporal parameters were typically less than ±2% of gait cycle duration. However, the 95% limits of agreement for spatial measures were as much as ±5 cm. Conclusions Differences in temporospatial parameters between systems were small but statistically significant, and of similar magnitude to changes reported between shod and unshod gait in healthy young adults. Temporospatial parameters derived from an instrumented treadmill, therefore, are not representative of those obtained from an instrumented walkway and should not be interpreted with reference to the literature on overground walking.
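The bias and 95% limits of agreement used above are the standard Bland-Altman quantities and take only a few lines to compute (the data here are illustrative):

```python
import statistics

def limits_of_agreement(a, b):
    # Bland-Altman analysis of paired measurements from two systems:
    # bias = mean difference; limits = bias +/- 1.96 * SD of differences.
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A nonzero bias with narrow limits, as reported for the temporal parameters, indicates a systematic but consistent offset between the two systems.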
Abstract:
Carrying capacity assessments model a population's potential self-sufficiency. A crucial first step in the development of such modelling is to examine the basic resource-based parameters defining the population's production and consumption habits. These parameters include basic human needs such as food, water, shelter and energy, together with climatic, environmental and behavioural characteristics. Each of these parameters imparts land-usage requirements in different ways and to varying degrees, so their incorporation into carrying capacity modelling also differs. Given that the availability and values of production parameters may differ between locations, no two carrying capacity models are likely to be exactly alike. However, the essential parameters themselves can remain consistent, so one example, the Carrying Capacity Dashboard, is offered as a case study to highlight one way in which these parameters are utilised. While examples exist of findings made from carrying capacity assessment modelling, to date guidelines for replication of such studies in other regions and at other scales have largely been overlooked. This paper addresses such shortcomings by describing a process for the inclusion and calibration of the most important resource-based parameters in a way that could be repeated elsewhere.
Abstract:
Commodity price modeling is normally approached in terms of structural time-series models, in which the different components (states) have a financial interpretation. The parameters of these models can be estimated using maximum likelihood. This approach results in a non-linear parameter estimation problem, and thus a key issue is how to obtain reliable initial estimates. In this paper, we focus on the initial parameter estimation problem for the Schwartz-Smith two-factor model commonly used in asset valuation. We propose the use of a two-step method. The first step considers a univariate model based only on the spot price and uses a transfer function model to obtain initial estimates of the fundamental parameters. The second step uses the estimates obtained in the first step to initialize a re-parameterized state-space innovations-based estimator, which includes information related to futures prices. The second step refines the estimates obtained in the first step and also gives estimates of the remaining parameters in the model. This paper is partly tutorial in nature and gives an introduction to aspects of commodity price modeling and the associated parameter estimation problem.
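The Schwartz-Smith two-factor model itself can be written down compactly: the log spot price is the sum of a mean-reverting short-term deviation and a random-walk equilibrium level. A simulation sketch using the exact discretisation is given below; the factor correlation is set to zero for simplicity and all parameter values are illustrative (the paper's two-step estimator is not reproduced):

```python
import math
import random

def simulate_schwartz_smith(kappa, sigma_chi, mu_xi, sigma_xi,
                            chi0, xi0, dt, n, seed=0):
    # log-spot ln S_t = chi_t + xi_t, where
    #   chi: mean-reverting short-term deviation (OU, exact discretisation)
    #   xi : long-term equilibrium level (arithmetic Brownian motion)
    rng = random.Random(seed)
    e = math.exp(-kappa * dt)
    sd_chi = sigma_chi * math.sqrt((1.0 - e * e) / (2.0 * kappa))
    sd_xi = sigma_xi * math.sqrt(dt)
    chi, xi = chi0, xi0
    spot = [math.exp(chi + xi)]
    for _ in range(n):
        chi = chi * e + sd_chi * rng.gauss(0.0, 1.0)
        xi = xi + mu_xi * dt + sd_xi * rng.gauss(0.0, 1.0)
        spot.append(math.exp(chi + xi))
    return spot
```

In estimation the two states are latent and only prices are observed, which is why the initialisation problem the paper addresses arises in the first place.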
Abstract:
This paper presents a comprehensive numerical procedure to treat the blast response of laminated glass (LG) panels and studies the influence of important material parameters. Post-crack behaviour of the LG panel and the contribution of the interlayer towards blast resistance are treated. Modelling techniques are validated by comparison with existing experimental results. Findings indicate that the tensile strength of glass considerably influences the blast response of LG panels, while the interlayer material properties have a major impact on the response under higher blast loads. Initially, the glass panes absorb most of the blast energy, but after the glass breaks, the interlayer deforms further and absorbs most of the blast energy. LG panels should be designed to fail by tearing of the interlayer rather than by failure at the supports in order to achieve a desired level of protection. From this aspect, the material properties of the glass, interlayer and sealant joints play important roles, but unfortunately they are not accounted for in current design standards. The new information generated in this paper will enhance the capability of engineers to better design LG panels under blast loads and to use better materials to improve their blast response.
Abstract:
Large arrays and networks of carbon nanotubes, both single- and multi-walled, feature many superior properties which offer excellent opportunities for various modern applications ranging from nanoelectronics, supercapacitors, photovoltaic cells, and energy storage and conversion devices, to gas sensors, biosensors, and nanomechanical and biomedical devices. At present, arrays and networks of carbon nanotubes are mainly fabricated from pre-fabricated, separated nanotubes by solution-based techniques. However, the intrinsic structure of the nanotubes (mainly, the level of structural defects) required for the best performance in nanotube-based applications is often damaged during array/network fabrication by the surfactants, chemicals and sonication involved in the process. As a result, the performance of the functional devices may be significantly degraded. In contrast, directly synthesized nanotube arrays/networks can preclude the adverse effects of solution-based processing and largely preserve the excellent properties of the pristine nanotubes. Owing to their advantages of scaled-up production and precise positioning of the grown nanotubes, catalytic and catalyst-free chemical vapor deposition (CVD), as well as plasma-enhanced chemical vapor deposition (PECVD), are the methods most promising for the direct synthesis of such nanotubes.