920 results for: diffusive viscoelastic model, global weak solution, error estimate


Relevance: 100.00%

Abstract:

This thesis considers two basic aspects of impact damage in composite materials, namely damage severity discrimination and impact damage location, using Acoustic Emission (AE) and Artificial Neural Networks (ANNs). The experimental work covers factors such as the application of AE as a non-destructive damage testing (NDT) technique and the evaluation of ANN modelling; ANNs played a central role in the modelling implementation. In the first part of the study, different impact energies were used to produce different levels of damage in two composite materials (T300/914 and T800/5245). The impacts were detected by their acoustic emissions. The AE waveform signals were analysed and modelled using a Back Propagation (BP) neural network, and the Mean Square Error (MSE) of the network output was then used as a damage indicator in the damage severity discrimination study. To evaluate the ANN model, the correlation coefficients of different parameters, such as MSE, AE energy and AE counts, were compared; MSE produced the best correlation performance. In the second part, a new artificial neural network model was developed to locate impact damage on a quasi-isotropic composite panel. It was successfully trained to locate impact sites by correlating the differences in arrival times of AE signals at transducers mounted on the panel with the impact site coordinates. The performance of the ANN model, evaluated by calculating the distance between the model output and the real location coordinates, supports the application of ANNs as impact damage location identifiers. The accuracy of location prediction decreased towards the central area of the panel; further investigation indicated that this is due to the small arrival time differences there, which degrade the performance of the ANN prediction. This research suggests increasing the number of processing neurons in the ANNs as a practical solution.
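The MSE damage indicator can be illustrated with a minimal sketch. In the thesis the MSE comes from a trained BP network's output; here, for brevity, it is computed directly between hypothetical AE waveforms, which are illustrative values only:

```python
# Minimal sketch of an MSE-based damage-severity indicator.
# Hypothetical waveforms; the thesis derives MSE from a trained
# Back Propagation network's output rather than a raw reference trace.
def mse(reference, recorded):
    """Mean square error between two equal-length AE waveforms."""
    if len(reference) != len(recorded):
        raise ValueError("waveforms must have equal length")
    return sum((r - s) ** 2 for r, s in zip(reference, recorded)) / len(reference)

baseline    = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0]
low_energy  = [0.0, 0.9, 0.45, -0.45, -0.9, 0.0]  # mild waveform distortion
high_energy = [0.2, 0.5, 0.1, -0.9, -0.3, 0.4]    # strong waveform distortion

# A more severe impact distorts the waveform more, giving a larger MSE.
assert mse(baseline, high_energy) > mse(baseline, low_energy)
```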

Abstract:

In this paper, we discuss how discriminative training can be applied to the hidden vector state (HVS) model in different task domains. The HVS model is a discrete hidden Markov model (HMM) in which each HMM state represents the state of a push-down automaton with a finite stack size. In previous applications, maximum-likelihood estimation (MLE) was used to derive the parameters of the HVS model. However, MLE makes a number of assumptions, some of which unfortunately do not hold. Discriminative training, which does not make such assumptions, can improve the performance of the HVS model by discriminating the correct hypothesis from the competing hypotheses. Experiments have been conducted in two domains: the travel domain, for the semantic parsing task using the DARPA Communicator data and the Air Travel Information Services (ATIS) data, and the bioinformatics domain, for the information extraction task using the GENIA corpus. The results demonstrate modest improvements in the performance of the HVS model with discriminative training. In the travel domain, discriminative training of the HVS model gives a relative error reduction in F-measure of 31 percent compared with MLE on the DARPA Communicator data and 9 percent on the ATIS data. In the bioinformatics domain, a relative error reduction in F-measure of 4 percent is achieved on the GENIA corpus.
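The reported figures of merit can be reproduced mechanically: F-measure is the harmonic mean of precision and recall, and relative error reduction compares the error (1 − F) of the two models. The scores below are illustrative, not the paper's:

```python
def f_measure(precision, recall):
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def relative_error_reduction(f_baseline, f_improved):
    """Relative reduction in error (1 - F) of an improved model over a baseline."""
    e_base, e_new = 1.0 - f_baseline, 1.0 - f_improved
    return (e_base - e_new) / e_base

# Illustrative scores only, not the paper's actual results.
f_mle  = f_measure(0.88, 0.86)   # hypothetical MLE-trained parser
f_disc = f_measure(0.92, 0.90)   # hypothetical discriminatively trained parser
print(relative_error_reduction(f_mle, f_disc))
```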

Abstract:

In this talk we investigate the use of spectrally shaped amplified spontaneous emission (ASE) to emulate highly dispersed wavelength division multiplexed (WDM) signals in an optical transmission system. Such a technique offers various simplifications to large-scale WDM experiments. Not only does it reduce transmitter complexity by removing the need for multiple source lasers, it also potentially reduces test and measurement complexity by requiring only the centre channel of a WDM system to be measured in order to estimate WDM worst-case performance. The use of ASE as a test and measurement tool is well established in optical communication systems, and several measurement techniques will be discussed [1, 2]. One of the most prevalent uses of ASE is in the measurement of receiver sensitivity, where ASE is introduced to degrade the optical signal to noise ratio (OSNR) and the resulting bit error rate (BER) is measured at the receiver. From an analytical point of view, noise has been used to emulate system performance: the Gaussian Noise model is used as an estimate of highly dispersed signals and has attracted considerable interest [3]. The work presented here extends the use of ASE by employing it to emulate highly dispersed WDM signals, and in the process reduce WDM transmitter complexity and receiver measurement time in a lab environment. Results so far [2] indicate that such a transmitter configuration is consistent with an AWGN model for transmission, with modulation format complexity and nonlinearities playing a key role in estimating the performance of systems utilising the ASE channel emulation technique. We conclude this work by investigating techniques capable of characterising the nonlinear and damage limits of optical fibres and the resulting information capacity limits.

References:
1. McCarthy, M. E., N. Mac Suibhne, S. T. Le, P. Harper, and A. D. Ellis, "High spectral efficiency transmission emulation for non-linear transmission performance estimation for high order modulation formats," 2014 European Conference on Optical Communication (ECOC), 2014.
2. Ellis, A., N. Mac Suibhne, F. Gunning, and S. Sygletos, "Expressions for the nonlinear transmission performance of multi-mode optical fiber," Opt. Express, Vol. 21, 22834-22846, 2013.
3. Vacondio, F., O. Rival, C. Simonneau, E. Grellier, A. Bononi, L. Lorcy, J. Antona, and S. Bigo, "On nonlinear distortions of highly dispersive optical coherent systems," Opt. Express, Vol. 20, 1022-1032, 2012.
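The receiver-sensitivity measurement described in the abstract (loading noise to degrade the signal-to-noise ratio, then measuring BER) can be imitated in a toy Monte-Carlo sketch. This uses baseband BPSK over AWGN rather than the actual coherent WDM formats of the talk, so it only illustrates the principle:

```python
import math
import random

def bpsk_ber_awgn(snr_db, n_bits=20000, seed=1):
    """Monte-Carlo bit error rate of BPSK over an AWGN channel.
    Loading more Gaussian (ASE-like) noise, i.e. lowering the SNR,
    raises the measured BER -- the principle behind noise-loaded
    receiver-sensitivity measurements."""
    rng = random.Random(seed)
    snr = 10.0 ** (snr_db / 10.0)         # linear SNR (Es/N0)
    sigma = math.sqrt(1.0 / (2.0 * snr))  # noise std per real dimension
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + rng.gauss(0.0, sigma)  # unit-energy symbol plus noise
        if (rx >= 0.0) != (bit > 0.0):
            errors += 1
    return errors / n_bits

# Degrading the signal-to-noise ratio raises the error rate:
print(bpsk_ber_awgn(0.0), bpsk_ber_awgn(6.0))
```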

Abstract:

Ultrasonic P wave transmission seismograms recorded on sediment cores have been analyzed to study the acoustic properties and to estimate the elastic properties of marine sediments from different provinces dominated by terrigenous, calcareous, and diatomaceous sedimentation. Instantaneous frequencies computed from the transmission seismograms are displayed as gray-shaded images to give an acoustic overview of the lithology of each core. Centimeter-scale variations in the ultrasonic waveforms associated with lithological changes are illustrated by wiggle traces in detail. Cross-correlation, multiple-filter, and spectral ratio techniques are applied to derive P wave velocities and attenuation coefficients. S wave velocities and attenuation coefficients, elastic moduli, and permeabilities are calculated by an inversion scheme based on the Biot-Stoll viscoelastic model. Together with porosity measurements, P and S wave scatter diagrams are constructed to characterize different sediment types by their velocity- and attenuation-porosity relationships. They demonstrate that terrigenous, calcareous, and diatomaceous sediments cover different velocity- and attenuation-porosity ranges. In terrigenous sediments, P wave velocities and attenuation coefficients decrease rapidly with increasing porosity, whereas S wave velocities and shear moduli are very low. Calcareous sediments behave similarly at relatively higher porosities. Foraminifera skeletons in compositions of terrigenous mud and calcareous ooze cause a stiffening of the frame accompanied by higher shear moduli, P wave velocities, and attenuation coefficients. In diatomaceous ooze the contribution of the shear modulus becomes increasingly important and is controlled by the opal content, whereas attenuation is very low. This leads to the opportunity to predict the opal content from nondestructive P wave velocity measurements at centimeter-scale resolution.
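Of the techniques listed, the spectral ratio method is the simplest to sketch: assuming attenuation grows linearly with frequency, the slope of the log spectral ratio against frequency gives the attenuation constant. Synthetic spectra with a known (assumed) constant are used here for illustration:

```python
import math

def linear_fit(x, y):
    """Least-squares slope and intercept of y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return m, my - m * mx

def attenuation_constant(freqs, amp_ref, amp_sample, path_length):
    """Spectral-ratio estimate: with attenuation linear in frequency,
    ln(A_sample/A_ref) = -k*f*x + c, so the fitted slope over frequency
    divided by -x gives k in Np/(m*Hz)."""
    log_ratio = [math.log(s / r) for s, r in zip(amp_sample, amp_ref)]
    slope, _ = linear_fit(freqs, log_ratio)
    return -slope / path_length

# Synthetic spectra with a known (assumed) attenuation constant:
k_true, x = 2.0e-7, 0.1                    # Np/(m*Hz); 10 cm path
freqs = [f * 1.0e5 for f in range(1, 9)]   # 100-800 kHz
amp_ref = [1.0] * len(freqs)
amp_sed = [math.exp(-k_true * f * x) for f in freqs]
print(attenuation_constant(freqs, amp_ref, amp_sed, x))
```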

Abstract:

It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after properly taming the large scaling corrections of the model through a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and illustrate the extrapolation of several observables to the infinite-size limit within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
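The quotients method mentioned above can be illustrated in miniature: for an observable scaling as O(L) ~ L^(x/ν) at criticality, comparing lattice sizes L and 2L yields the exponent ratio. A toy sketch with a known exponent (synthetic numbers, not simulation data):

```python
import math

def exponent_from_quotients(obs_L, obs_2L):
    """Quotients-method estimate: if an observable scales as
    O(L) ~ L**(x/nu) at the critical point, then O(2L)/O(L) -> 2**(x/nu),
    so x/nu is the base-2 logarithm of the quotient."""
    return math.log(obs_2L / obs_L, 2)

# Toy check with a known exponent ratio x/nu = 1.5, i.e. O(L) = L**1.5:
print(exponent_from_quotients(16 ** 1.5, 32 ** 1.5))
```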

Abstract:

Soil erosion by water is a major driving force of land degradation. Laboratory experiments, on-site field studies, and suspended sediment measurements have been the fundamental approaches for studying the mechanisms of soil water erosion and quantifying erosive losses during rain events. Such experimental research faces the challenge of extending its results to wider spatial scales. Soil water erosion modeling offers possible solutions to these scaling problems and is of principal importance for better understanding the governing processes of water erosion. However, soil water erosion models have been considered of limited value in practice; uncertainties in hydrological simulations are among the reasons hindering their development. Hydrological models have improved substantially in recent years, and several water erosion models have taken advantage of these improvements. It is therefore crucial to know how changes in the modeling of hydrological processes affect soil erosion simulation.

This dissertation first created an erosion modeling tool (GEOtopSed) that takes advantage of the comprehensive hydrological model GEOtop. The newly created tool was then tested and evaluated on an experimental watershed, where the GEOtopSed model showed its ability to estimate multi-year soil erosion rates under varied hydrological conditions. To investigate the impact of different hydrological representations on soil erosion simulation, an 11-year simulation experiment was conducted for six model configurations. The results were compared at multiple temporal and spatial scales to highlight the role of hydrological feedbacks in erosion. Models with simplified hydrological representations agreed with the GEOtopSed model at long temporal scales (annual and longer). This result led to an investigation of erosion simulation under different rainfall regimes, to check whether models with different hydrological representations agree on the response of soil water erosion to a changing climate. Multi-year ensemble simulations with different extreme precipitation scenarios were conducted for seven climate regions. The differences in the erosion simulation results showed the influence of hydrological feedbacks, which cannot be captured by a purely rainfall-erosivity-based method.

Abstract:

We have analyzed the stable carbon isotopic composition of the diunsaturated C37 alkenone in 29 surface sediments from the equatorial and South Atlantic Ocean. Our study area covers different oceanographic settings, including sediments from the major upwelling regions off South Africa, the equatorial upwelling, and the oligotrophic western South Atlantic. In order to examine the environmental influences on the sedimentary record, the alkenone-based carbon isotopic fractionation (εp) values were correlated with the overlying surface water concentrations of aqueous CO2 ([CO2(aq)]), phosphate, and nitrate. We found εp positively correlated with 1/[CO2(aq)] and negatively correlated with [PO4^3-] and [NO3^-]. However, the relationship between εp and 1/[CO2(aq)] is the opposite of what is expected from a [CO2(aq)]-controlled, diffusive uptake model. Instead, our findings support the theory of Bidigare et al. (1997, doi:10.1029/96GB03939) that the isotopic fractionation in haptophytes is related to nutrient-limited growth rates. The relatively high variability of the εp-[PO4] relationship in regions with low surface water nutrient concentrations indicates that other environmental factors also affect the isotopic signal there. These factors might be variations in other growth-limiting resources such as light intensity or micronutrient concentrations.
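The quantities being correlated can be sketched as follows: εp is computed from the δ13C values of dissolved CO2 and the alkenone with the standard epsilon definition, and the correlations are ordinary Pearson coefficients. The sample numbers are illustrative, not data from the 29 cores:

```python
def epsilon_p(delta_co2, delta_alkenone):
    """Carbon isotopic fractionation (per mil) between dissolved CO2
    and the alkenone, using the standard epsilon definition."""
    return ((delta_co2 + 1000.0) / (delta_alkenone + 1000.0) - 1.0) * 1000.0

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Illustrative delta-13C values (per mil), not measurements from the cores:
print(epsilon_p(-8.0, -22.0))
```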

Abstract:

We address the problem of 3D-assisted 2D face recognition in scenarios when the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learn a subspace spanned by perturbations caused by the missing modes of variation and image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference in the texture map of the 3D aligned input and reference images. A training set of these texture maps then defines a perturbation space which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, then these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on Multi-PIE and AR databases. We also present baseline face recognition results on a new data set exhibiting combined pose and illumination variations as well as occlusion.
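The perturbation-subspace idea can be sketched in miniature: collect texture-map difference vectors between aligned input and reference images, extract a PCA basis (here only the leading direction, via power iteration, for brevity), and project that perturbation component out of a new difference map. The data and dimensionality are illustrative:

```python
def dominant_direction(diff_maps, iters=200):
    """Leading PCA direction of mean-centred texture-difference vectors,
    found by power iteration on the implicit covariance X^T X."""
    n, d = len(diff_maps), len(diff_maps[0])
    mean = [sum(m[j] for m in diff_maps) / n for j in range(d)]
    X = [[m[j] - mean[j] for j in range(d)] for m in diff_maps]
    v = [1.0] * d
    for _ in range(iters):
        coeffs = [sum(row[j] * v[j] for j in range(d)) for row in X]          # X v
        w = [sum(c * row[j] for c, row in zip(coeffs, X)) for j in range(d)]  # X^T (X v)
        norm = sum(t * t for t in w) ** 0.5
        v = [t / norm for t in w]
    return v

def remove_component(vec, basis):
    """Project the perturbation component along unit vector `basis` out of `vec`."""
    dot = sum(a * b for a, b in zip(vec, basis))
    return [a - dot * b for a, b in zip(vec, basis)]

# Toy difference maps dominated by one perturbation mode (illustrative):
maps = [[3.0, 0.1, 0.0, 0.0], [-2.0, 0.0, 0.1, 0.0],
        [1.0, -0.1, 0.0, 0.0], [-2.0, 0.0, -0.1, 0.0]]
v = dominant_direction(maps)
residual = remove_component([5.0, 1.0, 0.0, 0.0], v)
```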

Abstract:

This paper estimates a measure of inflationary uncertainty. An inflation model signals uncertainty when the forecast errors are heteroskedastic. By specifying a GARCH (Generalized Autoregressive Conditional Heteroskedasticity) equation for the variance of the error term of an inflation model, it is possible to estimate a proxy for inflationary uncertainty. Simultaneous estimation of the inflation model and the GARCH equation produces a new inflation model in which the forecast errors are homoskedastic. Most economists agree that there is a positive correlation between inflationary uncertainty and the magnitude of the inflation rate, which, as Friedman (1977) pointed out, represents one of the costs associated with the persistence of inflation, because such uncertainty clouds the decision-making of consumers and investors. The empirical evidence for the period 1954:01-2002:08 confirms that, in the case of Costa Rica, inflationary uncertainty increases as inflation rises. In the last seven years of the sample (1997-2002), uncertainty shows the lowest mean variation of the whole period. In addition, inflation has an asymmetric effect on inflationary uncertainty: when the inflation forecast is below actual inflation, inflationary uncertainty increases for the next period, and the opposite happens when the forecast is above the observed rate, with the absolute change in uncertainty being greater in the first case than in the second. These results have a clear implication for monetary policy: to minimize the disruption that inflation causes to economic decision-making, it is necessary to pursue not only a low level of inflation but also a stable one.
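The GARCH-based uncertainty proxy can be sketched with the standard GARCH(1,1) variance recursion. This is a symmetric toy version: capturing the asymmetric effect reported above would require an extension such as EGARCH or TGARCH, and the coefficients below are illustrative, not the estimated model:

```python
def garch_variance(residuals, omega, alpha, beta, var0):
    """Conditional variance path of a GARCH(1,1) model:
        sigma2_t = omega + alpha * e_{t-1}**2 + beta * sigma2_{t-1}.
    The sigma2_t series is the inflationary-uncertainty proxy."""
    sig2 = [var0]
    for e in residuals[:-1]:
        sig2.append(omega + alpha * e * e + beta * sig2[-1])
    return sig2

# Illustrative coefficients and forecast errors (not the estimated model):
sig2 = garch_variance([0.1, 2.0, 0.1, 0.1],
                      omega=0.05, alpha=0.2, beta=0.7, var0=0.5)
print(sig2)  # the large forecast error raises next-period uncertainty
```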

Abstract:

In this project we developed conductive thermoplastic resins by adding varying amounts of three different carbon fillers: carbon black (CB), synthetic graphite (SG) and multi-walled carbon nanotubes (CNT) to a polypropylene matrix for application as fuel cell bipolar plates. This component of fuel cells provides mechanical support to the stack, circulates the gases that participate in the electrochemical reaction within the fuel cell and allows for removal of the excess heat from the system. The materials fabricated in this work were tested to determine their mechanical and thermal properties. These materials were produced by adding varying amounts of single carbon fillers to a polypropylene matrix (2.5 to 15 wt.% Ketjenblack EC-600 JD carbon black, 10 to 80 wt.% Asbury Carbons' Thermocarb TC-300 synthetic graphite, and 2.5 to 15 wt.% of Hyperion Catalysis International's FIBRIL™ multi-walled carbon nanotubes). In addition, composite materials containing combinations of these three fillers were produced. The thermal conductivity results showed an increase in both through-plane and in-plane thermal conductivities, with the largest increase observed for synthetic graphite. The Department of Energy (DOE) had previously set a thermal conductivity goal of 20 W/m·K, which was surpassed by formulations containing 75 wt.% and 80 wt.% SG, yielding in-plane thermal conductivity values of 24.4 W/m·K and 33.6 W/m·K, respectively. In addition, composites containing 2.5 wt.% CB, 65 wt.% SG, and 6 wt.% CNT in PP had an in-plane thermal conductivity of 37 W/m·K. Flexural and tensile tests were conducted. All composite formulations exceeded the flexural strength target of 25 MPa set by DOE. The tensile and flexural moduli of the composites increased with higher concentrations of carbon fillers. Carbon black and synthetic graphite caused a decrease in the tensile and flexural strengths of the composites, whereas carbon nanotubes increased them. Mathematical models were applied to estimate the through-plane and in-plane thermal conductivities of single- and multiple-filler formulations, and the tensile modulus of single-filler formulations. For thermal conductivity, Nielsen's model yielded accurate values when compared to experimental results obtained through the flash method. For prediction of tensile modulus, Nielsen's model yielded the smallest error between the predicted and experimental values.

The second part of this project consisted of the development of a curriculum in Fuel Cell and Hydrogen Technologies to address different educational barriers identified by the Department of Energy. Through the creation of new courses and enterprise programs in the areas of fuel cells and the use of hydrogen as an energy carrier, we introduced engineering students to the new technologies, policies and challenges that come with this alternative energy. Feedback provided by students participating in these courses and enterprise programs indicates positive acceptance of the different educational tools. Results obtained from a survey applied to students after participating in these courses showed an increase in knowledge and awareness of energy fundamentals, indicating that the modules developed in this project are effective in introducing students to alternative energy sources.
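A sketch of the Lewis-Nielsen conductivity model referenced above. The shape factor A and maximum packing fraction phi_max are filler-dependent constants; the values in the usage example are assumed for illustration, not the project's fitted ones:

```python
def nielsen_conductivity(k_matrix, k_filler, phi, A, phi_max):
    """Lewis-Nielsen model for the thermal conductivity of a filled
    composite. A is a shape factor tied to filler aspect ratio and
    phi_max the maximum packing fraction; both are filler-specific."""
    ratio = k_filler / k_matrix
    B = (ratio - 1.0) / (ratio + A)
    psi = 1.0 + phi * (1.0 - phi_max) / (phi_max ** 2)
    return k_matrix * (1.0 + A * B * phi) / (1.0 - B * psi * phi)

# Illustrative inputs: a PP-like matrix (~0.25 W/m-K) with a highly
# conductive graphite-like filler; A and phi_max are assumed values.
print(nielsen_conductivity(0.25, 300.0, 0.4, 1.5, 0.637))
```

As expected for a conductive filler, the predicted composite conductivity grows monotonically with the volume fraction phi.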

Abstract:

The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for Finite Element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm² and a slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone's surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX-20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (INUS Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data using the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany).

The 3D models of the bone's outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner, and the 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model gave a mean error of 0.19±0.16 mm, with the shaft being more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model gave a mean error of 0.10±0.03 mm, indicating that the microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from microCT data with those from clinical CT data gave an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus made it possible to validate both the outer surface of a CT based 3D model of an ovine femur and the surface of the model's medullary canal.
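The comparison metric used in the study (average distance with standard deviation between two surface geometries) can be sketched as a brute-force point-cloud comparison. Rapidform computes point-to-surface distances, so this nearest-point version is only an approximation with toy coordinates:

```python
def surface_deviation(model_points, reference_points):
    """Mean and standard deviation of the distance from each model
    vertex to its nearest reference point (brute force). Surface
    packages such as Rapidform use point-to-surface distances, so
    this is an approximation of that metric."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    d = [min(dist(p, q) for q in reference_points) for p in model_points]
    mean = sum(d) / len(d)
    std = (sum((x - mean) ** 2 for x in d) / len(d)) ** 0.5
    return mean, std

# Toy clouds: a copy shifted 0.1 mm along z deviates by 0.1 +/- 0.0 mm.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
shifted = [(x, y, z + 0.1) for (x, y, z) in ref]
print(surface_deviation(shifted, ref))
```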

Abstract:

In this paper, we outline the sensing system used for the visual pose control of our experimental car-like vehicle, the autonomous tractor. The sensing system consists of a magnetic compass, an omnidirectional camera and a low-resolution odometry system. In this work, information from these sensors is fused using complementary filters. Complementary filters provide a means of fusing information from sensors with different characteristics in order to produce a more reliable estimate of the desired variable. Here, the range and bearing of landmarks observed by the vision system are fused with odometry information and a vehicle model, providing a more reliable estimate of these states. We also present a method of combining a compass sensor with odometry and a vehicle model to improve the heading estimate.
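A single step of such a complementary filter for the heading estimate might look like the following sketch. The gain value and the use of a yaw rate derived from odometry and the vehicle model are illustrative assumptions, not the paper's exact formulation:

```python
import math

def complementary_heading(prev_heading, yaw_rate, dt, compass_heading, k=0.98):
    """One step of a complementary filter for heading: trust the
    odometry/vehicle-model prediction (integrated yaw rate) at high
    frequency and the noisy but drift-free compass at low frequency.
    The gain k is an assumed tuning value."""
    predicted = prev_heading + yaw_rate * dt
    # Wrap the innovation to (-pi, pi] so blending works across 0/2pi.
    innovation = math.atan2(math.sin(compass_heading - predicted),
                            math.cos(compass_heading - predicted))
    return predicted + (1.0 - k) * innovation

# With no motion, repeated updates pull the estimate toward the compass:
h = 0.0
for _ in range(400):
    h = complementary_heading(h, yaw_rate=0.0, dt=0.02, compass_heading=1.0)
print(h)
```

The small gain on the compass innovation is what rejects high-frequency compass noise while still correcting the slow drift of the integrated odometry.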


Abstract:

Maintenance activities in a large-scale engineering system are usually scheduled according to the lifetimes of various components in order to ensure the overall reliability of the system. Lifetimes of components can be deduced from the corresponding probability distributions, with parameters estimated from past failure data. However, failure data for the components are not always readily available, and engineers often have to be content with only primitive information from the manufacturers, such as the mean and standard deviation of the lifetime, to plan maintenance activities. In this paper, the moment-based piecewise polynomial model (MPPM) is proposed to estimate the parameters of the reliability probability distribution of a product when only the mean and standard deviation of the product lifetime are known. This method employs a group of polynomial functions to estimate the two parameters of the Weibull distribution, exploiting the mathematical relationship between the shape parameter of the two-parameter Weibull distribution and the ratio of the mean to the standard deviation. Tests are carried out to evaluate the validity and accuracy of the proposed method, with a discussion of its suitability for applications. The proposed method is particularly useful for reliability-critical systems, such as railway and power systems, in which maintenance activities are scheduled according to the expected lifetimes of the system components.
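The relationship the MPPM approximates can be sketched directly: the coefficient of variation of a two-parameter Weibull lifetime depends on the shape parameter alone, so the shape can be recovered numerically (here by bisection rather than the paper's piecewise polynomials) and the scale then follows from the mean:

```python
import math

def weibull_from_mean_std(mean, std, lo=0.05, hi=50.0, tol=1e-10):
    """Recover (shape k, scale lam) of a two-parameter Weibull from the
    lifetime mean and standard deviation. The coefficient of variation
    depends on the shape alone:
        CV**2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)**2 - 1,
    which is solved here by bisection (the paper fits piecewise
    polynomials to this relationship instead)."""
    target = (std / mean) ** 2

    def cv2(k):
        g1 = math.gamma(1.0 + 1.0 / k)
        return math.gamma(1.0 + 2.0 / k) / (g1 * g1) - 1.0

    a, b = lo, hi                 # cv2 is strictly decreasing in k
    while b - a > tol:
        m = 0.5 * (a + b)
        if cv2(m) > target:
            a = m
        else:
            b = m
    k = 0.5 * (a + b)
    lam = mean / math.gamma(1.0 + 1.0 / k)
    return k, lam

# Round trip for a Weibull with shape 2 and scale 1 (Rayleigh-like):
mean = math.gamma(1.5)
std = (math.gamma(2.0) - math.gamma(1.5) ** 2) ** 0.5
print(weibull_from_mean_std(mean, std))
```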