882 results for Simulation and prediction
Abstract:
State-of-the-art process-based models have been shown to be applicable to the simulation and prediction of coastal morphodynamics. On annual to decadal time scales, however, these models may show limitations in reproducing complex natural morphological evolution patterns, such as the movement of bars and tidal channels, e.g. the observed decadal migration of the Medem Channel in the Elbe Estuary, German Bight. Here a morphodynamic model is shown to simulate the hydrodynamics and sediment budgets of the domain to some extent, but it fails to adequately reproduce the pronounced channel migration, owing to the insufficient implementation of bank erosion processes. To allow for long-term simulations of the domain, a nudging method has been introduced to update the model-predicted bathymetries with observations: the model-predicted bathymetry is nudged towards true states in annual time steps. A sensitivity analysis of the user-defined correlation length scale, which defines the background error covariance matrix during the nudging procedure, suggests that the optimal error correlation length is similar to the grid cell size, here 80-90 m. Additionally, spatially heterogeneous correlation lengths produce more realistic channel depths than spatially homogeneous correlation lengths do. Consecutive application of the nudging method compensates for the (stand-alone) model prediction errors and corrects the channel migration pattern, with a Brier skill score of 0.78. The nudging method proposed in this study serves as an analytical approach to update model predictions towards a predefined 'true' state for the spatiotemporal interpolation of incomplete morphological data in long-term simulations.
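The update step described above can be sketched as follows: a minimal illustration assuming a Gaussian background error covariance built from the correlation length scale and an optimal-interpolation-style gain (variable names, error variances, and the 1-D grid are hypothetical, not taken from the study).

```python
import numpy as np

def gaussian_covariance(x, sigma2, L):
    """Background error covariance B with Gaussian decay over distance.
    x: 1-D grid-cell positions [m]; sigma2: background error variance;
    L: correlation length scale [m] (the study finds ~80-90 m optimal,
    close to the grid cell size)."""
    d = np.abs(x[:, None] - x[None, :])
    return sigma2 * np.exp(-0.5 * (d / L) ** 2)

def nudge(z_model, z_obs, x, L=85.0, sigma2_b=1.0, sigma2_o=0.1):
    """One annual nudging step: pull the model-predicted bathymetry
    z_model toward the observed bathymetry z_obs with gain B (B+R)^-1."""
    B = gaussian_covariance(x, sigma2_b, L)
    R = sigma2_o * np.eye(len(x))      # observation error covariance
    K = B @ np.linalg.inv(B + R)       # gain matrix
    return z_model + K @ (z_obs - z_model)

def brier_skill_score(z_pred, z_obs, z_baseline):
    """Skill of a prediction relative to a baseline (1 = perfect,
    0 = no better than the baseline); the study reports 0.78."""
    mse_pred = np.mean((z_pred - z_obs) ** 2)
    mse_base = np.mean((z_baseline - z_obs) ** 2)
    return 1.0 - mse_pred / mse_base
```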
Abstract:
A theoretical analysis of the three currently popular microscopic theories of solvation dynamics, namely, the dynamic mean spherical approximation (DMSA), the molecular hydrodynamic theory (MHT), and the memory function theory (MFT), is carried out. It is shown that in the underdamped limit of momentum relaxation, all three theories lead to nearly identical results when the translational motions of both the solute ion and the solvent molecules are neglected. In this limit, the theoretical prediction is in almost perfect agreement with the computer simulation results of solvation dynamics in the model Stockmayer liquid. However, the situation changes significantly in the presence of the translational motion of the solvent molecules. In this case, DMSA breaks down but the other two theories correctly predict the acceleration of solvation in agreement with the simulation results. We find that the translational motion of a light solute ion can play an important role in its own solvation. None of the existing theories describes this aspect. A generalization of the extended hydrodynamic theory is presented which, for the first time, includes the contribution of solute motion towards its own solvation dynamics. The extended theory gives excellent agreement with the simulations where solute motion is allowed. It is further shown that in the absence of translation, the memory function theory of Fried and Mukamel can be recovered from the hydrodynamic equations if the wave vector dependent dissipative kernel in the hydrodynamic description is replaced by its long wavelength value. We suggest a convenient memory kernel which is superior to the limiting forms used in earlier descriptions. We also present an alternative, quite general, statistical mechanical expression for the time dependent solvation energy of an ion. This expression bears a remarkable similarity to that for the translational dielectric friction on a moving ion.
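For reference, the central quantity such theories predict is the normalized solvation time correlation function, in its standard definition (a textbook convention, not a formula reproduced from this paper):

```latex
S(t) \;=\; \frac{E_{\mathrm{solv}}(t) - E_{\mathrm{solv}}(\infty)}
                {E_{\mathrm{solv}}(0) - E_{\mathrm{solv}}(\infty)},
\qquad S(0) = 1, \quad S(\infty) = 0,
```

where \(E_{\mathrm{solv}}(t)\) is the time-dependent solvation energy of the ion following an instantaneous change in its charge distribution; the theories above differ in how they describe the relaxation of \(S(t)\).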
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate it at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
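The importance sampling/photon splitting procedure mentioned above can be illustrated with a toy sketch: scattering angles are drawn from a biased, detector-favouring distribution, the photon weight is corrected by the true-to-biased likelihood ratio, and each event spawns several daughter photons sharing that weight. The sketch assumes a Henyey-Greenstein phase function and omits geometry, coherence gating, and the voxel mesh; it is not the thesis' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hg_cosine(g):
    """Draw a scattering cosine from the Henyey-Greenstein distribution
    with anisotropy g (standard inverse-CDF formula, g != 0)."""
    u = rng.random()
    return (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u))**2) / (2 * g)

def hg_pdf(cos_t, g):
    """Henyey-Greenstein phase function, normalized over cos(theta)."""
    return 0.5 * (1 - g**2) / (1 + g**2 - 2 * g * cos_t)**1.5

def scatter(weight, g_true=0.9, g_bias=0.99, n_split=4):
    """One importance-sampled scattering event with photon splitting:
    sample from the biased distribution, correct each daughter's weight
    by the likelihood ratio, and split the photon n_split ways."""
    daughters = []
    for _ in range(n_split):
        cos_t = sample_hg_cosine(g_bias)                    # biased draw
        w = weight * hg_pdf(cos_t, g_true) / hg_pdf(cos_t, g_bias)
        daughters.append((cos_t, w / n_split))
    return daughters
```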
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) roughly 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); then the image is handed to a regression model trained specifically for that particular structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
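A minimal sketch of this classify-then-regress hierarchy, using off-the-shelf scikit-learn models as stand-ins for the committee of experts (the class name and model choices are illustrative assumptions, not the thesis' architecture):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class TwoStageReconstructor:
    """Stage 1 classifies the structure type of an OCT image; stage 2
    hands the image to a regressor trained only on simulated
    (image, truth) pairs of that structure type."""

    def __init__(self, structure_labels):
        self.classifier = RandomForestClassifier()
        self.regressors = {s: RandomForestRegressor() for s in structure_labels}

    def fit(self, X, structure, layer_widths):
        # X: (n_samples, n_pixels) flattened OCT images;
        # structure: (n_samples,) structure-type labels;
        # layer_widths: (n_samples, n_layers) ground-truth targets.
        self.classifier.fit(X, structure)
        for s, reg in self.regressors.items():
            mask = structure == s
            reg.fit(X[mask], layer_widths[mask])

    def predict(self, x_image):
        x = np.asarray(x_image).reshape(1, -1)
        s = self.classifier.predict(x)[0]           # which structure?
        return s, self.regressors[s].predict(x)[0]  # its layer widths
```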
It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but now it becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model fed with them yields precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
A new scalable Monotonically Integrated Large Eddy Simulation (MILES) method based on the Compact Accurately Boundary-Adjusting high-REsolution Technique (CABARET) has been applied to the simulation of unsteady flow around a NACA0012 airfoil at Re = 400,000 and M = 0.058. The flow solution is coupled with the Ffowcs Williams-Hawkings formulation for far-field noise prediction. The computational modeling results are presented for several computational grid resolutions (8, 16, and 32 million grid cells) and compared with the available experimental data.
Abstract:
This paper introduces ART-EMAP, a neural architecture that uses spatial and temporal evidence accumulation to extend the capabilities of fuzzy ARTMAP. ART-EMAP combines supervised and unsupervised learning and a medium-term memory process to accomplish stable pattern category recognition in a noisy input environment. The ART-EMAP system features (i) distributed pattern registration at a view category field; (ii) a decision criterion for mapping between view and object categories which can delay categorization of ambiguous objects and trigger an evidence accumulation process when faced with a low confidence prediction; (iii) a process that accumulates evidence at a medium-term memory (MTM) field; and (iv) an unsupervised learning algorithm to fine-tune performance after a limited initial period of supervised network training. ART-EMAP dynamics are illustrated with a benchmark simulation example. Applications include 3-D object recognition from a series of ambiguous 2-D views.
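A toy sketch of the evidence accumulation idea, not the actual ART-EMAP dynamics: object-category predictions from successive ambiguous 2-D views are integrated at a medium-term memory field, and the system commits to a category only once a confidence criterion is met (the threshold and decay values below are hypothetical).

```python
import numpy as np

def accumulate_evidence(view_predictions, threshold=0.8, decay=0.9):
    """view_predictions: sequence of probability vectors over object
    categories, one per 2-D view. Returns the chosen category index
    and the final confidence vector."""
    mtm = np.zeros_like(view_predictions[0], dtype=float)
    for p in view_predictions:
        mtm = decay * mtm + p            # leaky integration at the MTM field
        conf = mtm / mtm.sum()
        if conf.max() >= threshold:      # decision criterion satisfied
            return int(conf.argmax()), conf
    return int(conf.argmax()), conf      # forced choice after all views
```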
Abstract:
Models and software products have been developed for the modelling, simulation and prediction of different correlations in materials science, including: 1. the correlation between processing parameters and properties in titanium alloys and γ-titanium aluminides; 2. time–temperature–transformation (TTT) diagrams for titanium alloys; 3. corrosion resistance of titanium alloys; 4. surface hardness and microhardness profiles of nitrocarburised layers; 5. fatigue stress-life (S–N) diagrams for Ti–6Al–4V alloys. The programs are based on trained artificial neural networks. For each particular case an appropriate combination of inputs and outputs is chosen, and very good model performance is achieved. Graphical user interfaces (GUIs) are created for easy use of the models, and interactive text versions have also been developed. The models are combined and integrated into a software package built in a modular fashion. The software products are available in versions for different platforms, including Windows 95/98/2000/NT, UNIX and Apple Macintosh. A description of the software products is given to demonstrate that they are convenient and powerful tools for practical applications in solving various problems in materials science. Examples of the optimisation of alloy compositions, processing parameters and working conditions are illustrated, and an option for using the software in a materials selection procedure is described.
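To illustrate the kind of trained-network mapping such a package provides, here is a minimal sketch with scikit-learn; the inputs, outputs and training data are entirely hypothetical, and the original work uses its own network implementations and data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical rows: [Al wt%, V wt%, ageing temperature (C), ageing time (h)];
# hypothetical target: tensile strength (MPa).
X = np.array([[6.0, 4.0, 500.0, 2.0],
              [6.0, 4.0, 550.0, 4.0],
              [5.5, 3.5, 600.0, 8.0]])
y = np.array([950.0, 1010.0, 980.0])

# Small feed-forward network mapping processing parameters to a property.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[6.0, 4.0, 525.0, 3.0]]))   # query an unseen condition
```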
Abstract:
Knowledge of the life span of the riveting dies used in the automotive industry is sparse. Often, workers become aware that a tool needs to be changed only once faulty products are being produced, which is costly in terms of both time and money. Responding to this challenge, this paper proposes a methodology which integrates wear and stress analysis to quantify the life of a riveting die. Experiments are carried out to measure the applied load required to split a rivet. The obtained results (i.e. force curves) are used to validate the wear mechanisms of the die observed using scanning electron microscopy. Sliding, impact, and adhesive wear are observed on the riveting die after a certain number of riveting cycles. The stress distribution on the die during riveting is simulated using a finite element (FE) approach. To confirm the accuracy of the FE model, the experimental force results are compared with those produced by the FE simulation. The maximum and minimum von Mises stresses generated from the FE model are input into a Goodman diagram and an S-N curve to compute the life of the riveting die. The riveting die is predicted to run for 4 980 000 cycles before failure.
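The final life-estimation step can be sketched as follows, assuming the usual Goodman mean-stress correction combined with a Basquin-type S-N curve (the material constants below are placeholders, not the paper's values):

```python
def goodman_equivalent_amplitude(s_max, s_min, s_uts):
    """Fold the stress cycle's mean into an equivalent fully reversed
    amplitude via the Goodman relation: s_ar = s_a / (1 - s_m / s_uts)."""
    s_a = 0.5 * (s_max - s_min)   # alternating stress
    s_m = 0.5 * (s_max + s_min)   # mean stress
    return s_a / (1.0 - s_m / s_uts)

def basquin_life(s_ar, A, b):
    """Cycles to failure from a Basquin-type S-N curve s_ar = A * N**b."""
    return (s_ar / A) ** (1.0 / b)

# Example with placeholder constants (MPa); the paper's FE-derived
# von Mises extremes and die material data would be used instead.
s_ar = goodman_equivalent_amplitude(s_max=1200.0, s_min=300.0, s_uts=2000.0)
cycles = basquin_life(s_ar, A=3500.0, b=-0.1)
```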
Abstract:
The objective of this Ph.D. thesis is to lay the basis of a comprehensive link analysis procedure that may form a general reference scheme for the future state of the art in RF/microwave link design. It is conceived as a circuit-level simulation of an entire radio link, with (generally multiple) transmitting and receiving antennas characterized by EM analysis, so that the influence of mutual couplings on the frequency-dependent near-field and far-field performance of each element is fully accounted for. The set of transmitters is treated as a single nonlinear system loaded by the multiport antenna and is analyzed by nonlinear circuit techniques. To establish the connection between transmitters and receivers, the far fields incident on the receivers are evaluated by EM analysis and combined by extending an available Ray Tracing technique to the link study. EM theory is used to describe the receiving array as a linear active multiport network. Link performance in terms of bit error rate (BER) is finally verified a posteriori by a fast system-level algorithm. To validate the proposed approach, four heterogeneous application contexts are provided. A complete MIMO link design in a realistic propagation scenario constitutes the reference case study. The second concerns the design, optimization and testing of various types of rectennas for power generation from common RF sources. The remaining cases cover the design and implementation of two types of radio identification tags, at X-band and V-band respectively. In all cases, thorough nonlinear/electromagnetic co-simulation and co-design is shown to be essential for accurate system performance prediction.
Abstract:
One of the most dangerous situations for a tractor driver is lateral rollover under operating conditions. Several accidents involving tractor rollover have indeed occurred, requiring the design of a robust Roll-Over Protective Structure (ROPS). The aim of the thesis was to evaluate tractor behaviour during the rollover phase so as to calculate the energy absorbed by the ROPS to ensure driver safety. A mathematical model representing the behaviour of a generic tractor during a lateral rollover is proposed, with the possibility of modifying the geometry, the inertia of the tractor and the environmental boundary conditions. The purpose is to define a method allowing the prediction of the elasto-plastic behaviour of the successive impacts occurring during the rollover phase. A tyre impact model capable of analysing the influence of the wheels on the energy to be absorbed by the ROPS has also been developed. Different tractor design parameters affecting rollover behaviour, such as mass and dimensions, have been considered, permitting the evaluation of their influence on the amount of energy to be absorbed by the ROPS. The mathematical model was designed and calibrated against the results of actual lateral upset tests carried out on a narrow-track tractor. The dynamic behaviour of the tractor and the energy absorbed by the ROPS obtained from the actual tests were shown to match the results of the model. The proposed approach represents a valuable tool for understanding the dynamics (kinetic energy) and kinematics (position, velocity, angular velocity, etc.) of the tractor during the lateral rollover phases and the factors mainly affecting the event. The amount of energy to be absorbed in certain accident cases can be predicted with good accuracy, which can help in designing protective structures or active safety devices.
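As a rough illustration of the energy bookkeeping such a model performs, consider a toy rigid-body estimate of the tractor's kinetic energy at ROPS impact (the formula and all figures are illustrative assumptions, not the thesis model, which tracks successive elasto-plastic impacts and tyre effects):

```python
def impact_energy(m, I, v, omega):
    """Translational plus rotational kinetic energy [J] at impact;
    the ROPS must absorb whatever share of this energy is not
    dissipated by soil and tyre deformation."""
    return 0.5 * m * v**2 + 0.5 * I * omega**2

# Example: 2500 kg narrow-track tractor, roll inertia 1200 kg m^2,
# 1.5 m/s lateral velocity, 1.2 rad/s roll rate (all illustrative).
energy = impact_energy(m=2500.0, I=1200.0, v=1.5, omega=1.2)
```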
Abstract:
Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve the empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling suggested that the exhaust gas recirculation flow rate was significantly underestimated and the volumetric efficiency overestimated by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it was shown that the fuel-oxygen ratio estimated by the electronic control module was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine-operating-parameter model input space into a more fundamental, lower-dimensional space so that a nearest-neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It predicted federal test procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs was observed in the transformed space as compared to the engine operating parameter space. This more uniform, smaller model input space may explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers with respect to the steady-state training data.
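The nearest-neighbour step might look as follows; the transform below is a placeholder guess at a "fundamental" coordinate such as fuel-oxygen ratio, not the paper's actual mapping.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def to_fundamental_space(params):
    """Placeholder transform from raw operating parameters
    [speed, fuelling, EGR fraction, boost] to lower-dimensional,
    more fundamental combustion coordinates."""
    speed, fuel, egr, boost = params.T
    fuel_o2 = fuel / ((1.0 - egr) * boost + 1e-9)   # illustrative only
    return np.column_stack([fuel_o2, speed])

knn = KNeighborsRegressor(n_neighbors=3)
# With X_ss, y_ss as steady-state operating points and measured smoke:
#   knn.fit(to_fundamental_space(X_ss), y_ss)
#   pm_pred = knn.predict(to_fundamental_space(X_transient))
```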
Abstract:
The aim of this thesis is the subjective and objective evaluation of angle-dependent absorption coefficients. As the assumption of an absorption coefficient constant over the angle of incidence does not always hold, a new model acknowledging angle-dependent reflection must be considered to obtain a more accurate prediction of the sound field. The study provides information about the behaviour of different materials in several rooms, depending on how the reflection of incident sound waves is modelled. Because of the difficulties of measurement, and hence the lack of data, angle-dependent absorption coefficients are often neglected in simulations, and there is as yet no established practice of applying them to improve reflection models; only a few methods exist for measuring angle-dependent absorption satisfactorily, the current techniques are time-consuming, and some materials, conditions and angles cannot be reproduced and therefore cannot be measured. In the present study, by contrast, the angles of incidence of the sound waves are known, and the absorption coefficients of each material are stored in a database so that the coefficient for a given angle can be returned whenever the user requires it.
An objective evaluation was run for an implementation of angle-dependent reflection factors in the image source and ray tracing simulation models, and the results were analysed by comparison with diffuse-field averaged data. For the simulations, a set of materials was created by the author from data in the literature and manufacturers' catalogues: the Komatsu and Mechel models served as references for porous materials (configured via airflow resistivity or thickness) and for perforated panels (configured via hole radius and centre-to-centre distance), respectively. These materials were placed on the wall opposite the one assumed to host the sound source, while the remaining surfaces were modelled with one common material whose absorption and/or scattering coefficient was varied. Several rooms were modelled to reproduce the different test scenarios.
However, changes in the acoustic characteristics of a room do not always translate into a change in the listener's perception. An additional subjective evaluation therefore allowed a comparison between the computer simulation results and the responses of the individuals who took part in a listening test. The test was designed following a three-alternative forced-choice (3AFC) paradigm, with thirty-two different questions. In each trial the subjects heard a sequence of three signals, two of them identical; the signals were either pink noise bursts or natural signals (a fragment of a classical piece played on the piano). The question blocks were randomized before each run, and the mix differed from trial to trial so that subjects never repeated the same test, avoiding bias from learning effects; the initial order was recorded so that the results could be reordered and stored afterwards. Twenty-three people, all with a background in acoustics, took the test in a suitable environment after receiving a sheet of instructions.
The objective results show the averaged data describing the behaviour of the different materials according to the reflection model used in each case study; the tables provided in the thesis report reverberation time, clarity and early decay time, and the room characteristics obtained in this analysis depend strongly on the absorption coefficients of the materials covering the room surfaces. In the subjective results, the subjects' mean ability to distinguish the signals fell significantly below the threshold marked by the inflection point of the psychometric function, although most individuals tended to detect some difference between the stimuli presented in the 3AFC test. In conclusion, the hypothesis that angle-dependent absorption coefficient values differ is confirmed, but the effects are only slightly audible, and only when the material properties are exaggerated; if the coefficient varies within the small intervals used in the simulation, subjects perceive no variation. These first results on angle dependence motivate further work: simulations with different room types and irregular geometries, additional materials for more reliable results, inclusion of an angle-dependent scattering coefficient, and listening tests with a broader set of subjects, including people without training in acoustic engineering.
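The angle-dependent lookup described above might be implemented along these lines (a minimal sketch; the class, table values, and interpolation scheme are illustrative assumptions):

```python
import numpy as np

class AngleDependentMaterial:
    """Tabulated absorption coefficient over incidence angle, queried
    by the image-source or ray-tracing model at each reflection."""

    def __init__(self, angles_deg, alphas):
        self.angles = np.asarray(angles_deg, dtype=float)
        self.alphas = np.asarray(alphas, dtype=float)

    def absorption(self, theta_deg):
        """Linear interpolation of alpha at the incidence angle."""
        return np.interp(theta_deg, self.angles, self.alphas)

    def reflection_factor(self, theta_deg):
        """Pressure reflection factor magnitude from the energy
        balance |R| = sqrt(1 - alpha(theta))."""
        return np.sqrt(1.0 - self.absorption(theta_deg))

# Illustrative porous absorber: absorption grows toward grazing incidence.
mat = AngleDependentMaterial([0, 30, 60, 85], [0.25, 0.30, 0.45, 0.60])
r = mat.reflection_factor(47.0)
```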
Abstract:
Sandwich panels of laminated gypsum and rock wool have shown extensive cracking pathology caused by excessive slab deflection. Currently the most widespread use of this material is as vertical division or partition elements with no structural function, which explains why there are no studies on its fracture mechanism and the related mechanical properties. Therefore, and in order to reduce the cracking problem, it is necessary to make progress in the simulation and prediction of the behaviour of such panels under tensile and shear loads, even though in typical applications they have no structural responsibility.