918 results for COMPUTER-SIMULATION
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and healthy conditions for building occupants. Yet there is still no consensus on how to assess what constitutes good daylighting design. Among current building performance guidelines, daylight factors (DF) or minimum illuminance values are the standard; however, previous research has shown the shortcomings of these metrics. New computer software for daylighting analysis implements more advanced, climate-based daylight metrics (CBDM). Yet these tools (both the new metrics and the simulation software) are not currently well understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most relevant to industry. The purpose of this paper is to assess and compare these computer simulation tools and the newer tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that import a wide variety of file formats or can be integrated into current 3D modelling packages. Such software needs to be able to calculate both point-in-time simulations and annual analyses. There is a current need for an open-source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Plug-in based software under development is trying to meet this need through third-party analysis, although some of these packages are heavily reliant on their host program. These plug-ins nevertheless allow dynamic daylighting simulation, making it easier to calculate accurate daylighting regardless of the modelling platform the designer uses, while producing more tangible analyses without the need to process raw data.
Abstract:
Molecular-level computer simulations of restricted water diffusion can be used to develop models for relating diffusion tensor imaging measurements of anisotropic tissue to microstructural tissue characteristics. The diffusion tensors resulting from these simulations can then be analyzed in terms of their relationship to the structural anisotropy of the model used. As the translational motion of water molecules is essentially random, their dynamics can be effectively simulated using computers. In addition to modeling water dynamics and water-tissue interactions, the simulation software of the present study was developed to automatically generate collagen fiber networks from user-defined parameters. This flexibility provides the opportunity for further investigations of the relationship between the diffusion tensor of water and morphologically different models representing different anisotropic tissues.
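As a rough illustration of the simulation-to-tensor pipeline described above, the following Python sketch performs a random walk of water molecules between impermeable parallel walls and estimates the apparent diffusion tensor via the Einstein relation. All parameters (free diffusivity, wall spacing, step counts) are illustrative assumptions, not values from the study.

```python
import numpy as np

# Assumed parameters: free water diffusivity, timestep, and wall spacing.
D0 = 2.3e-3        # um^2/us (~2.3e-9 m^2/s, free water at body temperature)
dt = 1.0           # us
n_steps = 2000
n_walkers = 5000
spacing = 10.0     # um; impermeable walls at x = 0 and x = spacing

sigma = np.sqrt(2.0 * D0 * dt)           # rms step per axis
pos = np.random.uniform(0, spacing, (n_walkers, 3))
pos[:, 1:] = 0.0                         # y, z unbounded; start at origin
start = pos.copy()

for _ in range(n_steps):
    pos += np.random.normal(0.0, sigma, pos.shape)
    # Reflect walkers off the walls restricting the x direction.
    pos[:, 0] = np.abs(pos[:, 0])
    over = pos[:, 0] > spacing
    pos[over, 0] = 2 * spacing - pos[over, 0]

disp = pos - start
T = n_steps * dt
# Einstein relation: D_ij = <dx_i dx_j> / (2 T).
# Restriction along x should make D_xx smaller than D_yy and D_zz.
D = disp.T @ disp / (2.0 * T * n_walkers)
print("apparent diffusion tensor (um^2/us):\n", D)
```

Replacing the parallel walls with a generated fiber network would reproduce the kind of structural-anisotropy study the abstract describes.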
Resumo:
Considering ultrasound propagation through complex composite media as an array of parallel sonic rays, a comparison of computer-simulated predictions with experimental data has previously been reported for transmission mode (where one transducer serves as transmitter, the other as receiver) in a series of ten acrylic step-wedge samples, immersed in water, exhibiting varying degrees of transit time inhomogeneity. In this study, the same samples were used but in pulse-echo mode, where the same ultrasound transducer served as both transmitter and receiver, detecting both ‘primary’ (internal sample interface) and ‘secondary’ (external sample interface) echoes. A transit time spectrum (TTS) was derived, describing the proportion of sonic rays with a particular transit time. A computer simulation was performed to predict the transit time and amplitude of the various echoes created, and compared with experimental data. Applying an amplitude-tolerance analysis, 91.7±3.7% of the simulated data were within ±1 standard deviation (SD) of the experimentally measured amplitude-time data. Correlation of predicted and experimental transit time spectra provided coefficients of determination (R²) ranging from 96.8% to 100.0% for the various samples tested. The results acquired from this study provide good evidence for the concept of parallel sonic rays. Further, deconvolution of experimental input and output signals has been shown to provide an effective method to identify echoes otherwise lost due to phase cancellation. Potential applications of pulse-echo ultrasound transit time spectroscopy (PE-UTTS) include improvement of ultrasound image fidelity by improving spatial resolution and reducing phase-interference artefacts.
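To make the parallel-ray idea concrete, here is a minimal Python sketch that computes a transit time spectrum for a step-wedge immersed in water. The sound speeds, geometry, and ray counts are generic illustrative values, not the paper's samples or its echo-amplitude model.

```python
import numpy as np

# Illustrative values: an acrylic step wedge in water, insonified by a plane
# wave decomposed into independent parallel rays (one-way transmission).
c_water, c_acrylic = 1480.0, 2730.0              # sound speeds, m/s
total_path = 0.05                                # transducer-to-receiver distance, m
step_thicknesses = np.linspace(0.0, 0.02, 10)    # wedge step thicknesses, m
rays_per_step = 100

transit_times = []
for t_acr in step_thicknesses:
    t_wat = total_path - t_acr
    tt = t_wat / c_water + t_acr / c_acrylic     # transit time of this ray, s
    transit_times += [tt] * rays_per_step

# Transit time spectrum: proportion of rays per transit-time bin.
hist, edges = np.histogram(np.array(transit_times) * 1e6, bins=50)
tts = hist / hist.sum()
for p, lo, hi in zip(tts, edges[:-1], edges[1:]):
    if p > 0:
        print(f"{lo:6.2f}-{hi:6.2f} us : {p:.3f}")
```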
Abstract:
We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework by using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both the gradient and the Hessian of it. In the process, we also develop various new estimators for the gradient and Hessian. All our algorithms use two simulations each. Two of these algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and show numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm is seen to show the best overall performance.
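For readers unfamiliar with the SPSA building block the algorithms rest on, the following Python sketch shows a generic two-simulation SPSA gradient estimate and descent loop. It is the textbook form, not the paper's constrained Lagrangian algorithms; `simulate` stands in for a long-run average cost and is replaced here by a noisy toy quadratic.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    """Placeholder for a long-run average cost estimated from a simulation."""
    return float(np.sum(theta**2) + 0.01 * rng.normal())

def spsa_gradient(theta, c):
    """Two-simulation SPSA gradient estimate."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
    y_plus = simulate(theta + c * delta)
    y_minus = simulate(theta - c * delta)
    return (y_plus - y_minus) / (2.0 * c * delta)       # elementwise 1/delta_i

theta = np.array([1.0, -2.0])
for k in range(1, 201):
    a_k = 0.1 / k                # diminishing step sizes
    c_k = 0.1 / k**0.25          # diminishing perturbation sizes
    theta -= a_k * spsa_gradient(theta, c_k)
print("theta after 200 iterations:", theta)
```

The key property is that only two simulations are needed per iteration regardless of the dimension of theta, which is what makes SPSA (and the paper's Newton variants) attractive for simulation optimization.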
Abstract:
Biomedical engineering solutions like surgical simulators need High Performance Computing (HPC) to achieve real-time performance. Graphics Processing Units (GPUs) offer HPC capabilities at low cost and low power consumption. In this work, it is demonstrated that a liver discretized by about 2500 finite element nodes can be graphically simulated in real time using a GPU. The present work takes into account the time needed for data transfer from CPU to GPU and back from GPU to CPU. Although the behaviour of the liver is very complicated, the present computer simulation assumes linear elastostatics. The commercial software ANSYS is used to obtain the global stiffness matrix of the liver. Results show that GPUs are useful for the real-time graphical simulation of the liver, which in turn is needed in simulators used for training surgeons in laparoscopic surgery. Although the computer simulation should also involve rendering, neither rendering, nor the time needed for rendering and displaying the liver on a screen, is considered in the present work. The present work is a demonstration of concept only; the concept is not fully implemented and validated. Future work is to develop software that accomplishes real-time, realistic graphical simulation of the liver, with the rendered image changing in real time according to the position of the surgical tool tip, approximated as the mouse cursor in 3D.
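A minimal sketch of the computational core described above, assuming CuPy and a CUDA-capable GPU: solve the linear elastostatic system K u = f on the GPU while timing the CPU-to-GPU transfer, the solve, and the transfer back, as the work's accounting requires. The stiffness matrix here is a random SPD stand-in, not an ANSYS export.

```python
import numpy as np
import cupy as cp

n = 7500                      # e.g. ~2500 nodes x 3 DOF each (assumption)
A = np.random.rand(n, n).astype(np.float32)
K = A @ A.T + n * np.eye(n, dtype=np.float32)   # SPD stand-in for a stiffness matrix
f = np.random.rand(n).astype(np.float32)        # nodal load vector

start = cp.cuda.Event()
stop = cp.cuda.Event()
start.record()
K_gpu = cp.asarray(K)         # CPU -> GPU transfer
f_gpu = cp.asarray(f)
u_gpu = cp.linalg.solve(K_gpu, f_gpu)           # displacement solve on the GPU
u = cp.asnumpy(u_gpu)         # GPU -> CPU transfer
stop.record()
stop.synchronize()
print("total time incl. transfers:",
      cp.cuda.get_elapsed_time(start, stop), "ms")
```

In a real simulator the factorization of K would be precomputed and only the load vector would change per frame, which is what makes real-time rates feasible.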
Abstract:
A mathematical model has been developed for the gas carburising (diffusion) process using the finite volume method. A computer simulation has been carried out for an industrial gas carburising process. The model's predictions are in good agreement with industrial experimental data and with data collected from the literature. A study of various mass-transfer and diffusion coefficients has been carried out in order to suggest which correlations should be used for the gas carburising process. The model has been given a graphical user interface in a Windows environment, making it extremely user friendly. A sensitivity analysis of various parameters, such as the initial carbon concentration in the specimen, the carbon potential of the atmosphere and the process temperature, has been carried out using the model.
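The following Python sketch shows the kind of 1D finite-volume computation the abstract describes: carbon diffuses into steel while the surface exchanges carbon with the atmosphere through a mass-transfer coefficient. The diffusivity, mass-transfer coefficient, and boundary values are typical textbook magnitudes, assumed for illustration rather than taken from the paper's correlations.

```python
import numpy as np

L = 2e-3            # modelled depth, m
N = 100             # control volumes
dx = L / N
D = 2.5e-11         # carbon diffusivity in austenite, m^2/s (typical ~930 C)
beta = 1.2e-7       # surface mass-transfer coefficient, m/s (assumed)
C_p = 1.0           # carbon potential of the atmosphere, wt%
C = np.full(N, 0.2) # initial carbon content of the specimen, wt%

dt = 0.4 * dx**2 / D                     # explicit stability limit
for _ in range(int(3600 / dt)):          # one hour of carburising
    flux = np.zeros(N + 1)               # fluxes at control-volume faces
    flux[0] = beta * (C_p - C[0])        # surface boundary: gas-metal exchange
    flux[1:-1] = D * (C[:-1] - C[1:]) / dx
    flux[-1] = 0.0                       # insulated core
    C += dt * (flux[:-1] - flux[1:]) / dx

print("surface carbon: %.3f wt%%, at 0.5 mm depth: %.3f wt%%"
      % (C[0], C[int(0.5e-3 / dx)]))
```

Swapping in different correlations for D and beta is exactly the sensitivity study the abstract mentions.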
Abstract:
We present a computer simulation study of two-dimensional infrared (2D-IR) spectroscopy of water confined in reverse micelles (RMs) of various sizes. The study is motivated by the need to understand the altered dynamics of confined water; a layerwise decomposition of the water is performed to quantify the relative contributions of water molecules in different layers to the calculated 2D-IR spectrum. The 0-1 transition spectra clearly show substantial elongation along the diagonal in the surface water layer of RMs of different sizes, due to inhomogeneous broadening and incomplete spectral diffusion. Fitting of the frequency fluctuation correlation functions reveals that the motion of the surface water molecules is sub-diffusive, indicating the constrained nature of their dynamics. This is further supported by the two-peak nature of the angular analogue of the van Hove correlation function. With increasing system size, the water molecules become more diffusive, and spectral diffusion is almost complete in the central layer of the larger RMs. Comparisons between experiments and simulations establish the correspondence between the spectral decomposition available in experiments and the spatial decomposition available in simulations. Simulations also allow a quantitative exploration of the relative roles of water, sodium ions, and sulfonate head groups in vibrational dephasing. Interestingly, the negative cross-correlation between the forces on the oxygen and hydrogen of an O-H bond, present in bulk water, significantly decreases in the surface layer of each RM. This negative cross-correlation gradually increases in the central water pool with increasing RM size, and this is found to be partly responsible for the faster relaxation of water in the central pool. (C) 2013 AIP Publishing LLC.
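The central object fitted in the abstract is the frequency fluctuation correlation function (FFCF), C(t) = <δω(0) δω(t)> with δω = ω - <ω>. The Python sketch below computes and fits an FFCF from a frequency trajectory; a synthetic Ornstein-Uhlenbeck process with an assumed 1 ps decorrelation time stands in for the simulated O-H stretch frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                 # ps per frame
n = 20000
tau_true = 1.0            # ps, decorrelation time of the toy process

# Ornstein-Uhlenbeck toy trajectory standing in for simulated frequencies.
w = np.empty(n)
w[0] = 0.0
a = np.exp(-dt / tau_true)
for i in range(1, n):
    w[i] = w[i - 1] * a + rng.normal() * np.sqrt(1.0 - a * a)

dw = w - w.mean()
max_lag = 500
C = np.array([np.mean(dw[:n - k] * dw[k:]) for k in range(max_lag)])
C /= C[0]                 # normalized FFCF

t = np.arange(max_lag) * dt
mask = C > 0.05           # crude single-exponential fit (log-linear regression)
tau_fit = -1.0 / np.polyfit(t[mask], np.log(C[mask]), 1)[0]
print(f"fitted decorrelation time: {tau_fit:.2f} ps (true {tau_true} ps)")
```

For surface-layer water the abstract reports sub-diffusive, constrained dynamics, which would show up here as a much slower, non-exponential decay of C(t).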
Abstract:
The physical vapor transport (PVT) method is widely used to grow large single SiC crystals. The growth process involves heat and mass transport in the growth chamber, chemical reactions among multiple species, and phase change at the crystal/gas interface. The current paper aims at studying and verifying the transport mechanism and the growth kinetics model by examining the flow field and species concentration distribution in the growth system. We have developed a coupled model which takes into account both mass transport and growth kinetics. Numerical simulation is carried out using in-house software based on the finite volume method. The calculated results are in good agreement with experimental observations.
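As a back-of-the-envelope companion to the coupled model, the sketch below evaluates a Hertz-Knudsen-type interface flux, a standard form for PVT growth kinetics. The temperature, sticking coefficient, and partial pressures are assumptions for illustration; the paper's finite-volume model couples these kinetics to the full transport solution.

```python
import math

# Hertz-Knudsen flux of the growth species toward the seed:
#   J = alpha * (p - p_eq) / sqrt(2 * pi * m * kB * T)
kB = 1.380649e-23            # Boltzmann constant, J/K
T = 2500.0 + 273.15          # assumed seed temperature, K
m = 40.1e-3 / 6.022e23       # kg per SiC formula unit (~40 g/mol)
alpha = 0.1                  # assumed sticking coefficient
p, p_eq = 120.0, 100.0       # assumed actual/equilibrium partial pressures, Pa

J = alpha * (p - p_eq) / math.sqrt(2.0 * math.pi * m * kB * T)  # 1/(m^2 s)
rho = 3210.0                 # density of SiC, kg/m^3
growth_rate = J * m / rho    # m/s
print(f"growth rate: {growth_rate * 3.6e6:.2f} mm/h")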
Abstract:
A correlative reference model for computer simulation of molecular dynamics is proposed in this paper. Based on this model, a flexible displacement boundary scheme is naturally introduced, and the dislocations emitted from a crack tip are presumed to pass continuously through the border of an inner discrete atomic region to pile up in an outer continuum region. Simulations for a Mo crystal show that the interaction between the crack and the emitted dislocations results in a gradual decrease of the local stress intensity factor.
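The shielding effect the simulations observe can be illustrated with a simplified isotropic estimate: each emitted dislocation at distance r along the slip plane reduces the tip's stress intensity by roughly dK = mu·b / ((1 - nu)·sqrt(2·pi·r)). The parameters below are nominal Mo values and assumed pile-up positions, not the paper's atomistic result.

```python
import math

mu = 126e9          # shear modulus of Mo, Pa
nu = 0.30           # Poisson's ratio
b = 2.73e-10        # Burgers vector, m
K_applied = 2.0e6   # applied stress intensity, Pa*sqrt(m) (assumed)

K_local = K_applied
for i, r in enumerate([5e-9 * (k + 1) for k in range(5)], start=1):
    # Shielding contribution of one emitted dislocation at distance r.
    K_local -= mu * b / ((1.0 - nu) * math.sqrt(2.0 * math.pi * r))
    print(f"after {i} emitted dislocation(s): K_local = {K_local:.3e} Pa*m^0.5")
```

The printed sequence decreases monotonically, mirroring the gradual reduction of the local stress intensity factor reported in the abstract.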
Abstract:
The interaction of a dislocation array emitted from a crack tip under mode II loading with asymmetric tilt grain boundaries (GBs) is analysed by the molecular dynamics method. The GBs can generally be described by planar and linear matching zones and unmatching zones. All GBs are observed to emit dislocations. The GBs migrate easily owing to their planar and linear matching structure and asymmetrical character. Diffusion induced by stress concentration is found to promote GB migration. Dislocations are transmitted either along the matched plane or along another plane, depending on the tilt angle theta. Alternating processes of stress concentration and stress relaxation take place ahead of the pile-up. The stress concentration can be released by transmission of dislocations, by atom diffusion along the GBs, or by migration of the GBs through the formation of twinning bands. The simulated results also unequivocally demonstrate two processes: asymmetrical GBs evolving into symmetrical ones, and unmatching zones evolving into matching ones during loading.
Abstract:
Three distinct versions of TUNPOP, an age-structured computer simulation model of the eastern Pacific yellowfin tuna, Thunnus albacares, stock and surface tuna fishery, are used to reveal mechanisms which appear to have a significant effect on the fishery dynamics. Real data on this fishery are used to make deductions about the distribution of the fish and to show how that distribution might influence events in the fishery. The most important result of the paper is that the concept of the eastern Pacific yellowfin tuna stock as a homogeneous unit is inadequate to represent the recent history of the fishery. Inferences are made about the size and distribution of the underlying stock, as well as its potential yield to the surface fishery as a result of alterations in the level and distribution of the effort.
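To show the mechanics of an age-structured fishery model of this kind, here is a toy Python sketch in the spirit of TUNPOP: numbers-at-age decay through natural and fishing mortality, catch follows the Baranov equation, and recruits enter age 0 each year. Every parameter is an invented placeholder, not a calibrated value from the model.

```python
import numpy as np

n_ages = 8
M = 0.8                      # natural mortality, 1/yr (assumed)
F = np.array([0.0, 0.2, 0.6, 0.9, 0.9, 0.7, 0.5, 0.3])      # fishing mortality at age
w = np.array([0.5, 2.0, 6.0, 12.0, 20.0, 28.0, 35.0, 40.0]) # weight at age, kg
recruitment = 1.0e7          # fish entering age 0 each year (assumed constant)

N = np.full(n_ages, 1.0e6)   # initial numbers at age
for year in range(50):
    Z = M + F                                   # total mortality at age
    catch_n = F / Z * N * (1.0 - np.exp(-Z))    # Baranov catch equation
    yield_t = np.sum(catch_n * w) / 1000.0      # annual yield, tonnes
    N[1:] = N[:-1] * np.exp(-Z[:-1])            # survivors age forward one year
    N[0] = recruitment

print(f"equilibrium yield: {yield_t:,.0f} t/yr")
```

Splitting N into regional sub-stocks with separate F vectors is the kind of extension the paper argues for when it rejects the homogeneous-stock assumption.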
Abstract:
Stimulated emission depletion (STED) imaging exploits the nonlinear relationship between fluorescence saturation and stimulated depletion of the excited state. It implements three-dimensional (3D) imaging and breaks the diffraction barrier of far-field light microscopy by confining fluorescent molecules to a sub-diffraction spot. In order to improve the resolution attained by this technique, a computer simulation of the temporal behavior of the population probabilities of the sample was carried out in this paper, and optimized parameters such as the intensity, duration, and delay time of the STED pulse were derived.
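A minimal rate-equation sketch of the kind of population simulation the abstract describes, with assumed rates and rectangular pulse envelopes (the paper's level scheme and pulse shapes may differ): an excitation pulse populates the excited state, and a delayed STED pulse depletes it by stimulated emission before spontaneous fluorescence can occur.

```python
import numpy as np

tau_fl = 3.0e-9              # fluorescence lifetime, s (assumed)
k_fl = 1.0 / tau_fl

def rect(t, t0, width):
    """Rectangular pulse envelope."""
    return 1.0 if t0 <= t < t0 + width else 0.0

dt = 1e-12
t_end = 10e-9
k_exc_peak = 5e9             # excitation rate at pulse peak, 1/s (assumed)
k_sted_peak = 2e11           # stimulated-emission rate at STED peak, 1/s (assumed)
exc_t0, exc_w = 0.0, 100e-12
sted_delay, sted_w = 150e-12, 300e-12   # the delay/duration being optimized

n0, n1 = 1.0, 0.0            # ground- and excited-state populations
fluorescence = 0.0
for step in range(int(t_end / dt)):
    t = step * dt
    k_exc = k_exc_peak * rect(t, exc_t0, exc_w)
    k_sted = k_sted_peak * rect(t, exc_t0 + sted_delay, sted_w)
    dn1 = (k_exc * n0 - (k_fl + k_sted) * n1) * dt
    n0 -= dn1
    n1 += dn1
    fluorescence += k_fl * n1 * dt    # spontaneously emitted photons

print(f"residual fluorescence with STED pulse: {fluorescence:.4f}")
```

Scanning the STED intensity, duration, and delay and watching the residual fluorescence is exactly the optimization loop the abstract refers to.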
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
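For orientation, here is a stripped-down photon-packet Monte Carlo sketch in Python: a homogeneous slab with isotropic scattering, binning backscattered weight by optical path length, which is the raw ingredient of an A-scan. It deliberately omits the thesis's accelerations (importance sampling, photon splitting, the voxel mesh, coherence gating); all coefficients are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_s, mu_a = 10.0, 0.1           # scattering/absorption coefficients, 1/mm (assumed)
mu_t = mu_s + mu_a
depth = 1.0                      # slab thickness, mm
n_photons = 20000
bins = np.zeros(200)             # path-length histogram, 0..4 mm

for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])
    weight, path = 1.0, 0.0
    while weight > 1e-4:
        s = -np.log(rng.random()) / mu_t            # sample free path length
        pos = pos + s * direction
        path += s
        if pos[2] < 0.0:                            # escaped back toward the detector
            b = int(path / 4.0 * len(bins))
            if b < len(bins):
                bins[b] += weight
            break
        if pos[2] > depth:                          # transmitted out the far side
            break
        weight *= mu_s / mu_t                       # deposit absorbed fraction
        costh = 2.0 * rng.random() - 1.0            # isotropic new direction
        phi = 2.0 * np.pi * rng.random()
        sinth = np.sqrt(1.0 - costh**2)
        direction = np.array([sinth * np.cos(phi),
                              sinth * np.sin(phi), costh])

print("backscattered weight fraction:", bins.sum() / n_photons)
```

The steep fall-off of the histogram at long path lengths is the depth-dependent signal weakness described in challenge (3) above.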
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
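The two-stage committee-of-experts idea can be sketched in a few lines of Python with scikit-learn. The data here are random placeholders standing in for (image, truth) pairs, and random forests stand in for whatever models the thesis actually trains; only the pipeline shape (classifier first, per-structure regressor second) is the point.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(3)
n, d = 2000, 64                        # samples, feature dimension (e.g. an A-scan)
X = rng.normal(size=(n, d))            # placeholder image features
structure = rng.integers(0, 3, size=n)            # structure type, e.g. layer count
thickness = rng.uniform(0.1, 1.0, size=(n, 2))    # per-layer depths (toy targets)

# Stage 1: classifier that predicts which structure an image has.
clf = RandomForestClassifier(n_estimators=100).fit(X, structure)
# Stage 2: one regression "expert" per structure type.
experts = {
    s: RandomForestRegressor(n_estimators=100).fit(
        X[structure == s], thickness[structure == s])
    for s in range(3)
}

x_new = X[:1]                           # an unseen image's features
s_hat = clf.predict(x_new)[0]           # stage 1: which structure?
layers = experts[s_hat].predict(x_new)  # stage 2: expert for that structure
print("predicted structure:", s_hat, "layer depths:", layers)
```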
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), previously hardly visible, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, fed these signals, recovers precisely the true structure of the object being imaged. This is another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only the first attempt to reconstruct an OCT image at the pixel level, but a successful one. Even to attempt this kind of task would require many fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
Based on the Coulomb friction model, a model of the frictional motion of the workpiece relative to the polishing pad in annular polishing is presented. The model was simulated and analysed using dynamic analysis software. The results show that the workpiece does not rotate steadily. When the angular velocity and direction of the ring were the same as those of the polishing pad, the angular velocity of the workpiece jumped at the beginning and, at the later stage before contact with the ring, matched that of the polishing pad. The angular velocity of the workpiece oscillated at the moment of contact with the ring. After that, the angular velocity of the workpiece increased gradually and fluctuated about a given value, while the angular velocity of the ring decreased gradually and also fluctuated about a given value. Since the contact between the workpiece and the ring was linear, their linear velocities and directions should be the same; yet the angular velocity of the workpiece was larger than that of the polishing pad when the radius of the workpiece was less than that of the ring. This does not agree with the pure-translation principle, and the workpiece surface cannot be flat either. Consequently, besides friction, the angular velocity of the ring and the radii of the ring and the workpiece need to be controlled to make the angular velocity of the workpiece equal to that of the polishing pad, so as to obtain fine surface flatness of the workpiece. Copyright © 2007 Inderscience Enterprises Ltd.
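A toy Python sketch of the underlying mechanism, with all parameters invented for illustration (the paper uses a full dynamic analysis package): a Coulomb friction torque from the pad drives the workpiece's angular velocity toward the pad's, producing the rapid initial jump described above.

```python
import numpy as np

mu = 0.3            # Coulomb friction coefficient (assumed)
N_force = 50.0      # normal load, N (assumed)
r_eff = 0.02        # effective friction radius of the contact, m (assumed)
I = 1.0e-4          # workpiece moment of inertia about its axis, kg*m^2
omega_pad = 10.0    # pad angular velocity, rad/s
omega = 0.0         # workpiece starts at rest
dt = 1e-4

for step in range(int(2.0 / dt)):       # simulate 2 s
    slip = omega_pad - omega
    torque = mu * N_force * r_eff * np.sign(slip)   # Coulomb friction torque
    omega += torque / I * dt
    if abs(omega_pad - omega) < abs(torque) / I * dt:
        omega = omega_pad               # crude stick condition to avoid chatter

print(f"workpiece angular velocity after 2 s: {omega:.2f} rad/s (pad: {omega_pad})")
```

Adding a second friction torque from the ring contact would reproduce the oscillation and gradual re-equilibration the abstract describes.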