984 results for Micro-simulation
Abstract:
Friction in hydrodynamic bearings is a major source of losses in car engines ([69]). The extreme loading conditions in these bearings lead to contact between the matching surfaces. Under such conditions not only the overall geometry of the bearing is relevant; the small-scale topography of the surface also determines the bearing performance. The possibility of shaping the surface of lubricated bearings down to the micrometer ([57]) opened the question of whether friction can be reduced by means of micro-textures, with mixed results so far. This work focuses on the development of efficient numerical methods to solve thin-film (lubrication) problems down to the roughness scale of measured surfaces. Because of the high velocities and the convergent-divergent geometries of hydrodynamic bearings, cavitation takes place. To treat cavitation in the lubrication problem the Elrod-Adams model is used, a mass-conserving model which careful numerical ([12]) and experimental ([119]) tests have shown to be essential for obtaining physically meaningful results. Another relevant aspect of the modeling is that inertial effects are taken into account, which is necessary to simulate moving textures correctly. As an application, the effects of micro-texturing the moving surface of the bearing were studied, assuming realistic values for the physical parameters defining the problems. Extensive fundamental studies were carried out in the hydrodynamic lubrication regime. Mesh-converged simulations considering the topography of real measured surfaces were also run, and the validity of the lubrication approximation was assessed for such rough surfaces.
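The abstract does not reproduce the thesis's Elrod-Adams algorithm, but the thin-film model underneath it can be illustrated with a minimal sketch: a finite-difference solution of the 1-D incompressible Reynolds equation, d/dx(h^3 dp/dx) = 6 mu U dh/dx, for a fixed-incline slider, solved with the Thomas (tridiagonal) algorithm. The function name, discretization, and parameter values are this sketch's assumptions, and cavitation handling is deliberately omitted.

```python
def slider_pressure(h1, h2, L, U, mu, n=101):
    """Pressure in a 1-D fixed-incline slider bearing by finite differences.

    Solves d/dx(h^3 dp/dx) = 6*mu*U*dh/dx with p = 0 at both ends.
    h1, h2: inlet/outlet film thickness [m]; L: length [m];
    U: sliding speed [m/s]; mu: viscosity [Pa s].
    """
    dx = L / (n - 1)
    h = [h1 + (h2 - h1) * i / (n - 1) for i in range(n)]      # linear film profile
    hm = [0.5 * (h[i] + h[i + 1]) for i in range(n - 1)]       # half-node thicknesses
    a = [0.0] * n; b = [1.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(1, n - 1):
        a[i] = hm[i - 1] ** 3                                  # coupling to p[i-1]
        c[i] = hm[i] ** 3                                      # coupling to p[i+1]
        b[i] = -(a[i] + c[i])
        d[i] = 6.0 * mu * U * (hm[i] - hm[i - 1]) * dx         # wedge source term
    # Thomas algorithm (boundary rows already encode p = 0)
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    p = [0.0] * n
    p[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        p[i] = (d[i] - c[i] * p[i + 1]) / b[i]
    return p
```

For a convergent wedge the computed pressure is positive over the whole film, the classical load-carrying mechanism the thesis builds on; the Elrod-Adams treatment adds a saturation variable and a complementarity condition on top of this discretization.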
Abstract:
Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains: how do 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high-performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of the effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models over a range of interparticle friction values. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to display accelerating energy release as well, and good fits of power-law time-to-failure functions to the cumulative energy release are obtained.
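Power-law time-to-failure fits of the kind mentioned above are commonly written E(t) = A + B*(tf - t)**m for cumulative energy release E approaching a failure time tf. The sketch below fits that form with a plain grid search over (tf, m) and closed-form least squares for A and B; the function names and the fitting strategy are assumptions of this sketch, not the paper's method.

```python
def fit_ttf(ts, Es, tf_grid, m_grid):
    """Fit E(t) = A + B*(tf - t)**m by grid search over (tf, m).

    For each candidate pair, A and B follow from ordinary least squares
    of Es against the basis (tf - t)**m. Returns (A, B, tf, m).
    """
    best = None
    for tf in tf_grid:
        if tf <= max(ts):          # model needs tf beyond the observed window
            continue
        for m in m_grid:
            f = [(tf - t) ** m for t in ts]
            n = len(ts)
            fm = sum(f) / n
            Em = sum(Es) / n
            sff = sum((x - fm) ** 2 for x in f)
            if sff == 0:
                continue
            B = sum((x - fm) * (e - Em) for x, e in zip(f, Es)) / sff
            A = Em - B * fm
            sse = sum((e - (A + B * x)) ** 2 for x, e in zip(f, Es))
            if best is None or sse < best[0]:
                best = (sse, A, B, tf, m)
    return best[1:]
```

On noiseless synthetic data the search recovers the generating parameters; real simulated-earthquake catalogs would need noise-aware fitting and uncertainty estimates.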
Abstract:
This study has concentrated on the development of an impact simulation model for use at the sub-national level. The necessity for this model was demonstrated by the growth of local economic initiatives during the 1970s and the lack of monitoring and evaluation exercises to assess their success and cost-effectiveness. The first stage of research confirmed that the potential for micro-economic and spatial initiatives existed, by identifying the existence of involuntary structural unemployment. The second stage examined the range of employment policy options from the macro-economic, micro-economic and spatial perspectives, and focused on the need for evaluation of those policies. The need for spatial impact evaluation exercises in respect of other exogenous shocks and structural changes was also recognised. The final stage involved the investigation of current techniques of evaluation and their adaptation for the purpose in hand. This led to the recognition of a gap in the armoury of techniques. The employment-dependency model has been developed to fill that gap: a low-budget model, capable of implementation at the small-area level, that generates a vast array of industrially disaggregated data, in terms of employment, employment-income, profits, value-added and gross income, related to levels of United Kingdom final demand, thus providing scope for a variety of impact simulation exercises.
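The abstract does not give the employment-dependency model's equations. As a rough illustration of the input-output machinery such models typically build on, the sketch below computes sectoral employment supported by a final-demand vector f as e * (I - A)^-1 f, for a technical-coefficients matrix A and employment-per-output coefficients e. The matrix values, names, and two-sector setup are invented for illustration only.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix copy
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def employment_impact(A, final_demand, emp_per_output):
    """Employment by sector supported by final demand: e * (I - A)^-1 f."""
    n = len(A)
    IminusA = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)]
               for i in range(n)]
    gross_output = solve_linear(IminusA, final_demand)   # Leontief inverse applied to f
    return [e * x for e, x in zip(emp_per_output, gross_output)]
```

Changing the final-demand vector then simulates the employment impact of an exogenous shock, which is the kind of "what-if" exercise the model was built to support.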
Abstract:
The development of more realistic constitutive models for granular media, such as sand, requires ingredients which take into account the internal micro-mechanical response to deformation. Unfortunately, at present very little is known about these mechanisms, and it is therefore instructive to find out more about the internal nature of granular samples by conducting suitable tests. In contrast to physical testing, the method of investigation used in this study employs the Distinct Element Method. This is a computer-based, iterative, time-dependent technique that allows the deformation of granular assemblies to be numerically simulated. By making assumptions regarding contact stiffnesses, each individual contact force can be measured, and by resolution the particle centroid forces can be calculated. Dividing the particle forces by the respective masses gives accelerations, from which particle centroid velocities and displacements are obtained by numerical integration. The Distinct Element Method is incorporated into a computer program, 'Ball'. This program is effectively a numerical apparatus which forms a logical housing for the method, allows data input and output, and provides testing control. Using this numerical apparatus, tests have been carried out on disc assemblies, revealing many new and interesting observations regarding the micromechanical behaviour. In order to relate the observed microscopic mechanisms of deformation to the flow of the granular system, two separate approaches have been used. Firstly, a constitutive model has been developed which describes the yield function, flow rule and translation rule for regular assemblies of spheres and discs subjected to coaxial deformation. Secondly, statistical analyses have been carried out using data extracted from the simulation tests. These analyses define and quantify granular structure, and then show how the force and velocity distributions use the structure to produce the corresponding stress and strain-rate tensors.
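The cycle described above (contact forces from assumed stiffnesses, summed to centroid forces, divided by mass, then integrated to velocities and displacements) can be sketched for the simplest possible case: a 1-D row of discs with linear contact springs. The function and its parameters are illustrative assumptions, not code from the 'Ball' program.

```python
def dem_step(x, v, m, k, r, dt):
    """One explicit Distinct Element cycle for a 1-D chain of discs.

    x, v: positions and velocities; m: masses; k: linear contact
    stiffness; r: radii; dt: time step. Returns updated (x, v).
    """
    n = len(x)
    f = [0.0] * n
    for i in range(n - 1):
        overlap = (r[i] + r[i + 1]) - (x[i + 1] - x[i])
        if overlap > 0:                 # discs in contact: linear repulsive spring
            fc = k * overlap
            f[i] -= fc
            f[i + 1] += fc
    for i in range(n):
        v[i] += f[i] / m[i] * dt        # force / mass -> velocity increment
        x[i] += v[i] * dt               # velocity -> displacement
    return x, v
```

Repeating this cycle with a sufficiently small time step is exactly the iterative, time-dependent scheme the abstract describes; the real program extends it to 2-D discs with tangential springs and boundary control.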
Abstract:
Particulate solids are complex redundant systems which consist of discrete particles. The interactions between the particles are complex and have been the subject of many theoretical and experimental investigations. Investigations of particulate material have been restricted by the lack of quantitative information on the mechanisms occurring within an assembly. Laboratory experimentation is limited because information on the internal behaviour can only be inferred from measurements on the assembly boundary or obtained with intrusive measuring devices. In addition, comparisons between test data are uncertain due to the difficulty of reproducing exact replicas of physical systems. Nevertheless, theoretical and technological advances require more detailed material information. Numerical simulation, however, affords access to information on every particle, and hence to the micro-mechanical behaviour within an assembly, and can replicate desired systems. For a computer program to numerically simulate material behaviour accurately it is necessary to incorporate realistic interaction laws. This research programme used the finite difference simulation program 'BALL', developed by Cundall (1971), which employed linear spring force-displacement laws; it was thus necessary to incorporate more realistic interaction laws. Therefore, this research programme was primarily concerned with the implementation of the normal force-displacement law of Hertz (1882) and the tangential force-displacement laws of Mindlin and Deresiewicz (1953). Within this thesis the contact mechanics theories employed in the program are developed, and the adaptations necessary to incorporate these laws are detailed. Verification of the new contact force-displacement laws was achieved by simulating a quasi-static oblique contact and a single-particle oblique impact. Applications of the program to the simulation of large assemblies of particles are given, and the problems in undertaking quasi-static shear tests, along with the results from two successful shear tests, are described.
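The Hertz (1882) normal law referred to above relates contact force to overlap nonlinearly, F = (4/3) E* sqrt(R*) delta^(3/2), with effective modulus and radius combined from the two bodies. A minimal sketch (the function name and the material values in the test are this sketch's own):

```python
import math

def hertz_normal_force(delta, R1, R2, E1, E2, nu1, nu2):
    """Hertz normal contact force between two elastic spheres.

    delta: overlap [m]; R1, R2: radii [m]; E1, E2: Young's moduli [Pa];
    nu1, nu2: Poisson ratios. F = (4/3) * E_eff * sqrt(R_eff) * delta**1.5.
    """
    if delta <= 0:
        return 0.0                                   # no contact, no force
    E_eff = 1.0 / ((1 - nu1 ** 2) / E1 + (1 - nu2 ** 2) / E2)
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff) * delta ** 1.5
```

The 3/2-power stiffening (quadrupling the overlap multiplies the force by 8) is precisely what a linear spring law cannot capture, which motivated replacing 'BALL''s linear laws; the Mindlin and Deresiewicz tangential laws additionally depend on the loading history and are not sketched here.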
Abstract:
To examine the detailed operation of the power distribution network in a future more-electric aircraft that employs electric actuation systems, a Micro-Cap SPICE simulation is developed for one of the essential buses. Particular attention is paid to accurately modelling the most important effects that influence system power quality. Representative system and flight data are used to illustrate the operation of the simulation and to assess the power quality conditions within the network as the flight control surfaces are deployed. The results illustrate the importance of correct cable sizing to ensure stable operation of actuators during transient conditions.
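The cable-sizing point can be illustrated with the elementary first-pass estimate V = I * rho * L / A used in feeder checks; an undersized conductor drops too much voltage under actuator transients. The default resistivity is copper at 20 degrees C, and the numbers are illustrative, not taken from the paper's Micro-Cap model.

```python
def cable_voltage_drop(current_a, length_m, area_mm2, resistivity=1.724e-8):
    """One-way DC voltage drop along a copper feeder.

    current_a: load current [A]; length_m: run length [m];
    area_mm2: conductor cross-section [mm^2]; resistivity [ohm*m].
    """
    resistance = resistivity * length_m / (area_mm2 * 1e-6)  # R = rho*L/A
    return current_a * resistance
```

For example, 100 A through 10 m of 10 mm^2 copper drops about 1.7 V; doubling the cross-section halves the drop, which is the trade the simulation explores dynamically.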
Abstract:
Microstructure manipulation is a fundamental process in the study of biology and medicine, as well as in advancing micro- and nano-system applications. Manipulation of microstructures has been achieved through various recently developed microgripper devices, which have led to advances in micromachine assembly and single-cell manipulation, among others. Only two kinds of integrated feedback have been demonstrated so far: force sensing and optical binary feedback. As a result, the physical, mechanical, optical, and chemical information about the microstructure under study must be extracted with macroscopic instrumentation, such as confocal fluorescence microscopy and Raman spectroscopy. In this research work, novel Micro-Opto-Electro-Mechanical-System (MOEMS) microgrippers are presented. These devices utilize flexible optical waveguides as gripping arms, which provide the physical means for grasping a microobject while simultaneously enabling light to be delivered and collected. This unique capability allows extensive optical characterization of the structure being held, such as transmission, reflection, or fluorescence. The microgrippers require external actuation, which was accomplished by two methods: initially with a micrometer screw, and later with a piezoelectric actuator. Thanks to a novel actuation mechanism, the "fishbone", the gripping facets remain parallel to within 1 degree. The design, simulation, fabrication, and characterization are systematically presented. The devices' mechanical operation was verified by means of 3D finite element analysis simulations, and the optical performance and losses were simulated by the 3D-to-2D effective-index finite-difference time-domain (FDTD) method as well as the 3D beam propagation method (3D-BPM). The microgrippers were designed to manipulate structures from submicron dimensions up to approximately 100 μm, and were implemented in SU-8 due to its suitable optical and mechanical properties. This work demonstrates two practical applications: the manipulation of single SKOV-3 human ovarian carcinoma cells, and the detection and identification of microparts tagged with a fluorescent "barcode" implemented with quantum dots. The novel devices presented open up new possibilities in the field of micromanipulation at the microscale, scalable to the nano-domain.
Abstract:
The introduction of phase change material fluids and nanofluids in micro-channel heat sink design can significantly increase the cooling capacity of the heat sink because of the unique features of these two kinds of fluids. To better assist the design of a high-performance micro-channel heat sink using phase change fluids and nanofluids, the heat transfer enhancement mechanism behind the flow of such fluids must be completely understood. A detailed parametric study is conducted to further investigate the heat transfer enhancement of the phase change material particle suspension flow, using the two-phase non-thermal-equilibrium model developed by Hao and Tao (2004). The parametric study is conducted under normal conditions, with Reynolds numbers of Re = 600-900 and phase change material particle concentrations ≤ 0.25, as well as under extreme conditions of very low Reynolds numbers (Re < 50) and high phase change material particle concentrations (0.5-0.7) in the slurry flow. Using two newly defined parameters, named the effectiveness factor and the performance index, it is found that there exists an optimal relation between the channel design parameters, the particle volume fraction, the Reynolds number, and the wall heat flux. The influence of particle volume fraction, particle size, and particle viscosity on the phase change material suspension flow is investigated and discussed. The model was validated against available experimental data. The conclusions will assist designers in decisions relating to the design or selection of a micro-pump suitable for micro- or mini-scale heat transfer devices. To understand the heat transfer enhancement mechanism of the nanofluid flow at the particle level, the lattice Boltzmann method is used because of its mesoscopic character and its many numerical advantages. Using a two-component lattice Boltzmann model, the heat transfer enhancement of the nanofluid is analyzed by incorporating the different forces acting on the nanoparticles into the model. It is found that the nanofluid gives better heat transfer enhancement at low Reynolds numbers, and that the Brownian motion effect of the nanoparticles is weakened by increasing flow speed.
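The lattice Boltzmann method mentioned above advances particle distribution functions through alternating collision and streaming steps. Below is a minimal single-component D2Q9 BGK sketch; the paper's two-component, force-coupled nanofluid model is considerably more involved, and all names here are this sketch's own.

```python
# D2Q9 lattice: weights and discrete velocities (rest, 4 axis, 4 diagonal)
W = [4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36]
CX = [0, 1, 0, -1, 0, 1, -1, -1, 1]
CY = [0, 0, 1, 0, -1, 1, 1, -1, -1]

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distributions for density rho, velocity (ux, uy)."""
    usq = ux * ux + uy * uy
    feq = []
    for k in range(9):
        cu = CX[k] * ux + CY[k] * uy
        feq.append(W[k] * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq

def lbm_step(f, nx, ny, tau):
    """One BGK collision + periodic streaming step.

    f[k][j][i] is the population moving along (CX[k], CY[k]) at cell (i, j);
    tau is the relaxation time setting the viscosity.
    """
    for j in range(ny):
        for i in range(nx):
            rho = sum(f[k][j][i] for k in range(9))
            ux = sum(CX[k] * f[k][j][i] for k in range(9)) / rho
            uy = sum(CY[k] * f[k][j][i] for k in range(9)) / rho
            feq = equilibrium(rho, ux, uy)
            for k in range(9):
                f[k][j][i] += (feq[k] - f[k][j][i]) / tau   # relax toward equilibrium
    # streaming: shift each population one cell along its velocity (periodic box)
    fs = [[[0.0] * nx for _ in range(ny)] for _ in range(9)]
    for k in range(9):
        for j in range(ny):
            for i in range(nx):
                fs[k][(j + CY[k]) % ny][(i + CX[k]) % nx] = f[k][j][i]
    return fs
```

Both steps conserve mass exactly, which is one of the numerical advantages the abstract alludes to; particle forcing terms enter as extra source contributions in the collision step.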
Abstract:
The present work numerically examines the asymmetric behavior of a hydrogen/air flame in a micro-channel subjected to a non-uniform wall temperature distribution. A high-resolution (cell size 25 μm × 25 μm), two-dimensional, transient Navier–Stokes simulation is conducted in the low-Mach-number formulation using detailed chemistry involving 9 chemical species and 21 elementary reactions. Firstly, the effects of hydrodynamic and diffusive-thermal instabilities are studied by performing the computations for different Lewis numbers. Then, the effects of preferential diffusion of heat and mass on the asymmetric behavior of the hydrogen flame are analyzed for different inlet velocities and equivalence ratios. Results show that for flames in micro-channels, interactions between thermal diffusion and molecular diffusion play the major role in the evolution of a symmetric flame into an asymmetric one, while the role of the Darrieus–Landau instability is found to be minor. It is also found that in symmetric flames the Lewis number decreases behind the flame front. This is related to the curvature of the flame, which inclines the thermal and mass fluxes: the mass diffusion vectors point toward the walls and the thermal diffusion vectors point toward the centerline. An asymmetric flame is observed when the length of the flame front is about 1.1–1.15 times the channel width.
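The Lewis number central to the analysis above is the ratio of thermal to mass diffusivity, Le = alpha / D = lambda / (rho * c_p * D). A one-line helper makes the definition concrete; the property values in the test are rough room-temperature figures for hydrogen diffusing in air, assumed here purely for illustration.

```python
def lewis_number(thermal_conductivity, density, cp, mass_diffusivity):
    """Le = alpha / D, with alpha = lambda / (rho * cp).

    thermal_conductivity [W/m/K], density [kg/m^3], cp [J/kg/K],
    mass_diffusivity [m^2/s] of the deficient species in the mixture.
    """
    alpha = thermal_conductivity / (density * cp)   # thermal diffusivity
    return alpha / mass_diffusivity
```

Lean hydrogen/air mixtures have Le well below unity (mass diffuses faster than heat), which is exactly why preferential diffusion can break the flame's symmetry in these channels.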
Abstract:
Finding creative ways of utilising renewable energy sources for electricity generation, especially in remote areas and particularly in countries that depend on imported energy, while increasing energy security and reducing the cost of isolated off-grid systems, has become an urgent necessity for effective strategic energy planning. The aim of this research project was to design and implement a new decision support framework for the optimal design of hybrid micro-grids considering different types of technologies, where the design objective is to minimise the total cost of the hybrid micro-grid while satisfying the required electric demand. A comprehensive review of the literature on hybrid power systems (HPS) and on existing analytical decision support tools identified the gaps and the necessary conceptual parts of an analytical decision support framework. As a result, this research proposes an Iterative Analytical Design Framework (IADF) and its implementation for the optimal design of an off-grid renewable-energy-based hybrid smart micro-grid (OGREH-SμG) with intra- and inter-grid (μG2μG and μG2G) synchronization capabilities and a novel storage technique. The modelling, design and simulations were conducted using the HOMER Energy and MatLab/SIMULINK energy planning and design software platforms. The design, experimental proof of concept, verification and simulation of a new storage concept incorporating a hydrogen peroxide (H2O2) fuel cell are also reported, as is the implementation of the smart components, consisting of a Raspberry Pi devised and programmed for the semi-smart energy management framework (a novel control strategy with synchronization capabilities) of the OGREH-SμG. The hybrid μG was designed and implemented as a case study for the Bayir/Jordan area. This research has provided an alternative decision support tool for renewable energy integration that determines the optimal number, type and size of components to configure the hybrid μG. In addition, a linear cost function was formulated to mathematically verify the computer-based simulations and fine-tune the solutions in the iterative framework, and it was concluded that such solutions converge to a correct optimal approximation when the properties of the problem are considered. The investigation demonstrated that the implemented OGREH-SμG design, which incorporates wind and solar generation complemented with batteries, two fuel cell units and a diesel generator, and which is able to synchronize with other micro-grids, is an effective and near-optimal way of utilising indigenous renewable energy to electrify developing countries with fewer resources in a sustainable way, with minimum impact on the environment while also achieving reductions in greenhouse gas (GHG) emissions. The dissertation concludes with suggested extensions to this work.
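The core optimisation idea (minimise a linear total-cost function subject to meeting demand) can be caricatured with a brute-force search over component counts. Component names, capacities and costs below are invented for illustration; HOMER's actual dispatch simulation evaluates hourly operation, storage and fuel use on top of capital cost.

```python
from itertools import product

def cheapest_mix(components, demand_kw, max_units=5):
    """Brute-force minimum-cost component mix covering a peak demand.

    components: list of (name, unit_capacity_kw, unit_cost) tuples.
    Returns (total_cost, counts) where counts[i] is the number of
    units of components[i] in the cheapest feasible combination.
    """
    best = None
    for counts in product(range(max_units + 1), repeat=len(components)):
        capacity = sum(n * c[1] for n, c in zip(counts, components))
        if capacity < demand_kw:
            continue                                  # infeasible: demand unmet
        cost = sum(n * c[2] for n, c in zip(counts, components))
        if best is None or cost < best[0]:
            best = (cost, counts)
    return best
```

Exhaustive enumeration is only viable for a handful of component types; the iterative framework in the research exists precisely because realistic design spaces need smarter search.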
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. Agent-Based Simulation (ABS), one way of using intelligent agents, carries great potential for progressing our understanding of management practices and how they link to retail performance. We have developed simulation models based on research by a multi-disciplinary team of economists, work psychologists and computer scientists. We will discuss our experiences of implementing these concepts working with a well-known retail department store. There is no doubt that management practices are linked to the performance of an organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have been developed, but when it comes down to the actual application of these guidelines considerable ambiguity remains regarding their effectiveness within particular contexts (Siebers et al., forthcoming a). Most Operational Research (OR) methods can only be used as analysis tools once management practices have been implemented. Often they are not very useful for giving answers to speculative ‘what-if’ questions, particularly when one is interested in the development of the system over time rather than just the state of the system at a certain point in time. Simulation can be used to analyse the operation of dynamic and stochastic systems. ABS is particularly useful when complex interactions between system entities exist, such as autonomous decision making or negotiation. In an ABS model the researcher explicitly describes the decision process of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents and the environment. We will show how ABS experiments can deal with testing and optimising management practices such as training, empowerment or teamwork. Hence, questions such as “will staff setting their own break times improve performance?” can be investigated.
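A toy sketch of the kind of what-if experiment described above, with staff agents whose break-timing rule is the management practice under test: under the "self-set breaks" policy an agent defers its break while the queue is long. The rule, the numbers and the arrival process are this sketch's assumptions, not the retail model's.

```python
import random

def simulate_shop(n_staff, hours, self_set_breaks, seed=1):
    """Toy agent-based model: staff serve a customer queue hour by hour.

    With self_set_breaks=True an agent skips the scheduled mid-shift
    break whenever the queue exceeds the staff count; otherwise the
    break happens at the fixed hour regardless of demand.
    Returns the number of customers served over the shift.
    """
    rng = random.Random(seed)
    served, queue = 0, 0
    for hour in range(hours):
        queue += rng.randint(0, 2 * n_staff)        # customer arrivals this hour
        for _ in range(n_staff):
            wants_break = (hour == hours // 2)       # fixed mid-shift break
            if self_set_breaks and queue > n_staff:
                wants_break = False                  # agent defers break under load
            if not wants_break and queue > 0:
                queue -= 1                           # serve one customer
                served += 1
    return served
```

Running the two policies on the same arrival stream turns the speculative question "will staff setting their own break times improve performance?" into a measurable comparison, the macro-level outcome emerging from micro-level agent decisions.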
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include the fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects in size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and it facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.
Although these issues can be controlled to some extent using thermal-reliability aware physical and architectural 3D design techniques, high performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbates the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss the possible avenues for improvement of this work in the future.
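The heat-removal limitation discussed above can be quantified to first order with a 1-D thermal resistance stack: heat from each die must cross every layer between it and the heat sink, so power accumulates toward the sink and the layer farthest from it runs hottest. The layer powers, resistances and ambient value below are illustrative assumptions, not data from the dissertation.

```python
def stack_temperatures(powers, rth, t_amb=45.0):
    """Steady-state layer temperatures in a 1-D die-stack thermal model.

    powers[i]: power of layer i [W], layer 0 farthest from the heat sink;
    rth[j]: thermal resistance between layer j and the next layer
    (or the sink, for the last entry) [K/W]; t_amb: sink temperature [C].
    """
    n = len(powers)
    temps = [0.0] * n
    for i in range(n):
        t = t_amb
        for j in range(i, n):
            heat_flow = sum(powers[:j + 1])   # all power below must cross interface j
            t += rth[j] * heat_flow
        temps[i] = t
    return temps
```

Even this crude model shows why stacking identical dies pushes the bottom layer well past the top one, and hence why embedded micro-fluidic cooling, which inserts a low-resistance path at every tier, changes the feasible design space.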
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers seeking to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and riddled with unknowns that developers discover in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms.
This work improves the computer architecture research and verification processes as shown by the case studies and experiments that have been conducted.
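The first part of the framework, checking model invariants against a simulator's event trace, can be sketched generically. FOLCSL synthesizes such checkers from first-order logic specifications; here the invariant is hand-written as a Python predicate, and the trace format and names are this sketch's assumptions.

```python
def check_invariants(trace, invariants):
    """Replay an event trace and report which invariants are violated.

    trace: time-ordered iterable of (time, event_name, payload) tuples.
    invariants: dict mapping an invariant name to a predicate that
    takes the full trace and returns True when the invariant holds.
    """
    return [name for name, pred in invariants.items() if not pred(trace)]

def eventually_follows(first, then):
    """Temporal invariant: every `first` event with some id is eventually
    followed by a `then` event carrying the same id."""
    def pred(trace):
        pending = set()
        for _, name, payload in trace:
            if name == first:
                pending.add(payload["id"])
            elif name == then:
                pending.discard(payload["id"])
        return not pending                 # holds iff nothing is left dangling
    return pred
```

For a micro-architecture simulator, the same shape of check ("every instruction issue is eventually followed by a commit with the same tag") catches timing-model errors that never surface as wrong program output.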