988 results for "Simulation tools"
Abstract:
The present work reports an experimental and theoretical study of heat flow in the high-speed end milling of hardened steels applied to moulds and dies. AISI H13 and AISI D2 steels were machined with two types of ball nose end mill: coated with (TiAl)N and tipped with PcBN. The workpiece geometry was designed to simulate tool-workpiece interaction in real situations found in the mould industry, in which complex surfaces and thin walls are commonly machined. Compressed-air and cold-air cooling systems were compared to dry machining. Results indicated a relatively small temperature variation, with a higher range when machining AISI D2 with the PcBN-tipped end mill. All cooling systems demonstrated good capacity to remove heat from the machined surface, especially the cold air. Compressed air was the most suitable for keeping the workpiece at a relatively stable temperature. A theoretical model was also proposed to estimate the energy transferred to the workpiece (Q) and the average convection coefficient (h̄) for the cooling systems used. The model used an FEM simulation and a steepest descent method to find the best values for both variables. (c) 2007 Elsevier B.V. All rights reserved.
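The inverse approach described here, pairing a thermal model with a steepest descent search for Q and h̄, can be sketched as follows. Since the paper's FEM model is not reproduced in the abstract, a simple lumped-capacitance temperature curve stands in for it, and all numerical constants (`T_INF`, `AREA`, `HEAT_CAP`, the synthetic measurements) are illustrative assumptions, not values from the paper.

```python
import numpy as np

T_INF, AREA, HEAT_CAP = 25.0, 0.01, 50.0   # illustrative constants, not from the paper

def model_T(Q, h, t):
    """Lumped-capacitance surrogate for the paper's FEM thermal model."""
    return T_INF + (Q / (h * AREA)) * (1.0 - np.exp(-h * AREA * t / HEAT_CAP))

def loss(params, t, T_meas):
    """Mean squared error between modelled and measured temperatures."""
    Q, h = params
    r = model_T(Q, h, t) - T_meas
    return float(np.mean(r * r))

def steepest_descent(t, T_meas, Q0, h0, iters=300):
    """Fit (Q, h) by steepest descent with Armijo backtracking line search."""
    x = np.array([Q0, h0], dtype=float)
    f = loss(x, t, T_meas)
    for _ in range(iters):
        # central finite-difference gradient
        g = np.zeros(2)
        for i in range(2):
            e = np.zeros(2)
            e[i] = 1e-5 * (1.0 + abs(x[i]))
            g[i] = (loss(x + e, t, T_meas) - loss(x - e, t, T_meas)) / (2 * e[i])
        d = -g                               # steepest-descent direction
        gd = float(g @ d)
        step, f_new = 1.0, np.inf
        while step > 1e-12:
            f_new = loss(x + step * d, t, T_meas)
            if np.isfinite(f_new) and f_new <= f + 1e-4 * step * gd:
                break
            step *= 0.5                      # backtrack until sufficient decrease
        if not f_new < f:
            break                            # converged / no further progress
        x, f = x + step * d, f_new
    return x, f

# Demo: recover a known (Q, h) pair from noiseless synthetic "measurements"
t = np.linspace(0.0, 200.0, 41)
T_meas = model_T(50.0, 100.0, t)
x_fit, f_fit = steepest_descent(t, T_meas, Q0=20.0, h0=150.0)
f_init = loss(np.array([20.0, 150.0]), t, T_meas)
```

In the paper, each loss evaluation would instead invoke the FEM simulation; the descent logic is unchanged.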
Abstract:
The C2* radical is used as a probe for reactive flow diagnostics; it was chosen because of its widespread occurrence in plasmas and combustion in aeronautics and aerospace applications. The rotational temperatures of C2* species were determined by comparing experimental and theoretical data. The simulation code was developed by the authors in C++ using the object-oriented paradigm, and it includes a set of new tools that increase the efficacy of the C2* probe in determining the rotational temperature of the system. A brute-force approach to the determination of spectral parameters was adopted in this version of the code. The statistical parameter χ2 was used as an objective criterion to determine the best match between experimental and synthesized spectra. The results showed that the program works even with low-quality experimental data, typically collected from in situ airborne compact apparatus. The technique was applied to flames of a Bunsen burner, and a rotational temperature of ca. 2100 K was calculated.
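The brute-force chi-squared (χ2) matching of measured and synthesized spectra can be illustrated with a heavily simplified model: a single Boltzmann-populated rotational branch with an approximate rotational constant, rather than the authors' full C++ simulation of the C2 bands. All constants and the noise level are assumptions for the demo.

```python
import numpy as np

HC_OVER_K = 1.4388   # hc/k in cm*K
B_ROT = 1.75         # cm^-1; approximate rotational constant (illustrative)

def synthetic_spectrum(T, j_max=60):
    """Relative rotational line intensities ~ (2J+1) exp(-B J(J+1) hc / kT)."""
    J = np.arange(j_max + 1)
    I = (2 * J + 1) * np.exp(-HC_OVER_K * B_ROT * J * (J + 1) / T)
    return I / I.sum()

def fit_rotational_temperature(I_exp, T_grid):
    """Brute-force search: chi-squared between measured and synthesized spectra."""
    chi2 = [np.sum((I_exp - synthetic_spectrum(T)) ** 2 / synthetic_spectrum(T))
            for T in T_grid]
    return T_grid[int(np.argmin(chi2))]

# Demo: "experimental" spectrum generated at 2100 K with measurement noise
rng = np.random.default_rng(0)
I_exp = np.clip(synthetic_spectrum(2100.0) + rng.normal(0.0, 5e-5, 61), 0.0, None)
T_grid = np.arange(1500.0, 2751.0, 50.0)
best_T = fit_rotational_temperature(I_exp, T_grid)
```

The minimum of χ2 over the temperature grid plays the role of the objective best-match criterion described in the abstract.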
Abstract:
Once the relationship between the starter motor components and their functions is defined, it is possible to develop a mathematical model capable of predicting the starter's behaviour during operation. One important aspect is the behaviour of the engagement system; a mathematical tool capable of predicting it is a valuable step towards reducing design time, cost and engineering effort. A mathematical model, represented by differential equations, can be developed from physical laws, evaluating the force balance and energy flow through the system's degrees of freedom. Another important physical aspect to be considered in this modelling is the impact conditions (particularly at the pinion and ring-gear contact). This work reports the application of those equations in available mathematical software and their resolution by the Runge-Kutta numerical integration method, in order to build an accessible engineering tool. Copyright © 2011 SAE International.
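A minimal sketch of this kind of model: a classical fourth-order Runge-Kutta step applied to a toy engagement system, with the pinion/ring-gear impact modelled as a stiff spring-damper that acts only during penetration. The masses, stiffnesses and forces below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy engagement model (all values illustrative): a pinion of mass M is pushed
# by a constant solenoid force towards the ring gear located at X_STOP.
M, F_DRIVE, C_VISC = 0.1, 50.0, 5.0
X_STOP, K_CONTACT, C_CONTACT = 0.005, 1.0e5, 50.0

def engagement(t, y):
    """State y = [position, velocity]; impact as a one-sided spring-damper."""
    x, v = y
    force = F_DRIVE - C_VISC * v
    if x > X_STOP:                           # pinion/ring-gear contact
        force += -K_CONTACT * (x - X_STOP) - C_CONTACT * v
    return np.array([v, force / M])

dt, y = 1.0e-5, np.array([0.0, 0.0])
xs = []
for i in range(10000):                       # 0.1 s of simulated time
    y = rk4_step(engagement, i * dt, y, dt)
    xs.append(y[0])
```

The pinion bounces against the contact and settles near the static equilibrium X_STOP + F_DRIVE/K_CONTACT; a production model would add the remaining degrees of freedom and torque balances described in the paper.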
Abstract:
The main goal of the present work is to verify the applicability of the Immersed Boundary Method, together with the Virtual Physical Model, to solve the flow through automatic valves of hermetic compressors. The valve was simplified to a two-dimensional radial diffuser with a diameter ratio of D/d = 1.5 and simulated for one cycle of the opening and closing process, with an imposed reed velocity of 3.0 cm/s, a dimensionless gap between disks in the range 0.07 < s/d < 0.10, and an inlet Reynolds number of 1500. The good results obtained showed that the methodology has great potential as a design tool for this type of valve system. © The Authors, 2011.
Abstract:
Increased accessibility to high-performance computing resources has created a demand for user support through performance evaluation tools like the iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling for distributed environments such as computer grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that improves the system being simulated. A thorough description of iSPD is given, detailing its scheduler manager. Some comparisons between iSPD and Simgrid simulations, including runs of the simulated environment in a real cluster, are also presented. © 2012 IEEE.
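The abstract does not show iSPD's scheduler interface, so the following is only a language-agnostic sketch (in Python rather than iSPD's own tooling) of the kind of scheduling policy a user might plug into a grid simulator: a greedy rule that assigns each task to the machine that would finish it earliest. The task/machine representation is a hypothetical simplification.

```python
def schedule(tasks, machines):
    """Greedy policy: send each task to the machine that finishes it earliest.

    tasks: list of task sizes (work units); machines: dict name -> speed.
    Returns the assignment plan and the resulting makespan.
    """
    finish = {m: 0.0 for m in machines}      # projected finish time per machine
    plan = []
    for size in sorted(tasks, reverse=True): # largest tasks first
        best = min(machines, key=lambda m: finish[m] + size / machines[m])
        finish[best] += size / machines[best]
        plan.append((size, best))
    return plan, max(finish.values())
```

Comparing makespans produced by alternative policies over the same modelled workload is exactly the kind of experiment the simulator is meant to support.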
Abstract:
There are currently countless automated production processes, which have become increasingly complex in response to the demands of the modern world and therefore require ever more powerful engineering tools, in both the design and implementation phases, to model and analyse them as efficiently as possible. In this environment of growing pressure for positive results and for the rationalization and improvement of internal resources, the computational tool Tecnomatix Plant Simulation 9.0 emerges as a path towards productive competitiveness. The proposed study is of great relevance to production management professionals, who seek results that minimize costs and maximize profits.
Abstract:
In this paper we use Markov chain Monte Carlo (MCMC) methods in order to estimate and compare GARCH models from a Bayesian perspective. We allow for possibly heavy tailed and asymmetric distributions in the error term. We use a general method proposed in the literature to introduce skewness into a continuous unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood, based on the MCMC output. From these approximations we compute Bayes factors and posterior model probabilities. (C) 2012 IMACS. Published by Elsevier B.V. All rights reserved.
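The estimation step can be sketched with a random-walk Metropolis sampler for a Gaussian GARCH(1,1) model. This is a simplification of the paper's setup: the skewed, heavy-tailed error distributions and the marginal-likelihood/Bayes-factor computations are omitted, and a flat prior over the stationarity region is assumed.

```python
import numpy as np

def garch_loglik(omega, alpha, beta, r):
    """Gaussian GARCH(1,1) log-likelihood; -inf outside the stationarity region.

    Simplification: symmetric normal errors (the paper allows skewed,
    heavy-tailed distributions).
    """
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf
    s2 = omega / (1 - alpha - beta)          # unconditional variance as sigma^2_1
    ll = 0.0
    for x in r:
        ll += -0.5 * (np.log(2 * np.pi) + np.log(s2) + x * x / s2)
        s2 = omega + alpha * x * x + beta * s2
    return ll

def metropolis_garch(r, n_iter=2000, seed=1):
    """Random-walk Metropolis over (omega, alpha, beta) with a flat prior."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.1 * np.var(r), 0.1, 0.8])
    ll = garch_loglik(*theta, r)
    chain, accepted = [], 0
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, [0.01, 0.03, 0.03])
        ll_prop = garch_loglik(*prop, r)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
            theta, ll, accepted = prop, ll_prop, accepted + 1
        chain.append(theta.copy())
    return np.array(chain), accepted / n_iter

# Demo on simulated GARCH(1,1) returns (omega=0.1, alpha=0.1, beta=0.8)
rng = np.random.default_rng(0)
s2, r = 0.1 / (1 - 0.1 - 0.8), []
for _ in range(500):
    x = rng.normal(0.0, np.sqrt(s2))
    r.append(x)
    s2 = 0.1 + 0.1 * x * x + 0.8 * s2
chain, acc_rate = metropolis_garch(np.array(r))
```

The MCMC output (`chain`) is the raw material from which the paper's marginal-likelihood approximations and posterior model probabilities would be computed.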
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in southeastern Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
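The two building blocks, a per-well time-series forecast and a spatial interpolation of the forecasts, can be sketched as follows. As a deliberate simplification, an AR(1) model stands in for the paper's time-series models and inverse-distance weighting stands in for kriging; both substitutions are assumptions for illustration only.

```python
import numpy as np

def ar1_forecast(levels, steps=1):
    """Fit h_t = c + phi * h_{t-1} by least squares and forecast ahead.

    A crude stand-in for the transfer-function time-series models typically
    used for water-table data.
    """
    y, x = levels[1:], levels[:-1]
    A = np.vstack([np.ones_like(x), x]).T
    c, phi = np.linalg.lstsq(A, y, rcond=None)[0]
    h = levels[-1]
    for _ in range(steps):
        h = c + phi * h
    return float(h)

def idw(xy_obs, values, xy_target, power=2.0):
    """Inverse-distance weighting: a simple stand-in for kriging."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d < 1e-12):                    # target coincides with a well
        return float(values[int(np.argmin(d))])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))
```

In the paper's framework, the forecasts at the monitored wells (with their uncertainty) would feed a geostatistical model to produce maps of predicted water levels.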
Abstract:
The installation of induction distributed generators should be preceded by a careful study in order to determine if the point of common coupling is suitable for transmission of the generated power, keeping acceptable power quality and system stability. In this sense, this paper presents a simple analytical formulation that allows a fast and comprehensive evaluation of the maximum power delivered by the induction generator, without losing voltage stability. Moreover, this formulation can be used to identify voltage stability issues that limit the generator output power. All the formulation is developed by using the equivalent circuit of squirrel-cage induction machine. Simulation results are used to validate the method, which enables the approach to be used as a guide to reduce the simulation efforts necessary to assess the maximum output power and voltage stability of induction generators. (C) 2011 Elsevier Ltd. All rights reserved.
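The power-versus-slip behaviour underlying this analysis can be illustrated numerically from the squirrel-cage equivalent circuit: sweep the (negative, generating) slip, compute the power injected into the grid, and locate its maximum. This is a numerical sweep, not the paper's closed-form analytical formulation, and the per-phase machine parameters below are hypothetical.

```python
import numpy as np

# Hypothetical per-phase parameters (volts, ohms) of a squirrel-cage machine
V, RS, XS, XM, RR, XR = 220.0, 0.1, 0.5, 15.0, 0.08, 0.4

def grid_power(s):
    """Three-phase power injected into the grid at slip s (s < 0 => generating)."""
    zr = RR / s + 1j * XR                       # rotor branch
    z = RS + 1j * XS + (1j * XM * zr) / (1j * XM + zr)
    i_s = V / z                                 # stator current phasor
    return -3.0 * (V * np.conj(i_s)).real       # negative input = generated power

# Sweep generating slips and locate the maximum deliverable power
slips = np.linspace(-0.4, -0.005, 400)
p = np.array([grid_power(s) for s in slips])
s_max, p_max = slips[int(np.argmax(p))], p.max()
```

The peak of this curve is the steady-state limit the paper's analytical formulation estimates directly; operating points beyond it are associated with the voltage-stability issues the abstract mentions.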
Abstract:
This paper proposes the incorporation of engineering knowledge through both (a) advanced state-of-the-art preference-handling decision-making tools integrated in multiobjective evolutionary algorithms and (b) engineering-knowledge-based variance-reduction simulation, as enhancing tools for the robust optimum design of structural frames taking uncertainties in the design variables into consideration. The simultaneous minimization of the constrained weight (adding structural weight and the average distribution of constraint violations) on the one hand and of the standard deviation of the distribution of constraint violations on the other is handled with multiobjective optimization-based evolutionary computation in two different multiobjective algorithms. The optimum design values of the deterministic structural problem in question are proposed as a reference point (the aspiration level) in reference-point-based evolutionary multiobjective algorithms (here g-dominance is used). Results including
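The g-dominance relation used for the reference-point handling can be sketched as follows. This follows the published definition of g-dominance as we understand it (for minimization): solutions whose objective vectors satisfy either all or none of the aspiration levels are preferred over those satisfying only some, with plain Pareto dominance breaking ties. It is an illustration, not code from the paper.

```python
def flag(f, g):
    """1 if f meets all aspiration levels of g, or none of them; else 0."""
    better = all(fi <= gi for fi, gi in zip(f, g))
    worse = all(fi >= gi for fi, gi in zip(f, g))
    return 1 if (better or worse) else 0

def g_dominates(a, b, g):
    """True if objective vector a g-dominates b (minimization assumed)."""
    fa, fb = flag(a, g), flag(b, g)
    if fa != fb:
        return fa > fb
    # equal flags: fall back to ordinary Pareto dominance
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

Note the characteristic behaviour: a point worse than the reference point in every objective still g-dominates a point that straddles the reference point, which is what focuses the search around the aspiration level.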
Abstract:
Human movement analysis (HMA) aims to measure the abilities of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics, aiming to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers and baropodometric insoles. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows. Chapter 1: Description of the physical principles underlying the functioning of an FP and of how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers; description of the two categories of FP, three- and six-component, of signal acquisition (hardware structure) and of signal calibration; finally, a brief description of the use of FPs in HMA, for balance or gait analysis. Chapter 2: Description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently they can only be estimated using indirect techniques such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis. Chapter 3: State of the art in FP calibration.
The selected literature is divided into sections, each of which describes: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the starting point for the new system presented in this thesis. Chapter 4: Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure for the correct execution of the calibration process. The characteristics of the algorithm were optimized by a simulation approach, whose results are presented here. In addition, the different versions of the device are described. Chapter 5: Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the centre of pressure of an applied force. The new system can estimate local and global calibration matrices; from these, the non-linearity of the FPs was quantified and locally compensated. Further, a non-linear calibration is proposed, which compensates the non-linear effect in FP functioning due to the bending of the upper plate. The experimental results are presented. Chapter 6: Influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach. Chapter 7: The conclusions of this thesis: the need for FP calibration and the consequent enhancement in kinetic data quality. Appendix: Calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
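The core of a global force-platform calibration is a least-squares fit of a matrix mapping the platform's six raw output channels to known reference loads. The sketch below shows that standard step only; the thesis's actual procedure (including local matrices estimated per region of the application point, and the non-linear compensation) is not reproduced here, and the synthetic data are illustrative.

```python
import numpy as np

def calibration_matrix(U, L):
    """Least-squares calibration matrix C minimizing ||C @ U - L||_F.

    U: raw 6-channel platform outputs (6 x N); L: reference loads (6 x N),
    with N >= 6 independent loading conditions.
    """
    return L @ np.linalg.pinv(U)

# Demo: recover a known matrix from noiseless synthetic loadings
rng = np.random.default_rng(0)
C_true = np.eye(6) + 0.05 * rng.normal(size=(6, 6))   # hypothetical miscalibration
U = rng.normal(size=(6, 20))                          # 20 reference loadings
L = C_true @ U
C_est = calibration_matrix(U, L)
```

Applying `C_est` to subsequent raw outputs yields the corrected force/moment estimates; local calibration would repeat the fit with loads grouped by point of application.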
Abstract:
The aim of this thesis is the elucidation of the structure-property relationships of molecular semiconductors for electronic devices. This involves the use of a comprehensive set of simulation techniques, ranging from quantum-mechanical to numerical stochastic methods, as well as the development of ad hoc computational tools. In more detail, the research activity concerned two main topics: the study of the electronic properties and structural behaviour of liquid crystalline (LC) materials based on functionalised oligo(p-phenyleneethynylene) (OPE), and the investigation of the effect of the electric field associated with OFET operation on pentacene thin-film stability. In this dissertation, a novel family of substituted OPE liquid crystals with applications in stimuli-responsive materials is presented. Simulations not only provide evidence for the characterization of the liquid crystalline phases of the different OPEs, but also elucidate the role of charge-transfer states in donor-acceptor LCs containing an endohedral metallofullerene moiety. Such systems can be regarded as promising candidates for organic photovoltaics. Furthermore, exciton dynamics simulations are performed as a way to obtain additional information about the degree of order in OPE columnar phases. Finally, ab initio and molecular mechanics simulations are used to investigate the influence of an applied electric field on pentacene reactivity and stability. The reaction path of pentacene thermal dimerization in the presence of an external electric field is investigated; the results can be related to the fatigue effect observed in OFETs, which show significant performance degradation even in the absence of external agents. In addition, the effect of the gate voltage on a pentacene monolayer is simulated, and the results are compared to X-ray diffraction measurements performed for the first time on operating OFETs.
Abstract:
In this thesis, we develop high precision tools for the simulation of slepton pair production processes at hadron colliders and apply them to phenomenological studies at the LHC. Our approach is based on the POWHEG method for the matching of next-to-leading order results in perturbation theory to parton showers. We calculate matrix elements for slepton pair production and for the production of a slepton pair in association with a jet perturbatively at next-to-leading order in supersymmetric quantum chromodynamics. Both processes are subsequently implemented in the POWHEG BOX, a publicly available software tool that contains general parts of the POWHEG matching scheme. We investigate phenomenological consequences of our calculations in several setups that respect experimental exclusion limits for supersymmetric particles and provide precise predictions for slepton signatures at the LHC. The inclusion of QCD emissions in the partonic matrix elements allows for an accurate description of hard jets. Interfacing our codes to the multi-purpose Monte-Carlo event generator PYTHIA, we simulate parton showers and slepton decays in fully exclusive events. Advanced kinematical variables and specific search strategies are examined as means for slepton discovery in experimentally challenging setups.
Abstract:
The spatio-temporal control of gene expression is fundamental to elucidate cell proliferation and deregulation phenomena in living systems. Novel approaches based on light-sensitive multiprotein complexes have recently been devised, showing promising perspectives for the noninvasive and reversible modulation of the DNA-transcriptional activity in vivo. This has lately been demonstrated in a striking way through the generation of the artificial protein construct light-oxygen-voltage (LOV)-tryptophan-activated protein (TAP), in which the LOV-2-Jα photoswitch of phototropin1 from Avena sativa (AsLOV2-Jα) has been ligated to the tryptophan-repressor (TrpR) protein from Escherichia coli. Although tremendous progress has been achieved on the generation of such protein constructs, a detailed understanding of their functioning as opto-genetical tools is still in its infancy. Here, we elucidate the early stages of the light-induced regulatory mechanism of LOV-TAP at the molecular level, using the noninvasive molecular dynamics simulation technique. More specifically, we find that Cys450-FMN-adduct formation in the AsLOV2-Jα-binding pocket after photoexcitation induces the cleavage of the peripheral Jα-helix from the LOV core, causing a change of its polarity and electrostatic attraction of the photoswitch onto the DNA surface. This goes along with the flexibilization through unfolding of a hairpin-like helix-loop-helix region interlinking the AsLOV2-Jα- and TrpR-domains, ultimately enabling the condensation of LOV-TAP onto the DNA surface. By contrast, in the dark state the AsLOV2-Jα photoswitch remains inactive and exerts a repulsive electrostatic force on the DNA surface. This leads to a distortion of the hairpin region, which finally relieves its tension by causing the disruption of LOV-TAP from the DNA.
Abstract:
PURPOSE: To compare objective efficiency indices of fellows and experts for an interventional radiology renal artery stenosis skill set with the use of a high-fidelity simulator. MATERIALS AND METHODS: The Mentice VIST simulator was used for three renal artery stenosis simulations of varying difficulty, which were used to grade performance. Fellows' indices at three intervals throughout 1 year were compared to expert baseline performance. Seventy-four simulated procedures were performed, 63 of which were captured as audiovisual recordings. Three levels of fellow experience were analyzed: 1, 6, and 12 months of dedicated interventional radiology fellowship. The recordings were compiled on a computer workstation and analyzed. Distinct measurable events in the procedures were identified with task analysis, and data regarding efficiency were extracted. Total scores were calculated as the product of procedure time, fluoroscopy time, number of tools used, and contrast agent volume. The lowest scores, reflecting efficient use of tools, radiation, and time, were considered to indicate proficiency. Subjective analysis of participants' procedural errors was not included in this analysis. RESULTS: Fellows' mean scores decreased from 1 month to 12 months (42,960 at 1 month, 18,726 at 6 months, and 9,636 at 12 months). The experts' mean score was 4,660. In addition, the variance in scores diminished with increasing experience (from a range of 5,940-120,156 at 1 month to 2,436-85,272 at 6 months and 2,160-32,400 at 12 months). Expert scores ranged from 1,450 to 10,800. CONCLUSIONS: Objective efficiency indices for simulated procedures yield scores that correspond directly to the level of clinical experience.
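The composite score is a straightforward product of the four efficiency measures. A minimal sketch (the units, minutes/millilitres, are assumptions; the abstract does not state them):

```python
def efficiency_score(procedure_min, fluoro_min, n_tools, contrast_ml):
    """Composite efficiency index: product of the four measures (lower is better)."""
    return procedure_min * fluoro_min * n_tools * contrast_ml
```

Because the measures are multiplied rather than summed, inefficiency in any single dimension inflates the whole index, which is consistent with the wide score ranges reported for early fellows.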