875 results for Sheet-metal work - Simulation methods
Abstract:
An initial review of the subject emphasises the need for improved fuel efficiency in vehicles and the possible role of aluminium in reducing weight. The problems of formability generally in manufacture, and of aluminium in particular, are discussed in the light of published data. A range of thirteen commercially available sheet aluminium alloys has been compared with respect to mechanical properties as these affect forming processes and behaviour in service. Four alloys were selected for detailed comparison. The formability and strength of these were investigated in terms of the underlying mechanisms of deformation as well as the microstructural characteristics of the alloys, including texture, particle dispersion, grain size and composition. In overall terms, good combinations of strength and ductility are achievable with alloys of the 2xxx and 6xxx series. Some specific alloys are notably better than others. The strength of formed components is affected by paint baking in the final stages of manufacture. Generally, alloys of the 6xxx family are strengthened while 2xxx and 5xxx become weaker. Some anomalous behaviour exists, however. Work hardening of these alloys appears to show rather abrupt decreases over certain strain ranges, which is probably responsible for the relatively low strains at which both diffuse and local necking occur. Using data obtained from extended-range tensile tests, the strain distribution in more complex shapes can be successfully modelled using finite element methods. Sheet failure during forming occurs by abrupt shear fracture in many instances. This condition is favoured by states of biaxial tension, surface defects in the form of fine scratches and certain types of crystallographic texture. The measured limit strains of the materials can be understood on the basis of attainment of a critical shear stress for fracture.
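As a quick illustration of the necking behaviour discussed above, the following Python sketch evaluates the classical Considère and Hill criteria for a power-law (Hollomon) hardening fit, sigma = K·eps^n. This is a textbook simplification, not the hardening description used in the thesis, whose data show abrupt drops in work hardening that a single (K, n) fit cannot capture.

```python
# Illustrative only: necking strains for a power-law (Hollomon) fit
# sigma = K * eps**n; the constant K cancels out of both criteria.

def necking_strains(n):
    """Considere (diffuse) and Hill (localized) necking strains."""
    # d(sigma)/d(eps) = sigma     -> eps = n   (diffuse necking)
    # d(sigma)/d(eps) = sigma / 2 -> eps = 2n  (localized necking)
    return n, 2.0 * n

for n in (0.10, 0.20, 0.30):   # assumed, typical hardening exponents for Al sheet
    diffuse, localized = necking_strains(n)
    print(f"n = {n:.2f}: diffuse necking at eps ~ {diffuse:.2f}, "
          f"localized at eps ~ {localized:.2f}")
```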
Abstract:
This study analyzes the validity of different Q-factor models in the BER estimation in RZ-DPSK transmission at 40 Gb/s channel rate. The impact of the duty cycle of the carrier pulses on the accuracy of the BER estimates through the different models has also been studied.
Abstract:
Applying direct error counting, we compare the accuracy and evaluate the validity of different available numerical approaches to the estimation of the bit-error rate (BER) in 40-Gb/s return-to-zero differential phase-shift-keying transmission. As a particular example, we consider a system with in-line semiconductor optical amplifiers. We demonstrate that none of the existing models has an absolute superiority over the others. We also reveal the impact of the duty cycle on the accuracy of the BER estimates through the differently introduced Q-factors.
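For context, the baseline Gaussian mapping from a Q-factor to a BER estimate is sketched below in Python. It shows only the generic relation BER ≈ 0.5·erfc(Q/√2); the paper's differently introduced Q-factor models and the direct error counting used as a reference are not reproduced here.

```python
# A minimal sketch of the standard Gaussian Q-factor-to-BER estimate often used
# as a baseline in BER studies.  The example Q values are assumptions.
import math

def ber_from_q(q):
    """Gaussian approximation of bit-error rate from a linear Q-factor."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

for q_db in (12.0, 15.0, 17.0):        # Q in dB, using the 20*log10(Q) convention
    q_lin = 10.0 ** (q_db / 20.0)
    print(f"Q = {q_db:4.1f} dB -> estimated BER = {ber_from_q(q_lin):.2e}")
```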
Abstract:
Certain theoretical and methodological problems of designing real-time dynamical expert systems, which belong to the class of the most complex integrated expert systems, are discussed. Primary attention is given to the problems of designing subsystems for modeling the external environment in the case where the environment is represented by complex engineering systems. A specific approach to designing simulation models for complex engineering systems is proposed and examples of the application of this approach based on the G2 (Gensym Corp.) tool system are described.
Abstract:
The finding that Pareto distributions are adequate to model Internet packet interarrival times has motivated the proposal of methods to evaluate steady-state performance measures of Pareto/D/1/k queues. Some limited analytical derivations for these queue models have been proposed in the literature, but their solutions often pose a great mathematical challenge. To overcome such limitations, simulation tools that can deal with general queueing systems must be developed. Despite certain limitations, simulation algorithms provide a mechanism to obtain insight and good numerical approximations to the parameters of queues. In this work, we give an overview of some of these methods and compare them with our simulation approach, which is suited to solving queues with Generalized-Pareto interarrival time distributions. The paper discusses the properties and use of the Pareto distribution. We propose a real-time trace simulation model for estimating the steady-state probability (showing the tail-raising effect), loss probability and delay of the Pareto/D/1/k queue, and make a comparison with the M/D/1/k queue. The background on Internet traffic helps to carry out the evaluation correctly. This model can be used to study long-tailed queueing systems. We close the paper with some general comments and offer thoughts about future work.
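The following Python sketch shows, under assumed parameter values, how a Pareto/D/1/k queue of the kind discussed above can be simulated by stepping from arrival to arrival: generalized-Pareto interarrival times, deterministic service, and a finite capacity k. It is an illustrative toy, not the trace-driven simulation model proposed in the paper.

```python
# Toy arrival-by-arrival simulation of a Pareto/D/1/k queue.
# All parameter values (shape, scale, k, service time) are assumptions.
import random
from collections import deque

def gpd_sample(shape, scale, rng):
    """Inverse-transform sample from a Generalized Pareto distribution (shape > 0)."""
    u = rng.random()
    return scale / shape * ((1.0 - u) ** (-shape) - 1.0)

def simulate(n_arrivals=200_000, shape=0.3, scale=1.0, service=1.0, k=10, seed=1):
    """Single-server FIFO queue with capacity k (queue plus the job in service)."""
    rng = random.Random(seed)
    in_system = deque()                          # departure times of admitted customers
    t, lost, served, total_delay = 0.0, 0, 0, 0.0
    for _ in range(n_arrivals):
        t += gpd_sample(shape, scale, rng)       # next arrival epoch
        while in_system and in_system[0] <= t:   # purge customers already departed
            in_system.popleft()
        if len(in_system) >= k:                  # system full -> arrival is lost
            lost += 1
            continue
        start = max(t, in_system[-1]) if in_system else t
        departure = start + service
        in_system.append(departure)
        total_delay += departure - t             # sojourn time (wait + service)
        served += 1
    return lost / n_arrivals, total_delay / served

loss, sojourn = simulate()
print(f"loss probability ~ {loss:.4f}, mean sojourn time ~ {sojourn:.3f}")
```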
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment have been used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation is content independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and the different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism for the user's data (e.g. video traffic) while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum.
The associated offloading mechanism can properly control the number of users within the coverage areas of the macro-cell base station and each of the WiFi access points involved. The performance of the non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of the standard operator-controlled WiFi hotspots.
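As an illustration of the two impairment dimensions that Pause Intensity combines, the Python sketch below replays a hypothetical per-second download trace through a playout buffer and reports pause frequency and total pause duration. The buffer thresholds, the trace and the final severity score are invented for demonstration; the thesis's actual Pause Intensity formula is not reproduced here.

```python
def playback_trace(downloaded_per_tick, startup_buffer=2.0):
    """Replay 1-second ticks of a download trace and return pause lengths (in ticks)."""
    buffered, playing = 0.0, False
    pauses, current_pause = [], 0
    for d in downloaded_per_tick:
        buffered += d
        if not playing:
            current_pause += 1
            if buffered >= startup_buffer:       # enough buffer -> (re)start playback
                playing = True
                pauses.append(current_pause)
                current_pause = 0
        elif buffered >= 1.0:
            buffered -= 1.0                      # consume one second of video
        else:
            playing, current_pause = False, 1    # buffer underrun -> stall begins
    if current_pause:
        pauses.append(current_pause)
    return pauses

# hypothetical seconds of video downloaded in each 1-second tick
trace = [1.5, 1.5, 0.2, 0.1, 0.1, 0.1, 0.1, 2.0, 2.0, 1.0, 1.0, 1.0]
pauses = playback_trace(trace)
duration, count, horizon = sum(pauses), len(pauses), len(trace)
print(f"{count} pauses, {duration}s paused out of {horizon}s")
# placeholder severity score (duration ratio x frequency), NOT the thesis's definition
print(f"illustrative severity: {(duration / horizon) * (count / horizon):.3f}")
```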
Abstract:
In this work we give sufficient conditions on the k-th approximations of the polynomial roots of f(x) under which the Maehly-Aberth-Ehrlich, Werner-Börsch-Supan, Tanabe and Improved Börsch-Supan iteration methods fail at the next step. For these methods all non-attractive sets are found. This is a subsequent improvement of previously developed techniques and known facts. The users of these methods can use the results presented here for software implementation in Distributed Applications and Simulation Environments. Numerical examples with graphics are shown.
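For readers unfamiliar with this family of simultaneous iterations, a minimal Python sketch of the Aberth-Ehrlich method is given below. The starting configuration and the test polynomial are arbitrary choices; the sufficient conditions and non-attractive sets derived in the paper are not implemented.

```python
# Minimal Aberth-Ehrlich simultaneous root-finding sketch.
import cmath

def poly_and_deriv(coeffs, z):
    """Evaluate p(z) and p'(z) by Horner's scheme (coeffs: highest degree first)."""
    p, dp = 0j, 0j
    for c in coeffs:
        dp = dp * z + p
        p = p * z + c
    return p, dp

def aberth(coeffs, iters=100, tol=1e-12):
    """Approximate all roots simultaneously with the Aberth-Ehrlich iteration."""
    n = len(coeffs) - 1
    # arbitrary, deliberately non-symmetric starting points (an assumed choice)
    z = [0.5 * cmath.exp(2j * cmath.pi * (k + 0.25) / n) + 0.3 for k in range(n)]
    for _ in range(iters):
        new_z = []
        for i, zi in enumerate(z):
            p, dp = poly_and_deriv(coeffs, zi)
            ratio = p / dp
            offset = sum(1.0 / (zi - zj) for j, zj in enumerate(z) if j != i)
            new_z.append(zi - ratio / (1.0 - ratio * offset))
        if max(abs(a - b) for a, b in zip(new_z, z)) < tol:
            return new_z
        z = new_z
    return z

# p(x) = x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3
print(sorted(round(r.real, 6) for r in aberth([1, -6, 11, -6])))
```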
Abstract:
Clusters are aggregations of atoms or molecules, generally intermediate in size between individual atoms and aggregates that are large enough to be called bulk matter. Clusters can also be called nanoparticles, because their size is on the order of nanometers or tens of nanometers. A new field has begun to take shape, called nanostructured materials, which takes advantage of these atom clusters. The ultra-small size of the building blocks leads to dramatically different properties, and it is anticipated that such atomically engineered materials will be able to be tailored to perform as no previous material could.
The idea of the ionized cluster beam (ICB) thin film deposition technique was first proposed by Takagi in 1972. It was based upon using a supersonic jet source to produce, ionize and accelerate beams of atomic clusters onto substrates in a vacuum environment. Conditions for the formation of cluster beams suitable for thin film deposition have only recently been established, following twenty years of effort. Zinc clusters over 1,000 atoms in average size have been synthesized both in our lab and in that of Gspann. More recently, other methods of synthesizing clusters and nanoparticles, using different types of cluster sources, have come under development.
In this work, we studied different aspects of nanoparticle beams. The work includes refinement of a model of the cluster formation mechanism, development of a new real-time, in situ cluster size measurement method, and study of the use of ICB in the fabrication of semiconductor devices.
The formation process of the vaporized-metal cluster beam was simulated and investigated using classical nucleation theory and one-dimensional gas flow equations. Zinc cluster sizes predicted at the nozzle exit are in good quantitative agreement with experimental results in our laboratory.
A novel in situ, real-time mass, energy and velocity measurement apparatus has been designed, built and tested. This small time-of-flight mass spectrometer is suitable for use in our cluster deposition systems and does not suffer from problems associated with other methods of cluster size measurement, such as the requirement for specialized ionizing lasers, inductive electrical or electromagnetic coupling, dependence on the assumption of homogeneous nucleation, limits on the measurable size range, and lack of real-time capability. Measured ion energies using the electrostatic energy analyzer are in good accordance with values obtained from computer simulation. The velocity v is measured by pulsing the cluster beam and measuring the time of delay between the pulse and the analyzer output current. The mass of a particle is calculated from m = 2E/v². The error in the measured value of the background gas mass is on the order of 28% of the mass of one N₂ molecule, which is negligible for the measurement of large clusters. This resolution in cluster size measurement is very acceptable for our purposes.
Selective area deposition onto conducting patterns overlying insulating substrates was demonstrated using intense, fully-ionized cluster beams. Parameters influencing the selectivity are ion energy, repelling voltage, the ratio of the conductor to insulator dimension, and substrate thickness.
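A small Python sketch of the time-of-flight relation quoted in the abstract (v = L/t, m = 2E/v²) follows; the flight length, ion energy and delay values are invented for illustration and are not the thesis's data.

```python
# Illustrative time-of-flight mass estimate: v = L / t, then m = 2E / v**2.
# All numerical inputs below are assumptions, not measured values.
AMU = 1.660539e-27       # kg per atomic mass unit
E_CHARGE = 1.602177e-19  # J per eV

def cluster_mass_amu(energy_ev, flight_length_m, delay_s, charge_state=1):
    """Mass (in amu) of a cluster ion from its kinetic energy and drift velocity."""
    v = flight_length_m / delay_s                 # velocity from pulsed-beam delay
    e_joule = energy_ev * charge_state * E_CHARGE
    return 2.0 * e_joule / v**2 / AMU

# e.g. a singly charged cluster at 150 eV drifting 0.5 m in 750 microseconds
m = cluster_mass_amu(energy_ev=150.0, flight_length_m=0.5, delay_s=750e-6)
print(f"estimated cluster mass ~ {m:.0f} amu (~ {m / 65.38:.0f} Zn atoms)")
```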
Abstract:
Elemental analysis can become an important piece of evidence to assist in the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjets and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies show smaller variation of elemental compositions within a single source (i.e., sheet, pen or cartridge) than the observed variation between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profiles of the inks and paper were observed between samples originating from different sources (discrimination of 87–100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis, using LA-ICP-MS and LIBS, for the examination of documents and provide additional discrimination to the techniques currently used in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS and LIBS) was conducted for glass analyses using interlaboratory studies. The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating different sources, as well as the use of different match criteria (confidence interval (±6s, ±5s, ±4s, ±3s, ±2s), modified confidence interval, t-test (sequential univariate, p=0.05 and p=0.01), t-test with Bonferroni correction (for multivariate comparisons), range overlap, and Hotelling's T² test). Error rates (Type 1 and Type 2) are reported for each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability between analytical measurements, and the number of elements that were measured. The study provides recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS as well as the best-performing match criteria for both analytical techniques, which can be applied now by forensic glass examiners.
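The sketch below illustrates one of the interval-based match criteria listed above: a questioned item "matches" a known source only if, for every element, its value falls within the known item's mean ± 4s. The element menu and concentrations are fabricated for demonstration and do not come from the interlaboratory data.

```python
# Illustrative mean +/- 4s interval match criterion for elemental comparisons.
# The element list and values are invented, not data from the study.
import statistics

def interval_match(known_replicates, questioned_mean, width=4.0):
    """Return True if every element passes the mean +/- width*s criterion."""
    for element, values in known_replicates.items():
        mean = statistics.mean(values)
        s = statistics.stdev(values)
        if not (mean - width * s <= questioned_mean[element] <= mean + width * s):
            return False
    return True

known = {                         # three replicate measurements per element (ppm)
    "Sr": [52.1, 51.7, 52.4],
    "Zr": [38.0, 37.6, 38.3],
    "Ba": [12.2, 12.5, 12.1],
}
questioned = {"Sr": 52.6, "Zr": 37.9, "Ba": 12.9}
print("match" if interval_match(known, questioned) else "exclusion")
```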
Abstract:
Damage during extreme wind events highlights the weaknesses of mechanical fasteners at the roof-to-wall connections in residential timber frame buildings. The allowable capacity of the metal fasteners is based on results of unidirectional component testing that does not simulate realistic tri-axial aerodynamic loading effects. The first objective of this research was to simulate hurricane effects and study hurricane-structure interaction at full scale, facilitating better understanding of the combined impacts of wind, rain, and debris on inter-component connections at spatial and temporal scales. The second objective was to evaluate the performance of a non-intrusive roof-to-wall connection system using fiber reinforced polymer (FRP) materials and compare its load capacity to the capacity of an existing metal fastener under simulated aerodynamic loads. The Wall of Wind (WoW) testing, performed using FRP connections on a one-story gable-roof timber structure instrumented with a variety of sensors, was used to create a database on aerodynamic and aero-hydrodynamic loading on roof-to-wall connections tested under several parameters: angle of attack, wind-turbulence content, internal pressure conditions, and with and without effects of rain. Based on the aerodynamic loading results obtained from the WoW tests, sets of three force components (tri-axial mean loads) were combined into a series of resultant mean forces, which were used to test the FRP and metal connections in the structures laboratory up to failure. A new component testing system and test protocol were developed for testing fasteners under simulated tri-axial loading as opposed to uni-axial loading. The tri-axial and uni-axial test results were compared for hurricane clips, and a comparison was also made between the tri-axial load capacities of the FRP and metal connections. The research findings demonstrate that the FRP connection is a viable option for use in timber roof-to-wall connection systems. Findings also confirm that current testing methods for mechanical fasteners tend to overestimate the actual load capacities of a connector. Additionally, the research contributes to the development of a new testing protocol for fasteners using tri-axial simultaneous loads based on the aerodynamic database obtained from the WoW testing.
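As a minimal illustration of combining three mean force components into a resultant mean load, the short Python sketch below computes the magnitude and direction cosines; the component values are placeholders, not WoW measurements.

```python
# Combine three mean force components into a resultant load (values are assumed).
import math

def resultant(fx, fy, fz):
    """Magnitude and direction cosines of a tri-axial mean load."""
    r = math.sqrt(fx**2 + fy**2 + fz**2)
    return r, (fx / r, fy / r, fz / r)

magnitude, cosines = resultant(fx=0.35, fy=0.20, fz=1.10)   # kN, placeholder values
print(f"resultant ~ {magnitude:.2f} kN, "
      f"direction cosines {tuple(round(c, 2) for c in cosines)}")
```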
Abstract:
The structure, energetics and reactions of ions in the gas phase can be revealed by mass spectrometry techniques coupled with ion activation methods. Ions can gain enough energy for dissociation by absorbing IR photons introduced into the mass spectrometer by an IR laser. Collisions with a neutral molecule can also increase the internal energy of ions and provide the dissociation threshold energy. Infrared multiple photon dissociation (IRMPD) or sustained off-resonance irradiation collision-induced dissociation (SORI-CID) methods are combined with Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometers, where ions can be held at low pressures for a long time. The outcome of ion activation techniques, especially when compared with the results of computational methods, is of great importance since it provides useful information about the structure, thermochemistry and reactivity of the ions of interest. In this work, the structure, energetics and reactivity of metal cation complexes with dipeptides are investigated. The effect of metal cation size and charge, as well as microsolvation, on the structure of these complexes has been studied. Structures of bare and hydrated Na and Ca complexes with the isomeric dipeptides AlaGly and GlyAla are characterized by means of IRMPD spectroscopy and computational methods. In the second step, unimolecular dissociation reactions of singly charged and doubly charged multimetallic complexes of alkaline earth metal cations with GlyGly are examined by the CID method. Structural features of these complexes are also revealed by comparing their IRMPD spectra with calculated IR spectra of possible structures. Finally, the unimolecular dissociation reactions of Mn complexes are studied. IRMPD spectroscopy, along with computational methods, is also employed for structural elucidation of the Mn complexes. In addition, the ion-molecule reactions of Mn complexes with CO and water are explored at the low pressures obtained in the ICR cell.
Abstract:
The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of the electrode and of electronic transport within it.
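A short Python sketch of a second-order fabric tensor of the Satake/Oda type, N_ij = (1/Nc) Σ_k n_i^k n_j^k, built from inter-particle contact normals, is given below. Whether the thesis uses exactly this definition is an assumption on our part, and the synthetic contact set is for illustration only.

```python
# Second-order fabric tensor from contact normals (synthetic, illustrative data).
import numpy as np

def fabric_tensor(contact_normals):
    """contact_normals: (Nc, 3) array of contact direction vectors, one per contact."""
    n = np.asarray(contact_normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)      # normalise to unit length
    return np.einsum("ki,kj->ij", n, n) / len(n)       # average of outer products

# a loosely anisotropic synthetic contact set, biased toward the z axis
rng = np.random.default_rng(0)
normals = rng.normal(size=(500, 3)) * np.array([1.0, 1.0, 2.0])
F = fabric_tensor(normals)
print(np.round(F, 3))            # trace is 1; a larger F_zz indicates z-alignment
print("anisotropy (deviatoric norm):", round(np.linalg.norm(F - np.eye(3) / 3), 3))
```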
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision in the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide a computational savings.
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
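To make the length-scale idea concrete, the toy one-dimensional Python sketch below prescribes damage as a function of the level-set field and ramps it from 0 to 1 over lc, which bounds the damage gradient. The linear ramp is just one simple admissible profile, not necessarily the d(φ) used in this work.

```python
# Toy 1D illustration of a Thick Level-Set-style damage profile d(phi).
# The linear ramp over lc is an assumed, simple choice of profile.
import numpy as np

def damage(phi, lc):
    """Damage: 0 ahead of the front (phi <= 0), 1 beyond depth lc behind it."""
    return np.clip(phi / lc, 0.0, 1.0)

x = np.linspace(0.0, 10.0, 11)            # material points along a 1D bar
front = 4.0                               # assumed position of the damage front
phi = front - x                           # signed distance to the front
print(np.round(damage(phi, lc=2.0), 2))   # fully damaged behind, intact ahead
```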
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
Abstract:
Carbon fibre reinforced polymers (CFRP) are increasingly being used in the aerospace, automotive and defence industries due to their high specific stiffness and good corrosion resistance. In a modern aircraft, 50-60% of the structure is made up of CFRP material, while the remainder is mostly a combination of metallic alloys (typically aluminium or titanium alloys). Mechanical fastening (bolting or riveting) of CFRP and metallic components has thus created a pressing requirement for drilling several thousand holes per aircraft. Drilling the stacks in a single shot not only saves time but also ensures proper alignment when fasteners are inserted, achieving tighter geometric tolerances. However, this requirement poses formidable manufacturing challenges due to the fundamental differences in the material properties of CFRP and metals: a drill bit entering the stack encounters brittle, abrasive CFRP material as well as the plastic behaviour of the metallic alloy, making the drilling process highly non-linear.
Over the past few years, substantial efforts have been made in this direction, and the majority of the research has tried to establish how the process parameters (feed, depth of cut, cutting speed), tooling (geometry, material and coating) and the wear of the cutting tool affect hole quality. Similarly, numerous investigations have been conducted to determine the effects of non-traditional drilling methods (orbital, helical and vibration-assisted drilling), cutting zone temperatures and the efficiency of chip extraction on hole quality and the rate of tool wear during single-shot drilling of CFRP/alloy stacks.
In a timely effort, this paper reviews the manufacturing challenges and barriers faced when drilling CFRP/alloy stacks and summarises the various factors influencing the drilling process, while detailing the advances made in this fertile research area of single-shot drilling of stack materials. A survey of the key challenges associated with avoiding workpiece damage and the effect these challenges have on tool design and process optimisation is presented. An in-depth critique of suitable hole-making methods and their aptness for commercialisation follows. The paper concludes by summarising the future work required to achieve repeatable, high-quality, single-shot drilled holes in CFRP/alloy stacks.