989 results for Calculation process
Abstract:
A particle accelerator is any device that uses electromagnetic fields to impart energy to charged particles (typically electrons or ionized atoms), accelerating them to the level required for its purpose. The applications of particle accelerators are countless, ranging from the common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the finest details of matter. Engineering applications include ion implantation devices used to obtain better semiconductors and materials with remarkable properties. Materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. A particle accelerator requires many devices for its correct operation. The most important are the particle sources, the guiding, focusing and correcting magnets, the radiofrequency accelerating cavities, the fast deflection devices, the beam diagnostic mechanisms and the particle detectors. Historically, most fast particle deflection devices have been built using copper coils and ferrite cores, which could produce a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, giving response times in the microsecond range. Various beam stability considerations, together with the new range of energies and sizes of present-day accelerators and their rings, require new devices featuring improved wakefield behaviour and faster response (in the nanosecond range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of a particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses. The diversion of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. The following concepts, among others, are analyzed: scattering parameters, resonant higher-order modes, wakefields, etc. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of the electrodes are also detailed, presenting some interesting contributions on these concepts. The electromagnetic and vacuum tests are then analyzed; these tests are required to ensure that the manufactured devices fulfil the specifications. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). The solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with a good flat top.
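As a point of reference for the deflection mechanism described above (not a formula quoted from the thesis), the kick of a strip-line kicker can be estimated from the integrated Lorentz force. The expression below assumes an ultrarelativistic beam and a matched TEM strip-line in which the pulse travels against the beam, so that the electric and magnetic contributions add; V denotes the differential voltage between the two opposite-phase electrodes, d their separation, L the electrode length and p the beam momentum (any geometric coverage factor of the fields is neglected):

\theta \;\approx\; \frac{\Delta p_\perp}{p}
\;=\; \frac{q}{\beta c\, p}\int_0^{L}\!\left(E_\perp + \beta c\,B_\perp\right)\mathrm{d}z
\;\approx\; \frac{2\,q\,V\,L}{d\,\beta c\, p}.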
Abstract:
This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. Firstly, the paper presents a sequential curve fitting technique that reduces the effect of noise on the calculation process of the MSEDI more effectively than the two commonly used curve fitting techniques, namely polynomial and Fourier series fitting. Secondly, a probability based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection is enhanced as the number of modes and samples used with the GDLI is increased.
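The paper does not reproduce the index formula in this abstract, but one common Stubbs-type form of the modal strain energy damage index, computed from mode shape curvatures and then normalised (normalisation being the usual precursor to a probability-based index such as the GDLI), looks roughly as follows. The discretisation by central differences is an assumption of this sketch, not a statement of the paper's exact procedure.

import numpy as np

def msedi(phi_u, phi_d, dx):
    """Stubbs-type modal strain energy damage index for one mode.

    phi_u, phi_d : mode shape ordinates at equally spaced points,
                   undamaged and damaged states.
    dx           : spacing between the points.
    Returns a normalised index per point; large positive values
    indicate likely damage locations.
    """
    # Curvatures by central differences (proportional to bending strain energy density).
    ku = np.gradient(np.gradient(phi_u, dx), dx)
    kd = np.gradient(np.gradient(phi_d, dx), dx)

    fu = ku**2                         # undamaged strain-energy density
    fd = kd**2                         # damaged strain-energy density
    num = (fd + fd.sum()) * fu.sum()
    den = (fu + fu.sum()) * fd.sum()
    beta = num / den                   # raw damage index

    # Normalised (z-score) index, as commonly used before thresholding.
    return (beta - beta.mean()) / beta.std()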
Abstract:
Damage assessment (damage detection, localization and quantification) in structures and appropriate retrofitting will enable the safe and efficient function of the structures. In this context, many Vibration Based Damage Identification Techniques (VBDIT) have emerged with potential for accurate damage assessment. VBDITs have attracted significant research interest in recent years, mainly due to their non-destructive nature and ability to assess inaccessible and invisible damage locations. Damage Index (DI) methods are also vibration based, but they do not rely on a structural model. DI methods are fast and inexpensive compared to model-based methods and have the ability to automate the damage detection process. A DI method analyses the change in the vibration response of the structure between two states so that damage can be identified. Extensive research has been carried out to apply DI methods to assess damage in steel structures. Comparatively, there has been very little research interest in the use of DI methods to assess damage in Reinforced Concrete (RC) structures, due to the complexity of simulating the predominant damage type, the flexural crack. Flexural cracks in RC beams are distributed non-linearly and propagate in all directions. Secondary cracks extend more rapidly along the longitudinal and transverse directions of an RC structure than existing cracks propagate in the depth direction, owing to the stress distribution caused by the tensile reinforcement. Simplified damage simulation techniques (such as reductions in modulus or section depth, or the use of rotational spring elements), which have been used extensively in research on steel structures, cannot be applied to simulate flexural cracks in RC elements. This highlights a significant gap in knowledge, and as a consequence VBDITs have not been successfully applied to damage assessment in RC structures. This research addresses the above gap in knowledge by developing and applying a modal strain energy based DI method to assess damage in RC flexural members. Firstly, this research evaluated different damage simulation techniques and recommended an appropriate technique to simulate the post-cracking behaviour of RC structures. The ABAQUS finite element package was used throughout the study with properly validated material models. The damaged plasticity model was recommended as the method which can correctly simulate the post-cracking behaviour of RC structures and was used in the rest of this study. Four different forms of Modal Strain Energy based Damage Indices (MSEDIs) were proposed to improve the damage assessment capability by minimising the number and intensity of false alarms. The developed MSEDIs were then used to automate the damage detection process by incorporating programmable algorithms. The developed algorithms have the ability to identify common issues associated with the vibration properties, such as mode shifting and phase change. To minimise the effect of noise on the DI calculation process, this research proposed a sequential curve fitting technique. Finally, a statistics-based damage assessment scheme was proposed to enhance the reliability of the damage assessment results. The proposed techniques were applied to locate damage in RC beams and in a slab-on-girder bridge model to demonstrate their accuracy and efficiency. The outcomes of this research make a significant contribution to the technical knowledge of VBDIT and enhance the accuracy of damage assessment in RC structures. The application of the research findings to RC flexural members will enable their safe and efficient performance.
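The sequential curve fitting technique is described above only at a high level. Purely as an illustration of how windowed low-order polynomial fitting suppresses measurement noise before curvatures are taken, a minimal sketch follows; the window length and polynomial degree are arbitrary choices of this example, not values from the thesis.

import numpy as np

def smooth_mode_shape(x, phi_noisy, window=11, degree=3):
    """Sliding-window (sequential) polynomial fit along a mode shape.

    A low-order polynomial is fitted to each window of points and
    evaluated at the window centre, suppressing measurement noise
    before curvatures are computed for the damage index.  The first
    and last few points are left unsmoothed in this sketch.
    """
    half = window // 2
    phi_smooth = phi_noisy.copy()
    for i in range(half, len(x) - half):
        xs = x[i - half:i + half + 1]
        ys = phi_noisy[i - half:i + half + 1]
        coeff = np.polyfit(xs, ys, degree)
        phi_smooth[i] = np.polyval(coeff, x[i])
    return phi_smooth

# Example: 1 % random noise added to a half-sine mode shape.
x = np.linspace(0.0, 1.0, 101)
phi = np.sin(np.pi * x)
noisy = phi * (1.0 + 0.01 * np.random.randn(x.size))
print(np.abs(smooth_mode_shape(x, noisy) - phi).max())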
Abstract:
The objective of this paper is to suggest a method that accounts for the impact of volatility smile dynamics when performing scenario analysis for a portfolio consisting of vanilla options. As the volatility smile is documented to change at least with the level of implied at-the-money volatility, a suitable model is here included in the calculation process of the simulated market scenarios. By constructing simple portfolios of index options and comparing the ex ante risk exposure, measured using different pricing methods, to realized market values ex post, the improvement gained by incorporating the model is monitored. The examples analyzed in the study generate results that statistically support that the most accurate scenarios are those calculated using the model that accounts for the dynamics of the smile. Thus, we show that the differences emanating from the volatility smile are apparent and should be accounted for, and that the methodology presented herein is one suitable alternative for doing so.
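The paper's own smile model is not reproduced in the abstract; the sketch below only illustrates the mechanism it exploits, namely that the smile rides on the level of at-the-money volatility when a scenario is revalued. The linear-skew smile, the Black-Scholes repricing and all parameter names are assumptions of this example.

import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def scenario_value(S0, K, T, r, atm_vol0, skew, dS, d_atm_vol):
    """Revalue a call in a simulated scenario.

    The smile is modelled (illustratively) as a linear skew around the
    at-the-money level; when the ATM volatility moves in the scenario,
    the whole smile moves with it, which is the dynamic behaviour the
    paper argues should be included in the calculation process.
    """
    S = S0 + dS
    atm_vol = atm_vol0 + d_atm_vol
    moneyness = np.log(K / S)
    sigma_K = atm_vol + skew * moneyness   # smile evaluated at strike K
    return bs_call(S, K, T, r, sigma_K)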
Abstract:
This paper provides a numerical approach to realizing the limit equilibrium method for 3D slope stability analysis proposed in the theoretical part of the previous paper. Some programming techniques are presented to ensure the practical operability of the method. Three examples are introduced to illustrate its use. The results are given in detail, such as the local factor of safety and the local potential sliding direction of a slope. As the method is an extension of the 2D Janbu generalized procedure of slices (GPS), the results obtained by GPS for the longitudinal sections of the slope are also given for comparison with the 3D results. A practical landslide in Yunyang, in the Three Gorges area of China, is also analyzed by the present method. Moreover, the proposed method inherits both the advantages and the disadvantages of GPS. The problem most frequently encountered in the calculation process is still convergence, especially when analyzing the stability of a cutting corner. Some advice on discretization is given to ensure convergence when the present method is used. However, the convergence problem still needs to be further explored on a rigorous theoretical basis.
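The convergence issue mentioned above stems from the fact that the factor of safety appears on both sides of the limit-equilibrium equations, so it must be found by fixed-point iteration. As a simple stand-in for the paper's 3D extension of Janbu's GPS (which is not reproduced here), the sketch below shows the same kind of iteration for Bishop's simplified 2D method of slices.

import numpy as np

def bishop_fos(b, W, u, alpha, c, phi, tol=1e-6, max_iter=100):
    """Fixed-point iteration for the factor of safety (Bishop's simplified method).

    b, W, u, alpha : slice width, weight, pore pressure and base inclination (rad)
    c, phi         : effective cohesion and friction angle (rad)
    The unknown F appears on both sides of the equilibrium equation, so the
    method iterates until successive estimates agree -- the same kind of
    iteration whose convergence behaviour is discussed for the 3D method.
    """
    tanphi = np.tan(phi)
    F = 1.0                                   # initial guess
    for _ in range(max_iter):
        m_alpha = np.cos(alpha) + np.sin(alpha) * tanphi / F
        resisting = np.sum((c * b + (W - u * b) * tanphi) / m_alpha)
        driving = np.sum(W * np.sin(alpha))
        F_new = resisting / driving
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    raise RuntimeError("factor of safety iteration did not converge")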
Abstract:
This paper describes a novel technique for producing an electro-optical intensity synthesizer that can generate different periodic time-domain waveforms using only sine or cosine applied voltages. The synthesizer presented here consists of a series of stages between two polarizers, each stage comprising an electro-optic element and a compensator. Every electro-optic element has the same applied-voltage function but a different azimuth angle and a different ratio between the longitudinal and transverse lengths. The main principle is the synthesis of an electro-optic effect and a polarization interference effect in the time domain. The technique is based on an expanded Fourier positive-direction searching algorithm, which not only simplifies the calculation process but also provides many choices of structural parameters for generating different waveforms. A three-stage synthesis of an electro-optical birefringent system for a continuous square waveform is undertaken to prove the principle.
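The paper's expanded Fourier positive-direction searching algorithm is not reproduced here. The snippet below only illustrates the Fourier content of the target waveform, i.e. why a small number of harmonic terms (realized physically by the optical stages) already yields a recognisable flat-topped, square-like waveform; it says nothing about the device physics itself.

import numpy as np

# Truncated Fourier series of a square wave: the three lowest odd
# harmonics already give a flat-topped approximation, which is why a
# three-stage synthesis can target a continuous square waveform.
t = np.linspace(0.0, 2.0, 1000)          # two periods, period = 1
square_approx = sum(4 / (np.pi * n) * np.sin(2 * np.pi * n * t)
                    for n in (1, 3, 5))
print(square_approx.max())               # close to 1 with visible ripple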
Abstract:
This report was prepared within the scope of a curricular internship carried out in partnership between the Instituto Superior de Engenharia do Porto and the design office SE2P – Sociedade de Engenharia, Projetos e Planeamento, Lda. The subject of the internship is the study of steel-concrete composite columns from a design perspective, specifically through the creation of an automatic calculation process integrated with the global structural model. The design methods of the structural Eurocodes are applied to the verification of the limit states. The geometric properties are evaluated, the interaction curve is determined, and the position of the neutral axis is calculated, from which the stress states under biaxial bending at section level, required for the safety verifications, are inferred. The calculation process is based on a spreadsheet developed in Excel which, with the help of Visual Basic for Applications, communicates with the structural modelling and analysis software Autodesk Robot Structural Analysis Professional. Information flows are created that enable the iterative calculation of composite columns, making it possible to adjust and thereby optimize the design solutions. The export of detailed calculation reports for each column is an asset for engineering offices when compared with other design solutions available on the market. The application of the developed calculation process to multiple combinations of composite column solutions allowed a comparative analysis of the results obtained and the creation of tables and design charts that may be useful in preliminary sizing and draft design scenarios.
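One of the quantities such a calculation process evaluates early on is the plastic compression resistance of the composite cross-section. A minimal sketch of the EN 1994-1-1 simplified-method expression is given below, assuming the standard partial factors; the spreadsheet described in the report goes much further (interaction curve, neutral axis position, biaxial bending), none of which is reproduced here.

def n_pl_rd(A_a, f_y, A_c, f_ck, A_s, f_sk,
            gamma_a=1.0, gamma_c=1.5, gamma_s=1.15, alpha_c=0.85):
    """Plastic resistance to compression of a composite cross-section
    (EN 1994-1-1 simplified method, encased sections; alpha_c = 1.0 may
    be used for concrete-filled tubes).  Units: areas in mm2, strengths
    in MPa, result in N.  Partial factors are the usual recommended
    values and may differ in a national annex."""
    return (A_a * f_y / gamma_a
            + alpha_c * A_c * f_ck / gamma_c
            + A_s * f_sk / gamma_s)

# Example: HEB-type steel section encased in concrete with rebar.
print(n_pl_rd(A_a=14900, f_y=355, A_c=135000, f_ck=30, A_s=1260, f_sk=500))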
Abstract:
The possibility of developing automatically running models which can capture some of the most important factors driving the urban climate would be very useful for many planning aspects. With the help of these modelled climate data, the creation of the typically used "Urban Climate Maps" (UCM) will be accelerated and facilitated. This work describes the development of a special ArcGIS software extension, along with two support databases, to achieve this functionality. At present, the lack of comparability between different UCMs and the imprecise planning advice that accompanies the significant technical problems of manually creating conventional maps are central issues. Inflexibility and static behaviour also reduce the maps' practicality. Experience shows that planning processes are more productive when new planning parameters can be introduced directly via the existing work surface so that the impact of the changed data is mapped immediately, where possible. In addition to the direct climate figures, information from other planning areas (such as regional characteristics and developments) has to be taken into account when creating the UCM as well. Taking all these requirements into consideration, an automated calculation process for urban climate impact parameters will serve to make the creation of homogeneous UCMs more efficient.
Abstract:
The regime of environmental flows (EF) must be included as a term of environmental demand in the management of water resources. Even though there are numerous methods for the computation of EF, the criteria applied at different steps in the calculation process are quite subjective, whereas the results are fixed values that must be met by water planners. This study presents a user-friendly tool for assessing the probability of compliance of a certain EF scenario with the natural regime in a semiarid area in southern Spain. 250 replications of a 25-yr period of different hydrological variables (rainfall, minimum and maximum flows, ...) were obtained at the study site from the combination of the Monte Carlo technique and local hydrological relationships. Several assumptions are made, such as the independence of annual rainfall from year to year and the variability of occurrence of the meteorological agents, with precipitation as the main source of uncertainty. Inputs to the tool are easily selected from a first menu and comprise measured rainfall data, EF values and the hydrological relationships for at least a 20-yr period. The outputs are the probabilities of compliance of the different components of the EF for the study period. From this, local optimization can be applied to establish EF components with a certain level of compliance in the study period. Different options for graphic output and analysis of results are included in terms of graphs and tables in several formats. This methodology turned out to be a useful tool for the implementation of an uncertainty analysis within the scope of environmental flows in water management and allowed the simulation of the impacts of several water resource development scenarios at the study site.
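The tool itself is not described in code in the abstract; the sketch below only illustrates the Monte Carlo compliance estimate it performs, with a user-supplied sampler standing in for the local hydrological relationships. The lognormal stand-in in the example and all parameter names are assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(0)

def compliance_probability(annual_flow_sampler, ef_value, n_rep=250, n_years=25):
    """Expected fraction of years in which a minimum environmental-flow
    value is met, estimated over n_rep synthetic n_years-long periods.

    annual_flow_sampler(n) must return n synthetic annual minimum flows,
    e.g. generated from measured rainfall via local hydrological
    relationships; here it is simply a user-supplied callable.
    """
    met = 0.0
    for _ in range(n_rep):
        flows = annual_flow_sampler(n_years)       # one synthetic 25-yr period
        met += np.mean(flows >= ef_value)          # fraction of compliant years
    return met / n_rep

# Example with a lognormal stand-in for the hydrological relationship.
p = compliance_probability(lambda n: rng.lognormal(mean=0.0, sigma=0.6, size=n),
                           ef_value=0.8)
print(p)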
Abstract:
Oceans are key sources and sinks in the global budgets of significant atmospheric trace gases, termed Volatile Organic Compounds (VOCs). Despite their low concentrations, these species play an important role in the atmosphere, influencing ozone photochemistry and aerosol physics. Surprisingly little work has been done on assessing their emissions or their transport mechanisms and rates between ocean and atmosphere, all of which are important for modelling the atmosphere accurately.

A new Needle Trap Device (NTD) - GC-MS method was developed for the effective sampling and analysis of VOCs in seawater. Good repeatability (RSDs <16 %), linearity (R2 = 0.96 - 0.99) and limits of detection in the pM range were obtained for DMS, isoprene, benzene, toluene, p-xylene, (+)-α-pinene and (-)-α-pinene. Laboratory evaluation and subsequent field application indicated that the proposed method can be used successfully in place of the more commonly applied extraction techniques (P&T, SPME) to extend the suite of species typically measured in the ocean and to improve detection limits.

During a mesocosm CO2 enrichment study, DMS, isoprene and α-pinene were identified and quantified in seawater samples using the above-mentioned method. Based on correlations with available biological datasets, the effects of ocean acidification as well as possible ocean biological sources were investigated for all examined compounds. The acidity of the future ocean was shown to decrease oceanic DMS production, possibly impact isoprene emissions, but not affect the production of α-pinene.

In a separate activity, ocean - atmosphere interactions were simulated in a large-scale wind-wave canal facility in order to investigate the gas exchange process and its controlling mechanisms. Air-water exchange rates of 14 chemical species (of which 11 VOCs), spanning a wide range of solubility (dimensionless solubility, α = 0.4 to 5470) and diffusivity (Schmidt number in water, Scw = 594 to 1194), were obtained under various turbulent (wind speed at ten meters height, u10 = 0.8 to 15 m s-1) and surfactant-modulated (two different sized Triton X-100 layers) surface conditions. Reliable and reproducible total gas transfer velocities were obtained, and the derived values and trends were comparable to previous investigations. Through this study, a much better and more comprehensive understanding of the gas exchange process was accomplished. The role of the friction velocity, uw*, and of the mean square slope, σs2, in defining phenomena such as waves and wave breaking, near-surface turbulence, bubbles and surface films was recognized as very significant. uw* was determined to be the ideal turbulent parameter, while σs2 best described the related surface conditions. A combination of both the uw* and σs2 variables was found to reproduce faithfully the air-water gas exchange process.

A Total Transfer Velocity (TTV) model based on a compilation of 14 tracers and a combination of both the uw* and σs2 parameters is proposed for the first time. Through the proposed TTV parameterization, a new physical perspective is presented which provides an accurate TTV for any tracer within the examined solubility range. The development of such a comprehensive air-sea gas exchange parameterization represents a highly useful tool for regional and global models, providing accurate total transfer velocity estimations for any tracer and any sea-surface status, simplifying the calculation process and eliminating the inevitable calculation uncertainty connected with the selection or combination of different parameterizations.
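The fitted TTV model itself is not given in the abstract. The sketch below only shows how a waterside term written in uw* and σs2 would combine with an airside term through the dimensionless solubility in the standard two-layer (series-resistance) picture; the functional form of k_w and the coefficients a, b, n and k_a are placeholders to be determined by fitting, not values from the thesis.

import numpy as np

def total_transfer_velocity(u_star_w, mss, alpha, Sc_w,
                            a=1.0, b=1.0, n=0.5, k_a=1000.0):
    """Two-layer total transfer velocity (illustrative form only).

    u_star_w : waterside friction velocity
    mss      : mean square slope of the surface
    alpha    : dimensionless solubility of the tracer
    Sc_w     : Schmidt number of the tracer in water
    a, b, n, k_a are hypothetical fit parameters, not thesis values.
    """
    k_w = (a * u_star_w + b * mss) * (600.0 / Sc_w) ** n   # hypothetical waterside form
    return 1.0 / (1.0 / k_w + 1.0 / (alpha * k_a))          # resistances in series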
Abstract:
Correct modeling of the equivalent circuits of solar cells and panels is today an essential tool for power optimization. However, the parameter extraction for those circuits is still quite a difficult task that normally requires both experimental data and calculation procedures, generally not available to the normal user. This paper presents a new analytical method that easily calculates the equivalent circuit parameters from the data that manufacturers usually provide. The analytical approximation is based on a new methodology, since the methods developed until now to obtain the aforementioned equivalent circuit parameters from manufacturer's data have always been numerical or heuristic. Results from the present method are as accurate as the ones resulting from other, more complex (numerical) existing methods, while being simpler in terms of calculation process and resources.
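The equivalent circuit in question is the usual single-diode model; the paper's analytical extraction of its five parameters from datasheet values is not reproduced here. The sketch below only evaluates the model numerically once those parameters are known, so that an I-V curve can be compared against datasheet points; the solver, the bracketing interval and the default module values are assumptions of this example.

import numpy as np
from scipy.optimize import brentq

def single_diode_current(V, I_ph, I_0, R_s, R_sh, n_ideal, N_s=60, T=298.15):
    """Current of the single-diode equivalent circuit at voltage V.

    Solves  I = I_ph - I_0*(exp((V + I*R_s)/V_t) - 1) - (V + I*R_s)/R_sh
    for I with a bracketing root finder.  The five parameters
    (I_ph, I_0, R_s, R_sh, n_ideal) are the ones extracted from the
    manufacturer's data.  Valid for operating voltages up to roughly
    the open-circuit voltage, where the root stays inside the bracket.
    """
    k, q = 1.380649e-23, 1.602176634e-19
    V_t = N_s * n_ideal * k * T / q           # modified thermal voltage
    f = lambda I: (I_ph - I_0 * np.expm1((V + I * R_s) / V_t)
                   - (V + I * R_s) / R_sh - I)
    return brentq(f, -I_ph, 2.0 * I_ph)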
Abstract:
The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect these LANs arises. FDDI LAN interconnection can be achieved in a variety of different ways. Some of the most commonly used today are public data networks, dial-up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and its problems with routing table loops, it is an unsatisfactory algorithm for interconnected FDDI LANs. A new algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm is used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that is discarded by that algorithm during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modifications to some of the ISIS routing procedures. DSPS provides higher network performance, with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node or any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative path algorithm. This is the most flexible and powerful of the algorithms developed. However, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the layers from transport down to physical and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
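ISIS computes its routes with a shortest-path-first (Dijkstra) calculation. The sketch below is a generic illustration of the "downstream path" information that a plain SPF run throws away and that an algorithm like DSPS can retain: any neighbour strictly closer to the destination is a loop-free alternative next hop. It is not the thesis' exact algorithm, and the graph representation is an assumption of this example.

import heapq

def downstream_next_hops(graph, dest):
    """SPF (Dijkstra) rooted at the destination, also recording, for every
    node, all neighbours strictly closer to the destination.  Forwarding
    to any of these neighbours cannot create a routing loop, which is the
    property exploited by downstream-path splitting.

    graph: {node: {neighbour: cost}} with symmetric link costs.
    """
    dist = {dest: 0}
    heap = [(0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))

    # Loop-free alternatives towards dest for every reachable node.
    return {u: [v for v in graph[u] if dist.get(v, float("inf")) < dist[u]]
            for u in graph if u != dest and u in dist}

# Example: four routers in a square with one diagonal link.
g = {"A": {"B": 1, "C": 1}, "B": {"A": 1, "D": 1},
     "C": {"A": 1, "D": 1, "B": 1}, "D": {"B": 1, "C": 1}}
print(downstream_next_hops(g, "D"))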
Abstract:
In many industries, for example the automotive industry, digital test models (digital mock-ups) are used to check the design and the function of a product on a virtual prototype. One application is the verification of safety clearances between individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a prescribed safety clearance to the surrounding components, both at rest and during a motion. If components fall below the safety clearance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety clearance.

In this work we present a solution for computing, in real time, all regions between two geometric objects that fall below the safety clearance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety clearance and call it the set of all tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are clearly more performant than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach always proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to take the required safety clearance into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids, which we call Shrubs. Previous approaches to the memory optimization of uniform grids rely mainly on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our use case, neighbouring cells often have similar contents. Our approach is able to compress the memory footprint of the cell contents of a uniform grid losslessly, based on the redundant cell contents, to one fifth of the previous size and to decompress it at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides the pure clearance analysis, we show applications to various path-planning problems.
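One common strategy for recognizing primitives as tolerance-violating without an exact primitive-primitive test (the thesis develops its own, more elaborate strategies and a dual-space exact test) is a cheap bounding-sphere pre-test, sketched below; the data layout and threshold handling are assumptions of this example.

import numpy as np

def classify_pair(c1, r1, c2, r2, clearance):
    """Cheap bounding-sphere pre-test for a pair of primitives.

    Returns 'violating', 'clear' or 'exact-test-needed'.  If even the
    farthest points of the two bounding spheres are closer than the
    clearance, the pair violates the tolerance and no exact
    triangle-triangle test is required; if even the closest points are
    farther apart, the pair can be discarded.  Only the remaining pairs
    need the expensive exact test.
    """
    d = np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))
    if d + r1 + r2 <= clearance:
        return "violating"          # early accept, no exact test needed
    if d - r1 - r2 > clearance:
        return "clear"              # early reject
    return "exact-test-needed"

# Example: two unit spheres whose far sides still lie within the clearance.
print(classify_pair((0, 0, 0), 1.0, (1, 0, 0), 1.0, clearance=4.0))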