992 results for forward simulation
Abstract:
The objective of this study is to show that bone strains due to dynamic mechanical loading during physical activity can be analysed using the flexible multibody simulation approach. Strains within the bone tissue play a major role in bone (re)modeling. Previous studies have shown that dynamic loading seems to be more important for bone (re)modeling than static loading. The finite element method has been used previously to assess bone strains; however, it may be limited to static analysis because of the expensive computation required for dynamic analysis, especially for a biomechanical system consisting of several bodies. Further, in vivo implementation of strain gauges on bone surfaces has been used to quantify the mechanical loading environment of the skeleton; however, in vivo strain measurement requires invasive methodology, which is challenging and limited to certain regions of superficial bones only, such as the anterior surface of the tibia. In this study, an alternative numerical approach to analyzing in vivo strains, based on the flexible multibody simulation approach, is proposed. In order to investigate the reliability of the proposed approach, three three-dimensional musculoskeletal models, in which the right tibia is assumed to be flexible, are used as demonstration examples. The models are employed in a forward dynamics simulation to predict the tibial strains during a level-walking exercise. The flexible tibial model is developed using the actual geometry of the subject's tibia, obtained from three-dimensional reconstruction of magnetic resonance images. Inverse dynamics simulation based on motion capture data, obtained from walking at a constant velocity, is used to calculate the desired contraction trajectory for each muscle. In the forward dynamics simulation, a proportional-derivative (PD) servo controller is used to calculate each muscle force required to reproduce the motion, based on the desired muscle contraction trajectory obtained from the inverse dynamics simulation. Experimental measurements are used to verify the models and to check their accuracy in replicating the realistic mechanical loading environment measured in the walking test. The strains predicted by the models are consistent with literature-based in vivo strain measurements. In conclusion, the non-invasive flexible multibody simulation approach may be used as a surrogate for experimental bone strain measurement, and may thus be of use for detailed strain estimation of bones in different applications. Consequently, the information obtained from the present approach might be useful in clinical applications, including optimizing implant design and devising exercises to prevent bone fragility, accelerate fracture healing, and reduce osteoporotic bone loss.
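As an illustration of the controller described above, here is a minimal sketch of a proportional-derivative servo law for one muscle: the force is driven by the error between the desired contraction trajectory (from the inverse dynamics step) and the current muscle state. The gains, variable names, and numbers are illustrative assumptions, not the authors' implementation.

```python
def pd_muscle_force(l_des, l_act, v_des, v_act, kp=2000.0, kd=50.0):
    """Muscle actuation force from a PD law on contraction length/velocity.

    Gains kp, kd are placeholder values; in the study each muscle force is
    tuned to reproduce the measured motion.
    """
    return kp * (l_des - l_act) + kd * (v_des - v_act)

# At each forward-dynamics step the controller tracks the desired trajectory:
force = pd_muscle_force(l_des=0.30, l_act=0.31, v_des=0.0, v_act=0.02)
print(f"commanded muscle force: {force:.1f} N (illustrative units)")
```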
Abstract:
A new probabilistic neural network (PNN) learning algorithm based on forward constrained selection (PNN-FCS) is proposed. An incremental learning scheme is adopted such that at each step, new neurons, one for each class, are selected from the training samples and the weights of the neurons are estimated so as to minimize the overall misclassification error rate. In this manner, only the most significant training samples are used as the neurons. It is shown by simulation that the resulting PNN-FCS networks achieve classification performance comparable to other types of classifiers, but with much smaller model sizes than conventional PNNs.
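The forward constrained selection idea can be sketched as a greedy loop: at each step, one training sample per class is promoted to a Gaussian neuron if its addition most reduces the training misclassification rate. This is a hedged reconstruction from the abstract alone; the kernel width, the absence of per-neuron weights, and the stopping rule are assumptions (inputs are assumed to be NumPy arrays).

```python
import numpy as np

def pnn_predict(centers, center_labels, X, sigma=0.5):
    """Classify X from summed Gaussian kernel responses per class."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    classes = np.unique(center_labels)
    scores = np.stack([K[:, center_labels == c].sum(1) for c in classes], 1)
    return classes[scores.argmax(1)]

def forward_constrained_selection(X, y, n_steps=5, sigma=0.5):
    """Greedily promote one training sample per class per step to a neuron."""
    idx = []  # indices of training samples promoted to neurons
    for _ in range(n_steps):
        for c in np.unique(y):
            best_j, best_err = None, np.inf
            for j in np.where(y == c)[0]:
                if j in idx:
                    continue
                trial = idx + [j]
                err = (pnn_predict(X[trial], y[trial], X, sigma) != y).mean()
                if err < best_err:
                    best_j, best_err = j, err
            if best_j is not None:
                idx.append(best_j)
    return X[idx], y[idx]
```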
Abstract:
Stereoscopic white-light imaging of a large portion of the inner heliosphere has been used to track interplanetary coronal mass ejections. At large elongations from the Sun, the white-light brightness depends on both the local electron density and the efficiency of the Thomson-scattering process. To quantify the effects of the Thomson-scattering geometry, we study an interplanetary shock using forward magnetohydrodynamic simulation and synthetic white-light imaging. Identifiable as an inclined streak of enhanced brightness in a time–elongation map, the travelling shock can be readily imaged by an observer located within a wide range of longitudes in the ecliptic. Different parts of the shock front contribute to the imaged brightness pattern viewed by observers at different longitudes. Moreover, even for an observer located at a fixed longitude, a different part of the shock front will contribute to the imaged brightness at any given time. The observed brightness within each imaging pixel results from a weighted integral along its corresponding ray-path. It is possible to infer the longitudinal location of the shock from the brightness pattern in an optical sky map, based on the east–west asymmetry in its brightness and degree of polarisation. Therefore, measurement of the interplanetary polarised brightness could significantly reduce the ambiguity in performing three-dimensional reconstruction of local electron density from white-light imaging.
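The "weighted integral along its corresponding ray-path" can be illustrated with a short numerical sketch: the pixel brightness is the path integral of electron density times a scattering-angle-dependent weight. The density model and the simplified Thomson weight below are placeholders, not the paper's MHD or scattering machinery.

```python
import numpy as np

def pixel_brightness(observer, direction, n_e, s_max=2.0, n_steps=2000):
    """Brightness of one pixel as a weighted integral along its ray-path."""
    direction = direction / np.linalg.norm(direction)
    s = np.linspace(1e-3, s_max, n_steps)          # path length (AU)
    pts = observer[None, :] + s[:, None] * direction[None, :]
    r = np.linalg.norm(pts, axis=1)                # heliocentric distance
    cos_chi = (pts @ direction) / r                # scattering-angle factor
    weight = (1.0 + cos_chi ** 2) / r ** 2         # simplified Thomson weight
    return np.trapz(n_e(pts) * weight, s)

# Example: a smooth 1/r^2 background electron density (arbitrary units).
density = lambda p: 1.0 / np.linalg.norm(p, axis=1) ** 2
b = pixel_brightness(np.array([1.0, 0.0, 0.0]),
                     np.array([-0.7, 0.7, 0.0]), density)
```

A shock appears in such a synthetic image as a density enhancement whose contribution to the integral shifts along the ray as the shock propagates, which is the effect the time-elongation maps above exploit.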
Abstract:
This work proposes a method to objectively determine the most suitable analogue redesign method for forward-type converters under digital voltage mode control. Particular emphasis is placed on determining the method which allows the highest phase margin at the particular switching and crossover frequencies chosen by the designer. It is shown that at high crossover frequencies with respect to switching frequency, controllers designed using backward integration have the largest phase margin; whereas at low crossover frequencies with respect to switching frequency, controllers designed using bilinear integration have the largest phase margins. An accurate model of the power stage is used for simulation, and experimental results from a Buck converter are collected. The performance of the digital controllers is compared to that of the equivalent analogue controller both in simulation and experiment. Excellent correlation between the simulation and experimental results is presented. This work will allow designers to confidently choose the analogue redesign method which yields the greater phase margin for their application.
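A minimal sketch of the comparison made above: the same analogue compensator discretised via backward integration and via bilinear (Tustin) integration, with the phase evaluated at a candidate crossover frequency. The compensator and the frequencies are invented for illustration; only the two discretisation rules come from the abstract.

```python
import numpy as np
from scipy import signal

fsw = 100e3            # assumed switching frequency
T = 1.0 / fsw          # one sample per switching cycle
# Illustrative lead compensator: (s + 2*pi*1 kHz) / (s + 2*pi*20 kHz)
num, den = [1.0, 2 * np.pi * 1e3], [1.0, 2 * np.pi * 20e3]

zb = signal.cont2discrete((num, den), T, method='backward_diff')
zt = signal.cont2discrete((num, den), T, method='bilinear')

# Compare the controller phase at a candidate crossover frequency.
w_c = 2 * np.pi * 10e3
for name, (bz, az, _) in (('backward', zb), ('bilinear', zt)):
    _, h = signal.freqz(bz.ravel(), az, worN=[w_c * T])
    print(name, 'phase at crossover [deg]:', np.degrees(np.angle(h))[0])
```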
Abstract:
Brain activity can be measured non-invasively with functional imaging techniques. Each pixel in such an image represents a neural mass of about 10⁵ to 10⁷ neurons. Mean field models (MFMs) approximate their activity by averaging out neural variability while retaining salient underlying features, like neurotransmitter kinetics. However, MFMs incorporating the regional variability, realistic geometry and connectivity of cortex have so far appeared intractable. This lack of biological realism has led to a focus on gross temporal features of the EEG. We address these impediments and showcase a "proof of principle" forward prediction of co-registered EEG/fMRI for a full-size human cortex in a realistic head model with anatomical connectivity, see figure 1. MFMs usually assume homogeneous neural masses, isotropic long-range connectivity and simplistic signal expression to allow rapid computation with partial differential equations. But these approximations are insufficient, in particular for the high spatial resolution obtained with fMRI, since different cortical areas vary in their architectonic and dynamical properties, have complex connectivity, and can contribute non-trivially to the measured signal. Our code instead supports local variation of model parameters and freely chosen connectivity for many thousands of triangulation nodes spanning a cortical surface extracted from structural MRI. This allows the introduction of realistic anatomical and physiological parameters for cortical areas and their connectivity, including both intra- and inter-area connections. Proper cortical folding and conduction through a realistic head model are then added to obtain accurate signal expression for comparison to experimental data. To showcase the synergy of these computational developments, we predict EEG and fMRI BOLD responses simultaneously by adding an established model for neurovascular coupling and convolving "Balloon-Windkessel" hemodynamics. We also incorporate regional connectivity extracted from the CoCoMac database [1]. Importantly, these extensions can be easily adapted according to future insights and data. Furthermore, while our own simulation is based on one specific MFM [2], the computational framework is general and can be applied to models favored by the user. Finally, we provide a brief outlook on improving the integration of multi-modal imaging data through iterative fits of a single underlying MFM in this realistic simulation framework.
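The "Balloon-Windkessel" step mentioned above can be sketched as a small ODE integration driven by neural activity. The parameter values follow common choices in the hemodynamic-modelling literature (Friston et al.) and are assumptions, not the authors' settings.

```python
import numpy as np

def balloon_windkessel(u, dt=0.01, kappa=0.65, gamma=0.41, tau=0.98,
                       alpha=0.32, rho=0.34, V0=0.02):
    """Euler-integrate the Balloon-Windkessel states (s, f, v, q) driven by
    neural activity u(t), and return the resulting BOLD signal."""
    s, f, v, q = 0.0, 1.0, 1.0, 1.0
    bold = np.empty(len(u))
    for t, ut in enumerate(u):
        s += dt * (ut - kappa * s - gamma * (f - 1.0))        # vasodilation
        f += dt * s                                           # blood inflow
        v += dt * (f - v ** (1.0 / alpha)) / tau              # blood volume
        q += dt * (f * (1.0 - (1.0 - rho) ** (1.0 / f)) / rho
                   - v ** (1.0 / alpha) * q / v) / tau        # deoxyhemoglobin
        k1, k2, k3 = 7.0 * rho, 2.0, 2.0 * rho - 0.2
        bold[t] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
    return bold

# Example: the BOLD response to a brief burst of neural activity.
u = np.zeros(3000)
u[100:200] = 1.0
y = balloon_windkessel(u)
```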
Abstract:
This study puts forward a method to model and simulate the complex system of a hospital on the basis of multi-agent technology. Hospital agents with intelligent and coordinative characteristics were designed, the message object was defined, and the operating mechanism of the model, covering both autonomous activities and the coordination mechanism, was designed as well. In addition, an Ontology library, a Norm library, and related components were introduced using semiotic methods and theory to extend the system-modelling approach. Swarm was used to develop the multi-agent-based simulation system, which can provide guidelines for hospitals to improve their organization and management, optimize working procedures, improve the quality of medical care, and reduce medical costs.
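A minimal sketch of the agent/message-object idea described above: two hospital agents exchanging a message object through a shared bus. The agent roles and message fields are illustrative assumptions; the original system was built in Swarm and is not reproduced here.

```python
import queue

class Agent:
    """A hospital agent that coordinates with others via message objects."""
    def __init__(self, name, bus):
        self.name, self.bus = name, bus

    def send(self, to, patient):
        # The message object: sender, receiver, and payload (assumed fields).
        self.bus.put({'from': self.name, 'to': to, 'patient': patient})

bus = queue.Queue()
registration = Agent('registration', bus)
physician = Agent('physician', bus)

registration.send('physician', patient='P-001')
while not bus.empty():
    msg = bus.get()
    print(f"{msg['to']} receives {msg['patient']} from {msg['from']}")
```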
Abstract:
This article proposes a systematic approach to determine the most suitable analogue redesign method to be used for forward-type converters under digital voltage mode control. The focus of the method is to achieve the highest phase margin at the particular switching and crossover frequencies chosen by the designer. It is shown that at high crossover frequencies with respect to switching frequency, controllers designed using backward integration have the largest phase margin; whereas at low crossover frequencies with respect to switching frequency, controllers designed using bilinear integration with pre-warping have the largest phase margins. An algorithm has been developed to determine the frequency of the crossing point where the recommended discretisation method changes. An accurate model of the power stage is used for simulation, and experimental results from a Buck converter are collected. The performance of the digital controllers is compared to that of the equivalent analogue controller both in simulation and experiment. Excellent agreement between the simulation and experimental results is presented. This work provides a concrete example to allow academics and engineers to systematically choose a discretisation method.
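The bilinear-with-pre-warping method recommended above for low crossover-to-switching frequency ratios can be sketched by feeding SciPy's Tustin transform an effective sample rate, so that the analogue and digital responses match exactly at the chosen frequency. The compensator coefficients and frequencies are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def bilinear_prewarp(num, den, T, w0):
    """Tustin transform matching the analogue response exactly at w0 rad/s.

    scipy.signal.bilinear implements s -> 2*fs*(z-1)/(z+1); choosing
    fs_eff = w0 / (2*tan(w0*T/2)) turns it into the pre-warped substitution
    s -> (w0/tan(w0*T/2)) * (z-1)/(z+1).
    """
    fs_eff = w0 / (2.0 * np.tan(w0 * T / 2.0))
    return signal.bilinear(num, den, fs=fs_eff)

T = 1.0 / 100e3                    # sampling at the assumed switching rate
w_c = 2 * np.pi * 5e3              # target crossover frequency (assumed)
num, den = [1.0, 2 * np.pi * 1e3], [1.0, 2 * np.pi * 20e3]
bz, az = bilinear_prewarp(num, den, T, w_c)
```

Pre-warping removes the frequency distortion of the plain Tustin transform at one chosen frequency, which is why it preserves phase margin best when the crossover sits well below the Nyquist limit.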
Abstract:
The objective of this article is to determine the influence of the parameters of ARIMA-GARCH models on the prediction performance of feed-forward artificial neural networks (ANNs) trained with the Levenberg-Marquardt algorithm, through Monte Carlo simulations. The paper presents a study of the relationship between ANN performance and the ARIMA-GARCH model parameters, i.e., the fact that, depending on the stationarity and other parameters of the time series, the ANN structure should be selected differently. Neural networks have been widely used to predict time series, and their capacity for dealing with non-linearities is normally an outstanding advantage. However, the values of the parameters of generalized autoregressive conditional heteroscedasticity models influence ANN prediction performance, and the combination of the GARCH parameter values with the ARIMA autoregressive terms also leads to variation in ANN performance. Combining the parameters of the ARIMA-GARCH models and varying the ANN topologies, we used the Theil inequality coefficient to measure the quality of the feed-forward ANN predictions.
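For reference, a small sketch of the evaluation metric named above, Theil's inequality coefficient. This is the U1 form, which is 0 for a perfect forecast and bounded by 1; whether the authors used U1 or U2 is not stated in the abstract, so the choice here is an assumption.

```python
import numpy as np

def theil_u(actual, forecast):
    """Theil's inequality coefficient (U1): RMSE normalised by the sum of
    the root mean squares of the actual and forecast series."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    rmse = np.sqrt(np.mean((actual - forecast) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2))
                   + np.sqrt(np.mean(forecast ** 2)))

print(theil_u([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # small value: good forecast
```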
Abstract:
IPTV is now offered by several operators in Europe, the US, and Asia using broadcast video over private IP networks that are isolated from the Internet. IPTV services rely on transmission of live (real-time) and/or stored video. Video on Demand (VoD) and time-shifted TV are implemented by IP unicast, while Broadcast TV (BTV) and near video on demand are implemented by IP multicast. IPTV services require QoS guarantees and can tolerate no more than a 10⁻⁶ packet loss probability, 200 ms delay, and 50 ms jitter. Low delay is essential for satisfactory trick-mode performance (pause, resume, fast forward) for VoD, and for fast channel change times for BTV. Internet Traffic Engineering (TE) is defined in RFC 3272 and involves both capacity management and traffic management. Capacity management includes capacity planning, routing control, and resource management. Traffic management includes (1) nodal traffic control functions such as traffic conditioning, queue management, and scheduling, and (2) other functions that regulate traffic flow through the network or arbitrate access to network resources. An IPTV network architecture includes multiple networks (core network, metro network, access network, and home network) that connect devices (super head-end, video hub office, video serving office, home gateway, set-top box). Each IP router in the core and metro networks implements some queueing and packet scheduling mechanism at the output link controller. Popular schedulers in IP networks include Priority Queueing (PQ), Class-Based Weighted Fair Queueing (CBWFQ), and Low Latency Queueing (LLQ), which combines PQ and CBWFQ. The thesis analyzes several packet scheduling algorithms that can optimize the trade-off between system capacity and end-user performance for these traffic classes. FIFO, PQ, and GPS queueing methods had previously been implemented in the simulator; this thesis implements the LLQ scheduler in the simulator and evaluates the performance of these packet schedulers. The simulator was provided by Ernst Nordström; it was built in a Visual C++ 2008 environment and tested and analyzed in MATLAB 7.0 under Windows Vista.
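A minimal sketch of the LLQ dequeue rule analysed in the thesis: a strict-priority queue for real-time traffic, with a simple credit-based weighted service among the remaining classes standing in for CBWFQ. Class names and weights are assumptions; the actual simulator was written in C++ and is not reproduced here.

```python
from collections import deque

class LLQ:
    """Low Latency Queueing sketch: strict PQ plus weighted class service."""
    def __init__(self, weights):                # e.g. {'vod': 3, 'data': 1}
        self.priority = deque()                 # real-time (e.g. BTV) packets
        self.classes = {c: deque() for c in weights}
        self.weights = weights
        self.credits = dict(weights)

    def enqueue(self, pkt, cls=None):
        # cls=None marks a real-time packet for the strict-priority queue.
        (self.priority if cls is None else self.classes[cls]).append(pkt)

    def dequeue(self):
        if self.priority:                       # PQ part: always served first
            return self.priority.popleft()
        for cls, q in self.classes.items():     # weighted service of the rest
            if q and self.credits[cls] > 0:
                self.credits[cls] -= 1
                return q.popleft()
        self.credits = dict(self.weights)       # start a new service round
        return self.dequeue() if any(self.classes.values()) else None

sched = LLQ({'vod': 3, 'data': 1})
sched.enqueue('btv-frame')                      # real-time, jumps the line
sched.enqueue('vod-chunk', 'vod')
print(sched.dequeue())                          # -> 'btv-frame'
```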
Abstract:
Verdelhan (2009) shows that, in order to explain the behavior of the risk premium in foreign bond markets using the external habit formation model proposed by Campbell and Cochrane (1999), the equilibrium risk-free rate must be specified pro-cyclically. We show that this specification is only possible under implausible calibration parameters. Moreover, in the calibration procedure the price-consumption ratio diverges for most reasonable parameter sets. However, by adopting the suggestion proposed by Verdelhan (2009), namely fixing the sensitivity function λ(s_t) at its steady-state value during calibration and releasing it only during the simulation of the data so as to guarantee pro-cyclical risk-free rates, we are able to find a finite and well-behaved value for the equilibrium price-consumption ratio and to replicate the forward premium anomaly. Setting aside the possible inconsistencies of this procedure, under pro-cyclical risk-free rates the model, as suggested by Wachter (2006), generates real yield curves that slope downward in maturity regardless of the state of the economy, a result at odds with the related literature and with actual yield data.
Abstract:
Verdelhan (2009) shows that if one is to explain the foreign exchange forward premium behavior using Campbell and Cochrane (1999)'s habit formation model, one must specify it in such a way as to generate pro-cyclical short-term risk-free rates. In the calibration procedure, we show that this is only possible in Campbell and Cochrane's framework under implausible parameter specifications, given that the price-consumption ratio diverges for almost all parameter sets. We then adopt Verdelhan's shortcut of fixing the sensitivity function λ(s_t) at its steady-state level to attain a finite value for the price-consumption ratio, releasing it in the simulation stage to ensure pro-cyclical risk-free rates. Beyond the potential inconsistencies that such a procedure may generate, as suggested by Wachter (2006), with pro-cyclical risk-free rates the model generates a downward-sloping real yield curve, which is at odds with the data.
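For reference, the sensitivity function frozen at steady state in the procedure above has, in Campbell and Cochrane's standard specification, the form below. This is reconstructed from the cited papers rather than quoted from the thesis; σ, γ, and φ denote consumption volatility, utility curvature, and habit persistence, and s̄ is the steady-state log surplus-consumption ratio at which λ is fixed during calibration.

```latex
\lambda(s_t) =
\begin{cases}
  \dfrac{1}{\bar{S}}\sqrt{1 - 2(s_t - \bar{s})} - 1, & s_t \le s_{\max}, \\[6pt]
  0, & s_t > s_{\max},
\end{cases}
\qquad
\bar{S} = \sigma \sqrt{\dfrac{\gamma}{1 - \phi}}.
```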
Abstract:
With the extension of the MAMI electron accelerator by a third stage, it has become possible to produce particles with open strangeness at the Institut für Kernphysik. For their detection, the three-spectrometer facility of the A1 collaboration has been extended by the KAOS spectrometer taken over from GSI in Darmstadt. It is used to study the elementary reaction p(e,e'K+)Lambda/Sigma0, in which the outgoing electron and the kaon must both be detected. If a target other than hydrogen is used, a hypernucleus can be formed. Spectroscopic studies of hypernuclei offer the possibility of investigating the potential of hyperons in atomic nuclei and the hyperon-nucleon interaction. Owing to the excellent beam quality in electroproduction, mass resolutions of a few hundred keV/c² can be achieved. The detectors and the imaging properties of the spectrometer were simulated with GEANT4, and suitable event generators were implemented. The possible hit patterns in the detectors, which must be selected by an FPGA-based trigger, were investigated; from this, a first mapping of the track coordinates onto the target coordinates and the particle momentum was also obtained. For the hypernuclear program, KAOS must be operated at 0° in the forward direction, with the primary beam steered through the dipole by means of a chicane. The simulation shows only a moderate increase in radiation exposure, mainly in the region of the beam dump, so it is possible to operate KAOS as a double-sided spectrometer in the spectrometer hall. Within the scope of this work, the readout and control electronics required for all detectors were integrated into the existing data acquisition system and control system. In two beam times in autumn 2008, kaons were detected in the angular range of 20°-40° with momenta between 400 MeV/c and 600 MeV/c, using the trigger settings and the mapping obtained from the simulation. The time resolution of about 1 ns FWHM required for good particle identification was achieved, and the angular and momentum resolution attained was sufficient to easily separate Lambda and Sigma0 hyperons in the missing-mass spectrum.
Abstract:
In this thesis the measurement of the effective weak mixing angle sin²θ_eff in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined upon the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp → Z/γ* + X → ee + X, taking a total integrated luminosity of 4.8 fb⁻¹ of data into account. The data was recorded at a proton-proton centre-of-mass energy of √s = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of electromagnetism and the weak force. Higher-order corrections to sin²θ_eff are related to other SM parameters such as the mass of the Higgs boson.

Because of the symmetric initial state of the colliding protons, there is no favoured forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen with respect to the longitudinal boost of the electron-positron final state. As a result, events with low absolute rapidity have a higher chance of being assigned to the opposite direction of the reference axis. This effect, called dilution, is reduced when events at higher rapidities are used; it can be studied by including electrons and positrons in the forward regions of the ATLAS calorimeters (electrons and positrons are further referred to as electrons). To include the electrons from the forward region, the energy calibration for the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of one central and one forward electron together with the previously derived central electron energy calibration. The resulting uncertainty is shown to be dominated by the systematic variations.

The extraction of sin²θ_eff is performed using χ² tests, comparing the measured distribution of AFB in data to a set of template distributions with varied values of sin²θ_eff. The templates are built with a forward-folding technique using modified generator-level samples and the official signal sample with full detector simulation, particle reconstruction, and identification. The analysis is performed in two different channels: pairs of central electrons, or one central and one forward electron. The results of the two channels are in good agreement and are the first measurements of sin²θ_eff at the Z resonance using electron final states in proton-proton collisions at √s = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties resulting from the knowledge of the parton distribution functions (PDFs) and the systematic uncertainties of the energy calibration.

The extracted results are combined and yield a value of sin²θ_eff = 0.2288 ± 0.0004 (stat.) ± 0.0009 (syst.) = 0.2288 ± 0.0010 (tot.). The measurements are compared to the results of previous measurements at the Z boson resonance. The deviation with respect to the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
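For reference, the forward-backward asymmetry used above is conventionally defined from forward and backward event counts as below; the frame convention noted in the comment is the standard experimental choice and is assumed here rather than quoted from the thesis.

```latex
% theta* is the lepton polar angle relative to the quark direction,
% experimentally approximated using the longitudinal boost of the
% dilepton system (e.g. in the Collins-Soper frame).
A_{FB} = \frac{N_F - N_B}{N_F + N_B},
\qquad
N_F = N(\cos\theta^* > 0), \quad N_B = N(\cos\theta^* < 0).
```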
Abstract:
Forward-looking ground penetrating radar shows promise for the detection of improvised explosive devices in active war zones. Because of certain insurmountable physical limitations, post-processing algorithm development is the most popular research topic in this field. One such investigative avenue explores the worth of frequency analysis during data post-processing. Using the finite-difference time-domain numerical method, simulations are run to test both mine and clutter frequency responses. Mines are found to respond most strongly at low frequencies and to cause periodic changes in ground penetrating radar frequency results. These results are called into question, however, when clutter, a phenomenon generally known to be random, is also found to cause periodic frequency effects. Possible causes, including simulation inaccuracy, are considered. Although the clutter models used are found to be inadequately random, specular reflections of differing periodicity are found to return from both the mine and the ground. The presence of these specular reflections offers a potential alternative method of determining a mine's presence.
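The frequency-analysis post-processing step described above can be sketched as a windowed FFT of a time-domain return trace. The synthetic damped oscillation below merely stands in for FDTD output, and the time step and frequencies are assumptions.

```python
import numpy as np

dt = 1e-11                                   # assumed FDTD time step (s)
t = np.arange(4096) * dt
# Synthetic GPR return: a damped oscillation standing in for simulator output.
trace = np.exp(-t / 2e-9) * np.sin(2 * np.pi * 0.8e9 * t)

# Window the trace and inspect its magnitude spectrum for low-frequency
# enhancements of the kind attributed to buried mines.
spectrum = np.abs(np.fft.rfft(trace * np.hanning(len(trace))))
freqs = np.fft.rfftfreq(len(trace), dt)
peak = freqs[spectrum.argmax()]
print(f"dominant response near {peak / 1e9:.2f} GHz")
```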
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) to fluid flow simulation through the porous filter-wall of disordered media. The diesel particulate filter (DPF) is an example of such disordered media. The DPF was developed as a cutting-edge technology to reduce harmful particulate matter in engine exhaust; the porous filter-wall of the DPF traps these soot particles during after-treatment of the exhaust gas. To examine the phenomena inside the DPF, researchers are turning to the Lattice Boltzmann Method as a promising alternative simulation tool. The lattice Boltzmann method is a comparatively new numerical scheme that can simulate fluid flow for single-component single-phase and single-component multi-phase systems, and it is also an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside the porous micro-structure using LBM. Firstly, the theory behind the development of LBM is discussed. The evolution of LBM is usually related to Lattice Gas Cellular Automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all the simulations are conducted in two dimensions, the equations are developed with reference to the D2Q9 (two-dimensional, nine-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted with air and CO2 gas as the fluids. The numerical model used in this study is explained with a flowchart and the coding steps; the numerical code is written in MATLAB. The different types of boundary conditions and their importance are discussed separately, and the equations specific to each boundary condition are derived. The pressure and velocity contours over the porous domain are studied and recorded. The results are compared with published work; the permeability values obtained in this study can be fitted to the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
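A short sketch of the D2Q9 ingredients named above: the nine lattice velocities, their weights, and the BGK equilibrium distribution. The original code was written in MATLAB; this Python fragment illustrates only the equilibrium step under standard D2Q9 conventions (lattice speed of sound c_s² = 1/3), not the full porous-wall simulation.

```python
import numpy as np

# D2Q9 lattice velocities c_i and weights w_i (rest, axis, diagonal).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """BGK equilibrium f_i^eq = w_i*rho*(1 + 3 c.u + 4.5 (c.u)^2 - 1.5 u^2)
    for density rho of shape (ny, nx) and velocity u of shape (ny, nx, 2)."""
    cu = np.einsum('id,yxd->iyx', c, u)      # c_i . u at every node
    usq = (u ** 2).sum(-1)
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

rho = np.ones((4, 4))
u = np.zeros((4, 4, 2))
f_eq = equilibrium(rho, u)                   # at rest: f_i^eq = w_i * rho
```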