907 results for Network simulation
Abstract:
A radial basis function network (RBFN) circuit for function approximation is presented. Simulation and experimental results show that the network has good approximation capabilities. The RBFN basis function was a squared hyperbolic secant with three adjustable parameters: amplitude, width, and center. To test the network, a sinusoidal and a sine function were approximated.
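For illustration, here is a minimal NumPy sketch of such a network in software (not the analog circuit described above): a sum of squared-hyperbolic-secant units, each with an adjustable amplitude, width, and center, fitted to one period of a sine by plain gradient descent. The number of units, learning rate, and training loop are assumptions for this sketch.

```python
import numpy as np

# Squared hyperbolic secant basis unit: amplitude a, width w, center c
def rbf(x, a, w, c):
    return a / np.cosh((x - c) / w) ** 2

def network(x, params):
    a, w, c = params  # parameter arrays, one entry per hidden unit
    return sum(rbf(x, a[i], w[i], c[i]) for i in range(len(a)))

# Target: one period of a sine, as in the approximation test mentioned above
x = np.linspace(0.0, 2 * np.pi, 200)
y = np.sin(x)

rng = np.random.default_rng(0)
n_units = 8
a = rng.normal(0.0, 0.5, n_units)
w = np.full(n_units, 0.8)
c = np.linspace(0.0, 2 * np.pi, n_units)

# Simple gradient descent on the mean-squared approximation error
lr = 0.02
for _ in range(5000):
    err = network(x, (a, w, c)) - y
    u = (x[:, None] - c[None, :]) / w[None, :]
    phi = 1.0 / np.cosh(u) ** 2                  # basis outputs
    dphi_du = -2.0 * phi * np.tanh(u)            # d(sech^2 u)/du
    a -= lr * (err[:, None] * phi).mean(axis=0)
    c -= lr * (err[:, None] * a * dphi_du * (-1.0 / w)).mean(axis=0)
    w -= lr * (err[:, None] * a * dphi_du * (-u / w)).mean(axis=0)

print("RMS error:", np.sqrt(np.mean((network(x, (a, w, c)) - y) ** 2)))
```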
Abstract:
Networked control systems (NCSs) are distributed control systems in which the sensors, actuators, and controllers are physically separated and connected through an industrial network. The main challenge in the development of NCSs is the degenerative effect caused by the inclusion of this communication network in the closed control loop. In order to mitigate these effects, co-simulation tools have been developed to study the network influence on the NCS. This paper presents a review of co-simulation tools for NCSs and the application of two of these tools to the design and evaluation of NCSs. The TrueTime and Jitterbug tools were used together to evaluate the main configuration parameters that affect the performance of CAN-based NCSs and to verify the NCS quality of control under various timing conditions, including different message transmission periods and network delays. The simulation results led to the conclusion that, although the message transmission period is the most significant factor among those analyzed in the design of an NCS, its influence depends on the kind of system, with greater effects in NCSs with fast dynamics.
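To make the delay effect concrete, here is a minimal discrete-time sketch (plain Python, not TrueTime or Jitterbug): a first-order plant under proportional control whose feedback samples arrive several sampling periods late, with the integrated squared error used as a rough quality-of-control measure. The plant, gain, and delay values are assumed for illustration only.

```python
def simulate(delay_steps, n=400, dt=0.01):
    """First-order plant x' = -x + u under proportional control on a delayed measurement."""
    a, kp, ref = -1.0, 8.0, 1.0
    x = 0.0
    buf = [0.0] * delay_steps          # network delay modeled as a FIFO of stale samples
    cost = 0.0
    for _ in range(n):
        buf.append(x)                  # sensor sends the current state
        y_delayed = buf.pop(0)         # controller sees a sample delay_steps old
        u = kp * (ref - y_delayed)
        x += dt * (a * x + u)          # Euler integration of the plant
        cost += dt * (ref - x) ** 2    # integrated squared error (quality of control)
    return cost

for d in (0, 5, 20, 50):
    print(f"delay = {d:3d} samples  ISE = {simulate(d):.3f}")
```

With the assumed loop gain, the cost grows as the delay increases and the loop eventually becomes unstable, which is the kind of degradation the co-simulation tools quantify.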
Abstract:
Data-intensive Grid applications require huge data transfers between grid computing nodes. These computing nodes, where computing jobs are executed, are usually geographically separated. A grid network that employs optical wavelength division multiplexing (WDM) technology and optical switches to interconnect computing resources with dynamically provisioned multi-gigabit-rate lightpaths is called a Lambda Grid network. A computing task may be executed on any one of several computing nodes that possess the necessary resources. To reflect reality in job scheduling, the allocation of network resources for data transfer should be taken into consideration. However, few scheduling methods consider communication contention on Lambda Grids. In this paper, we investigate the joint scheduling problem, considering both optical network and computing resources in a Lambda Grid network. The objective of our work is to maximize the total number of jobs that can be scheduled in a Lambda Grid network. An adaptive routing algorithm is proposed and implemented to accomplish the communication tasks of every job submitted to the network. Four heuristics (FIFO, ESTF, LJF, RS) are implemented for job scheduling of the computational tasks. Simulation results demonstrate the feasibility and efficiency of the proposed solution.
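As a rough illustration of how list-scheduling heuristics change the number of accepted jobs, the sketch below greedily places jobs (each with a computation length and a data-transfer time) on the first node whose computing slot and a highly simplified single lightpath are free early enough. It covers FIFO, Largest Job First (LJF), and a random ordering (RS) only; the paper's ESTF heuristic and adaptive routing are not reproduced, and all structures and parameters are illustrative.

```python
import random
from dataclasses import dataclass

@dataclass
class Job:
    jid: int
    length: float    # computation time
    transfer: float  # data-transfer time on the lightpath before computing

def schedule(jobs, n_nodes, horizon, order):
    """Greedy list scheduling: a job is accepted if some node and the (single,
    exclusive) lightpath toward it are free early enough to finish by `horizon`."""
    node_free = [0.0] * n_nodes      # earliest free time of each computing node
    link_free = [0.0] * n_nodes      # earliest free time of the lightpath to each node
    accepted = 0
    for job in order(jobs):
        best = None
        for n in range(n_nodes):
            start_tx = link_free[n]
            start_cpu = max(node_free[n], start_tx + job.transfer)
            finish = start_cpu + job.length
            if finish <= horizon and (best is None or finish < best[1]):
                best = (n, finish, start_tx)
        if best:
            n, finish, start_tx = best
            link_free[n] = start_tx + job.transfer
            node_free[n] = finish
            accepted += 1
    return accepted

random.seed(1)
jobs = [Job(i, random.uniform(1, 10), random.uniform(0.5, 3)) for i in range(60)]
heuristics = {
    "FIFO": lambda js: js,
    "LJF":  lambda js: sorted(js, key=lambda j: -j.length),
    "RS":   lambda js: random.sample(js, len(js)),
}
for name, order in heuristics.items():
    print(name, "accepted jobs:", schedule(jobs, n_nodes=4, horizon=50.0, order=order))
```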
Abstract:
Consider a communication system in which a transmitter sends fixed-size packets of data at a uniform rate to a receiver. Consider also that these devices are connected by a packet-switched network, which introduces a random delay to each packet. Here we propose an adaptive clock recovery scheme capable of synchronizing the frequencies and phases of these devices within specified limits of precision. This scheme for achieving frequency and phase synchronization is based on measurements of the packet arrival times at the receiver, which are used to control the dynamics of a digital phase-locked loop. The scheme's performance is evaluated via numerical simulations using realistic parameter values.
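A minimal numerical sketch of the idea (not the authors' exact loop): the receiver compares each measured packet arrival time with the arrival instant predicted by its local clock and feeds the error into a second-order digital phase-locked loop that corrects both phase and period. The loop gains, jitter level, and initial frequency offset below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

T_tx = 1.0e-3                 # transmitter packet period (unknown to the receiver)
n_packets = 20000
jitter = 0.2e-3               # std of the random network delay added to each packet
arrivals = np.arange(n_packets) * T_tx + rng.normal(0.0, jitter, n_packets)

# Receiver-side digital PLL driven by packet arrival times
T_hat = 1.05e-3               # initial (wrong) estimate of the packet period
phase = 0.0                   # predicted arrival instant of the next packet
kp, ki = 0.1, 0.01            # proportional / integral gains (assumed values)

errs = []
for t_arr in arrivals:
    e = t_arr - phase         # timing error: measured vs. predicted arrival
    T_hat += ki * e           # frequency correction (integral path)
    phase += T_hat + kp * e   # phase correction plus advance by one period
    errs.append(e)

print(f"recovered period: {T_hat*1e3:.4f} ms (true {T_tx*1e3:.4f} ms)")
print(f"residual timing-error std (last 1000 packets): {np.std(errs[-1000:])*1e3:.4f} ms")
```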
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
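A minimal sketch of the two-step idea on synthetic data (the well coordinates, AR(1) water-level dynamics, and exponential variogram are assumptions, not values from the GAS case study): a time-series model is fitted to each monitoring well, and its one-step-ahead predictions are then interpolated in space by ordinary kriging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monitoring network: well coordinates (km) and monthly water levels (m)
coords = rng.uniform(0, 50, size=(12, 2))
n_months = 120
levels = np.empty((12, n_months))
for i in range(12):
    base = 600.0 - 0.5 * coords[i, 0]                      # gentle regional trend
    x = base
    for t in range(n_months):
        x = base + 0.8 * (x - base) + rng.normal(0, 0.3)   # AR(1) dynamics
        levels[i, t] = x

# Step 1: fit an AR(1) model per well and predict the next month
def ar1_forecast(series):
    phi, c = np.polyfit(series[:-1], series[1:], 1)        # y_t = c + phi * y_{t-1}
    return c + phi * series[-1]

preds = np.array([ar1_forecast(levels[i]) for i in range(12)])

# Step 2: ordinary kriging of the predicted levels (exponential semivariogram)
def gamma(h, sill=1.0, rng_km=15.0):
    return sill * (1.0 - np.exp(-h / rng_km))

def krige(coords, values, x0):
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = gamma(np.linalg.norm(coords - x0, axis=1))
    w = np.linalg.solve(A, b)                              # kriging weights + multiplier
    return float(w[:n] @ values)

print("predicted level at (25, 25) km:", round(krige(coords, preds, np.array([25.0, 25.0])), 2))
```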
Abstract:
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image-processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by using the method proposed in Paper III and extended in Paper IV. Paper V addresses a generic problem in classification, namely how to measure the goodness of different data representations so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to decide which experiments to perform and how to perform them (Papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (Papers III-V).
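As a generic illustration of peak alignment (not the specific method of Papers III and IV), the sketch below estimates the retention-time shift between a measured and a reference signal by cross-correlation and removes it, reducing the variance introduced by the misalignment. The signals and the shift are synthetic.

```python
import numpy as np

def gaussian_peak(x, center, width=2.0):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

x = np.arange(200, dtype=float)
reference = gaussian_peak(x, 80) + 0.5 * gaussian_peak(x, 130)
measured  = gaussian_peak(x, 86) + 0.5 * gaussian_peak(x, 136)   # retention-time shift

# Find the integer shift that maximizes cross-correlation with the reference
corr = np.correlate(measured - measured.mean(), reference - reference.mean(), mode="full")
shift = corr.argmax() - (len(reference) - 1)
aligned = np.roll(measured, -shift)

print("estimated shift:", shift)                                 # expected: 6 samples
print("RMS before:", np.sqrt(np.mean((measured - reference) ** 2)).round(4))
print("RMS after: ", np.sqrt(np.mean((aligned - reference) ** 2)).round(4))
```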
Abstract:
The Ph.D. thesis describes the simulation of different microwave links, from the transmitter to the receiver intermediate-frequency ports, by means of a rigorous circuit-level nonlinear analysis approach coupled with the electromagnetic characterization of the transmitter and receiver front ends. This includes a full electromagnetic computation of the radiated far field, which is used to establish the connection between transmitter and receiver. Digitally modulated radio-frequency drive is treated by a modulation-oriented harmonic-balance method based on Krylov-subspace model-order reduction, allowing the handling of large-size front ends. Different examples of links have been presented: an end-to-end link simulated by making use of an artificial neural network model; the latter allows fast computation of the link when driven by long sequences on the order of millions of samples. In this way, a meaningful evaluation of link performance aspects such as the bit error rate becomes possible at the circuit level. Subsequently, a work focused on the co-simulation of an entire link, including a realistic simulation of the radio channel, has been presented. The channel has been characterized by means of a deterministic approach, namely a Ray Tracing technique. Then, a 2x2 multiple-input multiple-output antenna link has been simulated; in this work, near-field and far-field coupling between radiating elements, as well as environmental factors, has been rigorously taken into account. Finally, with the aim of simulating an entire ultra-wideband link, the transmitting side of an ultra-wideband link has been designed, and an interesting front-end co-design technique application has been set up.
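The point of a fast behavioral (e.g., neural network) model of the link is that very long modulated sequences can be pushed through it for Monte Carlo BER estimation. The sketch below illustrates only that system-level step, with assumed stand-ins: QPSK symbols, a memoryless soft limiter in place of the trained front-end model, and an AWGN channel.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym = 1_000_000                        # long sequence, as needed for BER estimation

# QPSK symbols, unit average power
bits = rng.integers(0, 2, size=(n_sym, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Stand-in behavioral model of the transmit front end: memoryless saturation
# (a trained ANN model would replace this function in the approach described above)
def front_end(x, sat=1.2):
    r = np.abs(x)
    return x * np.tanh(r / sat) * sat / np.maximum(r, 1e-12)

tx = front_end(symbols)

# AWGN channel at a chosen Es/N0
esn0_db = 8.0
n0 = np.mean(np.abs(tx) ** 2) / 10 ** (esn0_db / 10)
rx = tx + rng.normal(0, np.sqrt(n0 / 2), n_sym) + 1j * rng.normal(0, np.sqrt(n0 / 2), n_sym)

# Hard-decision demodulation and bit error rate
bits_hat = np.stack([(rx.real > 0).astype(int), (rx.imag > 0).astype(int)], axis=1)
print(f"estimated BER at Es/N0 = {esn0_db} dB: {np.mean(bits_hat != bits):.2e}")
```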
Abstract:
The study of protein folding is a central problem in the life sciences, and in recent years it has led to several attempts to improve our knowledge of protein structures. In this thesis, this challenging problem is tackled by means of molecular dynamics, chirality, and NMR studies. In the last decades, many algorithms have been designed for protein secondary structure assignment, which reveals the local protein shape adopted by segments of amino acids. In this regard, the use of local chirality for protein secondary structure assignment was demonstrated, also attempting to correlate the propensity of a given amino acid for a particular secondary structure. The protein fold can also be studied by Nuclear Magnetic Resonance (NMR) investigations, finding the average structure adopted by a protein. In this context, the effect of Residual Dipolar Couplings (RDCs) on structure refinement was shown, revealing a strong improvement in structure resolution. A large part of this thesis is devoted to the study of the avian prion protein. The prion protein is mainly responsible for a vast class of neurodegenerative diseases, known as Bovine Spongiform Encephalopathy (BSE), present in mammals but not in avian species; it is caused by the conversion of the cellular prion protein into the pathogenic misfolded isoform, which accumulates in the brain in the form of amyloid plaques. In particular, the N-terminal region, namely the initial part of the protein, is quite different between mammalian and avian species, but both contain multimeric sequences called repeats, octameric in mammals and hexameric in avians. However, such repeat regions show differences in the amino acids they contain; in particular, only avian hexarepeats contain tyrosine residues. The chirality analysis of avian prion protein configurations obtained from molecular dynamics reveals a high stiffness of the avian protein, which tends to preserve its regular secondary structure. This is due to the presence of prolines, histidines, and especially tyrosines, which form a hydrogen bond network in the hexarepeat region that is only possible in the avian protein, thus probably hampering aggregation.
Abstract:
The scale-down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung, and Philips, as well as universities such as Bologna University, M.I.T., and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed, simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model, and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this thesis helped develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows the power and area demands of NoC interconnects to be reduced while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
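For orientation, here is a small sketch of the Spidergon topology itself, which connects each node to its two ring neighbours and to the node directly across, and of the average hop count it buys over a plain ring. The Equalized routing algorithm and the detailed router models studied in the thesis are not reproduced; this is only a topology-level illustration.

```python
from collections import deque

def spidergon_neighbors(i, n):
    """Spidergon: each node links to its ring neighbours and to the node across."""
    return [(i + 1) % n, (i - 1) % n, (i + n // 2) % n]

def ring_neighbors(i, n):
    return [(i + 1) % n, (i - 1) % n]

def avg_hops(n, neighbors):
    """Average shortest-path hop count over all source/destination pairs (BFS)."""
    total = 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in neighbors(u, n):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

for n in (16, 32, 64):
    print(f"N={n:3d}  ring: {avg_hops(n, ring_neighbors):.2f} hops"
          f"   Spidergon: {avg_hops(n, spidergon_neighbors):.2f} hops")
```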
Abstract:
The research activity described in this thesis focuses mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components and on the study of dynamic simulation techniques applied to integrated building design, in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on the second law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context which is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then clarified. For hydrogen, carbon dioxide, and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of the molar exergy and molar flow exergy when the temperature and pressure of the fuel differ from those of the environment. As an application of second law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, a polyethylene piping network buried in the soil which allows a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented. Data obtained by means of in situ thermal response tests are reported and evaluated by means of a finite-element simulation method, implemented through the software package COMSOL Multiphysics. The simulation method allows the determination of the precise value of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. In addition to the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a thorough process for the plant design of a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to match 70% of the domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. This chapter includes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS to evaluate the energy requirements of the building complex; 2) ground-coupled heat pumps modelled by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchanger by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy loss during the operation of the plants.
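For context on how thermal response test data yield ground properties, the classical infinite line-source approximation is often used as a first estimate alongside finite-element evaluation; the expression below is this standard textbook model, not the author's COMSOL-based method.

```latex
% Infinite line-source model for the mean borehole fluid temperature during a TRT:
% q = heat injection rate per unit borehole length [W/m], \lambda = effective ground
% thermal conductivity, \alpha = thermal diffusivity, r_b = borehole radius,
% R_b = borehole thermal resistance, \gamma = Euler's constant.
T_f(t) \simeq T_0 + \frac{q}{4\pi\lambda}\left[\ln\!\left(\frac{4\alpha t}{r_b^{2}}\right) - \gamma\right] + q\,R_b ,
\qquad t \gg \frac{5\,r_b^{2}}{\alpha}
% Plotting T_f against \ln t gives a straight line of slope k = q/(4\pi\lambda),
% so the effective conductivity follows as \lambda = q/(4\pi k).
```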
Abstract:
In this work, molecular dynamics simulations are performed to investigate the static properties of amorphous silicon dioxide (silica) surfaces. Since the so-called BKS potential proposed by van Beest, Kramer, and van Santen was optimized for bulk systems, and since the charge distributions at surfaces differ markedly from those in the bulk, the applicability of this potential to surface systems is questionable. For this reason, we investigated to what extent the surface properties of systems equilibrated with the BKS potential change upon further relaxation with an ab initio simulation (Car-Parrinello method). Using the combination of the BKS and Car-Parrinello (CPMD) methods, we found that the systems expand further in the z-direction as a result of this relaxation. Furthermore, particularly for small rings (which occur only at the surface), there are clear deviations in the geometries (interatomic distances, angles, etc.) between the pure BKS method and the combined BKS-CPMD method. By means of CPMD simulations, we were able to show that the interaction of a water molecule with a two-membered ring leads to the breaking of this ring structure and to the formation of two silanol groups (SiOH). We also found that this is an exothermic reaction (energy difference 1.6 eV) for which an energy barrier of 1.1 eV must be overcome. Finally, the strongly deformed tetrahedra involved in forming the two-membered ring adopt a nearly ideal tetrahedral shape after this ring structure breaks.
Abstract:
The objective of the Ph.D. thesis is to lay the basis of an all-embracing link analysis procedure that may form a general reference scheme for the future state of the art of RF/microwave link design: it is basically meant as a circuit-level simulation of an entire radio link, with the (generally multiple) transmitting and receiving antennas examined by EM analysis. In this way, the influence of mutual couplings on the frequency-dependent near-field and far-field performance of each element is fully accounted for. The set of transmitters is treated as a unique nonlinear system loaded by the multiport antenna and is analyzed by nonlinear circuit techniques. In order to establish the connection between transmitters and receivers, the far fields incident on the receivers are evaluated by EM analysis and are combined by extending an available Ray Tracing technique to the link study. EM theory is used to describe the receiving array as a linear active multiport network. Link performance in terms of bit error rate (BER) is eventually verified a posteriori by a fast system-level algorithm. In order to validate the proposed approach, four heterogeneous application contexts are provided. A complete MIMO link design in a realistic propagation scenario constitutes the reference case study. The second context regards the design, optimization, and testing of various typologies of rectennas for power generation from common RF sources. Finally, two typologies of radio identification tags, at X-band and V-band respectively, are designed and implemented. In all cases, an exhaustive nonlinear/electromagnetic co-simulation and co-design is demonstrated to be essential for any accurate system performance prediction.
Abstract:
The motor system can no longer be considered a mere passive executor of motor commands generated elsewhere in the brain. On the contrary, it is deeply involved in perceptual and cognitive functions and acts as an “anticipation device”. The present thesis investigates the anticipatory motor mechanisms occurring in two particular instances: i) when processing sensory events occurring within the peripersonal space (PPS); and ii) when perceiving and predicting others’ actions. The first study provides evidence that PPS representation in humans modulates neural activity within the motor system, while the second demonstrates that the motor mapping of sensory events occurring within the PPS critically relies on the activity of the premotor cortex. The third study provides direct evidence that the anticipatory motor simulation of others’ actions critically relies on the activity of the anterior node of the action observation network (AON), namely the inferior frontal cortex (IFC). The fourth study sheds light on the pivotal role of the left IFC in predicting the future end state of observed right-hand actions. Finally, the fifth study examines how the ability to predict others’ actions can be influenced by a reduction of sensorimotor experience due to the traumatic or congenital loss of a limb. Overall, the present work provides new insights into: i) the anticipatory mechanisms underlying the basic reactivity of the motor system when processing sensory events occurring within the PPS, and the same anticipatory motor mechanisms when perceiving others’ implied actions; ii) the functional connectivity and plasticity of premotor-motor circuits, both during the motor mapping of sensory events occurring within the PPS and when perceiving others’ actions; and iii) the anticipatory mechanisms related to the prediction of others’ actions.
Abstract:
The central aim of this thesis work is the application and further development of a hybrid quantum mechanical/molecular mechanics (QM/MM) approach to compute spectroscopic properties of molecules in complex chemical environments from electronic structure theory. In the framework of this thesis, an existing density functional theory implementation of the QM/MM approach is first used to calculate the nuclear magnetic resonance (NMR) solvent shifts of an adenine molecule in aqueous solution. The findings show that aqueous solvation, with its strongly fluctuating hydrogen bond network, leads to specific changes in the NMR resonance lines. Besides the absolute values, the ordering of the NMR lines also changes under the influence of the solvating water molecules. Without the QM/MM scheme, a quantum chemical calculation could have led to an incorrect assignment of these lines. The second part of this thesis describes a methodological improvement of the QM/MM method designed for cases in which a covalent chemical bond crosses the QM/MM boundary. The development consists of an automated protocol to optimize a so-called capping potential that saturates the electronic subsystem in the QM region. The optimization scheme is capable of tuning the parameters in such a way that the deviations of the electronic orbitals between the regular and the truncated (and "capped") molecule are minimized. This in turn results in a considerable improvement of the structural and spectroscopic parameters when computed with the new optimized capping potential within the QM/MM technique. This optimization scheme is applied and benchmarked on the example of truncated carbon-carbon bonds in a set of small test molecules. It turns out that the optimized capping potentials yield an excellent agreement of NMR chemical shifts and protonation energies with respect to the corresponding full molecules. These results are very promising, so that the application to larger biological complexes is expected to significantly improve the reliability of the prediction of the related spectroscopic properties.
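Schematically, the capping-potential optimization can be pictured as minimizing an orbital-deviation penalty with respect to the capping parameters. The sketch below is purely illustrative: the "orbitals" are toy vectors, capped_orbitals is a stand-in for an actual electronic-structure run, and SciPy's generic Nelder-Mead minimizer replaces the scheme developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stand-in for the reference orbitals of the full molecule (one column per orbital)
ref_orbitals = np.linalg.qr(rng.normal(size=(20, 4)))[0]
perturb_pattern = np.random.default_rng(1).normal(size=(20, 4))  # frozen distortion pattern

def capped_orbitals(params):
    """Stand-in for an electronic-structure calculation on the truncated, capped
    fragment: the capping parameters control how strongly the reference orbitals are
    distorted. A real implementation would recompute the orbitals with the candidate
    capping potential instead."""
    strength, width = params
    envelope = strength * np.exp(-abs(width) * np.linspace(0, 1, 20))[:, None]
    orbs, _ = np.linalg.qr(ref_orbitals + envelope * perturb_pattern)
    return orbs

def orbital_deviation(params):
    """Penalty measuring how far the capped orbitals deviate from the reference ones
    (zero when the two orbital subspaces coincide)."""
    overlap = ref_orbitals.T @ capped_orbitals(params)
    return 4.0 - float(np.sum(overlap ** 2))

result = minimize(orbital_deviation, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimized capping parameters:", np.round(result.x, 3))
print("residual orbital deviation:  ", round(result.fun, 6))
```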