918 results for "Time domain simulation tools"


Relevance:

100.00%

Publisher:

Abstract:

In recent years, there has been a visible trend towards products and services that demand seamless integration of cellular networks, WLANs and WPANs. This is a strong indication that high-speed short-range wireless technology will be included in future applications. In this context, UWB radio has a significant role to play as an extension of, and complement to, existing cellular and access technology. The present work investigates two major types of ultra-wideband planar antenna: the monopole and the slot. Three novel compact UWB antennas suitable for portable applications are designed and characterized, namely 1) a ground-modified monopole, 2) a serrated monopole and 3) a triangular slot. The performance of these designs has been studied using standard simulation tools used in industry and academia, and verified experimentally. Antenna design guidelines are also deduced by accounting for the resonances in each structure. Besides compact size, high efficiency and broad bandwidth, one of the major criteria in the design of impulse-UWB systems is the transmission of narrow pulses with minimum distortion. The key challenge is not only to design a broadband antenna with constant, stable gain, but also to maintain a flat group delay (linear phase response) in the frequency domain, i.e. an excellent transient response in the time domain. One of the major contributions of the thesis lies in the analysis of the frequency- and time-domain responses of the designed UWB antennas to confirm their suitability for portable pulsed-UWB systems. Techniques to avoid narrowband interference by engraving narrow slot resonators on the antenna are also proposed, and their effect on a nanosecond pulse is investigated.
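
The flat-group-delay criterion above can be checked numerically: group delay is the negative derivative of the unwrapped phase with respect to angular frequency, so a purely linear phase yields a constant delay. The sketch below uses a hypothetical ideal 0.5 ns delay and an artificial phase ripple for contrast; it is an illustration of the criterion, not data from the thesis.

```python
import numpy as np

# Frequency axis (rad/s) over the nominal 3.1-10.6 GHz UWB band
w = np.linspace(2 * np.pi * 3.1e9, 2 * np.pi * 10.6e9, 1000)

t0 = 0.5e-9                                # hypothetical pure delay of 0.5 ns
phase_linear = -w * t0                     # linear phase -> distortionless
phase_distorted = -w * t0 + 0.3 * np.sin(w * 1e-10)  # artificial ripple

def group_delay(phase, w):
    """Group delay tau(w) = -d(phase)/d(omega), via numerical gradient."""
    return -np.gradient(phase, w)

tau_lin = group_delay(phase_linear, w)     # constant, equal to t0
tau_dis = group_delay(phase_distorted, w)  # varies with frequency
```

A linear-phase antenna passes the test (constant delay), while the rippled response shows frequency-dependent delay that would distort a nanosecond pulse.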


This research examines the dynamics associated with new representational technologies in complex organizations through a study of the use of a Single Model Environment, prototyping and simulation tools in the mega-project to construct Terminal 5 at Heathrow Airport, London. The ambition of the client, BAA, was to change industrial practices, reducing project costs and time to delivery through new contractual arrangements and new digitally-enabled collaborative ways of working. The research highlights changes over time and addresses two areas of 'turbulence' in the use of: 1) technologies, where there is a dynamic tension between the desire to constantly improve, change and update digital technologies and the need to standardise practices, maintaining and defending the overall integrity of the system; and 2) representations, where dynamics result from the responsibilities and liabilities associated with the sharing of digital representations and a lack of trust in the validity of data from other firms. These dynamics are tracked across three stages of this well-managed and innovative project and indicate a generic need to treat digital infrastructure as an ongoing strategic issue.


This chapter aims to provide an overview of building simulation in a theoretical and practical context. The following sections demonstrate the importance of simulation programs at a time when society is shifting towards a low-carbon future and the practice of sustainable design is becoming mandatory. The initial sections acquaint the reader with basic terminology and comment on the capabilities and categories of simulation tools before discussing the historical development of the programs. The main body of the chapter considers the primary benefits and users of simulation programs, looks at the role of simulation in the construction process, and examines the validity and interpretation of simulation results. The latter half of the chapter looks at program selection and discusses software capability, product characteristics, input data and output formats. A case study demonstrates the simulation procedure and key concepts. Finally, the chapter closes with a look to the future, commenting on the development of simulation capability and user interfaces, and on how simulation will continue to empower building professionals as society faces new challenges in a rapidly changing landscape.


Single-carrier (SC) block transmission with frequency-domain equalisation (FDE) offers a viable transmission technology for combating the adverse effects of the long dispersive channels encountered in high-rate broadband wireless communication systems. However, for high bandwidth-efficiency and high power-efficiency systems, the channel can generally be modelled as a Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such nonlinear Hammerstein channels, the standard SC-FDE scheme no longer works. This paper advocates a complex-valued (CV) B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. Specifically, we model the nonlinear HPA, which represents the CV static nonlinearity of the Hammerstein channel, by a CV B-spline neural network, and we develop two efficient alternating least squares schemes for estimating the parameters of the Hammerstein channel, including both the channel impulse response coefficients and the parameters of the CV B-spline model. We also use another CV B-spline neural network to model the inverse of the nonlinear HPA; the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm on pseudo training data obtained as a natural by-product of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain followed by the inverse B-spline neural network model applied in the time domain. Extensive simulation results are included to demonstrate the effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels.
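
The receiver structure described above (one-tap FDE for the linear part, then a time-domain inverse of the static nonlinearity) can be illustrated with a toy Hammerstein channel. The cubic HPA law below is a hypothetical stand-in for the paper's B-spline-identified nonlinearity; since QPSK has constant modulus, its inverse reduces to a scalar gain here, whereas the paper trains an inverting B-spline network.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # block length (a cyclic prefix is assumed, so convolution is circular)

# QPSK block
bits = rng.integers(0, 2, (N, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Hypothetical memoryless HPA (the static part of the Hammerstein model):
# mild cubic AM/AM compression, invertible on the operating range
def hpa(s):
    return s * (1.0 - 0.1 * np.abs(s) ** 2)

h = np.array([1.0, 0.5 - 0.3j, 0.2j])   # dispersive channel impulse response
H = np.fft.fft(h, N)

# Hammerstein channel: static nonlinearity followed by the linear channel
y = np.fft.ifft(np.fft.fft(hpa(x)) * H)

# Receiver: one-tap FDE removes the linear part...
z = np.fft.ifft(np.fft.fft(y) / H)
# ...and a time-domain inverse of the nonlinearity removes the static part.
# QPSK has unit modulus, so hpa() reduces to a gain of 0.9 in this toy case.
x_hat = z / 0.9
```

With noise and an unknown HPA, the scalar division would be replaced by the identified inverting B-spline model, but the two-stage structure is the same.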


A practical single-carrier (SC) block transmission with frequency-domain equalisation (FDE) system can generally be modelled as a Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such Hammerstein channels, the standard SC-FDE scheme no longer works. We propose a novel B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. In particular, we model the nonlinear HPA, which represents the complex-valued static nonlinearity of the Hammerstein channel, by two real-valued B-spline neural networks, one modelling the nonlinear amplitude response of the HPA and the other its nonlinear phase response. We then develop an efficient alternating least squares algorithm for estimating the parameters of the Hammerstein channel, including the channel impulse response coefficients and the parameters of the two B-spline models. Moreover, we use another real-valued B-spline neural network to model the inverse of the HPA's nonlinear amplitude response; the parameters of this inverting B-spline model can be estimated using the standard least squares algorithm on pseudo training data obtained as a by-product of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain followed by the inverse B-spline neural network model applied in the time domain. The effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels is demonstrated in a simulation study.
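
The core idea of representing the HPA's amplitude response by a real-valued B-spline network is essentially a least-squares spline fit to AM/AM data. The sketch below fits a cubic B-spline to a Saleh-style compression curve using SciPy's generic least-squares spline routine; both the curve and the knot placement are illustrative assumptions, not the paper's identified model.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Hypothetical AM/AM characteristic (Saleh-style compression), standing in
# for measured HPA amplitude-response data
r = np.linspace(0.0, 1.0, 200)        # normalised input amplitude
g = 2.0 * r / (1.0 + r ** 2)          # output amplitude

# Cubic B-spline least-squares fit: interior knots plus repeated boundary knots
k = 3
t = np.r_[[0.0] * (k + 1), [0.25, 0.5, 0.75], [1.0] * (k + 1)]
spline = make_lsq_spline(r, g, t, k=k)

err = np.max(np.abs(spline(r) - g))   # worst-case fit error on the grid
```

In the paper's scheme such a fit is embedded in an alternating least squares loop that jointly estimates the spline weights and the channel taps; here the "measurements" are noiseless, so a handful of knots already gives a very accurate fit.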


Purpose: To evaluate retinal nerve fiber layer measurements with time-domain (TD) and spectral-domain (SD) optical coherence tomography (OCT), and to test the diagnostic ability of both technologies in glaucomatous patients with asymmetric visual hemifield loss. Methods: Thirty-six patients with primary open-angle glaucoma with visual field loss in one hemifield (affected) and no loss in the other (non-affected), and 36 age-matched healthy controls, had the study eye imaged with Stratus-OCT (Carl Zeiss Meditec Inc., Dublin, California, USA) and 3D OCT-1000 (Topcon, Tokyo, Japan). Peripapillary retinal nerve fiber layer measurements and normative classifications were recorded. Total deviation values were averaged in each hemifield (hemifield mean deviation) for each subject. Visual field and retinal nerve fiber layer "asymmetry indexes" were calculated as the ratio between the affected and non-affected hemifields and the corresponding hemiretinas. Results: Retinal nerve fiber layer measurements in non-affected hemifields (mean [SD] 87.0 [17.1] μm and 84.3 [20.2] μm for TD and SD-OCT, respectively) were thinner than in controls (119.0 [12.2] μm and 117.0 [17.7] μm, P<0.001). The OCT normative database classified 42% and 67% of hemiretinas corresponding to non-affected hemifields as abnormal in TD and SD-OCT, respectively (P=0.01). Retinal nerve fiber layer measurements were consistently thicker with TD than with SD-OCT. The retinal nerve fiber layer thickness asymmetry index was similar in TD (0.76 [0.17]) and SD-OCT (0.79 [0.12]) and significantly greater than the visual field asymmetry index (0.36 [0.20], P<0.001). Conclusions: Normal hemifields of glaucoma patients had a thinner retinal nerve fiber layer than healthy eyes, as measured by both TD and SD-OCT. Retinal nerve fiber layer measurements were thicker with TD than with SD-OCT. SD-OCT detected abnormal retinal nerve fiber layer thickness more often than TD-OCT.
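
The asymmetry index is a simple ratio of the affected to the non-affected side. The affected-hemiretina value below is hypothetical, chosen only so that the ratio reproduces the reported TD index of 0.76 against the reported non-affected mean of 87.0 μm.

```python
# Asymmetry index as defined in the study: the ratio of the affected to the
# non-affected hemifield (or of the corresponding hemiretinas).

def asymmetry_index(affected, non_affected):
    return affected / non_affected

# Hypothetical affected-hemiretina mean (um) vs the reported non-affected mean
td_index = asymmetry_index(66.1, 87.0)
```

An index near 1.0 means the two sides are symmetric; the much smaller visual field index (0.36) reflects the far larger functional asymmetry.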


Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications. In fact, once the technology has been selected, the designer chooses the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of different-sized devices, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent-circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling.
This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and of the reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should preserve the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, only complicated technology-dependent scaling rules or computationally inefficient distributed models are available in the literature. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models. Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. 
Following this analogy, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
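
The sampling-theory analogy can be made concrete with a minimal Whittaker-Shannon sketch: a nonlinear characteristic sampled on a uniform voltage grid is reconstructed continuously with sinc kernels, exactly as a band-limited signal is reconstructed from time samples. The tanh characteristic is illustrative only; the thesis' actual approximation algorithm is more elaborate.

```python
import numpy as np

# "Measurements" of a smooth nonlinear characteristic on a uniform bias grid
# (illustrative tanh-shaped I-V curve, not real device data)
v = np.linspace(-3.0, 3.0, 25)   # sampled control voltage (V)
i = np.tanh(v)                   # sampled characteristic

def sinc_reconstruct(vq, v, i):
    """Whittaker-Shannon reconstruction of the sampled characteristic:
    sum_k i[k] * sinc((vq - v[k]) / dv), evaluated at each query point."""
    dv = v[1] - v[0]
    return np.array([np.sum(i * np.sinc((q - v) / dv))
                     for q in np.atleast_1d(vq)])

iq = sinc_reconstruct(0.6, v, i)[0]   # off-grid query in the sampled domain
```

Truncating the sample grid introduces a small ringing error at interior points; in a table look-up model this trade-off between grid density and reconstruction accuracy mirrors the aliasing/truncation trade-off of time-domain sampling.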


Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an increased dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handling this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where the relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further increase of dimensionality due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). 
The impact of these phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, supporting the feasibility of this technology for mainstream applications in sub-45 nm integrated circuit generations.
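
The principle behind wavelet-driven refinement can be shown in one dimension: Haar detail coefficients of a coarse solution flag the regions of rapid variation (e.g. a junction), and only the flagged cells are subdivided. This is a generic single-sweep illustration under an assumed tanh-shaped profile, not the thesis' WAM algorithm.

```python
import numpy as np

# Coarse 1D "solution" profile with a junction-like step (illustrative)
x = np.linspace(0.0, 1.0, 64)
u = np.tanh((x - 0.5) / 0.02)    # sharp transition near x = 0.5

# Haar detail coefficients on sample pairs: large details flag regions where
# the solution varies rapidly and the grid should be refined
details = np.abs(u[0::2] - u[1::2]) / 2.0
threshold = 0.05 * details.max()

# Insert a midpoint in every flagged cell (one refinement sweep)
refined = set(x)
for j, d in enumerate(details):
    if d > threshold:
        refined.add(0.5 * (x[2 * j] + x[2 * j + 1]))
grid = np.array(sorted(refined))
```

In the full 2D/3D method the same multiresolution test drives repeated sweeps, and mesh-quality criteria constrain where points may be added.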


Recently, an ever increasing degree of automation has been observed in most industrial processes. This increase is motivated by the growing requirement for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which is for this reason called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. 
Even if the success of a modern AMS, from a functional and behavioural point of view, is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and crucial functional flexibility; dynamically adapting the control strategies according to different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator assigned to the machine to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. 
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured", way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to deep confusion between the functional view and the technological view. In industrial-automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in so-called object-oriented methodologies. 
Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMSs contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. 
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. In Chapter 2, a survey on the state of the software engineering paradigm applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
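
The discrete-event view of fault diagnosis can be illustrated with a textbook-style diagnoser (in the spirit of the standard Sampath construction, not the thesis' specific architecture): the plant emits observable events, an unobservable fault event silently changes which behaviours are possible, and an observer tracks every fault/no-fault hypothesis consistent with what has been seen. The toy plant below is entirely hypothetical.

```python
# Plant: nondeterministic labelled transitions (state, event) -> next states.
# Events 'a', 'b', 'c' are observable; 'f' is the unobservable fault.
PLANT = {
    (0, 'a'): [1],
    (0, 'f'): [2],     # the fault may occur silently in state 0
    (1, 'b'): [0],
    (2, 'a'): [3],
    (3, 'c'): [2],     # event 'c' is only possible after the fault
}

def unobservable_closure(labelled_states):
    """Add states reachable through the unobservable fault, marked faulty."""
    out = set(labelled_states)
    frontier = list(labelled_states)
    while frontier:
        s, _ = frontier.pop()
        for nxt in PLANT.get((s, 'f'), []):
            item = (nxt, True)
            if item not in out:
                out.add(item)
                frontier.append(item)
    return out

def diagnose(observations):
    """Return 'F' (surely faulty), 'N' (surely normal) or 'U' (uncertain)."""
    estimate = unobservable_closure({(0, False)})
    for ev in observations:
        nxt = set()
        for s, faulty in estimate:
            for t in PLANT.get((s, ev), []):
                nxt.add((t, faulty))
        estimate = unobservable_closure(nxt)
    labels = {faulty for _, faulty in estimate}
    return 'F' if labels == {True} else 'N' if labels == {False} else 'U'
```

After observing only 'a' the diagnoser is uncertain (both hypotheses survive), but 'a' followed by 'c' is only explainable through the fault, so the diagnosis becomes certain.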


Time-domain analysis of electroencephalography (EEG) can identify sub-second periods of quasi-stable brain states. These so-called microstates are assumed to correspond to basic units of cognition and emotion. Global Field Synchronization (GFS), on the other hand, is a frequency-domain measure that estimates the functional synchronization of brain processes at a global level for each EEG frequency band [Koenig, T., Lehmann, D., Saito, N., Kuginuki, T., Kinoshita, T., Koukkou, M., 2001. Decreased functional connectivity of EEG theta-frequency activity in first-episode, neuroleptic-naive patients with schizophrenia: preliminary results. Schizophr. Res. 50, 55-60.]. Using these time- and frequency-domain analyses, several previous studies reported shortened microstate duration for specific microstate classes and decreased GFS in the theta band in drug-naive schizophrenia compared to controls. The purpose of this study was to investigate changes in these EEG parameters after drug treatment in drug-naive schizophrenia. EEG analysis was performed in 21 drug-naive patients and 21 healthy controls; 14 patients were re-evaluated 2-8 weeks (mean 4.3) after the initiation of drug administration. The results extend findings on the effect of treatment on brain function in schizophrenia, and imply that shortened duration of specific microstate classes seems to be a state marker, especially in patients who later respond to neuroleptics, while lower theta GFS seems to be a state-related phenomenon and higher gamma GFS a trait-like phenomenon.
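
A minimal sketch of the GFS computation, following the idea in Koenig et al. (2001): at a given frequency, each channel's complex Fourier coefficient is a point in the complex plane, and GFS measures how well the cloud of points fits a single line (the dominant-eigenvalue fraction of the 2x2 covariance of real and imaginary parts). The synthetic phase-locked data below are illustrative; channel count, length and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_ch, n = 128, 19, 512   # sampling rate (Hz), channels, samples

# Synthetic EEG: every channel carries the same 6 Hz (theta) oscillation with
# an identical phase but different amplitudes, plus small independent noise
t = np.arange(n) / fs
amps = rng.uniform(0.5, 2.0, n_ch)
eeg = amps[:, None] * np.sin(2 * np.pi * 6 * t) \
      + 0.01 * rng.standard_normal((n_ch, n))

spec = np.fft.rfft(eeg, axis=1)
k = int(round(6 * n / fs))                           # bin closest to 6 Hz
pts = np.vstack([spec[:, k].real, spec[:, k].imag])  # 2 x n_ch point cloud
cov = pts @ pts.T
ev = np.linalg.eigvalsh(cov)                         # ascending eigenvalues
gfs = ev[-1] / ev.sum()   # ~1.0 = perfect phase-locking, ~0.5 = none
```

With all channels phase-locked the points fall on one line and GFS approaches 1; desynchronized channels would scatter the cloud and pull GFS towards 0.5.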


This paper presents the architecture and the methods used to dynamically simulate the sea backscatter seen by an airborne radar operating in a medium pulse repetition frequency (MPRF) mode. It offers a method of generating a sea backscatter signal which reproduces the time-domain intensity statistics, spatial correlation and local Doppler spectrum of real clutter data. Three antenna channels (sum, guard and difference) and their cross-correlation properties are simulated. The objective of this clutter generator is to serve as the signal source for the simulation of complex airborne pulsed radar signal processors.
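
A common textbook recipe for this kind of generator (not necessarily the paper's exact method) is the compound-Gaussian model: a slowly varying gamma-distributed "texture" sets the local clutter power, while a correlated complex Gaussian "speckle" shapes the Doppler spectrum. All parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096

# Gamma texture, smoothed so the local power varies slowly along the record
texture = rng.gamma(shape=1.5, scale=1.0 / 1.5, size=n)
texture = np.convolve(texture, np.ones(64) / 64, mode='same')

# Complex Gaussian speckle, filtered to impose a Gaussian Doppler spectrum
speckle = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h = np.exp(-0.5 * (np.arange(-16, 17) / 4.0) ** 2)   # Gaussian Doppler filter
h /= np.linalg.norm(h)
speckle = np.convolve(speckle, h, mode='same')

# Compound clutter: K-distributed-like amplitude statistics
clutter = np.sqrt(texture) * speckle
```

Replicating this process per antenna channel with partially shared speckle would reproduce the sum/guard/difference cross-correlation structure the paper simulates.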


What is the time-optimal way of using a set of control Hamiltonians to obtain a desired interaction? Vidal, Hammerer, and Cirac [Phys. Rev. Lett. 88, 237902 (2002)] have obtained a set of powerful results characterizing the time-optimal simulation of a two-qubit quantum gate using a fixed interaction Hamiltonian and fast local control over the individual qubits. How practically useful are these results? We prove that there are two-qubit Hamiltonians such that time-optimal simulation requires infinitely many steps of evolution, each infinitesimally small, and thus is physically impractical. A procedure is given to determine which two-qubit Hamiltonians have this property, and we show that almost all Hamiltonians do. Finally, we determine some bounds on the penalty that must be paid in the simulation time if the number of steps is fixed at a finite number, and show that the cost in simulation time is not too great.
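
The role of fast local control can be shown with a minimal numerical example (illustrative, not from the paper): conjugating the evolution under a fixed ZZ interaction by a local X rotation flips the sign of the coupling, effectively simulating evolution under -H at no extra interaction cost, since X Z X = -Z.

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

H = np.kron(Z, Z)            # fixed two-qubit interaction Hamiltonian
theta = 0.3                  # interaction time x coupling strength

U_fwd = expm(-1j * theta * H)

# Fast local control: conjugation by X on the first qubit flips the sign of
# the ZZ coupling, i.e. it simulates evolution under -H "for free"
XI = np.kron(X, I)
U_flipped = XI @ U_fwd @ XI
U_bwd = expm(+1j * theta * H)
```

Time-optimal protocols of the kind the paper analyzes interleave many such instantaneous local operations with infinitesimal slices of the fixed evolution, which is exactly why an infinite number of steps can be required.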


Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery lifetime. The digital domain is the best solution for implementing signal processing functions, thanks to the scalability of CMOS technology, which pushes towards integration at the sub-micrometre level. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analog domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analog functions have been moved into the digital domain. This means that analog-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analog and digital worlds, and consequently their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key building block used as the interface in high-resolution, low-power mixed-signal circuits. Modelling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this method is extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioural models of the modulator are essential for the designer to perform fast simulations that allow the converter specifications needed to achieve the required performance to be identified. 
The objective of this thesis is the behavioural modelling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data demonstrate that the proposed behavioural model is precise and accurate.
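
The speed advantage of behavioural simulation comes from replacing transistor-level blocks with difference equations. Below is a minimal first-order Sigma-Delta behavioural model (a generic illustration, far simpler than the thesis' model) with hooks for two non-idealities: integrator leakage (finite gain) and input-referred thermal noise.

```python
import numpy as np

def first_order_sdm(x, leak=1.0, noise_rms=0.0, seed=0):
    """Behavioural first-order Sigma-Delta modulator.

    leak < 1 models a lossy (finite-gain) integrator; noise_rms adds
    input-referred integrator thermal noise. Returns the +/-1 bitstream.
    """
    rng = np.random.default_rng(seed)
    v = 0.0                                    # integrator state
    y = np.empty(len(x))
    for i, xi in enumerate(x):
        y[i] = 1.0 if v >= 0.0 else -1.0       # 1-bit quantiser
        v = leak * v + xi + noise_rms * rng.standard_normal() - y[i]
    return y

bits = first_order_sdm(np.full(10_000, 0.3))   # ideal integrator, DC input
```

For a DC input the mean of the bitstream converges to the input value, which is the basic sanity check before sweeping leak and noise to extract converter specifications.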


The finding that Pareto distributions are adequate to model Internet packet interarrival times has motivated the proposal of methods to evaluate steady-state performance measures of Pareto/D/1/k queues. Some limited analytical derivations for these queue models have been proposed in the literature, but their solutions are often of great mathematical difficulty. To overcome such limitations, simulation tools that can deal with general queueing systems must be developed. Despite certain limitations, simulation algorithms provide a mechanism to obtain insight and good numerical approximations to queue parameters. In this work, we give an overview of some of these methods and compare them with our simulation approach, which is suited to solving queues with Generalized-Pareto interarrival time distributions. The paper discusses the properties and use of the Pareto distribution. We propose a real-time trace simulation model for estimating the steady-state probability (showing the tail-raising effect), the loss probability and the delay of the Pareto/D/1/k queue, and make a comparison with M/D/1/k. The background on Internet traffic helps to carry out the evaluation correctly. This model can be used to study long-tailed queueing systems. We close the paper with some general comments and thoughts about future work.
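
A bare-bones event-driven Pareto/D/1/k sketch is easy to write and already exposes the quantities of interest (here, the loss probability). The load level, tail index and buffer size below are illustrative assumptions, and the inverse-CDF method is used to draw Pareto interarrivals; the paper's trace-driven simulator is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(3)

def pareto_d1k(n, alpha=1.8, k=20, service=1.0):
    """Pareto interarrivals, deterministic service, finite buffer of k
    customers (including the one in service). Returns the fraction lost."""
    # Pareto via inverse CDF, scaled to a mean of 1.5 service times (alpha > 1)
    xm = 1.5 * (alpha - 1.0) / alpha
    inter = xm / rng.random(n) ** (1.0 / alpha)
    arrivals = np.cumsum(inter)

    lost = 0
    departures = []          # departure times of customers still in system
    for t in arrivals:
        departures = [d for d in departures if d > t]   # purge finished jobs
        if len(departures) >= k:
            lost += 1                                   # buffer full: drop
            continue
        start = max(t, departures[-1]) if departures else t
        departures.append(start + service)              # FIFO, single server
    return lost / n

loss = pareto_d1k(50_000)
```

Repeating the run for a range of k and comparing against exponential interarrivals (M/D/1/k) exhibits the heavier tail of the Pareto queue-length distribution.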


Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research rather than network research; 2) the memory required to execute an Internet-scale model is exorbitant; and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems. First, this work presents a method for automatically enabling real-time interaction, monitoring and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments; however, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with large-scale network models. Second, this work deals with reducing the memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplication in network models to dramatically reduce the memory required to execute large-scale network experiments. Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. 
As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation but, since real systems are included in the network model, also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
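
The memory-reduction idea, exploiting structural duplication, can be illustrated with a simple flyweight-style interner (a generic illustration; the dissertation's actual mechanism is more elaborate): identical configuration subtrees are canonicalised so they are stored once, no matter how many model elements reference them.

```python
# Flyweight-style sharing of identical configuration subtrees.

class Interner:
    def __init__(self):
        self._pool = {}

    def intern(self, node):
        """Canonicalise a nested tuple structure: identical subtrees become
        the same object, so each distinct subtree is stored only once."""
        if isinstance(node, tuple):
            node = tuple(self.intern(c) for c in node)
            return self._pool.setdefault(node, node)
        return node

pool = Interner()
# Two "routers" with identical interface subtrees (hypothetical model fragment)
r1 = pool.intern(('router', ('eth0', '1Gbps'), ('eth1', '1Gbps')))
r2 = pool.intern(('router', ('eth0', '1Gbps'), ('eth1', '1Gbps')))
shared = r1 is r2   # both names now refer to one shared structure
```

In an Internet-scale model with millions of near-identical hosts and links, this kind of sharing is what turns an exorbitant memory footprint into a tractable one.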