958 results for simulator


Relevance:

10.00%

Publisher:

Abstract:

This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities using a Monte Carlo method, sampling from a lognormal distribution whose parameters were predetermined from engine tests and depend upon spark timing, engine speed, and load. Previous studies have shown the lognormal distribution to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of high and low knock intensity levels, which characterize knock and the reference level respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
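As an illustration of the sampling scheme described above, the sketch below draws cycle-to-cycle knock intensities from a lognormal distribution by Monte Carlo. The distribution parameters and the operating-point lookup are hypothetical placeholders, since the study's engine-test maps are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def knock_intensities(mu, sigma, n_cycles):
    """Draw n_cycles knock intensities from a lognormal(mu, sigma) distribution."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

# Hypothetical operating point: in the study, mu and sigma would be looked up
# from engine-test maps indexed by spark timing, engine speed, and load.
intensities = knock_intensities(mu=0.5, sigma=0.8, n_cycles=1000)
print(intensities.mean(), np.percentile(intensities, 98))
```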

Relevance:

10.00%

Publisher:

Abstract:

It is an important and difficult challenge to protect modern interconnected power systems from blackouts. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as the Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on modeling power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling package in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. The Transient Analysis of Control Systems (TACS) feature is used to model the excitation control system, power system stabilizer, and turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled, and inter-area and intra-area oscillations are observed. The two-area system is then reduced to a two-machine system using dynamic equivalencing, and the original and reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models; the advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP, and the benchmarked models correctly simulate the power system dynamics. The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing the dynamic behaviors. Other aspects, such as relaying, can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
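To make the phasor-domain versus time-domain distinction concrete, the minimal sketch below integrates the classical swing equation for a single machine against an infinite bus through a temporary fault. This is a generic textbook model with illustrative per-unit values, not an ATP or PSS/E model.

```python
import numpy as np

# Classical swing equation: (2H / omega_s) * d(domega)/dt = Pm - Pe,
# integrated with forward Euler. All values are illustrative per-unit numbers.
H = 3.5          # inertia constant (s)
f0 = 60.0        # nominal frequency (Hz)
Pm = 0.8         # mechanical power (pu)
Pmax = 1.8       # peak electrical power transfer (pu)

delta = np.arcsin(Pm / Pmax)   # pre-fault rotor angle (rad)
omega = 0.0                    # rotor speed deviation (rad/s)
dt = 1e-3

for step in range(int(2.0 / dt)):
    t = step * dt
    # A fault from t = 0.1 s to 0.2 s collapses electrical power transfer.
    Pe = 0.0 if 0.1 <= t < 0.2 else Pmax * np.sin(delta)
    domega = (np.pi * f0 / H) * (Pm - Pe)
    omega += domega * dt
    delta += omega * dt

print(f"final rotor angle: {np.degrees(delta):.1f} deg")
```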

Relevance:

10.00%

Publisher:

Abstract:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted over the last fifty years to the development of accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show large errors in comparison to experimental data. Thus, even today, the process parameters and extruder channel geometry for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude them by trial and error is costly and time-consuming. In order to reduce the time and experimental work required to optimize the process parameters and channel geometry for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago, and it had only limited success because of the computing power and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. To verify the numerical predictions from these simulations, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. The Maddock screw-freezing experiments were performed to visualize the melting profile along the extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. The Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers, and cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. Finally, an optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section, such that the output rate of the extruder was maximized without exceeding a maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the two-phase flow code developed in this work.
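The optimization step described above can be sketched as a constrained maximization: output rate is maximized over lead and depth subject to an exit-temperature limit. The surrogate functions below are placeholders standing in for the full 3-D simulation, so all names and numbers are assumptions.

```python
from scipy.optimize import minimize

def output_rate(lead, depth):
    # Placeholder surrogate for the simulated mass output rate (kg/h).
    return 50.0 * lead * depth / (1.0 + 0.5 * depth**2)

def exit_temperature(lead, depth):
    # Placeholder surrogate for the simulated melt temperature at the die (deg C).
    return 180.0 + 30.0 * lead + 15.0 / max(depth, 1e-6)

T_MAX = 240.0  # hypothetical maximum allowed exit temperature (deg C)

res = minimize(
    lambda x: -output_rate(x[0], x[1]),          # maximize rate
    x0=[1.0, 0.5],                               # initial lead, depth
    bounds=[(0.5, 2.0), (0.1, 1.0)],
    constraints=[{"type": "ineq",
                  "fun": lambda x: T_MAX - exit_temperature(x[0], x[1])}],
    method="SLSQP",
)
print("optimal lead, depth:", res.x)
```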

Relevance:

10.00%

Publisher:

Abstract:

Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand, and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by the measurement error typically present in competition variables; the second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the "naïve" approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination whenever the competition variable was statistically significant. The effect of RC was more pronounced where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that the past emphasis on dbh (diameter at breast height) as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. An evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, across different spatial patterns and diameter distributions revealed that the estimator notably overestimated the true variance in all simulated stands except clustered ones. Results show a systematic bias even when all the assumptions made by the authors are met; I argue that this results from the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of this model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I show that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to advance mortality models further.
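A minimal sketch of the regression calibration substitution step, assuming classical additive measurement error with a known error variance (which the study obtains from the Stage and Wykoff estimator). The data below are simulated purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Classical additive measurement error model: W = X + U.
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(10, 2, n)                   # true competition variable
w = x + rng.normal(0, 1.5, n)              # error-prone measurement
p = 1 / (1 + np.exp(-(-6 + 0.5 * x)))      # true mortality probability
y = rng.binomial(1, p)

sigma_u2 = 1.5**2                          # assumed-known error variance
# RC substitution: best linear predictor of X given W.
lam = (w.var() - sigma_u2) / w.var()       # reliability ratio
x_hat = w.mean() + lam * (w - w.mean())

# Fit the logistic model on the calibrated covariate instead of W.
fit = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print(fit.params)   # slope is less attenuated than a naive fit on W
```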

Relevance:

10.00%

Publisher:

Abstract:

The development of embedded control systems for a Hybrid Electric Vehicle (HEV) is a challenging task due to the multidisciplinary nature of HEV powertrains and their complex structures. Hardware-in-the-Loop (HIL) simulation provides an open and convenient environment for modeling, prototyping, testing, and analyzing HEV control systems. This thesis focuses on the development of such a HIL system for hybrid electric vehicle study. The hardware architecture of the HIL system, comprising a dSPACE eDrive HIL simulator, MicroAutoBox II, and a MotoTron Engine Control Module (ECM), is introduced. Software used in the system includes the dSPACE Real-Time Interface (RTI) blockset, Automotive Simulation Models (ASM), Matlab/Simulink/Stateflow, Real-Time Workshop, ControlDesk Next Generation, ModelDesk, and MotoHawk/MotoTune. A case study of the development of control systems for a single-shaft parallel hybrid electric vehicle is presented to demonstrate the functionality of this HIL system.

Relevance:

10.00%

Publisher:

Abstract:

ZnO has proven to be a multifunctional material with important nanotechnological applications. ZnO nanostructures can be grown in various forms such as nanowires, nanorods, nanobelts, and nanocombs. In this work, ZnO nanostructures are grown in a double quartz tube thermal Chemical Vapor Deposition (CVD) system. We focus on functionalized ZnO nanostructures, controlling their structures and tuning their properties for various applications. The following topics have been investigated:

1. We fabricated various ZnO nanostructures using a thermal CVD technique, and the growth parameters were optimized and studied for the different nanostructures.

2. We studied the application of ZnO nanowires (NWs) in field effect transistors (FETs). Unintentional n-type conductivity was observed in our FETs based on as-grown ZnO NWs. We then showed for the first time that controlled incorporation of hydrogen into ZnO NWs can introduce p-type character to the nanowires. We further found that the n-type behavior remained, leading to ambipolar behavior of the hydrogen-incorporated ZnO NWs. Importantly, the observed p- and n-type behaviors are stable for longer than two years when the devices are kept in ambient conditions. All of this can be explained by an ab initio model of zinc vacancy-hydrogen complexes, which can serve as donors, acceptors, or green photoluminescence quenchers, depending on the number of hydrogen atoms involved.

3. ZnO NWs were then tested for electron field emission, with a focus on reducing the threshold field (Eth) of field emission from non-aligned ZnO NWs. Encouraged by our results on enhancing the conductivity of ZnO NWs by hydrogen annealing, described in Chapter 3, we studied the effect of hydrogen annealing on the field emission behavior of our ZnO NWs. We found that optimally annealed ZnO NWs offered a much lower threshold electric field and improved emission stability. We also studied field emission from ZnO NWs at moderate vacuum levels and found that there exists a minimum Eth as the threshold field is scaled with pressure; this behavior is explained by reference to Paschen's law.

4. We studied the application of ZnO nanostructures to solar energy harvesting. First, as-grown and (CdSe)ZnS quantum dot (QD) decorated ZnO nanobelts (NBs) and ZnO NWs were tested for photocurrent generation. All of these nanostructures offered fast response times to solar radiation. Decoration with QDs decreases the stable current level produced by ZnO NWs but increases that generated by NBs; it is possible that NBs offer more stable surfaces for the attachment of QDs. In addition, our results suggest that the performance degradation of solar cells made by growing ZnO NWs on ITO is due to the increased resistance of the ITO after the high-temperature growth process; hydrogen annealing also improves the efficiency of these solar cells by decreasing the resistance of the ITO. Because of these issues with ITO, we used Ni foil as the growth substrate. The performance of solar cells made by growing ZnO NWs on Ni foils degraded after hydrogen annealing at both low (300 °C) and high (600 °C) temperatures, since annealing passivates native defects in the ZnO NWs and thus reduces the absorption of the visible spectrum from our solar simulator. Decoration with QDs improves the efficiency of such solar cells by increasing light absorption in the visible region. Using a better electrolyte than phosphate buffer solution (PBS), such as KI, also improves the solar cell efficiency.

5. Finally, we attempted p-type doping of ZnO NWs using various growth precursors, including phosphorus pentoxide, sodium fluoride, and zinc fluoride. We also attempted to create p-type carriers by introducing interstitial fluorine through annealing ZnO nanostructures in diluted fluorine gas. In brief, we were unable to reproduce the reported growth of p-type ZnO nanostructures. However, we identified the window of temperature and duration for post-growth annealing of ZnO NWs in dilute fluorine gas that leads to suppression of native defects. This is the first experimental effort on post-growth annealing of ZnO NWs in dilute fluorine gas, although it has been suggested by a recent theory for creating p-type semiconductors. In our experiments, the defect band peak due to native defects decreases upon annealing at 300 °C for 10-30 minutes. A major piece of future work will be to determine the type of charge carriers in our annealed ZnO NWs.

Relevance:

10.00%

Publisher:

Abstract:

Simulations of forest stand dynamics in a modelling framework such as the Forest Vegetation Simulator (FVS) are diameter driven; thus, the diameter or basal area increment model needs special attention. This dissertation critically evaluates diameter and basal area increment models and modelling approaches in the context of the Great Lakes region of the United States and Canada. A set of related studies is presented that critically evaluates the sub-model for change in individual tree basal diameter used in FVS, a dominant forestry model in the Great Lakes region. Various historical implementations of the STEMS (Stand and Tree Evaluation and Modeling System) family of diameter increment models, including the current public release of the Lake States variant of FVS (LS-FVS), were tested for the 30 most common tree species using data from the Michigan Forest Inventory and Analysis (FIA) program. The results showed that the current public release of the LS-FVS diameter increment model over-predicts 10-year diameter increment by 17% on average. The study also affirms that a simple adjustment factor as a function of a single predictor, dbh (diameter at breast height), as used in past versions, provides an inadequate correction of model prediction bias. In order to re-engineer the basal diameter increment model, the historical, conceptual, and philosophical differences among the individual tree increment model families and their modelling approaches were analyzed and discussed. Two underlying conceptual approaches toward diameter or basal area increment modelling have often been used: the potential-modifier (POTMOD) and composite (COMP) approaches, exemplified by the STEMS/TWIGS and Prognosis models, respectively. It is argued that both approaches essentially use a similar base function and that neither is conceptually different from a biological perspective, even though their model forms look different. No matter which modelling approach is used, the base function is the foundation of an increment model. Two base functions, gamma and Box-Lucas, were identified as candidate base functions for forestry applications. A comparative analysis of empirical fits showed that the quality of fit is essentially similar, and both are sufficiently detailed and flexible for forestry applications. The choice of base function for modelling diameter or basal area increment is therefore a matter of preference; however, the gamma base function may be preferred over the Box-Lucas, as it fits the periodic increment data in both linear and nonlinear composite model forms. Finally, the utility of site index as a predictor variable is criticized: it has been widely used in models for complex, mixed-species forest stands even though it is not well suited for that purpose. An alternative to site index in an increment model was explored, comparing site index against a combination of climate variables and Forest Ecosystem Classification (FEC) ecosites, using data from the Province of Ontario, Canada. The results showed that a combination of climate and FEC ecosite variables can replace site index in the diameter increment model.
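For concreteness, the sketch below fits both candidate base functions to synthetic periodic-increment data. The functional forms follow the common usage of the names gamma and Box-Lucas and are an assumption about the dissertation's exact parameterization; the data are fabricated for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def box_lucas(dbh, a, b):
    # Saturating form commonly associated with the Box-Lucas name.
    return a * (1.0 - np.exp(-b * dbh))

def gamma_base(dbh, a, b, c):
    # Unimodal gamma-type form: rises, peaks, then declines with dbh.
    return a * dbh**b * np.exp(-c * dbh)

# Fabricated periodic increment data over a plausible dbh range (cm).
dbh = np.linspace(5, 60, 30)
inc = box_lucas(dbh, 3.0, 0.08) + np.random.default_rng(0).normal(0, 0.05, 30)

p_bl, _ = curve_fit(box_lucas, dbh, inc, p0=[2.0, 0.05])
p_g, _ = curve_fit(gamma_base, dbh, inc, p0=[0.5, 0.5, 0.01])

for name, f, p in [("Box-Lucas", box_lucas, p_bl), ("gamma", gamma_base, p_g)]:
    rss = np.sum((inc - f(dbh, *p)) ** 2)
    print(f"{name}: params={np.round(p, 3)}, RSS={rss:.4f}")
```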

Relevance:

10.00%

Publisher:

Abstract:

The dissertation titled "Driver Safety in Far-side and Far-oblique Crashes" presents a novel approach to assessing vehicle cockpit safety by integrating human factors and applied mechanics. The methodology is aimed at improving safety in compact mobile workspaces such as patrol vehicle cockpits. A statistical analysis of Michigan state traffic crash data, performed to assess the contributing factors affecting the risk of severe driver injuries, showed that the risk was greater for unrestrained drivers (OR=3.38, p<0.0001) and for front and far-side crashes without seatbelts (OR=8.0 and 23.0 respectively, p<0.005). The statistics also showed that near-side and far-side crashes pose a similar threat to driver injury severity. A human factors survey was conducted to assess various human-machine/human-computer interaction (HMI/HCI) aspects of patrol vehicle cockpits. Results showed that tasks requiring manual operation, especially laptop usage, require more attention and potentially cause more distraction. A vehicle survey conducted to evaluate ergonomics-related issues revealed that some of the equipment was mounted in airbag deployment zones. In addition, experiments were conducted to assess the driver distraction caused by changing the position of in-car accessories. A driving simulator study was conducted to mimic HMI/HCI in a patrol vehicle cockpit (20 subjects, average driving experience = 5.35 years, s.d. = 1.8). The mounting locations of manual tasks did not result in a significant change in response times; visual displays yielded response times of less than 1.5 s, while the manual task was equally distracting regardless of mounting position (average response time 15 s). Average speeds and lane deviations did not show any significant effects. Data from 13 full-scale sled tests conducted to simulate far-side impacts at 70 PDOF and 40 PDOF were used to analyze head injuries and HIC/AIS values. It was found that the accelerations generated by the vehicle deceleration alone were high enough to cause AIS 3 to AIS 6 injuries. Pretensioners could mitigate injuries only in 40 PDOF (oblique) impacts and were ineffective in 70 PDOF impacts, and seat belts were ineffective in protecting the driver's head from injury. The head would contact the laptop during a far-oblique (40 PDOF) crash and the far-side door during an angle-type crash (70 PDOF). Finite element analysis of the head-laptop impact interaction showed that the contact velocity was the most important factor in causing a severe (and potentially fatal) head injury. The results indicate that no equipment may be mounted in the driver trajectory envelopes, which leaves only a very narrow band of space in patrol vehicles where manual-task equipment can be installed both safely and ergonomically. In the case of a contact, the material stiffness and damping properties play a very significant role in determining the injury outcome. Future work may address improving the interior materials' properties to better absorb and dissipate the kinetic energy of the head; the design of seat belts and pretensioners may also be seen as an essential aspect for further improvement.
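The HIC values mentioned above are computed from head acceleration traces with the standard Head Injury Criterion. The sketch below implements HIC-15 on a synthetic pulse, since the sled-test traces themselves are not reproduced here.

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """HIC-15: max over windows (t2 - t1) <= 15 ms of
    (t2 - t1) * (mean acceleration in g over the window) ** 2.5."""
    # Cumulative trapezoidal integral of a(t) for fast window averages.
    ca = np.concatenate([[0.0],
                         np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))])
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (ca[j] - ca[i]) / dt      # mean acceleration in the window
            best = max(best, dt * avg**2.5)
    return best

t = np.linspace(0, 0.1, 1000)                     # 100 ms trace at 10 kHz
a = 80.0 * np.exp(-((t - 0.05) / 0.01) ** 2)      # synthetic pulse (g)
print(f"HIC-15 = {hic(t, a):.0f}")
```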

Relevance:

10.00%

Publisher:

Abstract:

Bluetooth wireless technology is a robust short-range communications system designed for low power (10 meter range) and low cost. It operates in the 2.4 GHz Industrial, Scientific and Medical (ISM) band and employs two techniques for minimizing interference: a frequency hopping scheme, which nominally splits the 2.400-2.485 GHz band into 79 frequency channels, and a time division duplex (TDD) scheme, which is used to switch to a new frequency channel on 625 μs boundaries. During normal operation a Bluetooth device is active on a different frequency channel every 625 μs, minimizing the chance of continuous interference degrading the performance of the system. The smallest unit of a Bluetooth network is called a piconet and can have a maximum of eight nodes. Bluetooth devices must assume one of two roles within a piconet, master or slave: the master governs quality of service and the frequency hopping schedule within the piconet, and the slaves follow the master's schedule. A piconet has a single master and up to 7 active slaves. By allowing devices to hold roles in multiple piconets through time multiplexing, i.e. slave/slave or master/slave, Bluetooth allows multiple piconets to be interconnected into larger networks called scatternets. This work explores Bluetooth in the context of enabling ad-hoc networks. The Bluetooth specification leaves flexibility in the scatternet formation protocol, outlining only the mechanisms necessary for future protocol implementations. A new protocol for scatternet formation and maintenance, mscat, is presented, and its performance is evaluated using a Bluetooth simulator. The free variables manipulated in this study include device activity and the probabilities of devices performing discovery procedures. The relationship between a device's role in the scatternet and its probability of performing discovery was examined and related to the resulting scatternet topology. The results show that mscat creates dense network topologies for networks of 30, 50, and 70 nodes. The mscat protocol yields approximately a 33% increase in slaves per piconet and a reduction of approximately 12.5% in average roles per node. For the 50-node scenarios, the parameter set producing the best outcome is an unconnected node inquiry probability (UP) of 10%, a master node inquiry probability (MP) of 80%, and a slave inquiry probability (SP) of 40%. The mscat protocol extends the Bluetooth specification for the formation and maintenance of scatternets in an ad-hoc network.
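A toy sketch of the role-dependent discovery step studied above (not the mscat protocol itself): in each round a node performs inquiry with a probability set by its role, and non-inquiring nodes scan. The pairing logic is deliberately simplified, and only the UP/MP/SP values come from the study.

```python
import random

# Best 50-node parameters reported in the study.
UP, MP, SP = 0.10, 0.80, 0.40

def round_of_discovery(roles):
    """roles: dict node -> 'unconnected' | 'master' | 'slave'.
    Returns (inquirer, scanner) pairs that discover each other this round."""
    p = {"unconnected": UP, "master": MP, "slave": SP}
    inquirers = [n for n, r in roles.items() if random.random() < p[r]]
    scanners = [n for n in roles if n not in inquirers]
    random.shuffle(scanners)
    # Simplification: each inquirer discovers at most one scanner per round.
    return list(zip(inquirers, scanners))

roles = {i: "unconnected" for i in range(50)}
print(round_of_discovery(roles)[:5])
```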

Relevance:

10.00%

Publisher:

Abstract:

The physics of the operation of single-electron tunneling devices (SEDs) and single-electron tunneling transistors (SETs), especially those with multiple nanometer-sized islands, has remained poorly understood despite intensive experimental and theoretical research. This computational study examines the current-voltage (IV) characteristics of multi-island single-electron devices using a newly developed multi-island transport simulator (MITS) based on semi-classical tunneling theory and kinetic Monte Carlo simulation. The dependence of device characteristics on physical device parameters is explored, and physical mechanisms are proposed for the Coulomb blockade (CB) and Coulomb staircase (CS) characteristics. Simulations using MITS demonstrate that the overall IV characteristics of a device with a random distribution of islands result from a complex interplay between the factors that affect the tunneling rates and are fixed a priori (e.g. island sizes, island separations, temperature, and gate bias) and the evolving charge state of the system, which changes as the source-drain bias (VSD) is changed. With increasing VSD, a multi-island device has to overcome multiple discrete energy barriers (up-steps) before it reaches the threshold voltage (Vth). Beyond Vth, current flow is rate-limited by slow junctions, which leads to the CS structures in the IV characteristic. Each step in the CS is characterized by a unique distribution of island charges with an associated distribution of tunneling probabilities. MITS simulation studies of one-dimensional (1D) disordered chains show that longer chains are better suited for switching applications, as Vth increases with chain length; they also retain CS structures at higher temperatures better than shorter chains. In sufficiently disordered 2D systems, we demonstrate that there may exist a dominant conducting path (DCP), which makes the 2D device behave as a quasi-1D device. The existence of a DCP is sensitive to the device structure but is robust with respect to changes in temperature, gate bias, and VSD. A side gate in 1D and 2D systems can effectively control Vth. We argue that devices with smaller island sizes and narrower junctions may be better suited for practical applications, especially at room temperature.
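The kinetic Monte Carlo core of such a simulator can be sketched in a few lines: given the rates of all candidate tunneling events in the current charge state, pick the next event in proportion to its rate and advance time by an exponential waiting time (the Gillespie scheme). The rates below are placeholders; MITS derives them from semi-classical tunneling theory for each junction.

```python
import numpy as np

rng = np.random.default_rng(7)

def kmc_step(rates):
    """One kinetic Monte Carlo step over an array of event rates (1/s)."""
    total = rates.sum()
    # Event k fires with probability rates[k] / total.
    k = rng.choice(len(rates), p=rates / total)
    # Waiting time until the next event is exponential with mean 1 / total.
    dt = rng.exponential(1.0 / total)
    return k, dt

rates = np.array([1e6, 3e5, 5e4])   # illustrative tunneling rates
event, dt = kmc_step(rates)
print(f"event {event} fires after {dt:.2e} s")
```

After each step, the selected tunneling event updates the island charge distribution, which in turn changes the rates for the next step; this is what produces the bias-dependent charge states described above.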

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: To determine stiffness and load-displacement curves as the biomechanical response to applied torsion and shear forces in cadaveric canine lumbar and lumbosacral specimens. STUDY DESIGN: Biomechanical study. ANIMALS: Caudal lumbar and lumbosacral functional spine units (FSU) of nonchondrodystrophic large-breed dogs (n=31) with radiographically normal spines. METHODS: FSU from dogs without musculoskeletal disease were tested in torsion in a custom-built spine loading simulator with 6 degrees of freedom, which uses orthogonally mounted electric motors to apply pure axial rotation. For shear tests, specimens were mounted on a custom-made shear-testing device driven by a servo-hydraulic testing machine. Load-displacement curves were recorded for torsion and shear. RESULTS: Left and right torsional stiffness did not differ within each FSU level; however, the torsional stiffness of L7-S1 was significantly smaller than that of the lumbar FSU (L4-5 to L6-7). Ventral/dorsal stiffness was significantly different from lateral stiffness within an individual FSU level for L5-6, L6-7, and L7-S1, but not for L4-5. When the data from the 4 tested shear directions for the same specimen were pooled, level L5-6 was significantly stiffer than L7-S1. CONCLUSIONS: The increased range of motion of the lumbosacral joint is reflected by an overall decreased shear and rotational stiffness at the lumbosacral FSU. CLINICAL RELEVANCE: Data from dogs with disc degeneration have to be collected, analyzed, and compared with the results from our nonchondrodystrophic large-breed dogs with radiographically normal spines.
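Stiffness values of this kind are typically extracted as the slope of the linear region of the recorded load-displacement curve. The sketch below shows that step on synthetic data; the choice of linear region is an assumption for illustration.

```python
import numpy as np

# Synthetic load-displacement data standing in for a recorded shear test.
disp = np.linspace(0, 2.0, 50)                                     # mm
load = 12.0 * disp + np.random.default_rng(3).normal(0, 0.4, 50)   # N

# Least-squares line over a chosen (approximately linear) loading region;
# here, arbitrarily, the upper half of the ramp.
mask = disp > 1.0
stiffness = np.polyfit(disp[mask], load[mask], 1)[0]
print(f"stiffness ~ {stiffness:.1f} N/mm")
```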

Relevance:

10.00%

Publisher:

Abstract:

Wireless Mesh Networks (WMNs) have proven to be a key technology for increasing the network coverage of Internet infrastructures. The development process for new protocols and architectures in the WMN area is typically split into evaluation by network simulation and testing of a prototype in a test-bed. Testing a prototype in a real test-bed is time-consuming and expensive: uncontrollable external interference can occur, which makes debugging difficult; the test-bed usually supports only a limited number of test topologies; and mobility tests are impractical. Therefore, we propose VirtualMesh, a new testing architecture that can be used before going to a real test-bed. It provides instruments to test the real communication software, including the network stack, inside a controlled environment. VirtualMesh is implemented by capturing real traffic through a virtual interface at the mesh nodes; the traffic is then redirected to the network simulator OMNeT++. In our experiments, VirtualMesh proved to be scalable and introduced moderate delays. It is therefore suitable for predeployment testing of communication software for WMNs.
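A minimal sketch of the capture-and-redirect idea on Linux, assuming a TUN virtual interface and a hypothetical UDP endpoint for the simulator gateway; VirtualMesh's actual implementation details are not reproduced here.

```python
import fcntl
import socket
import struct

# Requires root privileges and a Linux kernel with TUN/TAP support.
TUNSETIFF = 0x400454CA
IFF_TUN, IFF_NO_PI = 0x0001, 0x1000

# Create a virtual interface (name "vmesh0" is an assumption).
tun = open("/dev/net/tun", "r+b", buffering=0)
ifr = struct.pack("16sH", b"vmesh0", IFF_TUN | IFF_NO_PI)
fcntl.ioctl(tun, TUNSETIFF, ifr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
SIM_ENDPOINT = ("127.0.0.1", 9999)   # hypothetical simulator gateway

while True:
    packet = tun.read(2048)              # one IP packet from the node's stack
    sock.sendto(packet, SIM_ENDPOINT)    # hand it to the simulator side
```

The reverse path (packets arriving from the simulator being written back into the TUN device) would complete the loop, letting the unmodified network stack exchange traffic with the simulated mesh.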

Relevance:

10.00%

Publisher:

Abstract:

This document corresponds to the tutorial on realistic neural modeling given by David Beeman at WAM-BAMM*05, the first annual meeting of the World Association of Modelers (WAM) Biologically Accurate Modeling Meeting (BAMM), on March 31, 2005 in San Antonio, TX. Part I - Introduction to Realistic Neural Modeling for the Beginner: a general overview of and introduction to compartmental cell modeling and realistic network simulation for the beginner. Although the examples are drawn from GENESIS simulations, the tutorial emphasizes the general modeling approach rather than the details of using any particular simulator. Part II - Getting Started with Modeling Using GENESIS: builds upon the background of Part I to describe how this approach is used to construct cell and network simulations in GENESIS, and serves as an introduction and roadmap to the extended hands-on GENESIS Modeling Tutorial.

Relevance:

10.00%

Publisher:

Abstract:

We describe four recent additions to NEURON's suite of graphical tools that make it easier for users to create and manage models: an enhancement to the Channel Builder that facilitates the specification and efficient simulation of stochastic channel models

Relevance:

10.00%

Publisher:

Abstract:

P-GENESIS is an extension to the GENESIS neural simulator that allows users to take advantage of parallel machines to speed up the simulation of their network models or to simulate multiple models concurrently. P-GENESIS adds several commands to the GENESIS script language that let a script running on one processor execute remote procedure calls on other processors, and that let a script synchronize its execution with the scripts running on other processors. We present some brief comments on the mechanisms underlying parallel script execution, and offer advice on parallelizing parameter searches, partitioning network models, and selecting suitable parallel hardware on which to run P-GENESIS.
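Since independent parameter sets make a search embarrassingly parallel, the pattern P-GENESIS exploits can be illustrated with a generic analogue in plain Python (not P-GENESIS script syntax); the simulation function, parameter names, and grid are hypothetical.

```python
from multiprocessing import Pool

def run_simulation(params):
    """Placeholder for one neural simulation run; returns a fitness score."""
    gbar_na, gbar_k = params
    # Toy score: peaks at the classic Hodgkin-Huxley conductances.
    return -((gbar_na - 120.0) ** 2 + (gbar_k - 36.0) ** 2)

if __name__ == "__main__":
    # Hypothetical grid of maximal conductance values to evaluate.
    grid = [(na, k) for na in range(100, 141, 10) for k in range(20, 51, 10)]
    # Each parameter set is independent, so they map cleanly onto workers.
    with Pool(processes=4) as pool:
        scores = pool.map(run_simulation, grid)
    best = max(zip(scores, grid))
    print("best parameters:", best[1])
```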