936 results for Scheduler simulator
Abstract:
The design of a spacecraft simulator is a constrained multi-objective, multi-criterion optimization problem. On the basis of a mathematical model for optimizing the parameters of a spacecraft simulator, this paper applies fuzzy multi-objective decision theory to the optimization of the simulator's structural parameters and proposes a new fuzzy evaluation index. The optimized structural parameters have been applied in a test system.
Abstract:
Numerical modeling of groundwater is very important for understanding groundwater flow and solving hydrogeological problems. Today, groundwater studies require massive numbers of model cells and high calculation accuracy, which are beyond a single-CPU computer's capabilities. With the development of high-performance parallel computing technologies, applying parallel computing methods to numerical modeling of groundwater flow has become necessary and important. Parallel computing can improve the ability to resolve various hydrogeological and environmental problems. In this study, parallel computing methods on the two main types of modern parallel computer architecture, shared memory parallel systems and distributed shared memory parallel systems, are discussed. OpenMP and MPI (PETSc) are both used to parallelize the most widely used groundwater simulator, MODFLOW. Two parallel solvers, P-PCG and P-MODFLOW, were developed for MODFLOW. The parallelized MODFLOW was used to simulate regional groundwater flow in Beishan, Gansu Province, a potential high-level radioactive waste geological disposal area in China.
1. The OpenMP programming paradigm was used to parallelize the PCG (preconditioned conjugate-gradient) solver, one of the main solvers for MODFLOW. The parallel PCG solver, P-PCG, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. The largest test model has 1000 columns, 1000 rows and 1000 layers. Based on the timing results, execution times using the P-PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelization approach reduces software maintenance cost, because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree.
2. P-MODFLOW, a domain decomposition–based model implemented in a parallel computing environment, was developed, which allows efficient simulation of regional-scale groundwater flow. The basic approach partitions a large model domain into any number of sub-domains, and parallel processors are used to solve the model equations within each sub-domain. Using domain decomposition to bring MODFLOW to distributed shared memory parallel computing systems extends its application to the most widely used cluster systems, so that a large-scale simulation can take full advantage of hundreds or even thousands of parallel processors. P-MODFLOW shows good parallel performance, with a maximum speedup of 18.32 (14 processors). Superlinear speedups were achieved in the parallel tests, indicating the efficiency and scalability of the code. Parallel program design, load balancing and full use of PETSc were considered to achieve a highly efficient parallel program.
3. The characterization of a regional groundwater flow system is very important for high-level radioactive waste geological disposal. The Beishan area, located in northwestern Gansu Province, China, has been selected as a potential site for a disposal repository. The area covers about 80000 km2 and has complicated hydrogeological conditions, which greatly increase the computational effort of regional groundwater flow models. In order to reduce computing time, the parallel computing scheme was applied to regional groundwater flow modeling. Models with over 10 million cells were used to simulate how faults and different recharge conditions affect the regional groundwater flow pattern. The results of this study provide regional groundwater flow information for the site characterization of the potential high-level radioactive waste repository.
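As an illustration of the domain-decomposition idea described in this abstract (not code from P-MODFLOW itself), a 1D steady-state flow solve can be split into sub-domains whose only coupling is a one-cell halo at each interface; the sketch below emulates that halo exchange serially, and all names in it are hypothetical:

```python
import numpy as np

def jacobi_dd(h_left, h_right, n=32, n_sub=4, iters=5000):
    """Solve 1D steady-state flow (Laplace: h'' = 0) by Jacobi iteration,
    with the grid split into sub-domains that exchange one-cell halos,
    emulating serially what each parallel process would do."""
    h = np.zeros(n)
    h[0], h[-1] = h_left, h_right          # fixed boundary heads
    bounds = np.linspace(0, n, n_sub + 1).astype(int)  # sub-domain edges
    for _ in range(iters):
        new = h.copy()
        for s in range(n_sub):             # each s would be one process
            lo, hi = bounds[s], bounds[s + 1]
            for i in range(max(lo, 1), min(hi, n - 1)):
                # neighbours h[i-1], h[i+1] may live in another sub-domain;
                # reading them stands in for a halo (ghost-cell) exchange
                new[i] = 0.5 * (h[i - 1] + h[i + 1])
        h = new
    return h

heads = jacobi_dd(10.0, 0.0)   # converges to a linear head profile
```

In a real distributed run, each sub-domain's loop executes on its own processor and the halo values are communicated (e.g. via MPI) once per iteration; the partitioning and load balancing mirror what the abstract describes.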
Abstract:
Facing the problems met in studies of predominant hydrocarbon migration pathways, experiments and numerical simulations were carried out in this thesis to examine the migration mechanisms. The aim is to analyze quantitatively the pathway pattern at basin scale and to estimate the hydrocarbon loss along the pathway, offering useful information for identifying potential hydrocarbon accumulations. Based on our understanding of hydrocarbon migration and fluid dynamic theory, a series of migration experiments was designed to observe phenomena in which kerosene, driven only by buoyancy, acts as the draining phase that expels pore water. These experiments allow us to study the formation of migration pathways, the distribution of non-wetting oil along these pathways, and the re-use of previously existing pathways marked by residual traces. The types of migration pathway patterns can be characterized by a phase diagram using two dimensionless numbers: the capillary number and the Bond number. The NMR technique is used to measure the average saturation of residual oil within the pathways. Based on our experimental work and percolation concepts, a numerical simulation model was proposed and implemented. This model is called the BP (Buoyancy Percolation) simulator, since buoyancy is taken as the main driving force in hydrocarbon migration. To verify that the BP model is applicable to simulating secondary oil migration, the experimental phenomena were compared with those simulated by the BP model using a fractal method, and the result was positive. We then used the BP simulator to simulate the migration of oil in water-saturated porous media at different scales, and the results are similar to those reported in the literature. In addition, our software was applied to the Paris basin to predict the pathways of hydrocarbon migration in the Middle Jurassic reservoirs. The results obtained with our BP model generally agree with those of Hindle (1997) and Bekeles (1999), but our simulated migration pathway pattern and migration direction seem more reasonable than theirs.
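The buoyancy-percolation idea summarized above can be sketched, at its simplest, as invasion percolation on a grid of capillary entry thresholds where buoyancy lowers the effective threshold with elevation. This is a generic illustration under those assumptions, not the authors' BP simulator:

```python
import random

def bp_invade(width=20, height=20, buoyancy=0.05, seed=1):
    """Minimal invasion-percolation sketch of buoyancy-driven migration:
    oil invades the frontier cell whose capillary entry threshold, reduced
    by a buoyancy gain proportional to elevation, is lowest. Stops when
    the top row is reached. Purely illustrative parameters."""
    rng = random.Random(seed)
    entry = [[rng.random() for _ in range(width)] for _ in range(height)]
    invaded = {(0, width // 2)}          # oil source at the bottom centre
    while not any(y == height - 1 for (y, x) in invaded):
        frontier = []
        for (y, x) in invaded:
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < height and 0 <= nx < width and (ny, nx) not in invaded:
                    # effective threshold = capillary entry - buoyancy gain
                    frontier.append((entry[ny][nx] - buoyancy * ny, (ny, nx)))
        invaded.add(min(frontier)[1])    # invade the easiest frontier cell
    return invaded

path = bp_invade()
```

The resulting invaded cluster shows the narrow, branching pathway geometry that such models produce; varying the buoyancy term relative to the threshold spread plays the role of the Bond number in the phase diagram mentioned above.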
Abstract:
The technique of energy extraction using groundwater source heat pumps, as a sustainable way of utilizing low-grade thermal energy, has been widely used since the mid-1990s. Based on the basic theories of groundwater flow and heat transfer, and by employing two analytic models, the relationship between the thermal breakthrough time of a production well and the factors involved is analyzed, and the impact of heat transfer by conduction and convection on the geo-temperature field under different groundwater velocity conditions is discussed.
A mathematical model coupling the equations for groundwater flow with those for heat transfer was developed. The impact of energy mining using a single-well supply-and-return system on the geo-temperature field under different hydrogeological conditions, well structures, withdrawal-and-reinjection rates, and natural groundwater flow velocities was quantitatively simulated using the finite difference simulator HST3D, and theoretical analyses of the simulated results were made. The simulated results for the single-well system indicate that neither the permeability nor the porosity of a homogeneous aquifer has a significant effect on the temperature of the production segment, provided that the production and injection capacity of each well in the aquifers involved can meet the designed value. If there is an interlayer of lower permeability than the main aquifer between the production and injection segments, the temperature changes of the production segment will decrease: the thicker the interlayer and the lower its permeability, the longer the thermal breakthrough time of the production segment and the smaller its temperature changes. The modeling also shows that with an increase in aquifer thickness, in the distance between the production and injection screens, or in the regional groundwater flow velocity, or with a decrease in the production-and-reinjection rate, the temperature changes of the production segment decline. For an aquifer of constant thickness, continuously increasing the screen lengths of the production and injection segments may reduce the distance between the screens, and the temperature changes of the production segment will consequently increase.
Based on the simulation results for the single-well system, the parameters that significantly influence heat transfer and the geo-temperature field were chosen for doublet-system simulation. It is indicated that the temperature changes of the pumping well decrease as the aquifer thickness, the distance between the well pair and/or the screen lengths of the doublet increase. In the case of a low-permeability interlayer embedded in the main aquifer, if the screens of the pumping and injection wells are installed below and above the interlayer respectively, the temperature changes of the pumping well will be smaller than without the interlayer, and the lower the permeability of the interlayer, the smaller the temperature changes. The simulation results also indicate that the lower the pumping-and-reinjection rate, the greater the temperature changes of the pumping well. If the producer and the injector are chosen reasonably, the temperature changes of the pumping well decline as the regional groundwater flow velocity increases. Compared with the case in which the groundwater flow direction is perpendicular to the well pair, if the regional flow is directed from the pumping well to the injection well, the temperature changes of the pumping well are relatively smaller.
Based on the above simulation study, a case history was conducted using data from an operating system in Beijing. By means of the conceptual and mathematical models, a 3-D simulation model was developed, and the hydrogeological parameters and thermal properties were calibrated. The calibrated model was used to predict the evolution of the geo-temperature field over the next five years. The simulation results indicate that the calibrated model can represent the hydrogeological conditions and the nature of the aquifers. The temperature fronts in highly permeable aquifers move very fast and the radii of temperature influence are large; comparatively, the temperature changes in clay layers are smaller and show an obvious lag. Under the current energy mining load, the temperature of the pumping wells will increase by 0.7°C by the end of the next five years. This case study may provide a reliable basis for the scientific management of the operating system studied.
Abstract:
Recognizing standard computational structures (clichés) in a program can help an experienced programmer understand the program. We develop a graph parsing approach to automating program recognition in which programs and clichés are represented in an attributed graph grammar formalism and recognition is achieved by graph parsing. In studying this approach, we evaluate our representation's ability to suppress many common forms of variation which hinder recognition. We investigate the expressiveness of our graph grammar formalism for capturing programming clichés. We empirically and analytically study the computational cost of our recognition approach with respect to two medium-sized, real-world simulator programs.
Abstract:
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
Abstract:
Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference-counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
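The reference-counting strategy this abstract argues for (in place of tracing collectors) can be illustrated in miniature. The sketch below is a generic single-node reference counter, not the thesis's distributed mechanism; class and method names are invented for illustration:

```python
class RefCounted:
    """Minimal reference-counting sketch: each object tracks how many
    references point at it and is freed (recursively releasing the objects
    it references) when the count drops to zero. Illustrative only; plain
    reference counting cannot reclaim cyclic structures."""
    freed = []                            # records destruction order

    def __init__(self, name, refs=()):
        self.name, self.count = name, 1   # one reference held by the creator
        self.refs = list(refs)
        for r in self.refs:
            r.incref()

    def incref(self):
        self.count += 1

    def decref(self):
        self.count -= 1
        if self.count == 0:
            RefCounted.freed.append(self.name)
            for r in self.refs:           # releasing us releases our children
                r.decref()

leaf = RefCounted("leaf")
node = RefCounted("node", refs=[leaf])
leaf.decref()   # creator drops its reference; "node" still holds one
node.decref()   # now both are unreachable and are freed in order
```

In the distributed setting the thesis targets, the counts for a sparsely faceted array would have to be maintained across the nodes holding facets, which is where the proposed hardware support comes in.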
Abstract:
Two kinds of process models have been used in programs that reason about change: discrete and continuous models. We describe the design and implementation of a qualitative simulator, PEPTIDE, which uses both kinds of process models to predict the behavior of molecular genetic systems. The program uses a discrete process model to simulate both situations involving abrupt changes in quantities and the actions of small numbers of molecules. It uses a continuous process model to predict gradual changes in quantities. A novel technique, called aggregation, allows the simulator to switch between these models through the recognition and summary of cycles. The flexibility of PEPTIDE's aggregator allows the program to detect cycles within cycles and predict the behavior of complex situations.
Abstract:
Low-power and lossy networks (LLNs) are usually composed of static nodes, but the increasing demand for mobility in mobile robotics and dynamic environments raises the question of how a routing protocol for low-power and lossy networks such as RPL would perform if a mobile sink were deployed. In this paper we investigate and evaluate the behaviour of the RPL protocol in fixed and mobile sink environments with respect to different network metrics such as latency, packet delivery ratio (PDR) and energy consumption. Extensive simulations using the Instant Contiki simulator show significant performance differences between fixed and mobile sink environments: fixed-sink LLNs performed better in terms of average power consumption, latency and packet delivery ratio. The results also demonstrate that the RPL protocol is sensitive to mobility, which increases the number of isolated nodes.
Abstract:
Lee M.H., Qualitative Modelling of Linear Networks in ECAD Applications, Expert Update, Vol. 3, No. 2, pp. 23-32, BCS SGES, Summer 2000. Earlier version: Qualitative modeling of linear networks in ECAD applications, in: Proceedings of the 13th International Workshop on Qualitative Reasoning (QR '99), 1999, pp. 146-152.
Abstract:
Faculty of Physics
Abstract:
Quadsim is an intermediate code simulator. It allows you to "run" programs that your compiler generates in intermediate code format. Its user interface is similar to most debuggers in that you can step through your program, instruction by instruction, set breakpoints, examine variable values, and so on. The intermediate code format used by Quadsim is that described in [Aho 86]. If your compiler generates intermediate code in this format, you will be able to take intermediate-code files generated by your compiler, load them into the simulator, and watch them "run." You are provided with functions that hide the internal representation of intermediate code. You can use these functions within your compiler to generate intermediate code files that can be read by the simulator. Quadsim was inspired and greatly influenced by [Aho 86]. The material in chapter 8 (Intermediate Code Generation) of [Aho 86] should be considered background material for users of Quadsim.
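The quadruple format from [Aho 86] that Quadsim executes is easy to picture with a toy interpreter. The sketch below is not Quadsim's implementation or API; opcode names and the environment model are invented for illustration:

```python
def run_quads(quads, env=None):
    """Tiny interpreter for Aho-style quadruples (op, arg1, arg2, result).
    Supports assignment, arithmetic, and a conditional jump; a stepping
    debugger like Quadsim would pause between iterations of this loop."""
    env = dict(env or {})
    val = lambda a: env[a] if isinstance(a, str) else a  # name or literal
    pc = 0                                # index of the next quad to run
    while pc < len(quads):
        op, a1, a2, res = quads[pc]
        pc += 1
        if op == ":=":
            env[res] = val(a1)
        elif op == "+":
            env[res] = val(a1) + val(a2)
        elif op == "*":
            env[res] = val(a1) * val(a2)
        elif op == "if<":
            if val(a1) < val(a2):
                pc = res                  # res holds the target quad index
    return env

# 5! via a loop: i = 1; f = 1; while i < 6: f = f * i; i = i + 1
factorial = [
    (":=", 1, None, "i"),
    (":=", 1, None, "f"),
    ("*", "f", "i", "f"),
    ("+", "i", 1, "i"),
    ("if<", "i", 6, 2),
]
result = run_quads(factorial)["f"]   # 120
```

Single-stepping, breakpoints, and variable inspection amount to exposing `pc` and `env` interactively around this dispatch loop.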
Abstract:
This paper describes an algorithm for scheduling packets in real-time multimedia data streams. Common to these classes of data streams are service constraints in terms of bandwidth and delay. However, it is typical for real-time multimedia streams to tolerate bounded delay variations and, in some cases, finite losses of packets. We have therefore developed a scheduling algorithm that assumes streams have window-constraints on groups of consecutive packet deadlines. A window-constraint defines the number of packet deadlines that can be missed in a window of deadlines for consecutive packets in a stream. Our algorithm, called Dynamic Window-Constrained Scheduling (DWCS), attempts to guarantee no more than x out of a window of y deadlines are missed for consecutive packets in real-time and multimedia streams. Using DWCS, the delay of service to real-time streams is bounded even when the scheduler is overloaded. Moreover, DWCS is capable of ensuring independent delay bounds on streams, while at the same time guaranteeing minimum bandwidth utilizations over tunable and finite windows of time. We show the conditions under which the total demand for link bandwidth by a set of real-time (i.e., window-constrained) streams can exceed 100% and still ensure all window-constraints are met. In fact, we show how it is possible to guarantee worst-case per-stream bandwidth and delay constraints while utilizing all available link capacity. Finally, we show how best-effort packets can be serviced with fast response time, in the presence of window-constrained traffic.
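The core ordering DWCS applies can be sketched as follows. This is a simplified reading of the abstract (earliest deadline first, ties broken by the tightest window-constraint), not the paper's full algorithm, which also adjusts each stream's window state as deadlines are met or missed:

```python
def dwcs_pick(streams):
    """Pick the next stream to service, in the spirit of DWCS: earliest
    deadline first, ties broken in favour of the lowest window-constraint
    x/y, i.e. the stream that can tolerate the fewest missed deadlines
    per window of y consecutive packets. A sketch of the ordering only."""
    return min(streams, key=lambda s: (s["deadline"], s["x"] / s["y"]))

streams = [
    {"name": "audio", "deadline": 10, "x": 1, "y": 4},  # miss <= 1 of 4
    {"name": "video", "deadline": 10, "x": 2, "y": 4},  # miss <= 2 of 4
    {"name": "bulk",  "deadline": 30, "x": 3, "y": 4},
]
chosen = dwcs_pick(streams)["name"]   # "audio": same deadline, tighter x/y
```

Because the tie-break favours streams with fewer tolerable misses, an overloaded scheduler degrades the loss-tolerant streams first, which is how the bounded per-stream delay described above is preserved.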
Abstract:
Routing protocols in wireless sensor networks (WSNs) face two main challenges. First, the challenging environments in which WSNs are deployed negatively affect the quality of the routing process; therefore, routing protocols for WSNs should recognize and react to node failures and packet losses. Second, sensor nodes are battery-powered, which makes power a scarce resource, so routing protocols should optimize power consumption to prolong the lifetime of the WSN. In this paper, we present a new adaptive routing protocol for WSNs, which we call M^2RC. M^2RC has two phases: a mesh establishment phase and a data forwarding phase. In the first phase, M^2RC establishes the routing state needed to enable multipath data forwarding. In the second phase, M^2RC forwards data packets from the source to the sink. Targeting hop-by-hop reliability, an M^2RC forwarding node waits for an acknowledgement (ACK) that its packets were correctly received at the next neighbor. Based on this feedback, an M^2RC node applies multiplicative-increase/additive-decrease (MIAD) control to the number of neighbors targeted by its packet broadcast. We simulated M^2RC in the ns-2 simulator and compared it to the GRAB, Max-power, and Min-power routing schemes. Our simulations show that M^2RC achieves the highest throughput with at least 10-30% less consumed power per delivered report in scenarios where a certain number of nodes unexpectedly fail.
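The MIAD feedback rule mentioned in this abstract can be sketched generically. The direction assumed below (multiply the broadcast width when an ACK is missing, subtract one when an ACK arrives) illustrates multiplicative-increase/additive-decrease in general and is not necessarily M^2RC's exact rule:

```python
class MiadBroadcastWidth:
    """Sketch of MIAD control of how many neighbours a node targets with
    each broadcast: widen multiplicatively on a missed ACK (more redundancy
    needed), narrow additively on success (save power). Illustrative only."""
    def __init__(self, max_neighbors, start=1, factor=2):
        self.max = max_neighbors
        self.width = start
        self.factor = factor

    def on_ack_missing(self):   # packet not confirmed: widen aggressively
        self.width = min(self.max, self.width * self.factor)

    def on_ack(self):           # packet confirmed: narrow conservatively
        self.width = max(1, self.width - 1)

w = MiadBroadcastWidth(max_neighbors=8)
w.on_ack_missing()   # 1 -> 2
w.on_ack_missing()   # 2 -> 4
w.on_ack()           # 4 -> 3
```

The asymmetry reacts quickly to failures while only slowly shedding redundancy, which matches the reliability-versus-power trade-off the abstract describes.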
Abstract:
Modern cellular channels in 3G networks incorporate sophisticated power control and dynamic rate adaptation, which can have a significant impact on adaptive transport layer protocols such as TCP. Though there exist studies that have evaluated the performance of TCP over such networks, they are based solely on observations at the transport layer and hence have no visibility into the impact of lower-layer dynamics, which are a key characteristic of these networks. In this work, we present a detailed characterization of TCP behavior based on cross-layer measurement of transport, RF and MAC layer parameters. In particular, through a series of active TCP/UDP experiments and measurement of the relevant variables at all three layers, we characterize both the wireless scheduler in a commercial CDMA2000 network and its impact on TCP dynamics. Somewhat surprisingly, our findings indicate that the wireless scheduler is mostly insensitive to channel quality and sector load over short timescales and is mainly affected by the transport layer data rate. Furthermore, we empirically demonstrate the impact of the wireless scheduler on various TCP parameters such as round-trip time, throughput and packet loss rate.