980 results for Simulation results
Abstract:
To address the problems in existing methods for constructing fuzzy membership functions, a new construction method is proposed: the membership function is obtained by least-squares fitting of discrete data. To reduce the fitting error, three measures are adopted to achieve the desired goal. With the constructed membership function, the membership degree of any input physical quantity in the corresponding fuzzy linguistic variable can be obtained directly, which effectively avoids the subjectivity and inconsistency of expert-assigned membership degrees. The method is simple, highly accurate, widely applicable and of strong practical value. Simulation results confirm its effectiveness.
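As a minimal illustration of the fitting step described above, the sketch below fits a Gaussian-shaped membership function to discrete (input, membership) samples by least squares; the Gaussian shape, the sample data and the use of SciPy's curve_fit are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical expert-rated samples: physical quantity x vs. membership degree mu
x_data = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
mu_data = np.array([0.05, 0.20, 0.65, 1.00, 0.70, 0.25, 0.05])

def gaussian_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Least-squares fit of the membership function to the discrete data
(c, sigma), _ = curve_fit(gaussian_mf, x_data, mu_data, p0=[3.0, 1.0])

# Any new input now maps directly to a membership degree
print(gaussian_mf(2.5, c, sigma))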
Abstract:
To address the nonlinearity, hysteresis and model uncertainty of the levitation system of EMS-type maglev trains, this paper adopts fuzzy adaptive self-tuning PID control to meet the system's requirements for dynamic and static performance. Simulation results show that the fuzzy adaptive self-tuning PID controller has high learning accuracy and fast convergence, and exhibits strong robustness and disturbance rejection when parameter variations of the maglev system and load disturbances are present simultaneously.
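A toy sketch of the scheme's structure, in which simple fuzzy-style rules adjust PID gain increments from the error and error rate; the rule logic, gains and first-order stand-in plant are invented for illustration and are not the paper's maglev model.

def fuzzy_tune(e, de):
    """Toy fuzzy tuner: map error e and error rate de to PID gain increments.
    A real controller would use full rule tables and membership functions."""
    big = lambda v: min(abs(v) / 1.0, 1.0)   # crude 'magnitude is big' degree
    dkp = 0.5 * big(e) - 0.2 * big(de)       # raise Kp on large error
    dki = 0.1 * big(e)                       # integrate harder on large error
    dkd = 0.3 * big(de)                      # damp on fast error change
    return dkp, dki, dkd

kp, ki, kd = 2.0, 0.5, 0.1      # initial PID gains (placeholders)
integral, prev_e, dt = 0.0, 0.0, 0.01
setpoint, y = 8.0, 0.0          # e.g., a target air gap and the measured gap

for _ in range(1000):
    e = setpoint - y
    de = (e - prev_e) / dt
    dkp, dki, dkd = fuzzy_tune(e, de)
    u = (kp + dkp) * e + (ki + dki) * integral + (kd + dkd) * de
    integral += e * dt
    prev_e = e
    y += dt * (u - 0.5 * y)     # stand-in first-order plant, not a maglev model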
Abstract:
Aiming at the characteristics of the Bohai Sea area and the heterogeneity of fluvial-facies reservoirs, litho-geophysical experiments and integrated research on geophysical technologies are carried out. To deal with practical problems in oil fields of the Bohai area, such as QHD32-6, Southern BZ25-1 and NP35-2, a technology of reservoir description based on seismic data and reservoir geophysical methods is built. In this dissertation, three points are emphasized: ①the integration of multiple disciplines; ②the application of new methods and technologies; ③the integration of static and dynamic data. Finally, research on geological modeling and reservoir numerical simulation based on geophysical data is integrated. There are several innovative results and conclusions in this dissertation: (1)To deal with problems in the shallow sea area, where seismic data is the key data, a set of technologies for fine reservoir description based on seismic data in the Bohai Sea area is built. All these technologies, including stratigraphic classification, sedimentary facies identification, fine structural characterization, reservoir description, fluid recognition and the integration of geological modeling and reservoir numerical simulation, play an important role in hydrocarbon exploration and development. In the research on lithology and hydrocarbon-bearing conditions, petrophysical experiments are carried out. Field inspection and experimental test data are integrated into the seismic forward modeling and inversion research. Through this research, the seismic reflection rules of fluids in pores are derived. Based on all the above research, seismic data is used to classify rock associations, identify sedimentary facies belts and recognize the hydrocarbon-bearing conditions of reservoirs. In this research, the geological meaning of geophysical information is made clearer and its ambiguity is efficiently reduced, so the reliability of hydrocarbon forecasting is improved. Multi-scale methods are developed for microfacies research, aiming at the conditions of the shallow sea area of the Bohai Sea: ①transform seismic information into sedimentary facies by discriminant analysis; ②in the research of planar sedimentary facies, conduct microfacies research on the seismic scale by integrating seismic multi-attribute analysis and optimization, strata slicing and seismic waveform classification; ③describe the sedimentary facies distribution at scales below seismic resolution with stochastic modeling. In the research on geological modeling and reservoir numerical simulation, a bilateral iteration between modeling and numerical simulation is used to correct the geological model. This process includes several steps: ①perform seismic forward modeling based on the reservoir numerical simulation results and geological models; ②obtain the trend residual between the forward-modeled and the real seismic data; ③dynamically correct the model according to this trend residual. The modern integrated technology for fine reservoir description in the Bohai Sea area developed in this dissertation has been successfully used in (1)the reserve volume evaluation and development research of the BZ25-1 oil field and (2)the tracing-while-drilling research of the QHD32-6 oil field. These applications show wide potential in hydrocarbon exploration and development research in other oil fields.
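The abstract's first microfacies step, transforming seismic information into facies classes by discriminant analysis, can be pictured with a small sketch; the seismic attributes, well labels and use of scikit-learn's LinearDiscriminantAnalysis are illustrative assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-ins for seismic attributes (e.g., amplitude, frequency,
# coherence) at wells where the sedimentary facies are known:
# 0 = channel, 1 = overbank
X_wells = np.vstack([rng.normal([1.0, 30.0, 0.8], 0.2, (50, 3)),
                     rng.normal([0.4, 45.0, 0.5], 0.2, (50, 3))])
y_wells = np.repeat([0, 1], 50)

lda = LinearDiscriminantAnalysis().fit(X_wells, y_wells)

# Apply the trained discriminant to attributes extracted away from wells
X_traces = rng.normal([0.7, 38.0, 0.65], 0.3, (5, 3))
print(lda.predict(X_traces))   # predicted facies class per trace location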
Abstract:
Work safety and protection of the surface environment are the two major issues for the mining industry when carrying out underground mining. The cut-and-fill mining method is increasingly applied in China owing to its advantages in controlling ground pressure and protecting the surface environment effectively. However, some cut-and-fill mines, such as the Jinchuan nickel mine with its large ore body, broken rock mass and high geostress, exhibit laws of ground pressure and rock mass movement that distinguish them from mines using other methods. Many problems remain unknown and call for further analysis. In this dissertation, extensive field surveys, geological trenching and related data analysis are carried out. The distribution of ground fissures and their correlation with the location of the underground ore body are presented. Using monitoring data from three-dimensional fissure meters and GPS in Jinchuan Deposit Ⅱ, the pattern of surface deformation and the causes of ground fissure generation are analyzed. It is shown that the stress redistribution in the surrounding rock resulting from mining, the existence of void space underground and the influence of ongoing mining activities are the three main causes of ground fissures. Based on actual section planes of the No.1 ore body, a large-scale 3D model is established. With this model, the complete process of excavation and filling is simulated, and the law of rock mass movement and stability caused by cut-and-fill mining is studied. According to the simulation results, the deformation of the ground surface is still developing; the region of subsidence on the ground surface is roughly circular; the area on the hanging wall side is larger than that on the footwall side; and the contour plots show that the centre of subsidence lies on the hanging wall side, near the ore body boundary between 1150 m and 1250 m where the ore body is thickest. Along the strike line of Jinchuan Deposit Ⅱ, the deformation at the middle of the filling body is larger than that at the two sides. Because the ore body is irregular, stress concentrates at its boundary. As excavation and filling proceed, the high stress is released and the stress concentration on the hanging wall side disappears. The cut-and-fill mechanism is studied based on monitoring data and numerical simulation, and the functions of the filling body are discussed. It is concluded that the stress in the filling body is just 2 MPa, while that in the surrounding rock mass is 20 MPa. The influence of the elastic modulus of the backfill on surface movement is studied, and the minimum elastic modulus of the backfill that can guarantee safe production in a cut-and-fill mine is obtained. Finally, based on actual survey results for the horizontal ore layer and numerical simulation, it is indicated that the horizontal ore layer has been destroyed. Key words: cut-and-fill mining, 3D numerical simulation, field monitoring, rock mass movement, cut-and-fill mechanism, elastic modulus of backfill, horizontal ore layer
Abstract:
With the development of both seismic theory and computer technology, numerical modeling of seismic waves has advanced greatly over the past half century. The methods currently under development include the finite-difference method (FDM), finite element method (FEM), pseudospectral method (PSM), integral equation method (IEM) and spectral element method (SEM). They play very important roles throughout seismology and seismic prospecting. Extensive research on the spectral element method at the end of the last century brought it into a new era and yielded solutions to many difficult problems. However, subsequent topics such as seismic migration and inversion based on the spectral element method, though important for seismic imaging and the study of seismic wave propagation, have so far received little attention. Based on previous work, this paper uses the spectral element method to investigate the characteristics and laws of seismic wave propagation in isotropic and anisotropic media. By thoroughly studying this high-accuracy method, we implement a kind of reverse-time pre- and post-stack migration based on SEM. In order to verify the validity of SEM, we have simulated the propagation of seismic waves in several different models. The simulation results show that: (1) the spectral element method can handle arbitrarily complex models, and the computed results are comparable with the expected and analytic results; (2) optimum accuracy is achieved when the polynomial order is between 4 and 9. Below 4, numerical dispersion may occur; above 9, the time step must shrink along with the decreasing space step in order to keep the computation stable, which rapidly increases computation time and memory even when simulating the same media. This paper also applies exploding-reflector imaging, the time-invariance principle of wavefield extrapolation and least-traveltime raytracing for surface sources to SEM pre- and post-stack migration of isotropic and anisotropic media. All imaging results derived by the above methods agree well with the real geological models, and the positions of interfaces and inflexion points are returned to their correct locations. This indicates that the method proposed in this paper offers high accuracy and robust stability, and can serve as an alternative method in real seismic data processing. All this work can boost the development of high-accuracy seismic imaging, and therefore has significant reference value.
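The reverse-time migration mentioned above commonly relies on a zero-lag cross-correlation imaging condition; the schematic below shows only that step, with random arrays standing in for the SEM-propagated wavefields (the SEM solver itself, the dissertation's core machinery, is omitted).

import numpy as np

nz, nx, nt = 100, 200, 500
image = np.zeros((nz, nx))

def propagate_source(it):
    """Placeholder: forward-propagated source wavefield at time step it.
    In the dissertation this role is played by the SEM solver."""
    return np.random.rand(nz, nx)

def propagate_receiver(it):
    """Placeholder: receiver wavefield reverse-propagated to time step it."""
    return np.random.rand(nz, nx)

# Zero-lag cross-correlation imaging condition: the image accumulates where
# source and back-propagated receiver wavefields coincide in time and space.
for it in range(nt):
    image += propagate_source(it) * propagate_receiver(it)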
Abstract:
Geological fluids are important components of the Earth system. Studying the physicochemical properties and evolution of fluid systems is one of the most challenging issues in the geosciences. Besides conventional experimental approaches and theoretical or semi-theoretical modeling, molecular-level computer simulation (MLCS) has emerged as an alternative tool to quantitatively study the physicochemical properties of fluids under extreme conditions, in order to find out the characteristics and interactions of geological fluids in and around the Earth. Based on our previous study of the intermolecular potential for pure H2O, a strict evaluation of the competing potential models for pure CH4, and the ab initio fitted potential surface between H2O and CH4 molecules obtained in this study, we carried out more than two thousand molecular dynamics simulations of the PVTx properties of pure CH4 and H2O-CH4 mixtures. Comparison of 1941 simulations with experimental PVT data for pure CH4 shows an average deviation of 0.96% and a maximum deviation of 2.82%. Comparison of the results of 519 simulations of the mixtures with experimental measurements reveals that the PVTx properties of the H2O-CH4 mixtures generally agree with the extensive experimental data, with an average deviation of 0.83% and a maximum of 4%, which is equivalent to the experimental uncertainty. Moreover, the maximum deviation between the experimental data and the simulation results decreases to about 2% as temperature and pressure increase, indicating that the high accuracy of the simulation is well retained in the high-temperature and high-pressure region. After this validation of the simulation method and the intermolecular potential models, we systematically simulated the PVTx properties of this binary system from 673 K and 0.05 GPa to 2573 K and 10 GPa. In order to integrate all the simulation results and the experimental data for the calculation of thermodynamic properties, an equation of state (EOS) is developed for the H2O-CH4 system covering 673 to 2573 K and 0.01 to 10 GPa. Isochores for compositions < 4 mol% CH4 up to 773 K and 600 MPa are also determined in this thesis.
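The validation statistics quoted above (average and maximum percentage deviations) amount to a simple comparison metric; a sketch with made-up molar volumes, purely for illustration:

import numpy as np

# Hypothetical molar volumes: MD simulation vs. experiment at matched (P, T, x)
v_sim = np.array([30.1, 25.4, 21.9, 18.6])   # cm^3/mol, illustrative only
v_exp = np.array([29.9, 25.7, 21.7, 18.9])

rel_dev = np.abs(v_sim - v_exp) / v_exp * 100.0
print(f"average deviation: {rel_dev.mean():.2f}%")
print(f"maximum deviation: {rel_dev.max():.2f}%")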
Abstract:
Numerical modeling of groundwater is very important for understanding groundwater flow and solving hydrogeological problems. Today, groundwater studies require massive numbers of model cells and high calculation accuracy, which are beyond a single-CPU computer's capabilities. With the development of high-performance parallel computing technologies, applying parallel computing to the numerical modeling of groundwater flow has become necessary and important. Parallel computing can improve the ability to resolve various hydrogeological and environmental problems. In this study, parallel computing methods for the two main types of modern parallel computer architecture, shared memory parallel systems and distributed shared memory parallel systems, are discussed. OpenMP and MPI (PETSc) are both used to parallelize the most widely used groundwater simulator, MODFLOW. Two parallel codes, P-PCG and P-MODFLOW, were developed for MODFLOW. The parallelized MODFLOW was used to simulate regional groundwater flow in Beishan, Gansu Province, a potential high-level radioactive waste geological disposal area in China. 1. The OpenMP programming paradigm was used to parallelize the PCG (preconditioned conjugate-gradient) solver, one of the main solvers for MODFLOW. The parallel PCG solver, P-PCG, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. The largest test model has 1000 columns, 1000 rows and 1000 layers. Based on the timing results, execution times using the P-PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelization approach reduces software maintenance cost, because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree. 2. P-MODFLOW, a domain decomposition–based model implemented in a parallel computing environment, is developed, which allows efficient simulation of regional-scale groundwater flow. The basic approach partitions a large model domain into any number of sub-domains, and parallel processors are used to solve the model equations within each sub-domain. Porting MODFLOW to distributed shared memory parallel systems via domain decomposition extends its application to the most popular cluster systems, so that a large-scale simulation can take full advantage of hundreds or even thousands of parallel processors. P-MODFLOW shows good parallel performance, with a maximum speedup of 18.32 (14 processors). Superlinear speedups have been achieved in the parallel tests, indicating the efficiency and scalability of the code. Parallel program design, load balancing and full use of PETSc were considered to achieve a highly efficient parallel program. 3. The characterization of regional groundwater flow systems is very important for high-level radioactive waste geological disposal. The Beishan area, located in northwestern Gansu Province, China, has been selected as a potential site for a disposal repository. The area covers about 80000 km2 and has complicated hydrogeological conditions, which greatly increase the computational effort of regional groundwater flow models. In order to reduce computing time, the parallel computing scheme was applied to regional groundwater flow modeling. Models with over 10 million cells were used to simulate how faults and different recharge conditions affect the regional groundwater flow pattern. The results of this study provide regional groundwater flow information for the site characterization of the potential high-level radioactive waste disposal area.
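A serial toy of the domain-decomposition idea behind P-MODFLOW: a 1D steady-flow model split into two sub-domains that relax independently and share interface values. In the dissertation's setting the sub-domains run on separate processors via PETSc/MPI; here everything is sequential and the problem is invented.

import numpy as np

# 1D steady-state "flow" toy: relax heads toward u'' = 0 with fixed heads at
# both ends, sweeping two sub-domains that share values at the interface.
n, n_half = 64, 32
u = np.zeros(n)
u[0], u[-1] = 10.0, 2.0    # boundary heads

for outer in range(2000):
    # Each sub-domain updates its interior from neighboring values; in a
    # distributed run these sweeps execute concurrently on different ranks,
    # with the interface values exchanged between sweeps.
    for lo, hi in [(1, n_half), (n_half, n - 1)]:
        u[lo:hi] = 0.5 * (u[lo - 1:hi - 1] + u[lo + 1:hi + 1])

print(u[::8])   # approaches the linear head profile between 10.0 and 2.0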
Abstract:
The technique of energy extraction using groundwater source heat pumps, as a sustainable way of utilizing low-grade thermal energy, has been widely used since the mid-1990s. Based on the basic theories of groundwater flow and heat transfer, and by employing two analytic models, the relationship between the thermal breakthrough time of a production well and the factors involved is analyzed, and the impact of heat transfer by conduction and convection on the geo-temperature field under different groundwater velocity conditions is discussed. A mathematical model coupling the equations for groundwater flow with those for heat transfer was developed. The impact of energy mining using a single-well system for supplying and returning water on the geo-temperature field under different hydrogeological conditions, well structures, withdrawal-and-reinjection rates, and natural groundwater flow velocities was quantitatively simulated using the finite difference simulator HST3D, and theoretical analyses of the simulated results were made. The simulated results for the single-well system indicate that neither the permeability nor the porosity of a homogeneous aquifer has a significant effect on the temperature of the production segment, provided that the production and injection capacity of each well in the aquifers involved can meet the designed value. If there is an interlayer of lower permeability than the main aquifer between the production and injection segments, the temperature changes of the production segment will decrease. The thicker the interlayer and the lower its permeability, the longer the thermal breakthrough time of the production segment and the smaller its temperature changes. According to the above modeling, it can also be found that with an increase of the aquifer thickness, the distance between the production and injection screens and/or the regional groundwater flow velocity, or with a decrease of the production-and-reinjection rate, the temperature changes of the production segment decline. For an aquifer of constant thickness, continuously increasing the screen lengths of the production and injection segments may decrease the distance between the production and injection screens, and the temperature changes of the production segment will consequently increase. Based on the simulation results for the single-well system, the parameters that significantly influence heat transfer and the geo-temperature field were chosen for the doublet system simulation. It is indicated that the temperature changes of the pumping well will decrease as the aquifer thickness, the distance between the well pair and/or the screen lengths of the doublet increase. In the case of a low-permeability interlayer embedded in the main aquifer, if the screens of the pumping and injection wells are installed below and above the interlayer respectively, the temperature changes of the pumping well will be smaller than without the interlayer; the lower the permeability of the interlayer, the smaller the temperature changes. The simulation results also indicate that the lower the pumping-and-reinjection rate, the greater the temperature changes of the pumping well. It can also be found that if the producer and the injector are chosen reasonably, the temperature changes of the pumping well will decline as the regional groundwater flow velocity increases. Compared with the case where the groundwater flow direction is perpendicular to the well pair, if the regional flow is directed from the pumping well to the injection well, the temperature changes of the pumping well are relatively smaller. Based on the above simulation study, a case history was conducted using data from an operating system in Beijing. By means of the conceptual and mathematical models, a 3-D simulation model was developed, and the hydrogeological parameters and thermal properties were calibrated. The calibrated model was used to predict the evolution of the geo-temperature field over the next five years. The simulation results indicate that the calibrated model can represent the hydrogeological conditions and the nature of the aquifers. It can also be found that the temperature fronts in highly permeable aquifers move very fast and the radii of temperature influence are large. Comparatively, the temperature changes in clay layers are smaller and show an obvious lag. Under the current energy mining load, the temperature of the pumping wells will increase by 0.7°C at the end of the next five years. The above case study may provide a reliable basis for the scientific management of the operating system studied.
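The analytic models used in the abstract are not spelled out, but a commonly quoted closed-form estimate for a doublet conveys the idea: hydraulic breakthrough along the direct streamline between the wells, scaled by a thermal retardation factor. All parameter values below are illustrative.

import math

# Commonly quoted analytic estimate for a doublet in a confined aquifer.
L = 200.0        # well spacing (m)
b = 40.0         # aquifer thickness (m)
n = 0.25         # effective porosity (-)
Q = 0.02         # pumping = reinjection rate (m^3/s)
rc_aq = 2.6e6    # volumetric heat capacity of saturated aquifer (J/m^3/K)
rc_w = 4.18e6    # volumetric heat capacity of water (J/m^3/K)

t_hydraulic = math.pi * n * b * L**2 / (3.0 * Q)   # seconds, direct streamline
R_thermal = rc_aq / (n * rc_w)                     # thermal retardation factor
t_thermal = R_thermal * t_hydraulic
print(f"thermal breakthrough ~ {t_thermal / 86400.0 / 365.25:.1f} years")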
Abstract:
Landslides are widely distributed along the main stream banks of the Three Gorges Reservoir area. Especially with the acceleration of human economic activities over the past 30 years, the occurrence of landslide hazards in the area has tended to become more serious. Because of the special geological, topographic and climatic conditions of the Three Gorges area, many paleo-landslides are found along the gentle slopes of the population relocation sites. Under natural conditions these paleo-landslides usually remain stable, but they may be reactivated under strong rainfall, reservoir impoundment and the disturbance of relocation engineering. Therefore, the prediction and prevention of landslide hazards have become an important problem for the safety of the relocation engineering of the Three Gorges Reservoir area. Past research on landslides in the Three Gorges area mainly concentrated on the stability analysis of individual landslides, and little importance was attached to the geological environment background of regional landslide formation. Consequently, the relationship between the distribution and evolution of landslides and global dynamic processes was rarely addressed. With further study, it becomes difficult to explain the magnitude and frequency of major geological hazards in terms of single endogenic or exogenic processes; it may instead be possible to resolve the causes of major landslides in the Three Gorges area through systematic research on regional tectonics and river evolution history. In the present paper, based on the coupling of the Earth's endogenic and exogenic processes, the author studies the temporal and spatial distribution and the formation and evolution of major landslides (volume ≥ 100 × 10^4 m^3) in the Three Gorges Reservoir area by integrating first-hand statistics, geological evolution history, isotope dating, numerical simulation and other methods. Considering the main formative factors of landslides (topography, geology and rainfall), the author also discusses the occurrence probability of rainfall-induced landslides and a prediction model for them. The distribution and magnitude of paleo-landslides in the Three Gorges area are mainly controlled by lithology, geological structure, bank slope shape, the geostress field, etc. The major paleo-landslides are concentrated in the period 2.7-15.0 × 10^4 a B.P., which corresponds to the warmest and wettest paleoclimate stages. Over the same interval, the Three Gorges area experienced its fastest phase of crustal uplift since 15.0 × 10^4 a B.P. This indicates that the dynamic driver of the polyphase major paleo-landslides is the coupling of neotectonic movement and Quaternary climate change. According to the numerical simulation of the formation and evolution of the Baota landslide, rapid crustal uplift drives deep river incision, and the accompanying geostress relief loosens the rock mass of the banks. Under strong rainfall, the pore-water pressure resulting from rain infiltration and high flood levels can greatly reduce the shear strength of weak structural planes. Therefore, the bank slope tends to slide at the slope bottom, where shear stress concentrates. Finally, a composite draught-traction type landslide of dip-stratified rocks is formed. The susceptibility concept for rainfall-induced landslides is put forward in this paper, and the degree of susceptibility is graded in terms of the topographic and geological conditions of landslides. Based on the integration of geological environment factors and rainfall conditions, the author gives a new probabilistic prediction model for rainfall-induced landslides. Taking Chongqing City in the Three Gorges area as an example, and selecting five factors (topography, lithology combination, slope shape, rock structure and hydrogeology) with 21 states as prediction variables, susceptibility zonation is carried out by the information method. The prediction criterion for landslides is established from two factors: the maximum 24-hour rainfall and the 15-day antecedent effective precipitation. The new model makes real-time regional landslide prediction possible and improves the accuracy of landslide forecasting.
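A sketch of the two-factor rainfall criterion named above: the 15-day antecedent effective precipitation is usually computed with a daily decay coefficient and combined with the maximum 24-hour rainfall against a threshold. The decay coefficient, weights and threshold below are assumed placeholders, not the paper's calibrated values.

# Two-factor rainfall criterion sketch; all constants are illustrative.
K = 0.84                      # daily decay coefficient (assumed)
daily_rain = [0, 12, 3, 0, 0, 25, 8, 0, 5, 0, 0, 18, 2, 0, 7]  # last 15 days (mm)

# Antecedent effective precipitation: recent days count more than older ones
p_antecedent = sum((K ** i) * p
                   for i, p in enumerate(reversed(daily_rain), start=1))
p_24h_max = 60.0              # maximum 24-hour rainfall of the triggering storm (mm)

# Toy linear threshold: warn when the combination exceeds a critical line
if p_24h_max + 0.5 * p_antecedent > 80.0:
    print("landslide warning for high-susceptibility zones")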
Abstract:
This report describes Processor Coupling, a mechanism for controlling multiple ALUs on a single integrated circuit to exploit both instruction-level and inter-thread parallelism. A compiler statically schedules individual threads to discover available intra-thread instruction-level parallelism. The runtime scheduling mechanism interleaves threads, exploiting inter-thread parallelism to maintain high ALU utilization. ALUs are assigned to threads on a cycle-by-cycle basis, and several threads can be active concurrently. Simulation results show that Processor Coupling performs well on both single-threaded and multi-threaded applications. The experiments address the effects of memory latencies, function unit latencies, and communication bandwidth between function units.
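A toy of the cycle-by-cycle interleaving idea: each cycle, ready operations from the active threads are packed onto the available ALUs, so stalls in one thread are hidden by work from another. The thread workloads and latencies below are invented.

from collections import deque

NUM_ALUS = 4
threads = {  # thread id -> queue of (op_name, latency_in_cycles)
    0: deque([("mul", 3), ("add", 1), ("add", 1)]),
    1: deque([("load", 5), ("add", 1)]),
    2: deque([("add", 1), ("mul", 3), ("add", 1)]),
}
busy_until = {}   # thread id -> cycle when its in-flight op completes

for cycle in range(20):
    issued = 0
    for tid, ops in threads.items():
        if issued == NUM_ALUS:
            break                      # all ALUs claimed this cycle
        if ops and busy_until.get(tid, 0) <= cycle:
            op, lat = ops.popleft()    # issue this thread's next ready op
            busy_until[tid] = cycle + lat
            issued += 1
            print(f"cycle {cycle}: ALU {issued - 1} <- thread {tid} {op}")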
Abstract:
Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference-counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
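A software analogue of the lazy facet-allocation semantics (the thesis implements this with hardware support; the class below is only a sketch of the naming idea):

class SparselyFacetedArray:
    """Software sketch of an SFA: a global name whose per-node storage
    ("facets") is allocated only on first touch by that node."""
    def __init__(self, facet_size):
        self.facet_size = facet_size
        self.facets = {}                 # node id -> locally allocated facet

    def facet(self, node_id):
        # Lazy allocation: only nodes that actually use the SFA pay for memory
        if node_id not in self.facets:
            self.facets[node_id] = [0] * self.facet_size
        return self.facets[node_id]

sfa = SparselyFacetedArray(facet_size=1024)
sfa.facet(3)[0] = 42        # the facet materializes on node 3 only
print(len(sfa.facets))      # 1 -- no storage allocated on any other node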
Abstract:
A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
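A toy version of the simulation described above, assuming the one-dimensional ring with clockwise (successor) ownership as in Chord:

import bisect, random

def max_load(n, d):
    """Hash n servers and n items to the unit ring; each item probes d random
    points and is stored at the least loaded of their successor servers."""
    servers = sorted(random.random() for _ in range(n))
    loads = [0] * n
    successor = lambda p: bisect.bisect_left(servers, p) % n  # clockwise owner
    for _ in range(n):
        candidates = [successor(random.random()) for _ in range(d)]
        loads[min(candidates, key=loads.__getitem__)] += 1
    return max(loads)

# With d = 2 the maximum load drops sharply, as the abstract's bound predicts
print(max_load(100000, 1), max_load(100000, 2))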
Abstract:
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspicious, small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
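A crude numerical illustration of the transient-exploiting idea, using a generic AIMD-style adaptation as the victim; the constants and the backoff model are toy assumptions, not the paper's control-theoretic model:

# Short bursts timed to force repeated backoff keep the victim's average
# throughput low while the attacker's own average rate stays small.
capacity = 100.0     # link capacity (units/s)
rate = 50.0          # victim's current sending rate
alpha, beta = 1.0, 0.5          # AIMD: additive increase, multiplicative decrease
burst_period, got, sent_attack = 40, 0.0, 0.0

for t in range(2000):
    if t % burst_period == 0:
        rate *= beta            # a brief burst congests the link -> backoff
        sent_attack += 5.0      # attacker pays only for a short burst
    else:
        rate = min(rate + alpha, capacity)
    got += rate

print(f"victim throughput: {got / 2000:.1f}/{capacity:.0f}, "
      f"attacker average rate: {sent_attack / 2000:.3f}")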
Abstract:
The popularity of TCP/IP, coupled with the promise of high-speed communication using Asynchronous Transfer Mode (ATM) technology, has prompted the network research community to propose a number of techniques to adapt TCP/IP to ATM network environments. ATM offers Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services for best-effort traffic, such as conventional file transfer. However, recent studies have shown that TCP/IP, when implemented over ABR or UBR, suffers serious performance degradation, especially when the utilization of network resources (such as switch buffers) is high. Proposed techniques (switch-level enhancements, for example) that attempt to patch up TCP/IP over ATM have had limited success in alleviating this problem. The major reason for TCP/IP's poor performance over ATM has been consistently attributed to packet fragmentation, which is the result of ATM's 53-byte cell-oriented switching architecture. In this paper, we present a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. At the core of TCP Boston is the Adaptive Information Dispersal Algorithm (AIDA), an efficient encoding technique that allows for dynamic redundancy control. AIDA makes TCP/IP's performance less sensitive to cell losses, thus ensuring a graceful degradation of TCP/IP's performance when faced with congested resources. In this paper, we introduce AIDA and overview the main features of TCP Boston. We present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATM. In particular, we show that TCP Boston improves TCP/IP's performance over ATM for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time).
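AIDA builds on information dispersal: m data symbols are encoded into n >= m shares so that any m of them suffice to reconstruct, with the redundancy n - m adjustable on the fly. A minimal field-arithmetic sketch of that dispersal/reconstruction idea (not TCP Boston's actual implementation):

import numpy as np

P = 257  # small prime field; byte values 0..255 fit

def disperse(data, m, n):
    """Split m data symbols into n shares (n >= m); any m shares reconstruct.
    Share i is the dot product of the data with row i of a Vandermonde matrix,
    so the redundancy n - m is tunable -- the dynamic-control knob behind AIDA."""
    V = np.array([[pow(i + 1, j, P) for j in range(m)] for i in range(n)])
    return [(i, int(V[i] @ data % P)) for i in range(n)]

def reconstruct(shares, m):
    ids = [i for i, _ in shares[:m]]
    vals = [v for _, v in shares[:m]]
    V = np.array([[pow(i + 1, j, P) for j in range(m)] for i in ids])
    # Solve V x = vals over GF(P) by Gaussian elimination with modular inverses
    A = np.concatenate([V, np.array(vals).reshape(-1, 1)], axis=1) % P
    for c in range(m):
        pivot = next(r for r in range(c, m) if A[r, c] % P)
        A[[c, pivot]] = A[[pivot, c]]
        A[c] = A[c] * pow(int(A[c, c]), -1, P) % P
        for r in range(m):
            if r != c:
                A[r] = (A[r] - A[r, c] * A[c]) % P
    return [int(x) for x in A[:, m]]

data = [72, 101, 108, 108, 111]            # "Hello" as symbols
shares = disperse(data, m=5, n=8)          # tolerates the loss of any 3 shares
print(reconstruct(shares[2:7], m=5))       # any 5 shares recover the data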