965 results for COMPUTER ARCHITECTURE


Relevance:

60.00%

Publisher:

Abstract:

At the 70th SEG Annual Meeting, many authors presented results on wave-equation pre-stack depth migration. Wavefield imaging methods based on the wave equation have matured and become the main direction of seismic imaging. Imaging complex media has been a central goal of the projects supported by the national "85" and "95" reservoir geophysics key projects and the "Knowledge Innovation Key Project of the Chinese Academy of Sciences". Furthermore, we began studying the particular oilfield conditions of our nation together with international research groups. Against this background, the author combined symplectic ideas with wave-equation pre-stack depth migration and developed an efficient wave-equation pre-stack depth migration method. The purpose of this work is to find a way to image the complex geological targets of Chinese oilfields and to form a seismic data processing workflow. The paper gives the approximation of the one-way wave-equation operator and shows numerical results. Comparisons are made between the split-step phase method, Kirchhoff, and Ray+FD methods on impulse responses, a simple model, and the Marmousi model. The results show that the method in this paper has higher accuracy. Four field data examples are also given, and their results demonstrate that the method is usable. Velocity estimation is an important part of wave-equation pre-stack depth migration. A parallel velocity estimation program has been written and tested on Beowulf clusters; it can build a velocity profile automatically. An example on the Marmousi model is shown in the third part of the paper to demonstrate the method, and another field data example is also given. The Beowulf cluster represents a convergence point of high-performance computer architecture, and today it is a good choice for institutes and small companies to complete their tasks. The paper compares the computation of wave-equation pre-stack migration on a Beowulf cluster, the 24-node IBM-SP2 in Daqing, and Shuguang3000, together with their prices. The results show that the Beowulf cluster is an efficient way to handle the large computational workload of wave-equation pre-stack depth migration, especially in 3D.
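For context, the split-step phase method compared above advances one frequency slice of the wavefield one depth step at a time: a phase shift in the wavenumber domain using a reference velocity, followed by a space-domain correction for lateral velocity variation. The sketch below is a minimal illustration of that single step, not the author's implementation; all names and parameters are hypothetical.

```python
import numpy as np

def split_step_depth_step(P, v, v_ref, dz, dx, freq):
    """One depth step of split-step Fourier one-way wavefield
    extrapolation for a single frequency slice P(x). Illustrative
    sketch only; a production migration code is far more involved.

    P     : complex wavefield at depth z, sampled along x
    v     : laterally varying velocity at this depth (same shape as P)
    v_ref : reference (e.g. mean) velocity for the phase-shift term
    dz    : depth step
    dx    : spatial sampling along x
    freq  : temporal frequency in Hz
    """
    w = 2.0 * np.pi * freq                        # angular frequency
    kx = 2.0 * np.pi * np.fft.fftfreq(P.size, d=dx)

    # Phase shift in the wavenumber domain with the reference velocity:
    # kz = sqrt(w^2/v_ref^2 - kx^2); evanescent components are dropped.
    kz2 = (w / v_ref) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    P_k = np.fft.fft(P) * np.exp(1j * kz * dz)
    P_k = np.where(kz2 > 0.0, P_k, 0.0)

    # Split-step correction in the space domain for the lateral
    # slowness perturbation around the reference velocity.
    P_x = np.fft.ifft(P_k)
    return P_x * np.exp(1j * w * (1.0 / v - 1.0 / v_ref) * dz)
```

A full migration would loop this step over all frequencies and depth levels for both source and receiver wavefields, then apply an imaging condition at each depth.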

Relevance:

60.00%

Publisher:

Abstract:

Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we make the converse claim: that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.

Relevance:

60.00%

Publisher:

Abstract:

Multi-threaded processors execute multiple threads concurrently in order to increase overall throughput. It is well documented that multi-threading affects per-thread performance and, more importantly, that some threads are affected more than others. This is especially troublesome for multi-programmed workloads. Fairness metrics measure whether all threads are affected equally. However, defining equal treatment is not straightforward. Several fairness metrics for multi-threaded processors have been used in the literature, yet there is no consensus on which metric best measures fairness. This paper reviews the prevalent fairness metrics and analyzes their main properties. Each metric strikes a different trade-off between fairness in the strict sense and throughput, and we categorize the metrics with respect to this property. Based on experimental data for SMT processors, we suggest using the minimum fairness metric in order to balance fairness and throughput.
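One common formulation of a minimum fairness metric, sketched below purely for illustration (the paper's exact definition may differ), computes each thread's normalized progress under SMT and reports the worst case:

```python
def minimum_fairness(ipc_mt, ipc_st):
    """Minimum fairness: the worst normalized progress across all
    co-running threads (one common formulation; illustrative only).

    ipc_mt : per-thread IPC when running together on the SMT core
    ipc_st : per-thread IPC when each thread runs alone
    """
    normalized_progress = [mt / st for mt, st in zip(ipc_mt, ipc_st)]
    return min(normalized_progress)

# Example: thread 1 keeps 60% of its solo throughput, thread 2 only
# 30%, so the metric reports the worst-treated thread's 0.30.
print(minimum_fairness([1.2, 0.3], [2.0, 1.0]))  # -> 0.3
```

A metric like this is maximized (value 1.0) only when no thread is slowed down at all, which is why it sits at the strict-fairness end of the fairness/throughput trade-off the paper describes.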

Relevance:

60.00%

Publisher:

Abstract:

Non-Volatile Memory (NVM) technology holds promise to replace SRAM and DRAM at various levels of the memory hierarchy. The interest in NVM is motivated by the difficulty of scaling DRAM beyond 22 nm and, in the long term, by its lower cost per bit. While offering higher density and negligible static power (leakage and refresh), NVM suffers increased latency and energy per memory access. This paper develops energy and performance models of memory systems and applies them to understand the energy-efficiency of replacing or complementing DRAM with NVM. Our analysis focuses on the application of NVM in main memory. We demonstrate that NVM such as STT-RAM and RRAM is energy-efficient for memory sizes commonly employed in servers and high-end workstations, but PCM is not. Furthermore, the model is well suited to quickly evaluate the impact of changes to the model parameters, which may be achieved through optimization of the memory architecture, and to determine the key parameters that impact system-level energy and performance.
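As a rough illustration of the trade-off this abstract describes, a first-order energy model charges a memory technology for static power over time plus dynamic energy per access. The sketch below is not the paper's model, and every parameter value is a placeholder, not a figure from the paper:

```python
def memory_energy(static_power_w, energy_per_access_j,
                  accesses_per_s, seconds):
    """First-order memory energy model (illustrative only): static
    (leakage/refresh) energy plus dynamic energy proportional to the
    number of accesses."""
    static = static_power_w * seconds
    dynamic = energy_per_access_j * accesses_per_s * seconds
    return static + dynamic

# Placeholder parameters: NVM trades near-zero static power for
# higher energy per access, so it wins when the access rate is low
# relative to capacity (large, mostly idle server memories).
dram = memory_energy(static_power_w=2.0,  energy_per_access_j=20e-9,
                     accesses_per_s=1e7, seconds=1.0)
nvm  = memory_energy(static_power_w=0.05, energy_per_access_j=100e-9,
                     accesses_per_s=1e7, seconds=1.0)
print(f"DRAM: {dram:.2f} J, NVM: {nvm:.2f} J")  # DRAM: 2.20 J, NVM: 1.05 J
```

Even this toy version shows why such models are handy for sweeping parameters: the crossover point between DRAM and NVM moves as the access rate or per-access energy changes.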

Relevance:

60.00%

Publisher:

Abstract:

Summary form only given, as follows. In Vol. 12, no. 3 (Summer 2007), page 9, bottom of the left column, in 'Computer Architecture and Amdahl's Law' by Gene Amdahl, the claim about invalidating Amdahl's Law in 1988 came from a team at Sandia National Laboratories, and not Los Alamos. The correct text should read: "Several years later I was informed of a proof that Amdahl's Law was invalidated by someone at Sandia National Laboratories, where a number of computers interconnected as an Ncube by communication lines, but with each computer also connected to I/O devices for loading the operating system, initial data, and results." On page 20 of the same issue, in the second sentence of the diagram explanation note by Justin Rattner, the percentage figures for the sequential and the system coordination parts of the workload were interchanged. The correct version of this sentence should read: "assuming a fixed sized problem, Amdahl speculated that most programs would require at least 10% of the computation to be sequential (only one instruction executing at a time), with overhead due to interprocessor coordination averaging 25%."
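The corrected figures quoted above plug directly into Amdahl's fixed-size-problem argument. The sketch below is one illustrative reading, treating both the sequential fraction and the coordination overhead as unparallelizable work; it is not necessarily the exact formulation used in the article:

```python
def amdahl_speedup(n, sequential=0.10, coordination=0.25):
    """Speedup on n processors for a fixed-size problem, counting the
    sequential fraction and the interprocessor-coordination overhead
    as serial work (one illustrative reading of the quoted figures)."""
    serial = sequential + coordination
    return 1.0 / (serial + (1.0 - serial) / n)

for n in (1, 4, 16, 64, 1024):
    print(f"{n:5d} processors -> speedup {amdahl_speedup(n):.2f}")
# Speedup saturates near 1/0.35, about 2.86, regardless of processor count.
```

Under these assumptions the serial 35% of the work caps the achievable speedup, which is the pessimistic conclusion Amdahl drew for fixed-size problems.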