443 results for IBM


Relevance: 10.00%

Abstract:

At the 70th SEG Annual Meeting, many authors presented results on wave-equation pre-stack depth migration. Wave-field imaging methods based on the wave equation have matured and become the main direction of seismic imaging. Imaging complex media has been a central topic of the national "85" and "95" reservoir-geophysics key projects and of the Knowledge Innovation key project of the Chinese Academy of Sciences. Furthermore, we began studying the particular oilfield conditions of our country together with international research groups. Against this background, the author combined symplectic ideas with wave-equation pre-stack depth migration and developed an efficient wave-equation pre-stack depth migration method. The purpose of this work is to find a way to image the complex geological targets of Chinese oilfields and to form a seismic data-processing procedure. The paper gives an approximation of the one-way wave-equation operator and shows numerical results. Comparisons are made between the split-step phase method, Kirchhoff, and Ray+FD methods on impulse responses, a simple model, and the Marmousi model. The results show that the method in this paper has higher accuracy. Four field-data examples are also given; their results demonstrate that the method is usable. Velocity estimation is an important part of wave-equation pre-stack depth migration. A parallel velocity-estimation program has been written and tested on Beowulf clusters; it can build a velocity profile automatically. An example on the Marmousi model is shown in the third part of the paper to demonstrate the method, and another field-data example is also given. The Beowulf cluster represents a convergence of high-performance computer architectures, and today it is a good choice for institutes and small companies.
The paper compares the computation of wave-equation pre-stack migration on a Beowulf cluster, the IBM-SP2 (24 nodes) in Daqing, and the Shuguang3000, together with their prices. The results show that the Beowulf cluster is an efficient way to carry out the large computations of wave-equation pre-stack depth migration, especially in 3D.
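The one-way extrapolation at the heart of such migration schemes can be sketched as a constant-velocity phase shift in the frequency-wavenumber domain — the step that split-step methods then correct for lateral velocity variation. A minimal sketch (the function name and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def phase_shift_extrapolate(wavefield, dx, dz, v, omega):
    """Downward-continue one frequency slice of a wavefield by dz using
    the constant-velocity phase-shift operator in the f-k domain."""
    nx = wavefield.size
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # horizontal wavenumbers
    kz2 = (omega / v) ** 2 - kx ** 2            # vertical wavenumber squared
    # Propagating components receive a phase shift; evanescent ones decay.
    kz = np.where(kz2 > 0, np.sqrt(np.abs(kz2)), 0.0)
    damp = np.where(kz2 > 0, 1.0, np.exp(-np.sqrt(np.abs(kz2)) * dz))
    spectrum = np.fft.fft(wavefield)
    spectrum *= damp * np.exp(1j * kz * dz)
    return np.fft.ifft(spectrum)
```

A vertically travelling plane wave (only the kx = 0 component) should simply pick up the phase exp(i*omega/v*dz) after one step.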

Relevance: 10.00%

Abstract:

This thesis presents a new high level robot programming system. The programming system can be used to construct strategies consisting of compliant motions, in which a moving robot slides along obstacles in its environment. The programming system is referred to as high level because the user is spared many robot-level details, such as the specification of conditional tests, motion termination conditions, and compliance parameters. Instead, the user specifies task-level information, including a geometric model of the robot and its environment. The user may also have to specify some suggested motions. There are two main system components. The first component is an interactive teaching system which accepts motion commands from a user and attempts to build a compliant motion strategy using the specified motions as building blocks. The second component is an autonomous compliant motion planner, which is intended to spare the user from dealing with "simple" problems. The planner simplifies the representation of the environment by decomposing the configuration space of the robot into a finite state space, whose states are vertices, edges, faces, and combinations thereof. States are linked to each other by arcs, which represent reliable compliant motions. Using best first search, states are expanded until a strategy is found from the start state to a goal state. This component represents one of the first implemented compliant motion planners. The programming system has been implemented on a Symbolics 3600 computer, and tested on several examples. One of the resulting compliant motion strategies was successfully executed on an IBM 7565 robot manipulator.
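The planner's search over its finite state space can be sketched as generic best-first search; the state encoding, successor function, and heuristic below are placeholders, not the thesis's actual configuration-space representation:

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """Best-first search over a finite state space. `neighbors(state)`
    yields states reachable by one reliable motion (arc);
    `heuristic(state)` estimates remaining distance to the goal."""
    frontier = [(heuristic(start), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            path = []                     # reconstruct start -> goal path
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:      # expand each state at most once
                came_from[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None                           # no strategy found
```

On a small grid with a Manhattan-distance heuristic, the search expands the most promising frontier state first, mirroring how the planner expands states until a strategy reaches the goal.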

Relevance: 10.00%

Abstract:

TEMPEST is a full-screen text editor that incorporates a structural paradigm in addition to the more traditional textual paradigm provided by most editors. While the textual paradigm treats the text as a sequence of characters, the structural paradigm treats it as a collection of named blocks which the user can define, group, and manipulate. Blocks can be defined to correspond to the structural features of the text, thereby providing more meaningful objects to operate on than characters or lines. The structural representation of the text is kept in the background, giving TEMPEST the appearance of a typical text editor. The structural and textual interfaces coexist equally, however, so one can always operate on the text from either point of view. TEMPEST's representation scheme provides no semantic understanding of structure. This approach sacrifices depth, but affords a broad range of applicability and requires very little computational overhead. A prototype has been implemented to illustrate the feasibility and potential areas of application of the central ideas. It was developed and runs on an IBM Personal Computer.
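The structural paradigm — named blocks kept as spans over ordinary text, with no semantic understanding — can be illustrated with a small sketch. The class and method names are hypothetical; this is not TEMPEST's actual representation:

```python
class StructuredText:
    """Sketch of named blocks over a character sequence: blocks are
    named (start, end) spans the user defines and retrieves, while the
    text itself stays an ordinary string."""

    def __init__(self, text):
        self.text = text
        self.blocks = {}                  # name -> (start, end) offsets

    def define_block(self, name, start, end):
        self.blocks[name] = (start, end)

    def get_block(self, name):
        start, end = self.blocks[name]
        return self.text[start:end]

    def replace_block(self, name, new_text):
        start, end = self.blocks[name]
        delta = len(new_text) - (end - start)
        self.text = self.text[:start] + new_text + self.text[end:]
        self.blocks[name] = (start, end + delta)
        # Blocks that start after the edited span shift by the size change.
        for other, (s, e) in self.blocks.items():
            if other != name and s >= end:
                self.blocks[other] = (s + delta, e + delta)
```

Editing through a block keeps the other named blocks pointing at the same logical content, which is the point of operating on blocks rather than raw character ranges.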

Relevance: 10.00%

Abstract:

Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfillment of the requirements for the degree of Master in Pharmaceutical Sciences

Relevance: 10.00%

Abstract:

Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfillment of the requirements for the degree of Master in Dental Medicine

Relevance: 10.00%

Abstract:

The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. In contrast to previous memoryless controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) They enable a wider region of TCP-friendliness, and thus more flexibility in trading off among smoothness, aggressiveness, and responsiveness; and (2) they ensure a faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with time since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
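The flavor of a history-aware increase rule can be sketched as follows: the window grows super-linearly in the number of RTTs since the last loss and backs off multiplicatively on loss. The constants and the exact functional form here are illustrative, not the TCP-friendly parameterization derived in the paper:

```python
class HistoryWindow:
    """Illustrative window control with history: super-linear
    (quadratic-in-time) probing since the last loss, multiplicative
    decrease on loss."""

    def __init__(self, w0=10.0, alpha=0.5, beta=0.5):
        self.w = w0
        self.w0 = w0      # window right after the last decrease
        self.t = 0        # RTTs elapsed since the last loss
        self.alpha = alpha
        self.beta = beta

    def on_rtt_no_loss(self):
        self.t += 1
        # Window grows with the square of time since the last loss,
        # so probing accelerates the longer the path stays loss-free.
        self.w = self.w0 + (self.alpha * self.t) ** 2

    def on_loss(self):
        self.w *= (1 - self.beta)
        self.w0 = self.w
        self.t = 0
```

Because the increase depends on elapsed loss-free time rather than only on the current window, the control adapts quickly to sudden increases in available bandwidth — the transient behavior the paper analyzes.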

Relevance: 10.00%

Abstract:

This paper presents a tool called Gismo (Generator of Internet Streaming Media Objects and workloads). Gismo enables the specification of a number of streaming media access characteristics, including object popularity, temporal correlation of requests, seasonal access patterns, user session durations, user interactivity times, and variable bit-rate (VBR) self-similarity and marginal distributions. The embodiment of these characteristics in Gismo enables the generation of realistic and scalable request streams for use in the benchmarking and comparative evaluation of Internet streaming media delivery techniques. To demonstrate the usefulness of Gismo, we present a case study that shows the importance of various workload characteristics in determining the effectiveness of proxy caching and server patching techniques in reducing bandwidth requirements.
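Object popularity in such workloads is commonly modeled with a Zipf-like law. A minimal sketch of drawing a synthetic request stream under that model — Gismo itself covers many more characteristics (temporal correlation, sessions, VBR properties), and this function is illustrative, not Gismo's API:

```python
import random

def zipf_request_stream(num_objects, num_requests, skew=1.0, seed=0):
    """Draw a request stream whose object popularity follows a
    Zipf-like law: object of rank i is requested with probability
    proportional to 1 / i**skew."""
    rng = random.Random(seed)
    objects = list(range(1, num_objects + 1))
    weights = [1.0 / (i ** skew) for i in objects]
    return rng.choices(objects, weights=weights, k=num_requests)
```

With skew near 1, a small set of hot objects dominates the stream, which is exactly what makes proxy caching of popular objects effective.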

Relevance: 10.00%

Abstract:

Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture (and associated cache management algorithms) that turn edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. Also, our experiments show that partial caching is particularly effective when bandwidth variability is not very high.
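One way to see why partial caching helps: under a coarse fluid model, the cache only needs to hold the fraction of a stream that the constrained client-origin path cannot deliver in real time, with the origin supplying the rest jointly. This back-of-the-envelope calculation is illustrative only and is not the paper's cache management algorithms:

```python
def prefix_fraction_to_cache(bit_rate, path_bandwidth):
    """Fluid-model estimate of the fraction of a stream an edge cache
    must hold so playback at `bit_rate` never starves when the origin
    path sustains only `path_bandwidth`. Over the playback duration the
    origin delivers path_bandwidth/bit_rate of the data, so the cache
    covers the shortfall."""
    if path_bandwidth >= bit_rate:
        return 0.0      # the path alone can sustain playback
    return 1.0 - path_bandwidth / bit_rate
```

The estimate also matches the paper's observation that partial caching works best when bandwidth is stable: high variability invalidates the single `path_bandwidth` figure the calculation relies on.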

Relevance: 10.00%

Abstract:

The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. This paper presents a comprehensive study of a new spectrum of window-based congestion controls, which are TCP-friendly as well as TCP-compatible under RED. Our controls utilize history information in their control rules. By doing so, they improve the transient behavior, compared to recently proposed slowly-responsive congestion controls such as general AIMD and binomial controls. Our controls can achieve better tradeoffs among smoothness, aggressiveness, and responsiveness, and they can achieve faster convergence. We demonstrate analytically and through extensive ns simulations the steady-state and transient behavior of several instances of this new spectrum.

Relevance: 10.00%

Abstract:

In many real world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions and each decision has a set of alternatives. Each alternative depends on the state of the world, and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and use such preferences to infer other preferences. The induced preference relation is then used to eliminate the dominated utility vectors during the computation. For testing the dominance between multi-objective utility vectors, we present three different approaches.
The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of a matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p, and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
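The baseline test that the trade-off-induced relation extends is plain Pareto dominance over utility vectors. A minimal sketch for maximization:

```python
def dominates(u, v):
    """Pareto dominance for maximization: u is weakly better on every
    objective and strictly better on at least one."""
    return (all(a >= b for a, b in zip(u, v))
            and any(a > b for a, b in zip(u, v)))

def pareto_maximal(vectors):
    """Keep only the utility vectors not Pareto-dominated by any other
    vector in the set (the undominated frontier)."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors)]
```

Each imprecise trade-off adds preferences on top of this ordering, shrinking the undominated set — which is why the thesis's faster dominance checks matter: they run inside every pruning step of the search.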

Relevance: 10.00%

Abstract:

This paper uses dynamic impulse response analysis to investigate the interrelationships among stock price volatility, trading volume, and the leverage effect. Dynamic impulse response analysis is a technique for analyzing the multi-step-ahead characteristics of a nonparametric estimate of the one-step conditional density of a strictly stationary process. The technique is the generalization to a nonlinear process of Sims-style impulse response analysis for linear models. In this paper, we refine the technique and apply it to a long panel of daily observations on the price and trading volume of four stocks actively traded on the NYSE: Boeing, Coca-Cola, IBM, and MMM.
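For a linear AR model, Sims-style impulse responses follow directly from the autoregressive coefficients; the paper's contribution is generalizing this analysis to a nonparametric estimate of the conditional density. A minimal sketch of the linear case (illustrative, not the paper's estimator):

```python
def ar_impulse_responses(phi, horizon):
    """Impulse responses of a linear AR(p) model
        y_t = phi[0]*y_{t-1} + ... + phi[p-1]*y_{t-p} + e_t
    to a unit shock at t = 0: each response is the coefficient-weighted
    sum of the preceding responses."""
    p = len(phi)
    resp = [1.0]                       # impact effect of the unit shock
    for t in range(1, horizon + 1):
        resp.append(sum(phi[j] * resp[t - 1 - j]
                        for j in range(min(p, t))))
    return resp
```

For an AR(1) with coefficient 0.5, the shock decays geometrically; the dynamic impulse response analysis in the paper asks the analogous multi-step question of a nonlinear, nonparametrically estimated process.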

Relevance: 10.00%

Abstract:

Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system, and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime.
Results representative of "the state of the practice" for a scaled sequence of structured grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the Cray T3E.

Relevance: 10.00%

Abstract:

There have been few genuine success stories about industrial use of formal methods. Perhaps the best known and most celebrated is the use of Z by IBM (in collaboration with Oxford University's Programming Research Group) during the development of CICS/ESA (version 3.1). This work was rewarded with the prestigious Queen's Award for Technological Achievement in 1992 and is especially notable for two reasons: 1) because it is a commercial, rather than safety- or security-critical, system and 2) because the claims made about the effectiveness of Z are quantitative as well as qualitative. The most widely publicized claims are: less than half the normal number of customer-reported errors and a 9% savings in the total development costs of the release. This paper provides an independent assessment of the effectiveness of using Z on CICS based on the set of public domain documents. Using this evidence, we believe that the case study was important and valuable, but that the quantitative claims have not been substantiated. The intellectual arguments and rationale for formal methods are attractive, but their widespread commercial use is ultimately dependent upon more convincing quantitative demonstrations of effectiveness. Despite the pioneering efforts of IBM and PRG, there is still a need for rigorous, measurement-based case studies to assess when and how the methods are most effective. We describe how future similar case studies could be improved so that the results are more rigorous and conclusive.

Relevance: 10.00%

Abstract:

The main purpose of this paper is to provide the core description of the modelling exercise within the Shelf Edge Advection Mortality And Recruitment (SEAMAR) programme. An individual-based model (IBM) was developed for the prediction of year-to-year survival of the early life-history stages of mackerel (Scomber scombrus) in the eastern North Atlantic. The IBM is one of two components of the model system. The first component is a circulation model to provide physical input data for the IBM. The circulation model is a geographical variant of the HAMburg Shelf Ocean Model (HAMSOM). The second component is the IBM, which is an i-space configuration model in which large numbers of individuals are followed as discrete entities to simulate the transport, growth and mortality of mackerel eggs, larvae and post-larvae. Larval and post-larval growth is modelled as a function of length, temperature and food distribution; mortality is modelled as a function of length and absolute growth rate. Each particle is considered as a super-individual representing 10^6 eggs at the outset of the simulation, and then declining according to the mortality function. Simulations were carried out for the years 1998-2000. Results showed concentrations of particles at Porcupine Bank and the adjacent Irish shelf, along the Celtic Sea shelf-edge, and in the southern Bay of Biscay. High survival was observed only at Porcupine and the adjacent shelf areas, and, more patchily, around the coastal margin of Biscay. The low survival along the shelf-edge of the Celtic Sea was due to the consistently low estimates of food availability in that area.
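The super-individual bookkeeping can be sketched as exponential decay of the abundance each particle represents under stage-specific instantaneous mortality rates. The rates below are placeholders, not SEAMAR's fitted length- and growth-dependent mortality function:

```python
import math

def super_individual_abundance(n0, mortality_rates, dt=1.0):
    """Track the number of real eggs/larvae represented by one model
    particle: starting from n0 (e.g. 1e6 eggs), abundance declines over
    each time step by the stage's instantaneous mortality rate m,
    n <- n * exp(-m * dt)."""
    n = n0
    trajectory = [n]
    for m in mortality_rates:
        n *= math.exp(-m * dt)
        trajectory.append(n)
    return trajectory
```

Summing the final abundances over all surviving particles at the end of the drift period gives the kind of survivor index the model outputs.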

Relevance: 10.00%

Abstract:

An individual-based model (IBM) for the simulation of year-to-year survival during the early life-history stages of the north-east Atlantic stock of mackerel (Scomber scombrus) was developed within the EU funded Shelf-Edge Advection, Mortality and Recruitment (SEAMAR) programme. The IBM included transport, growth and survival and was used to track the passive movement of mackerel eggs, larvae and post-larvae and determine their distribution and abundance after approximately 2 months of drift. One of the main outputs from the IBM, namely distributions and numbers of surviving post-larvae, is compared with field data on recruit (age-0/age-1 juvenile) distribution and abundance for the years 1998, 1999 and 2000. The juvenile distributions show more inter-annual and spatial variability than the modelled distributions of survivors; this may be due to the restriction of using the same initial egg distribution for all 3 yr of simulation. The IBM simulations indicate two main recruitment areas for the north-east Atlantic stock of mackerel, these being Porcupine Bank and the south-eastern Bay of Biscay. These areas correspond to areas of high juvenile catches, although the juveniles generally have a more widespread distribution than the model simulations. The best agreement between modelled data and field data for distribution (juveniles and model survivors) is for the year 1998. The juvenile catches in different representative nursery areas are totalled to give a field abundance index (FAI). This index is compared with a model survivor index (MSI) which is calculated from the total of survivors for the whole spawning season. The MSI compares favourably with the FAI for 1998 and 1999 but not for 2000; in this year, juvenile catches dropped sharply compared with the previous years but there was no equivalent drop in modelled survivors.