884 results for T-Kernel


Relevance: 10.00%

Abstract:

In the last several decades, owing to the rapid development of computers, numerical simulation has become an indispensable tool in scientific research. Numerical methods based on partial differential operators, such as the Finite Difference Method (FDM) and the Finite Element Method (FEM), have been widely used. In seismology and seismic prospecting, however, one usually encounters geological models with piecewise heterogeneous structures as well as volume heterogeneities between layers; the continuity of displacement and stress across irregular interfaces, and the seismic scattering induced by volume perturbations, introduce errors when conventional methods based on difference operators are used. The method discussed in this paper is based on elastic theory and integral equation theory. The seismic wave equation in the frequency domain is transformed into a generalized Lippmann-Schwinger equation, in which the wavefield contributed by the background is expressed by a boundary integral equation and the scattering by volume heterogeneities is taken into account. The boundary element-volume integral method based on this equation inherits the advantages of the Boundary Element Method (BEM): it reduces the dimension of the model by one, makes explicit use of displacement and stress continuity across irregular interfaces, achieves high precision, and automatically satisfies the boundary condition at infinity. It can also accurately simulate seismic scattering by volume heterogeneities. In this paper, the concrete Lippmann-Schwinger equation is given for realistic geological models, and the complete coefficients at non-smooth points of the integral equation are derived. Because the boundary element-volume integral equation method uses fundamental solutions that are singular when the source point and the field point are very close, in both the two-dimensional and the three-dimensional cases, the treatment of the singular kernel affects the precision of the method. A method based on integral transforms and integration by parts can treat points both on the boundary and inside the domain: it transforms the singular integral into an analytical one in both two and three dimensions and thus eliminates the singularity. In order to analyze elastic wave scattering by regionally irregular topography, the analytical solution for problems of this type is discussed, and the analytical solution for P-wave scattering by multiple canyons is given. For boundary reflections, the infinite-boundary-element absorbing boundary developed by a previous researcher is used. Comparisons between analytical solutions and concrete numerical examples validate the efficiency of the method. We thoroughly discuss the sampling frequency in elastic wave simulation and find that, in the general case, three elements per wavelength are sufficient, while more elements per wavelength are necessary for very complex problems. The seismic response in the frequency domain of canyons with different types of random heterogeneities is also illustrated. We analyze how the random-medium model, the horizontal and vertical correlation lengths, the standard deviation, and the dimensionless frequency affect seismic wave amplification on the ground, and thus provide a basis for choosing random-medium parameters in numerical simulation.
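
For orientation, the equation at the heart of this approach can be sketched in its scalar (acoustic) form; the thesis treats the full elastic case, and the notation below is assumed rather than taken from the text:

```latex
% Scalar analogue of the generalized Lippmann-Schwinger equation
% (assumed notation; the thesis treats the elastic case):
% total field = background field + scattering from the perturbation m.
u(\mathbf{x},\omega) = u^{0}(\mathbf{x},\omega)
  + \omega^{2} \int_{V} G(\mathbf{x},\mathbf{x}';\omega)\, m(\mathbf{x}')\,
    u(\mathbf{x}',\omega)\, \mathrm{d}V',
\qquad
m(\mathbf{x}') = \frac{1}{c^{2}(\mathbf{x}')} - \frac{1}{c_{0}^{2}(\mathbf{x}')}.
```

Here the background field u⁰ is what the boundary integral equation supplies, while the volume integral over the slowness perturbation m carries the scattering by volume heterogeneities.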

Relevance: 10.00%

Abstract:

The real Earth is far from an ideal elastic body. The movement of structures or fluids and scattering by thin layers inevitably affect seismic wave propagation, manifested mainly as non-geometrical energy attenuation. Most theoretical research and applications today assume that the media studied are fully elastic. Ignoring viscoelasticity can, in some circumstances, distort amplitude and phase, which in turn affects the traveltimes and waveforms used in imaging and inversion. To investigate the response of seismic wave propagation and improve imaging and inversion quality in complex media, we must not only account for the attenuation of real media but also implement it with efficient numerical methods and imaging techniques. For numerical modeling, the most widely used methods, such as finite-difference, finite-element, and pseudospectral algorithms, have difficulty improving accuracy and efficiency simultaneously. To partially overcome this difficulty, this paper devises a matrix differentiator method, an optimal convolutional differentiator method based on staggered-grid Fourier pseudospectral differentiation, and a staggered-grid optimal Shannon singular-kernel convolutional differentiator derived from distribution theory, which are then used to study seismic wave propagation in viscoelastic media. Comparisons and accuracy analysis demonstrate that the optimal convolutional differentiator methods resolve the trade-off between accuracy and efficiency well and are almost twice as accurate as finite-difference operators of the same length; they efficiently reduce dispersion and provide high-precision waveform data. On the basis of frequency-domain wavefield modeling, we discuss how to solve the linear system directly and point out that, compared with time-domain methods, frequency-domain methods handle multi-source problems more conveniently and incorporate medium attenuation much more easily. We also verify the equivalence of the time- and frequency-domain methods with numerical tests under assumptions on the non-relaxation modulus and quality factor, and analyze the causes of waveform differences. In frequency-domain waveform inversion, experiments were conducted with transmission, crosshole, and reflection data. Using the relation between medium scales and characteristic frequencies, we analyze the capacity of sequential frequency-domain inversion to resist noise and to cope with the non-uniqueness of nonlinear optimization. In the crosshole experiments, we identify the main sources of inversion error and show how an incorrect quality factor affects the inverted results. For surface reflection data, several frequencies were chosen with an optimal frequency-selection strategy and used in sequential and simultaneous inversions to verify the importance of low-frequency data to the inverted results and the noise resistance of simultaneous inversion. Finally, I draw conclusions about the work in this dissertation and discuss in detail its existing and potential problems, pointing out directions and theories worth deepening, which may provide a helpful reference for researchers interested in seismic wave propagation and imaging in complex media.
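
One standard way attenuation enters frequency-domain modeling, sketched here for concreteness (a common parameterization, not necessarily the dissertation's; the sign of the imaginary part depends on the Fourier convention):

```latex
% Attenuation via a complex, Q-dependent velocity in the Helmholtz
% operator (assumed constant-Q parameterization; sign convention varies):
\left(\nabla^{2} + \frac{\omega^{2}}{\tilde{c}^{2}(\mathbf{x},\omega)}\right)
  u(\mathbf{x},\omega) = -s(\mathbf{x},\omega),
\qquad
\tilde{c}(\mathbf{x},\omega) \approx c(\mathbf{x})\left(1 + \frac{i}{2\,Q(\mathbf{x})}\right).
```

Because Q enters only through the coefficients of one linear system per frequency, no memory variables or convolutional stress-strain relations are needed, which is why the abstract notes that attenuation is much easier to incorporate in the frequency domain.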

Relevance: 10.00%

Abstract:

The main purpose of this work is to develop a new method that eliminates, or at least reduces, the non-uniqueness of nonlinear inversion of scattered seismic waves. Against the background of research into seismic inversion, the first chapter reviews recent progress in nonlinear inversion. In particular, the development, basic theories, and assumptions of the major approaches to seismic inversion are analyzed, discussed, and summarized from both mathematical and physical points of view. The problems faced by the mathematical foundations of seismic inversion are discussed, and inverse problems of strong seismic scattering due to strongly heterogeneous media in the Earth's interior are analyzed and reviewed. The kernel of the thesis is a new nonlinear inversion method for seismic scattering. The thesis shows how to introduce fixed-point theory into nonlinear seismic scattering inversion and how to obtain the solution, and gives a concrete procedure for constructing a series of contractive mappings of the velocity parameter in the mapping space of the wavefield. The results establish the existence of a fixed point of the velocity parameter and give a method for finding it. Further, the thesis proves that the value obtained by substituting the fixed point of the velocity parameter into the wave equation is the fixed point of the wavefield under the contractive mapping; by the stability of the fixed point, this fixed point is the global minimum. Based on the new theory, chapter three presents many inversion results from numerical tests. Analysis of the results reveals a basic fact: all the results, inverted from different initial models, tend to the true values of the theoretical model. In other words, the new method largely eliminates the non-uniqueness that exists in nonlinear inversion of scattered seismic waves. Since the test results are still quite limited, however, more tests are needed to support the theory. As a new theoretical method it inevitably has many weaknesses, and chapter four points out the open questions; we hope more people will join us in solving them.
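
The contraction-mapping machinery the method rests on is the Banach fixed-point theorem, sketched below; how the thesis constructs its specific mapping over velocity models is its own contribution, and the notation here is assumed:

```latex
% Banach fixed-point theorem (standard statement). If T : X -> X on a
% complete metric space (X,d) satisfies, for some 0 <= q < 1,
d\big(T(v_1),\,T(v_2)\big) \;\le\; q\, d(v_1,\,v_2)
  \quad \forall\, v_1, v_2 \in X,
% then T has a unique fixed point v^{*} = T(v^{*}), and the iteration
v_{n+1} = T(v_n) \;\longrightarrow\; v^{*}
% converges from ANY initial v_0.
```

Uniqueness of the fixed point and convergence from any starting model are exactly what the numerical tests reflect: inversions started from different initial models all tend to the true model.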

Relevance: 10.00%

Abstract:

Based on social survey data collected by local research groups in several Chinese counties over roughly the past five years, the author poses and solves two kernel problems in the field of social situation forecasting: i) how can attitude data at the individual level be integrated with social situation data at the macro level; and ii) how can the forecasting power of models constructed with different statistical methods be compared? Five integrative statistics were applied: 1) the arithmetic mean (MEAN); 2) the standard deviation (SD); 3) the coefficient of variation (CV); 4) the mixed second moment (M2); and 5) the tendency (TD). To solve the first problem, the five statistics were used to synthesize the individual-level and macro-level social situation data at the county level, forming novel integrative datasets. On this basis, the second problem was addressed: modeling methods such as Multiple Regression Analysis (MRA), Discriminant Analysis (DA), and Support Vector Machines (SVM) were used to construct several forecasting models. Along the dimensions of stepwise vs. enter selection, short-term vs. long-term forecasting, and the different integrative (statistic) models, meta-analysis and power analysis were used to compare the predictive power of each model within and among modeling methods. The dissertation concludes: 1) significant differences exist among the different integrative (statistic) models, with tendency (TD) models having the highest power and coefficient-of-variation (CV) models the lowest; 2) there is no significant difference in power between stepwise and enter models, nor between short-term and long-term forecasting models; 3) there are significant differences among models constructed by different methods, of which the support vector machine (SVM) has the highest statistical power. This research lays a foundation in all facets for exploring optimal forecasting models of social situations more deeply; furthermore, it is the first time that meta-analysis and power analysis have been introduced into the assessment of such forecasting models.
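
For reference, three of the five integrative statistics have conventional definitions, given below; the mixed second moment (M2) and tendency (TD) are defined in the dissertation itself, so only the standard forms are assumed here:

```latex
% Conventional definitions (assumed; M2 and TD are thesis-specific):
\mathrm{MEAN} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
\mathrm{SD} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^{2}}, \qquad
\mathrm{CV} = \frac{\mathrm{SD}}{\bar{x}}.
```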

Relevance: 10.00%

Abstract:

The STUDENT problem solving system, programmed in LISP, accepts as input a comfortable but restricted subset of English which can express a wide variety of algebra story problems. STUDENT finds the solution to a large class of these problems. STUDENT can utilize a store of global information not specific to any one problem, and may make assumptions about the interpretation of ambiguities in the wording of the problem being solved. If it uses such information or makes any assumptions, STUDENT communicates this fact to the user. The thesis includes a summary of other English-language question-answering systems. All these systems, and STUDENT, are evaluated according to four standard criteria. The linguistic analysis in STUDENT is a first approximation to the analytic portion of a semantic theory of discourse outlined in the thesis. STUDENT finds the set of kernel sentences which are the base of the input discourse, and transforms this sequence of kernel sentences into a set of simultaneous equations which form the semantic base of the STUDENT system. STUDENT then tries to solve this set of equations for the values of requested unknowns. If it is successful it gives the answers in English. If not, STUDENT asks the user for more information, and indicates the nature of the desired information. The STUDENT system is a first step toward natural language communication with computers. Further work on the semantic theory proposed should result in much more sophisticated systems.
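
A worked example in the style of the problems STUDENT handles (my own illustration, not taken from the thesis): the story "John has twice as many apples as Bill. Together they have 12 apples. How many apples does Bill have?" reduces to kernel sentences whose semantic base is a pair of simultaneous equations:

```latex
% Kernel sentences -> simultaneous equations -> requested unknown:
J = 2B, \qquad J + B = 12
\;\;\Longrightarrow\;\; 3B = 12
\;\;\Longrightarrow\;\; B = 4,\; J = 8.
```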

Relevance: 10.00%

Abstract:

C.G.G. Aitken, Q. Shen, R. Jensen and B. Hayes. The evaluation of evidence for exponentially distributed data. Computational Statistics & Data Analysis, vol. 51, no. 12, pp. 5682-5693, 2007.

Relevance: 10.00%

Abstract:

Extensible systems allow services to be configured and deployed for the specific needs of individual applications. This paper describes a safe and efficient method for user-level extensibility that requires only minimal changes to the kernel. A sandboxing technique is described that supports multiple logical protection domains within the same address space at user level. This approach allows applications to register sandboxed code with the system, which may be executed in the context of any process. Our approach differs from other implementations that require special hardware support, such as segmentation or tagged translation look-aside buffers (TLBs), to either implement multiple protection domains in a single address space or to support fast switching between address spaces. Likewise, we do not require the entire system to be written in a type-safe language to provide fine-grained protection domains. Instead, our user-level sandboxing technique requires only page-based virtual memory support, and the requirement that extension code be written either in a type-safe language or by a trusted source. Using a fast method of upcalls, we show how our sandboxing technique for implementing logical protection domains provides significant performance improvements over traditional methods of invoking user-level services. Experimental results show our approach to be an efficient method for extensibility, with inter-protection-domain communication costs close to those of hardware-based solutions leveraging segmentation.
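
A minimal user-level sketch of the page-protection idea, using only standard POSIX calls (my illustration; the paper's actual mechanism is kernel-assisted, uses fast upcalls, and avoids the cost of mprotect() round-trips):

```c
/* Sketch: toggle access to a "sandbox" page around an extension call.
 * Illustrative only -- not the paper's implementation. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

    /* One page acting as a logical protection domain's private data. */
    unsigned char *domain = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (domain == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy((char *)domain, "sandboxed state");

    /* Outside the domain: revoke all access so stray code cannot touch it. */
    mprotect(domain, pagesz, PROT_NONE);

    /* "Enter" the domain: re-enable access, run extension code, re-protect. */
    mprotect(domain, pagesz, PROT_READ | PROT_WRITE);
    printf("extension sees: %s\n", (char *)domain);
    mprotect(domain, pagesz, PROT_NONE);

    munmap(domain, pagesz);
    return 0;
}
```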

Relevance: 10.00%

Abstract:

The best-effort nature of the Internet poses a significant obstacle to the deployment of many applications that require guaranteed bandwidth. In this paper, we present a novel approach that enables two edge/border routers, which we call Internet Traffic Managers (ITMs), to use an adaptive number of TCP connections to set up a tunnel of desirable bandwidth between them. The number of TCP connections that comprise this tunnel is elastic in the sense that it increases/decreases in tandem with competing cross traffic to maintain a target bandwidth. An origin ITM would then schedule incoming packets from an application requiring guaranteed bandwidth over that elastic tunnel. Unlike many proposed solutions that aim to deliver soft QoS guarantees, our elastic-tunnel approach does not require any support from core routers (as with IntServ and DiffServ); it is scalable in the sense that core routers do not have to maintain per-flow state (as with IntServ); and it is readily deployable within a single ISP or across multiple ISPs. To evaluate our approach, we develop a flow-level control-theoretic model to study the transient behavior of established elastic TCP-based tunnels. The model captures the effect of cross-traffic connections on our bandwidth allocation policies. Through extensive simulations, we confirm the effectiveness of our approach in providing soft bandwidth guarantees. We also outline our kernel-level ITM prototype implementation.
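
The elastic-tunnel control idea can be sketched as a simple feedback loop (my simplification under a toy fair-share throughput model, not the paper's control-theoretic design):

```c
/* Sketch: grow/shrink the number of member TCP connections until the
 * tunnel's measured throughput tracks the target. Toy model only. */
#include <stdio.h>

#define MIN_CONNS 1
#define MAX_CONNS 64

/* Hypothetical stand-in for real measurement: each connection gets a
 * fair share of a 100 Mbps link contended by 20 cross-traffic flows. */
static double measure_throughput_mbps(int conns) {
    return 100.0 * conns / (conns + 20);
}

int main(void) {
    double target = 40.0;   /* desired tunnel bandwidth, Mbps */
    int conns = 4;

    for (int epoch = 0; epoch < 20; epoch++) {
        double got = measure_throughput_mbps(conns);
        /* Additive adjustment toward the target bandwidth. */
        if (got < target && conns < MAX_CONNS)      conns++;
        else if (got > target && conns > MIN_CONNS) conns--;
        printf("epoch %2d: %2d conns, %.1f Mbps\n", epoch, conns, got);
    }
    return 0;
}
```

Because each member connection is an ordinary TCP flow, the tunnel remains TCP-friendly while the connection count absorbs changes in cross traffic.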

Relevance: 10.00%

Abstract:

Internet Traffic Managers (ITMs) are special machines placed at strategic places in the Internet. itmBench is an interface that allows users (e.g. network managers, service providers, or experimental researchers) to register different traffic control functionalities to run on one ITM or an overlay of ITMs. Thus itmBench offers a tool that is extensible and powerful yet easy to maintain. ITM traffic control applications could be developed either using a kernel API so they run in kernel space, or using a user-space API so they run in user space. We demonstrate the flexibility of itmBench by showing the implementation of both a kernel module that provides a differentiated network service, and a user-space module that provides an overlay routing service. Our itmBench Linux-based prototype is free software and can be obtained from http://www.cs.bu.edu/groups/itm/.
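
To make the registration model concrete, here is a sketch of what registering a traffic-control functionality could look like; the names and signatures below are hypothetical illustrations of the concept, NOT the actual itmBench API (which is documented at the URL above):

```c
/* Hypothetical registration interface in the spirit of itmBench.
 * All identifiers here are mine, not itmBench's. */
#include <stddef.h>
#include <stdio.h>

/* A traffic-control callback: inspect a packet, return 0 to forward. */
typedef int (*itm_handler_t)(unsigned char *pkt, size_t len, void *ctx);

struct itm_module {
    const char   *name;
    itm_handler_t handle;
    void         *ctx;
};

static struct itm_module registry[16];
static int nmodules;

static int itm_register(const struct itm_module *m) {
    if (nmodules >= 16) return -1;
    registry[nmodules++] = *m;
    return 0;
}

/* Example module: a toy "differentiated service" that drops runts. */
static int diffserv_handle(unsigned char *pkt, size_t len, void *ctx) {
    (void)pkt; (void)ctx;
    return len >= 64 ? 0 : -1;   /* forward full frames, drop tiny ones */
}

int main(void) {
    struct itm_module m = { "toy-diffserv", diffserv_handle, NULL };
    itm_register(&m);
    unsigned char pkt[128] = {0};
    printf("verdict: %d\n", registry[0].handle(pkt, sizeof pkt, NULL));
    return 0;
}
```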

Relevance: 10.00%

Abstract:

Current low-level networking abstractions on modern operating systems are commonly implemented in the kernel to provide sufficient performance for general-purpose applications. However, it is desirable for high-performance applications to have more control over the networking subsystem to support optimizations for their specific needs. One approach is to allow networking services to be implemented at user level. Unfortunately, this typically incurs costs due to scheduling overheads and unnecessary data copying via the kernel. In this paper, we describe a method to implement efficient application-specific network service extensions at user level, one that removes the cost of scheduling and provides protected access to lower-level system abstractions. We present a networking implementation that, with minor modifications to the Linux kernel, passes data between "sandboxed" extensions and the Ethernet device without copying or processing in the kernel. Using this mechanism, we put a customizable networking stack into a user-level sandbox and show how it can be used to efficiently process and forward data via proxies, or intermediate hosts, in the communication path of high-performance data streams. Unlike other user-level networking implementations, our method makes no special hardware requirements to avoid unnecessary data copies. Results show that we achieve a substantial increase in throughput over comparable user-space methods using our networking stack implementation.
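
For contrast with the paper's zero-copy path, the standard way to get raw Ethernet frames into user space on Linux is an AF_PACKET socket, sketched below; this conventional route still crosses the kernel and copies each frame, which is precisely the overhead the paper's mechanism is designed to remove (my illustration, not the paper's implementation):

```c
/* Standard-API baseline: receive raw Ethernet frames in user space.
 * Requires CAP_NET_RAW (e.g., run as root). */
#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_ether.h>   /* ETH_P_ALL */
#include <arpa/inet.h>        /* htons */
#include <unistd.h>

int main(void) {
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char frame[2048];
    for (int i = 0; i < 5; i++) {
        ssize_t n = recv(fd, frame, sizeof frame, 0);
        if (n < 0) { perror("recv"); break; }
        /* The first bytes of an Ethernet frame hold the destination MAC. */
        printf("frame of %zd bytes, dst %02x:%02x:%02x:...\n",
               n, frame[0], frame[1], frame[2]);
    }
    close(fd);
    return 0;
}
```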

Relevance: 10.00%

Abstract:

This paper presents a new approach to window-constrained scheduling, suitable for multimedia and weakly-hard real-time systems. We originally developed an algorithm, called Dynamic Window-Constrained Scheduling (DWCS), that attempts to guarantee that no more than x out of y deadlines are missed for real-time jobs such as periodic CPU tasks or delay-constrained packet streams. While DWCS is capable of generating a feasible window-constrained schedule that utilizes 100% of resources, it requires all jobs to have the same request periods (or intervals between successive service requests). We describe a new algorithm called Virtual Deadline Scheduling (VDS) that provides window-constrained service guarantees to jobs with potentially different request periods, while still maximizing resource utilization. VDS attempts to service m out of k job instances by their virtual deadlines, which may be some finite time after the corresponding real-time deadlines. Notwithstanding, VDS is capable of outperforming DWCS and similar algorithms when servicing jobs with potentially different request periods. Additionally, VDS is able to limit the extent to which a fraction of all job instances are serviced late. Results from simulations show that VDS can provide better window-constrained service guarantees than other related algorithms, while still having as good or better delay bounds for all scheduled jobs. Finally, an implementation of VDS in the Linux kernel compares favorably against DWCS for a range of scheduling loads.
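
The flavor of (m, k) window accounting can be sketched as follows; this is my simplified illustration of how a deadline might stretch with remaining window slack, not the published VDS algorithm or its actual virtual-deadline formula:

```c
/* Simplified (m,k) window accounting in the spirit of VDS -- a sketch,
 * NOT the published algorithm. Each job must receive m services in
 * every window of k instances; the "virtual deadline" is stretched by
 * the slack remaining in the current window. */
#include <stdio.h>

struct job {
    double period;      /* request period T */
    int m, k;           /* service m out of k instances */
    int served, seen;   /* progress within the current window */
};

static double virtual_deadline(const struct job *j, double now) {
    int need = j->m - j->served;   /* services still owed */
    int left = j->k - j->seen;     /* instances left in the window */
    if (need <= 0)                 /* window already satisfied */
        return now + j->period * left;
    /* Tighter deadline as slack shrinks; equals `now + T` when
     * every remaining instance must be served. */
    return now + j->period * ((double)left / need);
}

int main(void) {
    struct job j = { 10.0, 3, 5, 1, 2 };  /* owes 2 services in 3 slots */
    printf("virtual deadline: %.1f\n", virtual_deadline(&j, 100.0));
    return 0;
}
```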

Relevance: 10.00%

Abstract:

Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
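
The headline finding, that over 90% of request-handling time is spent in the kernel, rests on being able to split CPU time between user space and the kernel. A minimal sketch of one standard way to obtain that split on a POSIX system (my illustration of the measurement principle, not Webmonitor's sampling/event-driven implementation):

```c
/* Split CPU time into user-space vs. kernel components with getrusage(). */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Burn some CPU in user space so there is something to measure. */
    volatile double x = 0;
    for (long i = 0; i < 50000000L; i++) x += i * 1e-9;

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("user   time: %ld.%06ld s\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    printf("kernel time: %ld.%06ld s\n",
           (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    return 0;
}
```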

Relevance: 10.00%

Abstract:

This paper examines how and why web server performance changes as the workload at the server varies. We measure the performance of a PC acting as a standalone web server, running Apache on top of Linux. We use two important tools to understand what aspects of software architecture and implementation determine performance at the server. The first is a tool that we developed, called WebMonitor, which measures activity and resource consumption, both in the operating system and in the web server. The second is the kernel profiling facility distributed as part of Linux. We vary the workload at the server along two important dimensions: the number of clients concurrently accessing the server, and the size of the documents stored on the server. Our results quantify and show how more clients and larger files stress the web server and operating system in different and surprising ways. Our results also show the importance of fixed costs (i.e., opening and closing TCP connections, and updating the server log) in determining web server performance.
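
One of the fixed costs the paper highlights, opening and closing a TCP connection, is easy to measure directly. A minimal sketch (my illustration; it assumes some server is listening on 127.0.0.1:8080, which is a hypothetical endpoint, not one from the paper):

```c
/* Time one TCP connection open+close against a local server. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                 /* assumed listener */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("connect");
        return 1;
    }
    close(fd);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("open+close took %.1f us\n", us);
    return 0;
}
```

Per-request fixed costs like this one dominate when documents are small, which is consistent with the paper's observation that many small files and a few large files stress the server in different ways.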

Relevance: 10.00%

Abstract:

Growing interest in inference and prediction of network characteristics is justified by its importance for a variety of network-aware applications. One widely adopted strategy to characterize network conditions relies on active, end-to-end probing of the network. Active end-to-end probing techniques differ in (1) the structural composition of the probes they use (e.g., the number and size of packets, the destinations of various packets, the protocols used, etc.), (2) the entity making the measurements (e.g., sender vs. receiver), and (3) the techniques used to combine measurements in order to infer specific metrics of interest. In this paper, we present Periscope: a Linux API that enables the definition of new probing structures and inference techniques from user space through a flexible interface. Periscope requires no support from clients beyond the ability to respond to ICMP ECHO REQUESTs and is designed to minimize user/kernel crossings and to ensure various constraints (e.g., back-to-back packet transmissions, fine-grained timing measurements). We show how to use Periscope for two different probing purposes, namely the measurement of shared packet losses between pairs of endpoints and the measurement of subpath bandwidth. Results from Internet experiments for both of these goals are also presented.
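
A classic inference that such probing structures support, and which relies on exactly the back-to-back transmission and fine-grained timing constraints mentioned above, is packet-pair bandwidth estimation (my example of the genre, not necessarily one of the paper's two techniques):

```latex
% Packet-pair bottleneck-bandwidth estimation: two back-to-back probes
% of size L leave the bottleneck link separated by the time needed to
% transmit one of them, so the receiver-side dispersion \Delta gives
b \;\approx\; \frac{L}{\Delta}.
```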

Relevance: 10.00%

Abstract:

Object detection is challenging when the object class exhibits large within-class variations. In this work, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly learned in a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. Detector training can be accomplished via standard SVM learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When the foreground parameters are provided in training, the detectors can also produce parameter estimates. When the foreground object masks are provided in training, the detectors can also produce object segmentations. The advantages of our method over past methods are demonstrated on data sets of human hands and vehicles.
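
The multiplicative form described above can be written schematically as follows (the notation is assumed, not quoted from the paper):

```latex
% Joint kernel over image features x and foreground state \theta:
K\big((x,\theta),\,(x',\theta')\big) \;=\;
    K_{fb}(x,x')\; K_{\theta}(\theta,\theta'),
% where K_{fb} measures similarity for foreground-background
% classification and K_{\theta} captures the latent within-class
% factors (e.g., pose). Standard SVM training with K then yields
% detectors tuned to specific values of \theta.
```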