820 results for Lanczos, Linear systems, Generalized cross validation
Abstract:
In regional soil-erosion modeling, spatial interpolation can supply meteorological inputs for every computational grid cell. Because rainfall in the study area is only weakly correlated with elevation, the gradient plus inverse-distance-squared method (GIDS) is unsuitable; inverse distance weighting (IDW) and ordinary kriging were therefore used to interpolate monthly rainfall for May through October of 2000-2003 at 50 stations in and around the Yan'an demonstration area. Cross-validation shows that, after logarithmic transformation, the mean relative errors (MRE) of the two methods are 8.30% and 7.67%, respectively, which are 23.17% and 23.50% lower than the MREs obtained by interpolating the raw data, indicating improved interpolation accuracy. For interpolating the monthly precipitation of a given year over the study area, kriging is more accurate than IDW.
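For illustration, a minimal sketch of the procedure this abstract describes: inverse distance weighting scored by leave-one-out cross-validation with mean relative error, optionally under a log transform. The station layout and rainfall values below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting: distance-weighted average of station values."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power      # guard against zero distance
    return (w @ z_known) / w.sum(axis=1)

def loo_mre(xy, z, log_transform=False):
    """Leave-one-out cross-validation, scored by mean relative error (MRE)."""
    v = np.log1p(z) if log_transform else z
    preds = np.empty(len(z))
    for i in range(len(z)):
        m = np.arange(len(z)) != i               # hold out station i
        preds[i] = idw(xy[m], v[m], xy[i][None, :])[0]
    if log_transform:
        preds = np.expm1(preds)                  # back-transform to rainfall units
    return float(np.mean(np.abs(preds - z) / z))

# Synthetic stand-in for 50 stations of positive monthly rainfall (mm).
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(50, 2))
z = rng.lognormal(mean=4.0, sigma=0.5, size=50)
print("MRE raw:", loo_mre(xy, z), "MRE log:", loo_mre(xy, z, log_transform=True))
```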
Abstract:
Mapping the spatial distribution of contaminants in soils is the basis of pollution evaluation and risk control. Interpolation methods are extensively applied in the mapping process to estimate heavy metal concentrations at unsampled sites. The performances of four interpolation methods (inverse distance weighting, local polynomial, ordinary kriging, and radial basis functions) were assessed and compared using the root mean square error under cross-validation. The results indicated that all four methods predicted the mean concentration of soil heavy metals with high accuracy. However, the classic method based on percentages of polluted samples gave a pollution area 23.54-41.92% larger than that estimated by the interpolation methods, and the estimates of contaminated area differed by up to 6.14% among the four methods. According to the interpolation results, the spatial uncertainty of the polluted areas was concentrated in three types of region: (a) local concentration maxima surrounded by low-concentration (clean) sites, (b) local concentration minima surrounded by highly polluted samples, and (c) the boundaries of the contaminated areas.
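A self-contained sketch of the kind of comparison the abstract reports: candidate interpolators ranked by leave-one-out cross-validation RMSE. Only IDW and a SciPy radial-basis-function interpolator are shown as stand-ins (the study also used local polynomial and ordinary kriging), and the data are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def idw_predict(xy_tr, z_tr, xy_te, power=2.0):
    """Inverse distance weighting prediction at test locations."""
    d = np.linalg.norm(xy_te[:, None, :] - xy_tr[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w @ z_tr) / w.sum(axis=1)

def rbf_predict(xy_tr, z_tr, xy_te):
    """Radial basis function interpolation (thin-plate spline kernel)."""
    return RBFInterpolator(xy_tr, z_tr, kernel="thin_plate_spline")(xy_te)

def loo_rmse(predict, xy, z):
    """Leave-one-out cross-validation RMSE for any predictor."""
    errs = [predict(np.delete(xy, i, 0), np.delete(z, i), xy[i][None, :])[0] - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 1.0, size=(60, 2))             # synthetic sample sites
z = np.sin(3 * xy[:, 0]) + np.cos(2 * xy[:, 1]) + rng.normal(0, 0.05, 60)

for name, f in [("IDW", idw_predict), ("RBF", rbf_predict)]:
    print(name, "LOO RMSE:", round(loo_rmse(f, xy, z), 4))
```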
Abstract:
Combining a mechanical analysis of wheel-soil interaction, this work addresses the problem of excessive wheel slip and sinkage of a wheeled mobile robot traveling on sand. Accounting for the correction of wheel-soil mechanical parameters under repeated in-line passes, a dynamics model of a six-wheeled sand-traveling mobile robot is established. With the wheel slip ratio as the state variable, a sliding-mode drive controller is designed to track the desired wheel slip ratio. MATLAB/Simulink simulation results show that the controller tracks the desired slip ratio quickly, effectively limits wheel slip, and avoids excessive wheel sinkage.
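A toy sketch of sliding-mode slip-ratio tracking for a single driven wheel. The traction curve, gains, and parameters below are invented placeholders for the paper's wheel-soil terramechanics model and six-wheel dynamics; a boundary-layer (saturated) switching term stands in for the discontinuous sign function.

```python
import numpy as np

# Toy single-wheel model on deformable ground; all parameters illustrative.
m, J, r = 20.0, 0.5, 0.15          # vehicle mass share, wheel inertia, radius
K, phi = 40.0, 0.05                # sliding-mode gain and boundary-layer width
s_ref, dt = 0.2, 1e-3              # desired slip ratio, integration step

def traction(s):
    """Simplified traction-vs-slip curve standing in for wheel-soil mechanics."""
    return 60.0 * (1.0 - np.exp(-8.0 * s))

v = 0.5                            # initial forward speed (m/s)
w = 0.5 / r * 1.3                  # initial wheel speed (already slipping)
for _ in range(5000):
    s = (r * w - v) / max(r * w, 1e-6)                 # longitudinal slip ratio
    sigma = s - s_ref                                   # sliding surface
    T = r * traction(s) - K * np.clip(sigma / phi, -1, 1)  # smoothed sgn() law
    F = traction(s)
    w += dt * (T - r * F) / J                           # wheel spin dynamics
    v += dt * F / m                                     # vehicle forward dynamics
print(f"final slip ratio: {(r * w - v) / (r * w):.3f} (target {s_ref})")
```

In this toy run the slip ratio settles close to the target instead of winding up toward full spin, which is the qualitative behavior the abstract reports.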
Abstract:
Cross-well seismic surveying is a relatively new geophysical method in which both the source and the receivers are placed in wells to observe seismic waves traveling through the geologic body. Because this geometry avoids the absorption of the high-frequency components of the seismic signal by the low-velocity weathered layer, extremely high-resolution seismic signals can be acquired, yielding very fine images of inter-well formations, structure, and reservoirs. An integrated study of the high-frequency S-wave and P-wave data, together with other data, is conducted to identify small faults and small structures and to resolve issues concerning thin beds, reservoir connectivity, fluid distribution, steam injection, and fractures. The method links high-resolution surface seismic, well logging, and reservoir engineering. In this paper, based on the exploration and production (E&P) situation in the oilfield and the theory of geophysical exploration, cross-well seismic technology in general and its key issues in particular are investigated. A technological series integrating field acquisition, data processing, and interpretation was developed, together with integrated application research, so that the method can be applied to oilfield development and to optimizing development schemes. The contents and results of this paper are as follows. An overview is given of the status and development of the cross-well seismic method, of outstanding problems in the technology, and of the gap between Chinese and international practice. Foreign-made field acquisition systems for cross-well seismic are analyzed and compared, and the pros and cons of the systems manufactured by the two foreign companies are pointed out; this is highly valuable for importing cross-well seismic field acquisition systems into China. After analysis of the survey geometry design and field data of the cross-well seismic method, a general wave-field time-depth curve equation is derived, three types of tube waves are identified for the first time, and the mechanism of their generation is investigated. Based on wave-field separation theory for cross-well seismic, and on the observation that different wave fields exhibit different attribute characteristics in different gather domains, several methods (for example, F-K filtering and median filtering) are applied to eliminate and suppress cross-well noise; the upgoing and downgoing waves are successfully separated with satisfactory results. For wave-field numerical simulation of the cross-well seismic method, the conventional ray tracing method and its shortcomings are analyzed, and a minimum-travel-time ray tracing method based on Fermat's principle is proposed. This method is not only computationally fast but also ensures that no rays run into a "dead end" or "blind spot" after numerous iterations, making it well suited to complex velocity models.
For the first time, travel-time interpolation is brought into consideration: a dynamic shortest-path ray tracing method is developed for first arrivals in arbitrarily complex media (transmission, diffraction, refraction, etc.). It removes the limitation of traveling only from node to node, increases the accuracy of the computed minimum travel times and ray paths, and a solution with corresponding boundary conditions is derived for the fourth-order differential acoustic wave equation. The final step is the computation of cross-well seismic synthetics for given sources and receivers over multiple geological bodies. The real cross-well seismic wave field can thus be interpreted by scientific means, providing an important foundation for guiding cross-well seismic survey geometry design. A least-squares conjugate-gradient velocity tomographic inversion was developed for cross-well seismic data; the objective function of the conventional high-frequency ray tracing method was modified, and a thin-bed-oriented finite-frequency velocity tomographic inversion method is proposed. Theoretical models and results demonstrate that the method is simple, effective, and important for seismic ray tomographic imaging of complex geological bodies. Based on the characteristics of cross-well seismic algorithms, a processing flow for cross-well seismic data was built, optimized, and applied in production, yielding good velocity tomographic inversion and cross-well reflection imaging sections. Cross-well seismic data are acquired in the depth domain, and how to interpret depth-domain data and retrieve attributes is a brand-new subject. After research on synthetics and trace integration in the depth domain for cross-well seismic interpretation, logging-constrained wave-impedance inversion of cross-well seismic data was studied, and an initial interpretation workflow was established. Applying it to cross-well seismic data yielded good geological results in velocity tomographic inversion and reflection depth imaging, and resolved many difficult oilfield development problems. This powerful new method supports oilfield development scheme optimization and improved oil recovery. Building on conventional reservoir geological modeling from logging data, a new method of constraining the accuracy of the reservoir geological model with high-resolution cross-well seismic data is also discussed; it was applied to the Fan 124 project with good results, which suggests a bright future for cross-well seismic technology.
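To make the minimum-travel-time (shortest-path) ray tracing idea concrete, here is a sketch of first-arrival computation by Dijkstra's algorithm on a gridded slowness model. The grid, velocities, and source location are invented, and the thesis's travel-time interpolation refinement is omitted.

```python
import heapq
import math

def first_arrivals(slowness, src):
    """Minimum travel times from src to every grid node (Fermat / shortest path).
    Edges connect 8-neighbours; edge cost = distance * average slowness."""
    ny, nx = len(slowness), len(slowness[0])
    t = [[math.inf] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        ti, (i, j) = heapq.heappop(pq)
        if ti > t[i][j]:
            continue                                  # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                    cost = math.hypot(di, dj) * 0.5 * (slowness[i][j] + slowness[ni][nj])
                    if ti + cost < t[ni][nj]:
                        t[ni][nj] = ti + cost
                        heapq.heappush(pq, (t[ni][nj], (ni, nj)))
    return t

# Two-layer model (1 m cells): the deeper half is faster, so first arrivals
# preferentially travel through it, as Fermat's principle predicts.
slow = [[1.0 / (1500.0 if i < 10 else 3000.0)] * 20 for i in range(20)]
times = first_arrivals(slow, (0, 0))
print(f"travel time to opposite corner: {times[19][19]:.4f} s")
```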
Abstract:
"The Structure and Interpretation of Computer Programs" is the entry-level subject in Computer Science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in Electrical Engineering or in Computer Science, as one fourth of the "common core curriculum," which also includes two subjects on circuits and linear systems and a subject on the design of digital systems. We have been involved in the development of this subject since 1978, and we have taught this material in its present form since the fall of 1980 to approximately 600 students each year. Most of these students have had little or no prior formal training in computation, although most have played with computers a bit and a few have had extensive programming or hardware-design experience. Our design of this introductory Computer Science subject reflects two major concerns. First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Second, we believe that the essential material to be addressed by a subject at this level is not the syntax of particular programming-language constructs, nor clever algorithms for computing particular functions efficiently, nor even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems.
Abstract:
BACKGROUND: In the current climate of high-throughput computational biology, the inference of a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies pursue this task as a classification problem, on a term-by-term basis, for each term in a database such as the Gene Ontology (GO) database, a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with certain top-to-bottom annotation rules which protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is to apply transitive closure to the predictions.
RESULTS: We propose a probabilistic framework that integrates information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with a computationally efficient implementation, that produces GO-term predictions that naturally obey a hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing.
CONCLUSION: A cross-validation study, using data from the yeast Saccharomyces cerevisiae, shows that our method offers substantial improvements over both standard 'guilt-by-association' (i.e., nearest-neighbor) and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis of the results indicates that these improvements are associated with increased predictive capability (i.e., increased positive predictive value), and that this increase is uniform across GO-term depth. Additional in silico validation on a collection of annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction, one that exploits the ontological structure of protein annotation databases in a principled manner, can offer substantial advantages over the successive application of 'flat' network-based methods.
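For context, the 'true-path' rule says a term's prediction may not exceed any ancestor's. Below is a minimal sketch of the post-hoc capping that flat, term-by-term classifiers need (and that the paper's classifier is designed to avoid); the toy DAG and scores are invented.

```python
# Toy GO-like hierarchy: child -> list of parents (a DAG rooted at "root").
parents = {
    "root": [],
    "metabolism": ["root"],
    "catabolism": ["metabolism"],
    "glycolysis": ["catabolism"],
}

# Per-term scores from some flat, term-by-term classifier (invented numbers).
score = {"root": 1.0, "metabolism": 0.4, "catabolism": 0.7, "glycolysis": 0.6}

def true_path_consistent(score, parents):
    """Cap each term's score by the minimum over its ancestors, so that
    predicted probabilities never increase from root to leaf."""
    capped = {}
    def resolve(term):
        if term not in capped:
            bound = min((resolve(p) for p in parents[term]), default=1.0)
            capped[term] = min(score[term], bound)
        return capped[term]
    for term in parents:
        resolve(term)
    return capped

print(true_path_consistent(score, parents))
# {'root': 1.0, 'metabolism': 0.4, 'catabolism': 0.4, 'glycolysis': 0.4}
```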
Abstract:
We investigate the problem of learning disjunctions of counting functions, which generalize parity and modulo functions, with equivalence and membership queries. We prove that, for any prime number p, the class of disjunctions of integer-weighted counting functions with modulus p over the domain Z_q^n (or Z^n) for any given integer q ≥ 2 is polynomial-time learnable using at most n + 1 equivalence queries, where the hypotheses issued by the learner are disjunctions of at most n counting functions with weights from Z_p. The result is obtained by learning linear systems over an arbitrary field. In general, a counting function may have a composite modulus. We prove that, for any given integer q ≥ 2, over the domain Z_2^n, the class of read-once disjunctions of Boolean-weighted counting functions with modulus q is polynomial-time learnable with only one equivalence query, and the class of disjunctions of log log n Boolean-weighted counting functions with modulus q is polynomial-time learnable. Finally, we present an algorithm for learning graph-based counting functions.
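The reduction to linear systems over a field can be made concrete with Gaussian elimination over Z_p for prime p, the kind of subroutine a learner maintaining linear constraints would use; the example system is invented.

```python
def solve_mod_p(A, b, p):
    """Solve A x = b over the field Z_p (p prime) by Gauss-Jordan elimination.
    Returns one solution, or None if the system is inconsistent."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix
    n, m = len(M), len(M[0]) - 1
    r, piv_cols = 0, []
    for c in range(m):
        pivot = next((i for i in range(r, n) if M[i][c] % p), None)
        if pivot is None:
            continue                                   # free column
        M[r], M[pivot] = M[pivot], M[r]
        inv = pow(M[r][c], -1, p)                      # modular inverse
        M[r] = [x * inv % p for x in M[r]]
        for i in range(n):
            if i != r and M[i][c] % p:
                M[i] = [(x - M[i][c] * y) % p for x, y in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
    if any(row[m] % p for row in M[r:]):
        return None                                    # 0 = nonzero: no solution
    x = [0] * m
    for i, c in enumerate(piv_cols):
        x[c] = M[i][m]                                 # free variables stay 0
    return x

# Example over Z_5: x0 + 2*x1 = 3 and 4*x0 + x1 = 2 (mod 5)  ->  [3, 0]
print(solve_mod_p([[1, 2], [4, 1]], [3, 2], 5))
```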
Abstract:
Spotting patterns of interest in an input signal is a very useful task in many different fields, including medicine, bioinformatics, economics, speech recognition, and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on dynamic time warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high-complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and the model. Pruning improves the efficiency of the spatiotemporal matching algorithm and in some cases may improve the recognition accuracy. The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of overpruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning, the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short-sleeved shirts in front of a cluttered background, and American Sign Language (ASL) utterances gestured by native ASL signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches. The proposed approach can be applied generally to alignment or search problems with multiple input observations that use dynamic programming to find a solution.
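For reference, the classical dynamic time warping recurrence that the thesis's spatiotemporal matching extends. Here each input frame is a single scalar observation, whereas the thesis allows multiple candidate hand detections per frame; the sequences are invented.

```python
import math

def dtw(model, inp):
    """Classical DTW: cost of the best monotonic alignment of two sequences.
    For spotting (matching anywhere in the input), the first row would be
    initialized to 0 so the alignment may start at any input frame."""
    m, n = len(model), len(inp)
    D = [[math.inf] * (n + 1) for _ in range(m + 1)]
    D[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = abs(model[i - 1] - inp[j - 1])          # local matching cost
            D[i][j] = d + min(D[i - 1][j],              # model frame repeats
                              D[i][j - 1],              # input frame repeats
                              D[i - 1][j - 1])          # both advance
    return D[m][n]

gesture_model = [0.0, 1.0, 2.0, 1.0, 0.0]
observed = [0.1, 0.2, 1.1, 1.9, 2.1, 0.9, 0.1]
print(f"alignment cost: {dtw(gesture_model, observed):.2f}")
```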
Abstract:
Motivated by recent findings in the field of consumer science, this paper evaluates the causal effect of debit cards on household consumption using population-based data from the Italian Survey on Household Income and Wealth (SHIW). Within the Rubin Causal Model, we focus on the estimand of the population average treatment effect for the treated (PATT). We consider three existing estimators, based on regression, mixed matching and regression, and propensity score weighting, and propose a new doubly robust estimator. A semiparametric specification based on power series for the potential outcomes and the propensity score is adopted, with cross-validation used to select the order of the power series. We conduct a simulation study to compare the performance of the estimators. The key assumptions, overlap and unconfoundedness, are systematically assessed and validated in the application. Our empirical results suggest statistically significant positive effects of debit cards on monthly household spending in Italy.
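One common doubly robust construction for the treatment effect on the treated, sketched with off-the-shelf nuisance models. This is not necessarily the paper's estimator, which uses power-series specifications with cross-validated order; the data are simulated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_att(X, t, y):
    """Doubly robust estimate of the average treatment effect on the treated:
    outcome regression for the untreated combined with normalized
    propensity-odds weighting of the controls."""
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    mu0 = LinearRegression().fit(X[t == 0], y[t == 0]).predict(X)
    resid = y - mu0
    w = e[t == 0] / (1.0 - e[t == 0])          # odds weights for controls
    return resid[t == 1].mean() - np.average(resid[t == 0], weights=w)

# Simulated data with a true effect of 1.0 on the treated (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
p = 1.0 / (1.0 + np.exp(-X[:, 0]))             # confounded treatment assignment
t = rng.binomial(1, p)
y = X @ np.array([1.0, 0.5, -0.5]) + 1.0 * t + rng.normal(size=2000)
print(f"DR ATT estimate: {dr_att(X, t, y):.3f}")
```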
Abstract:
The parallelization of existing/industrial electromagnetic software using the bulk synchronous parallel (BSP) computation model is presented. The software employs the finite element method with a preconditioned conjugate gradient-type solution for the resulting linear systems of equations. A geometric mesh-partitioning approach is applied within the BSP framework for the assembly and solution phases of the finite element computation. This is combined with a nongeometric, data-driven parallel quadrature procedure for the evaluation of right-hand-side terms in applications involving coil fields. A similar parallel decomposition is applied to the parallel calculation of electron beam trajectories required for the design of tube devices. The BSP parallelization approach adopted is fully portable, conceptually simple, and cost-effective, and it can be applied to a wide range of finite element applications not necessarily related to electromagnetics.
Abstract:
This paper discusses preconditioned Krylov subspace methods for solving the large-scale linear systems that arise in oil-reservoir numerical simulation. Two types of preconditioners, one based on an incomplete LU decomposition and the other based on iterative algorithms, are used together in a combination strategy to achieve an adaptive and efficient preconditioner. Numerical tests show that different Krylov subspace methods, combined with appropriate preconditioners, achieve optimal performance.
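A minimal sketch of the ILU-preconditioned Krylov setup the abstract describes, using SciPy's GMRES and incomplete LU on a stand-in sparse matrix; the paper's combination strategy with iterative preconditioners is not shown.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse nonsymmetric test matrix standing in for a reservoir-simulation system.
n = 2000
A = sp.diags([-1.0, 4.0, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner M ~ A^-1.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```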
Abstract:
Diatoms exist in almost every aquatic regime; they are responsible for 20% of global carbon fixation and 25% of global primary production, and are regarded as a key food for copepods, which are subsequently consumed by larger predators such as fish and marine mammals. A decreasing abundance and a vulnerability to climatic change in the North Atlantic Ocean have been reported in the literature. In the present work, a data matrix composed of concurrent satellite remote sensing and Continuous Plankton Recorder (CPR) in situ measurements was collated for the same spatial and temporal coverage in the Northeast Atlantic. Artificial neural networks (ANNs) were applied to recognize and learn the complex non-monotonic and non-linear relationships between diatom abundance and spatiotemporal environmental factors. Because of their ability to mimic non-linear systems, ANNs proved far more effective in modelling the diatom distribution in the marine ecosystem. The results of this study reveal that diatoms have a regular seasonal cycle, with their abundance most strongly influenced by sea surface temperature (SST) and light intensity. The models indicate that extreme positive SSTs decrease diatom abundances regardless of other climatic conditions. These results provide information on the ecology of diatoms that may advance our understanding of the potential response of diatoms to climatic change.
Abstract:
A comparative molecular field analysis (CoMFA) of alkanoic acid 3-oxo-cyclohex-1-enyl ester and 2-acylcyclohexane-1,3-dione derivatives, inhibitors of 4-hydroxyphenylpyruvate dioxygenase (HPPD), was performed to determine the factors required for the activity of these compounds. The substrate conformation extracted from dynamic modeling of the enzyme-substrate complex was used to build the initial structures of the inhibitors. Satisfactory results were obtained after an all-space searching procedure: a leave-one-out (LOO) cross-validation study gave a cross-validated q^2 of 0.779 and a conventional r^2 of 0.989. The results provide tools for predicting the affinity of related compounds and for guiding the design and synthesis of new HPPD ligands with predetermined affinities.
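The cross-validated q^2 reported here is the leave-one-out analogue of r^2, defined as 1 - PRESS / total sum of squares. A sketch with ordinary least squares on invented descriptors (the actual study uses PLS on CoMFA field values):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / total sum of squares."""
    press = 0.0
    for i in range(len(y)):
        m = np.arange(len(y)) != i                 # hold out compound i
        pred = LinearRegression().fit(X[m], y[m]).predict(X[i][None, :])[0]
        press += (y[i] - pred) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))                       # invented molecular descriptors
y = X @ np.array([0.8, -0.3, 0.5, 0.0]) + rng.normal(0, 0.2, 30)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                             # conventional (fitted) r^2
print(f"r^2 = {r2:.3f}, q^2 = {loo_q2(X, y):.3f}")  # q^2 is always <= r^2 here
```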
Abstract:
We present a multimodal detection and tracking algorithm for sensors composed of a camera mounted between two microphones. Target localization is performed on color-based change detection in the video modality and on time difference of arrival (TDOA) estimation between the two microphones in the audio modality. The TDOA is computed by multiband generalized cross correlation (GCC) analysis. The estimated directions of arrival are then postprocessed using a Riccati Kalman filter. The visual and audio estimates are finally integrated, at the likelihood level, into a particle filter (PF) that uses a zero-order motion model, and a weighted probabilistic data association (WPDA) scheme. We demonstrate that the Kalman filtering (KF) improves the accuracy of the audio source localization and that the WPDA helps to enhance the tracking performance of sensor fusion in reverberant scenarios. The combination of multiband GCC, KF, and WPDA within the particle filtering framework improves the performance of the algorithm in noisy scenarios. We also show how the proposed audiovisual tracker summarizes the observed scene by generating metadata that can be transmitted to other network nodes instead of transmitting the raw images and can be used for very low bit rate communication. Moreover, the generated metadata can also be used to detect and monitor events of interest.
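A sketch of TDOA estimation by generalized cross correlation with PHAT weighting, the single-band core of the multiband GCC analysis described; the Kalman and particle-filter post-processing is omitted and the test signal is synthetic.

```python
import numpy as np

def gcc_phat_tdoa(x1, x2, fs, max_tau=None):
    """TDOA between two microphone signals via generalized cross correlation
    with PHAT weighting. Positive values mean the signal reaches mic 1 first."""
    n = 2 * len(x1)                                 # zero-pad to avoid wraparound
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    R = X2 * np.conj(X1)
    R /= np.maximum(np.abs(R), 1e-15)               # PHAT: keep phase only
    cc = np.fft.irfft(R, n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))  # centre zero lag
    lags = np.arange(-n // 2, n // 2)
    if max_tau is not None:                         # restrict to physical delays
        keep = np.abs(lags) <= int(max_tau * fs)
        cc, lags = cc[keep], lags[keep]
    return lags[np.argmax(np.abs(cc))] / fs

# Synthetic test: the same noise burst arrives 25 samples later at mic 2.
fs, delay = 16000, 25
rng = np.random.default_rng(4)
s = rng.normal(size=4096)
x1 = s
x2 = np.concatenate((np.zeros(delay), s[:-delay]))
tau = gcc_phat_tdoa(x1, x2, fs, max_tau=0.01)
print(f"estimated TDOA: {tau * fs:.0f} samples (true {delay})")
```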
Abstract:
Ground-penetrating radar (GPR) is a rapid geophysical technique that we have used to assess four illegal waste burial sites in Northern Ireland. GPR allowed informed positioning of the slower, if more accurate, electrical resistivity imaging (ERI) surveys. In conductive waste, GPR signal loss can be used to map the areal extent of the waste, allowing ERI survey lines to be positioned. In less conductive waste, the geometry of the burial can be ascertained from GPR alone, allowing rapid assessment. In both circumstances, the conjunctive use of GPR and ERI is considered best practice for cross-validating results and enhancing data interpretation.