988 results for problem-finding


Relevance: 20.00%

Abstract:

Summer diets of two sympatric raptors, the Upland Buzzard (Buteo hemilasius Temminck et Schlegel) and the Eurasian Eagle Owl (Bubo bubo L. subsp. hemachalana Hume), were studied in an alpine meadow (3250 m a.s.l.) on the Qinghai-Tibet Plateau, China. Root voles (Microtus oeconomus Pallas), plateau pikas (Ochotona curzoniae Hodgson), Gansu pikas (O. cansus Lyon), and plateau zokors (Myospalax baileyi Thomas) were the main diet components of Upland Buzzards, identified through pellet analysis with frequencies of 57, 20, 19, and 4%, respectively. The same four rodent species were also the main diet components of Eurasian Eagle Owls, based on analysis of pellets and prey remains, with frequencies of 53, 26, 13, and 5%, respectively. The food niche breadth indices of Upland Buzzards and Eurasian Eagle Owls were 1.60 and 1.77, respectively (a higher value indicates a broader food niche), and the diet overlap index of the two raptors was high (C = 0.90; the index ranges from 0, no overlap, to 1, complete overlap), indicating that the diets of the two raptors were similar (Two Related Samples Test, Z = -0.752, P = 0.452). Classical resource partitioning theory therefore cannot explain the coexistence of Upland Buzzards and Eurasian Eagle Owls in alpine meadows of the Qinghai-Tibet Plateau. However, differences in body size, predation mode, and activity rhythm between the two species may explain the coexistence of these sympatric raptors.
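
The abstract does not name the estimator behind the 0-to-1 overlap index; one standard diet overlap measure with exactly that range is Pianka's index, given here purely for reference (an assumption on our part, not a claim about the paper), where p_ij is the proportion of prey species i in the diet of raptor j:

\[ O_{jk} = \frac{\sum_i p_{ij}\, p_{ik}}{\sqrt{\left(\sum_i p_{ij}^2\right)\left(\sum_i p_{ik}^2\right)}} \]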

Relevance: 20.00%

Abstract:

Accurate localization of measurement data is key to machining inspection of complex curved surfaces. To address difficulties in computing closest points when matching measured point-cloud data to a NURBS-represented CAD free-form surface model, a simple and effective closest-point search method is proposed. In contrast to the traditional approach of evaluating the closest point on a given surface from the measured point set, the method uses a point set surface (PSS) projection algorithm: a coarse match between a finite set of points on the given free-form surface model and the scattered point set (carrying no additional geometric or topological information) yields an initial position, and the iterative closest point (ICP) algorithm then performs the fine adjustment of the measurement-data localization, achieving both global and local optima. Experimental results show that the PSS projection algorithm not only finds closest points efficiently but also produces a globally matched result, providing a good initial value for the fine matching; this reduces the number of iterations and the execution time of the subsequent ICP matching and considerably improves its accuracy.
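
A minimal sketch of the coarse-then-fine idea described above, assuming numpy/scipy: the PSS projection step is simplified here to nearest-neighbor correspondences against sampled model points, and a standard SVD-based ICP performs the fine registration. The function names and simplifications are ours, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~ Q[i]
    (Kabsch / SVD method, 3-D points assumed)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, max_iter=50, tol=1e-8):
    """Fine registration of `source` onto `target` by iterative closest point."""
    tree = cKDTree(target)            # closest-point queries on the model side
    src, prev_err = source.copy(), np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)   # nearest model point per measured point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = float((dist ** 2).mean())
        if abs(prev_err - err) < tol: # converged: error no longer improving
            break
        prev_err = err
    return src, err
```

A good coarse initial position matters because plain ICP of this kind only finds the local optimum nearest its starting pose, which is exactly the gap the paper's PSS projection step is meant to fill.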

Relevance: 20.00%

Abstract:

PetroChina and other national petroleum corporations need rigorous procedures and practical methods for risk evaluation and exploration decision-making at home and abroad, to support their international exploration practice in licence bidding, in finding an appropriate ratio of risk sharing with partners, and in avoiding high-risk projects and other key exploration activities. For historical reasons, however, systematic study and methodology development in exploration risk evaluation and decision-making are only beginning in China, and no rigorous procedures or practical methods are available for our international exploration work. Wholesale adoption of foreign procedures, methods, and tools by Chinese national corporations is impractical because of differences in the current economic and management systems in China. The objective of this study is to establish a risk evaluation and decision system for oil and gas exploration, with independent intellectual property rights, enabling a smooth transition from current Chinese practice to international norms. The system developed in this dissertation has four components:

1. A set of quantitative criteria for risk evaluation, derived from an analysis of parameters from thirty calibration regions nationwide together with the characteristics and geological factors controlling oil and gas occurrence in the major petroleum-bearing basins of China. These criteria provide the technical support for quantifying exploration risk.

2. A procedure and methods for exploration risk evaluation that incorporate spatial information. The method integrates data and information using Mahalanobis distance (MD) and fuzzy logic, builds probabilistic models from MD- and fuzzy-logic-based classification criteria, and quantifies exploration risk using Bayesian theory; projecting the geological risk into the spatial domain yields a probability map of oil and gas occurrence for the study area. Applied to the Nanpu Sag, the method not only correctly predicted oil and gas occurrence in the areas of the Beibu and Laoyemiao oil fields in the northwestern onshore area, but also identified the Laopu South, Nanpu South, and Hatuo potential areas in the offshore part, where exploration maturity was very low. These predictions were subsequently confirmed by 17 exploration wells in the offshore area with an 81% success rate, indicating that the method is very effective for visualizing and reducing exploration risk.

3. A "pyramid" method for sensitivity analysis, built on "Methods and parameters of economic evaluation for petroleum exploration and development projects in China", which meets the needs of both domestic target evaluation and exploration decision-making and the transition to international norms. This forms the foundation of the software product "Exploration economic evaluation and decision system of PetroChina" (EDSys).

4. A project portfolio management approach to exploration decision-making, including a drilling decision method based on the concept of geologically risked net present value. This method resolves the difficulty of handling geological risk and portfolio uncertainty simultaneously, clarifying how modern portfolio theory can be applied to the evaluation of high-risk petroleum exploration projects.
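
A minimal sketch of the MD-plus-Bayes scoring step in component 2, under our own simplifying assumptions (two calibration classes, equal priors, a Gaussian-style likelihood derived from the squared Mahalanobis distance; the dissertation's fuzzy-logic integration is omitted):

```python
import numpy as np

def posterior_success(x, producing, barren):
    """Hypothetical scoring step: squared Mahalanobis distance (MD) from a
    prospect's attribute vector x to two calibration classes, turned into a
    posterior probability of success by Bayes' rule with equal priors.
    producing / barren: (n_i, d) attribute matrices of calibration areas."""
    def md2(x, X):
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d = x - mu
        return float(d @ cov_inv @ d)
    # exp(-MD^2 / 2) acts as an unnormalized Gaussian-style likelihood
    like_prod = np.exp(-0.5 * md2(x, producing))
    like_barren = np.exp(-0.5 * md2(x, barren))
    return like_prod / (like_prod + like_barren)
```

Evaluating this score on a grid of map cells would give a probability map of the kind the dissertation projects into the spatial domain.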

Relevance: 20.00%

Abstract:

Characterization of the platinum group elements (PGE) has applications in the earth, space, and environmental sciences. All of these applications, however, rest on a basic prerequisite: that PGE concentrations or ratios in the materials under study can be determined accurately and precisely. In practice this is a great challenge for analytical chemistry, because PGE contents in (non-mineralized) geological samples are often extremely low, ranging from ppt (10^-12 g/g) to ppb (10^-9 g/g), and their distribution is highly heterogeneous, often concentrated in a single particle or phase. Accurate determination of these elements therefore remains an open analytical problem and obstructs research on PGE geochemistry. Much effort in the scientific community has gone into the reliable determination of very low PGE abundances, focused on reducing reagent blanks and on overcoming the heterogeneity of PGE in samples. The fire-assay method is undoubtedly one of the best ways to address heterogeneity, since a large sample weight (10-50 g) can be used. This thesis is mainly aimed at developing methodology for the separation, preconcentration, and determination of ultra-trace PGE in rock and peat samples, applied to the PGE record of the ophiolite suite at Kudi, West Kunlun, and of the 1908 Tunguska explosion. The achievements of the study are summarized as follows:

1. A PGE laboratory was established in the Laboratory of Lithosphere Tectonic Evolution, IGG, CAS.

2. A modified method for determining PGE in geological samples by NiS fire assay with inductively coupled plasma mass spectrometry (ICP-MS) was set up, with the following technical improvements: (1) reagent blanks were investigated, showing Au, Pt, and Pd contents of 30, 0.6, and 0.6 ng/g, respectively, in carbonyl nickel powder, and 0.35, 7.5, and 6.4 ng, respectively, in the other flux, while Ru, Rh, and Os contents in all reagents used were very low (below or near the ICP-MS detection limits); (2) PGE recoveries with different collectors (Ni+S) were measured, showing that 1.5 g of carbonyl nickel effectively recovers the PGE from 15 g samples (recoveries above 90%), reducing the inherent blank from reagent impurities; (3) the nickel button was dissolved directly in a Teflon bomb with Te coprecipitation, reducing PGE losses during preconcentration and improving recoveries (above 60% for Os and 93.6-106.3% for the other PGE, using 2 g of carbonyl nickel); (4) the procedure for analyzing osmium was simplified; (5) method detection limits were 8.6, 4.8, 43, 2.4, and 82 pg/g for a 15 g sample for Ru, Rh, Pd, Ir, and Pt, respectively.

3. An analytical method was established for determining ultra-trace PGE in peat samples, with method detection limits of 0.06, 0.1, 0.001, 0.001, and 0.002 ng/mL for Ru, Rh, Pd, Ir, and Pt, respectively.

4. Using this method, distinct Pd and Os anomalies were found for the first time in peat sampled near the Tunguska explosion site.

5. Applying the method to the question of the origin of the Tunguska explosion leads to the following conclusions: (1) the excess elements most likely derive from the 1908 explosion of the Tunguska Cosmic Body (TCB); (2) the explosive body was composed of (solid) material similar to C1 chondrite and was most probably a cometary object, weighing more than 10^7 tons with a radius of more than 126 m.

6. The method for ultra-trace PGE in rock samples was successfully applied to the PGE characteristics of the Kudi ophiolite suite, with the following conclusions: (1) differences in the mantle-normalized PGE patterns of the Kudi dunite, harzburgite, and lherzolite indicate that they are residues of multi-stage partial melting of the mantle, and their similar degrees of Ir depletion probably indicate an Ir-depleted upper mantle; (2) as the magma produced by partial melting of the mantle evolved, strong differentiation developed between IPGE and PPGE, becoming increasingly distinct from pyroxenite to basalt; (3) the magma that formed the Kudi ophiolite probably underwent an S-saturation process.

Relevance: 20.00%

Abstract:

The Ordos Basin is a typical cratonic petroliferous basin with 40 oil-gas-bearing bed sets, featuring stable multicycle sedimentation, gentle formations, and few structures. The reservoir beds in the Upper Paleozoic and Mesozoic are characterized mainly by low density, low permeability, strong lateral variation, and strong vertical heterogeneity. The basin is covered by the well-known Loess Plateau in the south and by the Maowusu Desert, Kubuqi Desert, and Ordos grasslands in the north, so seismic acquisition is very difficult and the data often suffer from inadequate precision, strong interference, low signal-to-noise ratio, and low resolution. Because of these complicated surface and subsurface conditions, it is very difficult to distinguish thin beds and to study high-resolution continental lithologic sequence stratigraphy from routine seismic profiles. A method with clear physical significance, based on sound mathematical-physical theory and algorithms, is therefore needed to improve the precision with which thin sand-mudstone interbed configurations of continental facies can be detected.

The Generalized S Transform (GST) provides a new phase-space analysis method for seismic data. Like the wavelet transform, it has very good localization characteristics; however, being directly related to the Fourier spectrum, the GST has clearer physical significance. Moreover, the GST adopts a technique that best approximates seismic wavelets and transforms the seismic data into the time-scale domain, breaking through the fixed-wavelet limitation of the S transform, so it is widely adaptable. Tracing the development of ideas and theory from the wavelet transform through the S transform to the GST, we studied how to improve the precision of thin-stratum detection with the GST.

Noise strongly influences sequence detection in the GST domain, especially in low signal-to-noise data. We studied the distribution of colored noise in the GST domain and proposed a technique for distinguishing signal from noise there, considering two noise types, white and red, where the noise satisfies a statistical autoregressive model. The GST-domain signal-noise detection technique gave good results for both models, demonstrating that it can be applied to real seismic data and can effectively suppress the influence of noise on seismic sequence detection.

On seismic profiles after GST processing, zones of concentrated high-amplitude energy, together with slab-like (schollen), strip-shaped, and lenticular dead zones and disordered zones, can carry specific geological meanings in a given geological context. Using seismic sequence detection profiles combined with other seismic interpretation technologies, we can depict palaeo-geomorphology in detail, effectively estimate sand-body extent, distinguish sedimentary facies, determine target areas, and directly guide oil-gas exploration.

In lateral reservoir prediction in the XF oilfield of the Ordos Basin, the study of the Triassic palaeo-geomorphology and the subdivision of the internal sequences of the stratum group played a very important role in estimating sand-body extent. From the high-resolution seismic profiles after GST processing, we concluded that the C8 Member of the Yanchang Formation in the DZ area and the C8 Member in the BM area belong to the same deposit, providing the foundation for booking 430 million tons of predicted reserves and for building a 3-million-ton production capacity.

In tackling key problems for the SLG gas field, the high-resolution seismic sequence profiles showed that the depositional direction of the H8 Member is approximately N-S or NNE-SSW. Using seismic sequence profiles combined with flattened-horizon profiles, we interpreted the geometry of the entrenched streams: sunken lenticles indicate high-energy stream channels with stronger hydrodynamics. In this way we outlined three high-energy stream channels and determined the target areas for exploitation; finding high-energy braided rivers by high-resolution sequence processing is the key technology in the SLG area.

In the ZZ area, we used GST processing to study the distribution of the main reservoir bed, S23, a thin shallow-delta sand bed. The seismic sequence profiles revealed that the thick slab-like sand bodies are only locally distributed and that most are distributary-channel sands and distributary-bar deposits. We then determined that the S23 sand depositional direction is NW-SE in the west, N-S in the center, and NE-SW in the east. The high-resolution seismic sequence interpretation profiles were tested against 14 wells; 2 wells mismatched, giving a coincidence rate of 85.7%. Based on the profiles, we proposed 3 prediction wells: one (Yu54) has been completed and the other two are still drilling; the completed well is consistent with the forecast.

This work demonstrates that the GST is an effective technology for obtaining high-resolution seismic sequence profiles, subdividing depositional microfacies, confirming sandstone strike directions, and delineating the distribution of oil-gas-bearing sandstone, and that it is the key technique for exploring lithologic oil-gas pools in complicated areas.
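
For reference, a minimal numpy sketch of the standard (non-generalized) discrete S transform that the GST extends; the generalized version's wavelet-approximation step is not reproduced here, so this is orientation only, not the authors' method:

```python
import numpy as np

def stockwell(x):
    """Discrete S transform (Stockwell et al.) of a 1-D signal.
    Rows are frequencies 0..n//2, columns are time samples."""
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.fft(x)
    m = np.fft.fftfreq(n) * n          # integer frequency indices, fft order
    S = np.empty((n // 2 + 1, n), dtype=complex)
    S[0] = x.mean()                    # zero-frequency row is the signal mean
    for k in range(1, n // 2 + 1):
        # Gaussian localizing window whose width scales with frequency k,
        # applied to the spectrum shifted by k: IFFT{ X[m + k] * G(m, k) }
        gauss = np.exp(-2.0 * (np.pi * m / k) ** 2)
        S[k] = np.fft.ifft(np.roll(X, -k) * gauss)
    return S
```

Because each row keeps an absolute (Fourier-referenced) phase while the window narrows in time at high frequency, |S| profiles of a seismic trace localize thin-bed responses in a way a fixed-window spectrogram cannot, which is the property the dissertation exploits.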

Relevance: 20.00%

Abstract:

The Zeigarnik effect refers to enhanced memory for unfinished tasks, and studies of insight using hemi-visual-field presentation have also found that, after a failure to solve a problem, hints to the problem are processed more effectively and lead to an insight experience when presented to the left visual field (right hemisphere) rather than to the right visual field (left hemisphere), especially when the hints appear after a delay. It therefore seems that the right hemisphere may play an important role in preserving information about unsolved problems and in processing related cues. To examine this finding further, we introduced a Chinese character chunking task to investigate brain activity during the stage of failing to solve problems and during hint presentation, using event-related potentials (ERP) and functional MRI. Our fMRI results showed that bilateral BA10 was more strongly activated when participants saw hints to unsolved problems, which we propose reflects the processing of information about failed problems; however, there was no hemispheric difference. The ERP results after the attempt at the problems showed that unsolved problems elicited a more positive P150 over the right frontal cortex, whereas solved problems showed a left-hemisphere advantage in P150. When hints were presented, P2 amplitudes were modulated by problem status only in the right hemisphere, not in the left. Our results confirm the hypothesis that failure to solve problems triggers perseverance processes in the right hemisphere, making it more sensitive to information related to the failed problems.

Relevance: 20.00%

Abstract:

We consider the problem of matching model features to sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model-feature and data-feature pairs that are geometrically consistent, given that there is uncertainty in the geometry of the sensory data features. With no geometric uncertainty, polynomial-time algorithms for feature matching are possible, yet these approaches can fail when the geometry of the data features is uncertain. Existing matching and recognition techniques that account for geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs only with worst-case complexity exponential in the number of features. The major new contribution of this work is a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem remains of polynomial complexity. This has important theoretical implications, by establishing an upper bound on the complexity of the matching problem and by offering insight into the nature of the matching problem itself. These insights also prove useful for matching in higher-dimensional cases, such as matching three-dimensional models to either two- or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method, describes the implementation of an algorithm for the procedure, and reports experiments demonstrating the method.
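
A toy illustration of the feasible-transformation-space idea, restricted for simplicity to pure 2-D translation (the paper itself treats richer transformations and a specific uncertainty model): each model-data pairing constrains the unknown translation to a disk of radius eps, two pairings are jointly feasible only if their disks intersect, and the pairwise tests are plainly polynomial in the number of features.

```python
import numpy as np
from itertools import combinations

def consistent_pairs(model, data, eps):
    """Toy feasible-transform analysis for pure 2-D translation.
    Pairing (i, j) says 'model point i maps to data point j', constraining
    the unknown translation to a disk of radius eps around data[j] - model[i].
    Two pairings can hold simultaneously only if their disks intersect,
    i.e. their candidate translations differ by at most 2 * eps."""
    pairings = [(i, j, data[j] - model[i])
                for i in range(len(model)) for j in range(len(data))]
    consistent = []
    for a, b in combinations(pairings, 2):
        if np.linalg.norm(a[2] - b[2]) <= 2 * eps:     # disks overlap
            consistent.append(((a[0], a[1]), (b[0], b[1])))
    return consistent   # O((m*n)^2) pairwise tests: polynomial in features
```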

Relevance: 20.00%

Abstract:

Many current recognition systems use constrained search to locate objects in cluttered environments. Previous formal analysis has shown that the expected amount of search is quadratic in the number of model and data features if all the data is known to come from a single object, but exponential when spurious data is included. If the data can be grouped into subsets likely to have come from a single object, then terminating the search once a "good enough" interpretation is found reduces the expected search to cubic; without successful grouping, terminated search is still exponential. These results apply to finding instances of a known object in the data. In this paper, we turn to the problem of selecting models from a library, and examine the combinatorics of determining that a candidate object is not present in the data. We show that the expected search is again exponential, implying that naïve approaches to indexing are likely to carry an expensive overhead, since an exponential amount of work is needed to weed out each incorrect model. The analytic results are shown to agree with empirical data for cluttered object recognition.

Relevance: 20.00%

Abstract:

This paper addresses the problem of nonlinear multivariate root finding. In an earlier paper we described a system called Newton which finds roots of systems of nonlinear equations using refinements of interval methods, inspired by AI constraint-propagation techniques. Newton is competitive with continuation methods on most benchmarks and can handle a variety of cases that are infeasible for continuation methods. This paper presents three "cuts" which we believe capture the essential theoretical ideas behind the success of Newton, described in a concise and abstract manner that, we believe, makes the theoretical content of our work more apparent. Any implementation will need to adopt some heuristic control mechanism; heuristic control of the cuts is only briefly discussed here.
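
For orientation, a toy one-dimensional interval-Newton contraction of the kind such systems refine; the paper's actual cuts and multivariate machinery are not reproduced, and outward rounding (needed for rigor) is ignored, so this is illustrative only:

```python
def interval_newton(f, dF, lo, hi, tol=1e-9, out=None):
    """Toy 1-D interval Newton: contract [lo, hi] toward the roots of f.
    dF(lo, hi) must return (a, b), an enclosure of f' over [lo, hi].
    Returns a list of narrow intervals that may contain roots."""
    if out is None:
        out = []
    if hi - lo < tol:
        out.append((lo, hi))
        return out
    a, b = dF(lo, hi)
    m = 0.5 * (lo + hi)
    if a <= 0.0 <= b:                    # derivative may vanish: split the box
        interval_newton(f, dF, lo, m, tol, out)
        interval_newton(f, dF, m, hi, tol, out)
        return out
    q = (f(m) / a, f(m) / b)             # f(m) / F'(X), endpoint division
    nlo = max(lo, m - max(q))            # N(X) = m - f(m) / F'(X),
    nhi = min(hi, m - min(q))            # intersected with X
    if nlo > nhi:                        # empty intersection: no root here
        return out
    if nhi - nlo > 0.9 * (hi - lo):      # too little progress: bisect instead
        interval_newton(f, dF, nlo, m, tol, out)
        interval_newton(f, dF, m, nhi, tol, out)
        return out
    return interval_newton(f, dF, nlo, nhi, tol, out)

# roots of x^2 - 2 on [1, 2]; f' over [lo, hi] is enclosed by (2*lo, 2*hi)
print(interval_newton(lambda x: x * x - 2, lambda lo, hi: (2 * lo, 2 * hi), 1.0, 2.0))
```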

Relevance: 20.00%

Abstract:

Passive monitoring of large sites typically requires coordination between multiple cameras, which in turn requires methods for automatically relating events between distributed cameras. This paper tackles the problem of self-calibration of multiple cameras that are very far apart, using feature correspondences to determine the camera geometry; the key problem is finding such correspondences. Since the camera geometry and photometric characteristics vary considerably between images, one cannot use brightness and/or proximity constraints. Instead we apply planar geometric constraints to moving objects in the scene in order to align the scene's ground plane across multiple views. We do not assume synchronized cameras, and we show that enforcing the geometric constraints enables us to align the tracking data in time. Once we have recovered the homography which aligns the planar structure in the scene, we can compute from the homography matrix the 3D position of the plane and the relative camera positions, which in turn enables us to recover a homography that maps the images to an overhead view. We demonstrate this technique in two settings: a controlled lab setting, where we test the effects of errors in internal camera calibration, and an uncontrolled outdoor setting, in which the full procedure is applied to external camera calibration and ground-plane recovery. In spite of noise in the internal camera parameters and image data, the system successfully recovers both planar structure and relative camera positions in both settings.
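
As background for the homography-recovery step, a minimal DLT (direct linear transform) estimate of a plane-to-plane homography from point correspondences, assuming numpy; the paper's time alignment and its decomposition into plane pose and relative camera positions are not shown:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography H with dst ~ H @ src (in
    homogeneous coordinates), from n >= 4 correspondences ((n, 2) arrays).
    Solved as the null vector of the standard 2n x 9 DLT system via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # right singular vector of smallest value
    return H / H[2, 2]              # fix the projective scale
```

In practice one would normalize the points (Hartley's conditioning) and wrap this in a robust estimator such as RANSAC, since correspondences from tracked moving objects are noisy.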

Relevance: 20.00%

Abstract:

We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces both the computational and the sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. The features selected jointly are closer to edges and to generic features typical of many natural structures, rather than to specific object parts. Such generic features generalize better and considerably reduce the computational cost of multi-class object detection.
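
A simplified sketch of one round of shared-feature selection in the spirit of joint boosting, assuming numpy; for brevity every class shares the chosen stump here, whereas the actual procedure also searches greedily over subsets of classes and shares the regression parameters within a subset:

```python
import numpy as np

def joint_stump(X, Y, W):
    """One toy round of joint boosting: pick the single feature/threshold
    shared by ALL classes that minimizes total weighted squared error,
    fitting each class c its own regression stump
        h_c(x) = a_c * [x_f > t] + b_c.
    X: (n, d) features; Y: (n, C) labels in {-1, +1}; W: (n, C) weights."""
    best_err, best = np.inf, None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            m = (X[:, f] > t)[:, None]                          # (n, 1) split
            # weighted least-squares stump: per-class means on each side
            hi = (W * Y * m).sum(0) / np.maximum((W * m).sum(0), 1e-12)
            lo = (W * Y * ~m).sum(0) / np.maximum((W * ~m).sum(0), 1e-12)
            h = np.where(m, hi, lo)                             # (n, C) output
            err = (W * (Y - h) ** 2).sum()
            if err < best_err:
                best_err, best = err, (f, t, hi - lo, lo)       # a_c, b_c
    return best
```

Because one feature evaluation serves every class in the sharing set, the per-image feature count (and hence run-time cost) grows far more slowly than with independently trained detectors, which is the effect the abstract quantifies as approximately logarithmic scaling.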

Relevance: 20.00%

Abstract:

This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain for learning: environments with manifest causal structure. In such environments the agent has an abundance of perceptions of its environment; specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure, and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low-level rule-learning algorithm that converges on a good set of specific rules, (2) a concept-learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition, this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate, but each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented, and an upper bound on the expected error rate of the expert it finds is derived.
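
A hypothetical sketch of the expert-finding setting, not the thesis's algorithm or bound: each expert in the sequence is tested on enough trials that, by Hoeffding's inequality, its empirical error rate is close to its true rate, and the first expert that passes is kept. The `experts` interface here is our own invention for illustration.

```python
import math

def find_good_expert(experts, eps=0.05, delta=0.01):
    """Test experts one at a time; by Hoeffding's inequality, n trials with
    n >= ln(2/delta) / (2*eps^2) put the empirical error within eps of the
    true error with probability at least 1 - delta. Return the first expert
    whose empirical error rate is below eps.
    `experts` is an iterable of callables trial_index -> bool (correct?)."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    for expert in experts:
        errors = sum(1 for t in range(n) if not expert(t))
        if errors / n < eps:     # empirically good enough: keep this expert
            return expert
    return None                  # no expert in the sequence passed the test
```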

Relevance: 20.00%

Abstract:

This report presents a system for generating a stable, feasible, and reachable grasp of a polyhedral object. A set of contact points on the object is found that can result in a stable grasp; a feasible grasp is found in which the robot contacts the object at those contact points; and a path is constructed from the initial configuration of the robot to the stable, feasible final grasp configuration. The algorithm described in the report is designed for the Salisbury hand mounted on a Puma 560 arm, but a similar approach could be used to develop grasping systems for other robots.

Relevance: 20.00%

Abstract:

This thesis addresses the problem of recognizing solid objects in the three-dimensional world, using two-dimensional shape information extracted from a single image. Objects can be partly occluded and can occur in cluttered scenes. A model-based approach is taken, in which stored models are matched to an image. The matching problem is separated into two stages, which employ different representations of objects. The first stage uses the smallest possible number of local features to find transformations from a model to an image, minimizing the amount of search required in recognition. The second stage uses the entire edge contour of an object to verify each transformation, reducing the chance of finding false matches.

Relevance: 20.00%

Abstract:

The STUDENT problem-solving system, programmed in LISP, accepts as input a comfortable but restricted subset of English which can express a wide variety of algebra story problems; STUDENT finds the solution to a large class of these problems. STUDENT can utilize a store of global information not specific to any one problem, and may make assumptions about the interpretation of ambiguities in the wording of the problem being solved; if it uses such information or makes any assumptions, STUDENT communicates this fact to the user. The thesis includes a summary of other English-language question-answering systems, and all these systems, together with STUDENT, are evaluated according to four standard criteria. The linguistic analysis in STUDENT is a first approximation to the analytic portion of a semantic theory of discourse outlined in the thesis: STUDENT finds the set of kernel sentences which form the base of the input discourse, and transforms this sequence of kernel sentences into a set of simultaneous equations which form the semantic base of the STUDENT system. STUDENT then tries to solve this set of equations for the values of the requested unknowns. If it is successful, it gives the answers in English; if not, STUDENT asks the user for more information, indicating the nature of the desired information. The STUDENT system is a first step toward natural-language communication with computers, and further work on the proposed semantic theory should result in much more sophisticated systems.
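
To make the kernel-sentences-to-equations step concrete, here is a small modern sketch of the equation-solving back end using sympy (the English parsing front end, and STUDENT's original LISP setting, are omitted); the story problem is the classic customers-and-advertisements example associated with STUDENT:

```python
# Sketch only: sympy stands in for STUDENT's own equation solver.
from sympy import Eq, Rational, solve, symbols

# "The number of customers Tom gets is twice the square of 20 percent of
# the number of advertisements he runs, and the number of advertisements
# he runs is 45. What is the number of customers Tom gets?"
customers, ads = symbols("customers ads")
equations = [
    Eq(customers, 2 * (Rational(20, 100) * ads) ** 2),  # kernel sentence 1
    Eq(ads, 45),                                        # kernel sentence 2
]
print(solve(equations, [customers, ads]))  # -> customers = 162, ads = 45
```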