833 results for attribute grammars
Abstract:
With the deepening of oil and gas exploration and the sharp rise in costs, modern seismic techniques have progressed rapidly. Seismic inversion, a key technology in seismic exploration, extracts attributes from seismic reflection data, inverts for the underground distribution of wave impedance or velocity, estimates reservoir parameters, and supports reservoir prediction and description, providing reliable basic material for oil and gas exploration. Well-driven seismic inversion is essentially a joint seismic-logging inversion: the low- and high-frequency information comes from the well logs, while the structural characteristics and the middle frequency band come from the seismic data. Inversion results depend mainly on the quality of the raw data, the soundness of the processing flow, and the correlation between synthetic and seismic data. This paper investigates how the log-to-seismic correlation affects the precision of well-driven seismic inversion. The correlation is assessed through synthetics, through comparison of mid-frequency borehole impedance with relative seismic impedance, and through well-attribute crossplots. Analysis of three real work areas (Qikou Sag, Qiongdongnan Basin, and the Sulige gas field) verifies that the better the log-to-seismic correlation, the more reliable the seismic inversion result.
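The log-to-seismic tie described above is commonly quantified by correlating a synthetic trace (well-log reflectivity convolved with a wavelet) against the seismic trace at the well. The following is a minimal sketch of that idea, not the paper's code; the Ricker wavelet, the blocky impedance model, and all parameter values are illustrative assumptions.

```python
import numpy as np

def synthetic_trace(impedance, wavelet):
    """Reflectivity from a well-log impedance series, convolved with a wavelet."""
    refl = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
    return np.convolve(refl, wavelet, mode="same")

def log_to_seismic_correlation(synthetic, seismic):
    """Normalized zero-lag cross-correlation between synthetic and seismic traces."""
    s = (synthetic - synthetic.mean()) / synthetic.std()
    d = (seismic - seismic.mean()) / seismic.std()
    return float(np.dot(s, d) / len(s))

# Example with made-up data: a two-layer impedance model and a 30 Hz Ricker wavelet.
t = np.arange(-0.05, 0.05, 0.002)
f = 30.0  # dominant frequency in Hz (assumed)
ricker = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)
impedance = np.concatenate([np.full(100, 4500.0), np.full(100, 6000.0)])
syn = synthetic_trace(impedance, ricker)
noisy = syn + 0.2 * np.random.default_rng(0).standard_normal(syn.size)
print(f"correlation = {log_to_seismic_correlation(syn, noisy):.3f}")
```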
Abstract:
When used to determine the total electron content (TEC), perhaps the most important ionospheric parameter, worldwide GPS observation has brought a revolutionary change to ionospheric science. Retrieving GPS TEC involves three data-processing steps: (1) estimating slant TEC from GPS signal measurements; (2) mapping the slant TEC to vertical; and (3) interpolating the vertical TEC onto grid points. This dissertation focuses on the second step, the theory and method of mapping slant TEC to vertical. This is conventionally done by multiplying the slant TEC by a mapping function, which is usually determined from some model of the electron density profile, so study of the vertical TEC mapping function is of significance for GPS TEC measurement. The paper first briefly reviews the three steps of the GPS TEC mapping process. We then compare vertical TEC mapping functions calculated from electron density profiles of ionospheric models with those retrieved from worldwide GPS TEC observations, and we perform statistical analysis on the observational mapping functions. The main work and results are as follows. 1. We calculated the vertical TEC mapping functions for two simple models of the ionospheric electron density profile, the single layer model (SLM) and the Chapman model, and discussed how the ionospheric height modulates the mapping function. For the SLM we examine the control exerted by the ionospheric altitude, i.e., the layer height hipp, and find that the mapping function decreases rapidly as hipp increases. For the Chapman model we also study the control exerted by the ionospheric altitude, indicated by the peak electron density height hmF2, and by the scale height H, which represents the thickness of the ionosphere; the mapping function decreases rapidly as hmF2 increases and also decreases as H increases. 2. We then estimate mapping functions from GPS observations and compare them with those calculated from the electron density models. We first propose a new method to estimate mapping functions from GPS TEC data. The method is used to retrieve observational mapping functions from both the slant TEC provided by the International GPS Service (IGS) and the vertical TEC provided by the JPL Global Ionospheric Maps (GIMs). Comparing the observational mapping functions with those calculated from the SLM and Chapman models, we find that the observational values are much smaller than the model values when the zenith angle is sufficiently large. We attribute this to the effect of the plasmasphere above about 1000 km. 3. We statistically analyze the observational mapping functions for 1999-2007 and reveal their climatological changes. The main results are: (1) the observational mapping functions clearly decrease with declining solar activity, as represented by the F10.7 index; (2) in their annual variation, a semiannual component is found at low latitudes and remarkable seasonal variations at mid and high latitudes; (3) diurnally, they are large in the daytime, small at night, and extremely small in the early morning before sunrise; and (4) they vary with latitude, being smaller at lower latitudes and larger at higher latitudes. All of these variations are explained by the existence of the plasmasphere, which changes more slowly with time and more rapidly with latitude than the ionosphere does. In summary, our study implies that the ionospheric height has a modulating effect on the vertical TEC mapping function. We first propose the concept of the 'observational mapping function' and provide a new method to calculate it. This is important for improving TEC mapping, and it may also make it possible to retrieve plasmaspheric information from GPS observations.
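For the single layer model, the standard thin-shell mapping function has the closed form M(z) = [1 - (R_E sin z / (R_E + hipp))^2]^(-1/2). The sketch below illustrates the abstract's finding that M decreases as hipp increases; the specific heights and zenith angle are illustrative, not values from the dissertation.

```python
import numpy as np

R_E = 6371.0  # mean Earth radius in km

def slm_mapping_function(zenith_deg, h_ipp_km):
    """SLM mapping function M(z) = 1/cos(z'), with sin(z') = R_E/(R_E+h) * sin(z)."""
    z = np.radians(zenith_deg)
    sin_zp = R_E / (R_E + h_ipp_km) * np.sin(z)
    return 1.0 / np.sqrt(1.0 - sin_zp**2)

# The abstract's claim: M decreases as the layer height h_ipp increases.
for h in (350.0, 450.0, 550.0):
    print(f"h_ipp = {h:4.0f} km -> M(z=70 deg) = {slm_mapping_function(70.0, h):.3f}")
# Vertical TEC is then recovered as TEC_v = TEC_s / M(z).
```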
Abstract:
Fractured oil and gas reservoirs are an important reservoir type, accounting for a growing share of current worldwide oil and gas production, so technologies targeted at the exploration of fractured reservoirs are drawing wide attention. Accurately predicting fracture orientation and intensity is difficult in oil and gas exploration. Focused on this problem, this paper systematically studies seismic data processing and P-wave attribute fracture detection on the ZX buried-hill structure, with good results. The paper first simulates P-wave propagation in weakly anisotropic media containing vertically aligned cracks, analyzes how P-wave attributes such as travel time, amplitude, and AVO gradient vary with observation azimuth, and quantitatively describes the sensitivity of these attributes to the anisotropy of the fractured medium. To study this sensitivity further, the paper also simulates P-wave propagation through anisotropic media of different types and intensities and summarizes how the attributes vary with azimuth in each case. These results provide reliable references for predicting the orientation, extent, and size of real, complicated fractured media from azimuthal P-wave attribute responses. A range of seismic data processing methods is used to preserve and recover the attributes applied to fracture detection, guaranteeing their accuracy and in turn improving the accuracy of fracture detection. In processing, the paper adopts a three-dimensional F-Kx-Ky cone filter to attenuate ground roll and multiples and thus raise the signal-to-noise ratio of the pre-stack data; comprehensively applies geometrical spreading compensation, surface-consistent amplitude compensation, and residual amplitude compensation to recover amplitudes; uses common-azimuth processing to preserve the azimuthal characteristics of P-wave attributes; and uses bent-ray, adaptive-aperture pre-stack time migration to obtain the best image in each azimuth. After comparing and analyzing a variety of attributes, the relative wave impedance (relative amplitude) attribute is selected to invert for fracture orientation, and the attenuation gradient and the frequency corresponding to 85% of the energy are selected to invert for fracture intensity, yielding the fracture distribution characteristics of the Lower Paleozoic and Precambrian in the ZX buried hill. The results agree well with the fault system characteristics and well information in the area.
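Azimuthal P-wave analysis of this kind typically fits a cos 2φ variation, A(φ) = a + b·cos 2(φ - φ0), reading fracture orientation from φ0 and relative intensity from b. The sketch below is a generic illustration of that fit, not the paper's workflow; the attribute values are synthetic.

```python
import numpy as np

def fit_azimuthal_attribute(azimuth_deg, attribute):
    """Least-squares fit of A(phi) = a + b*cos(2*(phi - phi0)).

    Returns (a, b, phi0_deg): background level, anisotropy magnitude,
    and the symmetry azimuth often read as fracture orientation.
    """
    phi = np.radians(azimuth_deg)
    G = np.column_stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)])
    a, c, s = np.linalg.lstsq(G, attribute, rcond=None)[0]
    b = np.hypot(c, s)
    phi0 = 0.5 * np.degrees(np.arctan2(s, c))
    return a, b, phi0

# Synthetic test: an attribute with 2-theta azimuthal periodicity plus noise.
az = np.arange(0.0, 180.0, 15.0)
true = 1.0 + 0.3 * np.cos(2 * np.radians(az - 60.0))
rng = np.random.default_rng(1)
a, b, phi0 = fit_azimuthal_attribute(az, true + 0.02 * rng.standard_normal(az.size))
print(f"background={a:.2f}, anisotropy={b:.2f}, orientation={phi0:.1f} deg")
```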
Abstract:
The seismic survey is the most effective geophysical method in oil and gas exploration and development. As a principal means of processing and interpreting seismic data, impedance inversion occupies a special position in seismic surveying, because the impedance parameter is the ligament connecting seismic data with well-logging and geological information and is essential for predicting reservoir properties and sand bodies. In practice, the result of traditional impedance inversion is not ideal: the mathematical inverse problem of impedance is ill-posed, so the result is unstable and non-unique, and regularization must be introduced. Most regularizations presented in the existing literature are simple ones premised on the image (or model) being globally smooth. A real geological model, however, is not only made of smooth regions but is also separated by distinct edges, and the edges are a very important attribute of the model; it is difficult to preserve these characteristics and avoid smoothing the edges into obscurity. In this paper we therefore propose an impedance inversion method controlled by hyperparameters with edge-preserving regularization, which improves both convergence speed and inversion results. To preserve edges, the potential function of the regularization should satisfy nine conditions, including basic assumptions, edge preservation, and convergence assumptions; a model with a clean background and edge anomalies can then be recovered. Several potential functions and their corresponding weight functions are presented; model calculations show that the potential functions φL, φHL, and φGM meet the required inversion precision. For locally constant, planar, and quadric models we present the corresponding Markov random field neighborhood systems for the regularization term. We linearize the nonlinear regularization by half-quadratic regularization, which not only preserves edges but also simplifies the inversion and admits linear methods. Two regularization parameters (hyperparameters), λ2 and δ, are introduced in the regularization term: λ2 balances the influence of the data term against the prior term, while δ is a calibrating parameter that adjusts the gradient value at discontinuities (formation interfaces). The choice of initial hyperparameter values, and how the hyperparameters are changed during inversion, influences convergence speed and inversion quality. We give rough initial values using a trend curve of φ-(λ2, δ) and a method for calculating upper limits on the hyperparameters, and we update the hyperparameters during inversion either by a fixed coefficient or by the maximum likelihood method. For the inversion itself we use the Fast Simulated Annealing (FSA) algorithm, which escapes local extrema without depending on the initial value and attains a globally optimal result. We also expound in detail the convergence condition of FSA, the Metropolis acceptance probability from Metropolis-Hastings, the thermal process based on Gibbs sampling, and other methods integrated with FSA; these help in understanding and improving FSA. Calculations on a theoretical model and application to field data prove that the impedance inversion method of this paper offers high precision, practicability, and obvious effect.
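The half-quadratic trick the abstract mentions replaces the nonlinear potential with a quadratic term plus an auxiliary weight w(t) = φ'(t)/(2t), so the inversion can reuse linear solvers while the weight shuts off smoothing across edges. Below is a minimal sketch under the common identification of φHL and φGM with the Hebert-Leahy and Geman-McClure potentials (our assumption, not stated in the abstract); the δ value and jump sizes are illustrative.

```python
import numpy as np

# Half-quadratic weights w(t) = phi'(t) / (2t) for two classic
# edge-preserving potentials.
def weight_HL(t):   # phi(t) = log(1 + t^2)     ->  w(t) = 1 / (1 + t^2)
    return 1.0 / (1.0 + t**2)

def weight_GM(t):   # phi(t) = t^2 / (1 + t^2)  ->  w(t) = 1 / (1 + t^2)^2
    return 1.0 / (1.0 + t**2) ** 2

# Gradients are scaled by the calibrating hyperparameter delta, as in the
# abstract; delta and the jump sizes below are illustrative values.
delta = 50.0
impedance_jumps = np.array([5.0, 50.0, 500.0])   # small, medium, large
t = impedance_jumps / delta
# Small gradients keep weight ~1 (smoothed); large gradients (edges) get
# weight -> 0, so edges survive the regularization instead of being blurred.
print("HL weights:", np.round(weight_HL(t), 4))
print("GM weights:", np.round(weight_GM(t), 4))
```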
Abstract:
Stochastic reservoir modeling is a technique used in reservoir description. Through it, multiple data sources of different scales can be integrated into the reservoir model, and the model's uncertainty can be conveyed to researchers and supervisors. With its digital models, changeable scales, honoring of known information and data, and conveyance of uncertainty, stochastic reservoir modeling provides a mathematical framework, or platform, on which researchers can integrate multi-scale data sources and information into their prediction models. As a relatively new method, it is on the upswing. Building on related work, this paper, starting from the Markov property in reservoirs, shows how to construct spatial models for categorical and continuous variables using Markov random fields. To explore reservoir properties, researchers must study the properties of the rocks within reservoirs; apart from laboratory methods, geophysical measurements and their interpretations are the main sources of the information and data used in petroleum exploration and production. Building a model for flow simulation from incomplete information amounts to predicting the spatial distributions of the various reservoir variables. Considering data sources, digital extent, and methods, reservoir modeling can be classified into four sorts: methods based on reservoir sedimentology, reservoir seismic prediction, kriging, and stochastic reservoir modeling. The application of Markov chain models to the modeling of sedimentary strata is introduced in the third part of the paper, which presents the concept of a Markov chain model, the n-step transition probability matrix, the stationary distribution, the estimation of the transition probability matrix, the testing of the Markov property, two means of organizing sections (based on equal intervals and on rock facies), the embedded Markov matrix, the semi-Markov chain model, and the hidden Markov chain model. Based on the 1-D Markov chain model, the conditional 1-D Markov chain model is discussed in the fourth part; by extending the 1-D model to 2-D and 3-D, conditional 2-D and 3-D Markov chain models are presented. This part also discusses the estimation of vertical and lateral transition probabilities and the initialization of the top boundary, with digital models used to illustrate or test the discussion. The fifth part, building on the fourth and on the application of Markov random fields (MRFs) in image analysis, discusses an MRF-based method for simulating the spatial distribution of categorical reservoir variables, presenting the probability of a particular configuration of categorical variables, the definition of the energy function for a categorical-variable field as a Markov random field, the Strauss model, and the estimation of the components of the energy function, again with digital models for illustration and testing. For simulating the spatial distribution of continuous reservoir variables, the sixth part explores two methods. The first is a pure GMRF-based method, covering the GMRF model and its neighborhood, parameter estimation, and MCMC iteration, with a digital example. The second is a two-stage model method: based on the results of the categorical-variable simulation, and taking a GMRF as the prior distribution for the continuous variables together with the relationship between categorical variables such as rock facies and continuous variables such as porosity, permeability, and fluid saturation, it produces a series of stochastic images of the spatial distribution of the continuous variables. Integrating multiple data sources into the reservoir model is one of the merits of stochastic reservoir modeling; after discussing how to model the spatial distributions of categorical and continuous reservoir variables, the paper explores how to combine conceptual depositional models, well logs, cores, seismic attributes, and production history.
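The transition-probability machinery in the third and fourth parts reduces, in its simplest 1-D form, to counting facies transitions along a well and row-normalizing. A minimal sketch, with made-up facies labels and data:

```python
import numpy as np

def estimate_transition_matrix(facies_sequence, n_states):
    """Count upward facies transitions along a well and row-normalize.
    This is the basic estimator of the 1-D transition probability matrix."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(facies_sequence[:-1], facies_sequence[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def simulate_chain(P, start, n, rng):
    """Draw a facies column from the estimated chain."""
    states = [start]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

# Illustrative 3-facies log (0=shale, 1=sand, 2=carbonate -- assumed labels).
log = [0, 0, 1, 1, 1, 0, 2, 2, 0, 1, 1, 0, 0, 2, 0, 1]
P = estimate_transition_matrix(log, 3)
print(np.round(P, 2))
print(simulate_chain(P, start=0, n=10, rng=np.random.default_rng(0)))
```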
Abstract:
Population research is a frontier area of domestic and international concern, especially research on spatial visualization and the design of geo-visualization systems, which provides a sound basis for understanding and analyzing regional differences in population distribution and their spatial rules. With the development of GIS, the theory of geo-visualization plays an increasingly important role in many research fields, especially in the visualization of population information, where big achievements have recently been made. Nevertheless, current research pays little attention to system design for the statistical-geo visualization of population information. This paper explores design theories and methodologies for a statistical-geo-visualization system for population information, focusing on the framework, methodologies, and techniques for system design and construction. The purpose of the research is to develop a platform for a population atlas by integrating the research group's existing proprietary statistical mapping software. As a modern tool, the system provides a spatial visual environment in which users can analyze the characteristics of population distribution and distinguish the interrelations of population components. First, the paper discusses the necessity of geo-visualization for population information and raises the key issues in statistical-geo visualization system design, based on an analysis of domestic and international trends. Second, the design of the geo-visualization system for population, including its structure, functionality, modules, and user interface, is studied on the basis of geo-visualization theory and technology. The proposed design is divided into three parts: a support layer, a technical layer, and a user layer. The support layer is the basic operation module and the main body of the system. The technical layer is the core of the system, supported by database and function modules: the database module mainly includes the integrated population database (spatial data, attribute data, and geographical feature information), the cartographic symbol library, the color library, and the statistical analysis models, while the function module consists of a thematic map maker component, a statistical graph maker component, a database management component, and a statistical analysis component. The user layer is an integrated platform providing the functions to design and implement a visual interface through which users can query, analyze, and manage the statistical data and the electronic map. On this basis, China's electronic population atlas was designed and developed by integrating the fifth national census data with 1:4,000,000-scale spatial data. The atlas illustrates the current development level of China's population in about 200 thematic maps across 10 map categories (environment, population distribution, sex and age, migration, nationality, family and marriage, birth, education, employment, and housing). As a scientific reference tool, the atlas has received high praise since its publication in early 2005. Finally, the paper makes a deep analysis of the sex ratio in China to show how the system's functions can be used to analyze a specific population problem and to mine the data. The analysis shows that: 1. the sex ratio has increased in many regions since the fourth census in 1990, except in the cities of the eastern region, and high sex ratios are concentrated in hilly and low mountain areas with high illiteracy and poverty rates; 2. the statistical-geo visualization system is a powerful tool for handling population information, capable of reflecting the regional differences and variations of China's population and indicating the interrelations of population with other environmental factors. Although the author attempts to put forward an integrated design framework for the statistical-geo visualization system, many problems remain to be solved as geo-visualization studies develop.
Abstract:
Nowadays many companies pursue branding strategies, because a strong brand provides confidence and reduces risk for its consumers. Whether a brand is based on tangible products or on services, it possesses the common attributes of its category as well as unique attributes of its own. A brand attribute is defined as a descriptive feature: an intrinsic characteristic, value, or benefit endowed by users of the product or service (Keller, 1993; Romaniuk, 2003). Research on multi-attribute brand models is one of the most studied areas of consumer psychology (Werbel, 1978), and attribute weight is one of its key pursuits. Marketing practitioners also pay much attention to attribute evaluations, because such evaluations are relevant to a company's competitiveness and to its promotion and new-product-development strategies (Green & Krieger, 1995). How, then, do brand attributes correlate with weight judgments? What characterizes the attribute-judgment reaction? And in particular, what characterizes the attribute weight judgment process of a consumer facing homogeneous brands? Enlightened by the lexical hypothesis from personality-trait research in psychology, this study chose search engine brands as its subject and adopted reaction time, a measure many researchers have introduced into multi-attribute decision making. Research on the independence of affect and cognition, and on the primacy of affect, suggests that brand attributes can be categorized into informative and affective ones; Park further differentiated representative and experiential attributes from functional ones, a classification that reflects the trend toward emotional branding and brand-consumer relationships. The research comprises three parts: a survey to collect attribute words, experiment one on affective primacy, and experiment two on the correlation between weight judgment and reaction. In experiment one we found: (1) affect words are not rated significantly differently from cognitive attributes, but affect words are responded to faster than cognitive ones; (2) subjects comprehend and respond differently to functional attribute words than to representative and experiential words. In experiment two we found: (1) a significant negative correlation between attribute weight judgment and reaction time; (2) affective attributes elicit faster reactions than cognitive ones; (3) the reaction time difference between functional and representative or experiential attributes is significant, but there is no difference between representative and experiential. In sum, we conclude: (1) in word comprehension and weight judgment we observed affective primacy, even when the affective stimulus was presented as meaningful words; (2) the negative correlation between weight judgment and reaction time suggests that the more important the attribute, the quicker the reaction; (3) the reaction time differences among functional, representative, and experiential attributes reflect the trend toward emotional branding.
Abstract:
The nature of the distinction between conscious and unconscious knowledge is a core issue in the implicit learning field, and the phenomenological experience associated with having knowledge is central to the conscious or unconscious status of that knowledge. Accordingly, Dienes and Scott (2005) measured the conscious or unconscious status of structural knowledge using subjective measures: believing that one is purely guessing when in fact one knows indicates unconscious knowledge, but unconscious structural knowledge can also be associated with feelings of intuition or familiarity. In this thesis we explored whether phenomenological feelings such as familiarity, associated with unconscious structural knowledge, could paradoxically be used to exert conscious control over the use of that knowledge, and whether people could acquire knowledge of repetition structures. We also investigated the neural correlates of the awareness of knowing, as measured phenomenologically. In study one, subjects were trained on two grammars and then asked to endorse strings from only one of them. Subjects also rated how familiar each string felt and reported whether or not they used familiarity to make their grammaticality judgment. We found that subjects could endorse the strings of just one grammar and ignore the strings of the other. Importantly, when subjects said they were using familiarity, the rated familiarity of test strings consistent with their chosen grammar was greater than that of strings from the other grammar. Familiarity, subjectively defined, is thus sensitive to intentions and can play a key role in strategic control. In study two, we manipulated the structural characteristics of strings and explored whether participants could learn repetition structures in the grammatical strings, again measuring phenomenology and also recording ERPs. Deviant letters in ungrammatical strings violating the repetition structure elicited the N2 component, which we took as an indication of knowledge, whether conscious or not. Strings attributed to conscious categories (rules and recollection) rather than to phenomenologies associated with unconscious structural knowledge (guessing, intuition, and familiarity) elicited the P300 component. The different waveforms provide evidence for the neural correlates of the different phenomenologies associated with knowledge of an artificial grammar.
Abstract:
Gender stereotypes have a great effect on an individual's cognition and behavior. Notably, stereotyped cognition about gender and science influences an individual's academic and career choices. In order to weaken the negative effect of the gender-science stereotype and facilitate girls' participation in science, this study examined the development of the implicit gender-science stereotype and its influencing factors, using the Implicit Association Test and questionnaires in a sample of secondary school students. The present work showed the following. First, there were no gender differences or gender predominance in math and physics performance during the secondary school years; however, girls tended to attribute success in math and physics to unstable factors and failure to stable factors, while the reverse was true of boys' attributions, a gender difference especially evident in physics. Second, 7th to 11th grade students implicitly regarded science as a male domain, with the exception of 7th grade boys, who thought boys and girls could both study science well. On the whole, the gender-science stereotype became more evident as the specialization of science subjects progressed through secondary school, and this inclination decreased with increasing grade. Third, the negative correlation between explicit and implicit stereotype that appeared in girls from 8th grade grew stronger with increasing grade and became significant in 10th grade; by contrast, a significantly positive correlation existed for boys in grades 7-11. Fourth, experience in science, including attitude toward science, science interests, and self-efficacy in math and physics, had a significantly negative effect on girls' implicit gender-science stereotype and a significantly positive effect on boys'. Gender thus moderated the effect of science-learning experience on the implicit gender-science stereotype, and attitude toward science mediated the relationship between science interests, self-efficacy, and the implicit stereotype. Fifth, teachers' classroom behaviors as perceived by students, and parents' gender stereotypes as perceived by children, strongly predicted students' implicit gender-science stereotype, and perceived teacher and parent performance expectancies influenced the stereotype indirectly through self-efficacy in the related subjects and attitude toward science. In conclusion, the present study showed that cognitive bias about gender and science exists among Chinese secondary school students, and that information conveyed by teachers and parents, interacting with students' experience in learning science, affects the formation of this stereotyped cognition.
Abstract:
This thesis introduces elements of a theory of design activity and a computational framework for developing design systems. The theory stresses the opportunistic nature of designing and the complementary roles of focus and distraction, the interdependence of evaluation and generation, the multiplicity of ways of seeing over the history of a design session versus the exclusivity of a given way of seeing over an arbitrarily short period, and the incommensurability of criteria used to evaluate a design. The thesis argues for a principle-based rather than rule-based approach to designing documents. The Discursive Generator is presented as a computational framework for implementing specific design systems, and a simple system for arranging blocks according to a set of formal principles is developed by way of illustration. Both shape grammars and constraint-based systems are used to contrast current trends in design automation with the discursive approach advocated in the thesis. The Discursive Generator is shown to have some important properties lacking in other types of systems, such as dynamism, robustness and the ability to deal with partial designs. When studied in terms of a search metaphor, the Discursive Generator is shown to exhibit behavior which is radically different from some traditional search techniques, and to avoid some of the well-known difficulties associated with them.
Abstract:
In a recent seminal paper, Gibson and Wexler (1993) take important steps toward formalizing the notion of language learning in a (finite) space whose grammars are characterized by a finite number of parameters. They introduce the Triggering Learning Algorithm (TLA) and show that even in finite spaces convergence may be a problem due to local maxima. In this paper we explicitly formalize learning in a finite parameter space as a Markov structure whose states are parameter settings. We show that this captures the dynamics of the TLA completely and allows us to explicitly compute the rates of convergence for the TLA and its variants, e.g., random walk. Also included in the paper are a corrected version of GW's central convergence proof, a list of "problem states" in addition to local maxima, and batch and PAC-style learning bounds for the model.
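The Markov formalization makes convergence rates computable in closed form: collapse the target grammar into an absorbing state and read expected hitting times off the fundamental matrix. A minimal sketch with an invented 3-grammar chain (the transition probabilities are illustrative, not values from the paper):

```python
import numpy as np

# States are grammars (parameter settings); the target grammar is absorbing.
P = np.array([
    #  G1    G2    G3(target)
    [0.50, 0.30, 0.20],
    [0.10, 0.60, 0.30],
    [0.00, 0.00, 1.00],   # absorbing target state
])

# Expected steps to convergence from each non-target state:
# t = (I - Q)^{-1} 1, where Q is P restricted to the transient states.
Q = P[:2, :2]
N = np.linalg.inv(np.eye(2) - Q)          # fundamental matrix
t = N @ np.ones(2)
print("expected steps to target from G1, G2:", np.round(t, 2))

# A local maximum shows up as a transient state with no path to the target,
# making (I - Q) singular for that subchain.
```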
Abstract:
In Phys. Rev. Letters (73:2), Mantegna et al. conclude on the basis of Zipf rank-frequency data that noncoding DNA sequence regions are more like natural languages than coding regions. We argue on the contrary that an empirical fit to Zipf's "law" cannot be used as a criterion for similarity to natural languages. Although DNA is presumably an "organized system of signs" in Mandelbrot's (1961) sense, observation of statistical features of the sort presented in the Mantegna et al. paper does not shed light on the similarity between DNA's "grammar" and natural language grammars, just as observation of exact Zipf-like behavior cannot distinguish between the underlying processes of tossing an M-sided die and a finite-state branching process.
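The die-tossing point is easy to demonstrate: random "words" generated by an M-sided die with one face acting as a word separator already yield an approximately Zipfian rank-frequency curve, so a Zipf fit alone says nothing about grammar. A minimal sketch (the alphabet size and text length are our choices):

```python
import collections
import numpy as np

rng = np.random.default_rng(0)
M = 27  # 26 letters plus 1 word-separator face
chars = rng.integers(0, M, size=2_000_000)
text = "".join(" " if c == 26 else chr(97 + c) for c in chars)
freqs = sorted(collections.Counter(text.split()).values(), reverse=True)

# Slope of log(frequency) vs log(rank); Zipf's law corresponds to roughly -1.
ranks = np.arange(1, len(freqs) + 1)
slope = np.polyfit(np.log(ranks[:5000]), np.log(freqs[:5000]), 1)[0]
print(f"fitted log-log slope: {slope:.2f}")
```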
Abstract:
The use of terms such as "engineering systems", "system of systems", and others has been growing over the past decade to denote systems of importance but with implied higher complexity than the term "systems" alone conveys. This paper searches for a useful taxonomy or classification scheme for complex systems. There are two aspects to this problem: 1) distinguishing between Engineering Systems (the term we use) and other systems, and 2) differentiating among Engineering Systems. Engineering Systems are found to be differentiated from other complex systems by being human-designed and having both significant human complexity and significant technical complexity. As for differentiating among various Engineering Systems, it is suggested that functional type is the most useful attribute for classification, with information, energy, value, and mass acted upon by various processes as the foundational concepts underlying the technical types.
Abstract:
Explanation-based Generalization requires that the learner obtain an explanation of why a precedent exemplifies a concept. It is, therefore, useless if the system fails to find this explanation. However, it is not necessary to give up and resort to purely empirical generalization methods. In fact, the system may already know almost everything it needs to explain the precedent. Learning by Failing to Explain is a method which is able to exploit current knowledge to prune complex precedents, isolating the mysterious parts of the precedent. The idea has two parts: the notion of partially analyzing a precedent to get rid of the parts which are already explainable, and the notion of re-analyzing old rules in terms of new ones, so that more general rules are obtained.
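A toy reading of the "partial analysis" step described above, with an invented rule base and precedent encoding (not from the report): sub-parts the current theory can already explain are pruned away, leaving the mysterious residue to be learned empirically.

```python
# Invented domain theory: concept -> the sub-parts that explain it.
RULES = {
    "full-adder": ("half-adder", "half-adder", "or-gate"),
    "half-adder": ("xor-gate", "and-gate"),
}

PRIMITIVES = {"xor-gate", "and-gate", "or-gate"}

def explainable(part):
    """True if the domain theory can fully account for this part."""
    if part in PRIMITIVES:
        return True
    subparts = RULES.get(part)
    return subparts is not None and all(explainable(p) for p in subparts)

def prune(precedent):
    """Partial analysis: drop explainable parts, isolate the mysterious ones."""
    return [part for part in precedent if not explainable(part)]

# A precedent circuit containing one component the theory cannot explain.
precedent = ["full-adder", "carry-lookahead-unit", "or-gate"]
print(prune(precedent))  # -> ['carry-lookahead-unit']
```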
Abstract:
The key to understanding a program is recognizing familiar algorithmic fragments and data structures in it. Automating this recognition process will make it easier to perform many tasks which require program understanding, e.g., maintenance, modification, and debugging. This report describes a recognition system, called the Recognizer, which automatically identifies occurrences of stereotyped computational fragments and data structures in programs. The Recognizer is able to identify these familiar fragments and structures, even though they may be expressed in a wide range of syntactic forms. It does so systematically and efficiently by using a parsing technique. Two important advances have made this possible. The first is a language-independent graphical representation for programs and programming structures which canonicalizes many syntactic features of programs. The second is an efficient graph parsing algorithm.