949 results for Cadastral updating
Abstract:
In this paper, the traditional relational model is extended to express indefinite and "maybe" information. On the basis of the extended relational model, the fundamental operations of relational algebra are redefined, and a policy and algorithms for updating the relational database are given.
Abstract:
Starting from the semantics of null values and their relation to update operations, this paper proposes a new extended relational model for organizing the information in relational databases that contain null values under update operations. The fundamental relational-algebra operations are defined for this model, laying the foundation for implementing data updates in relational databases in the presence of null values.
Abstract:
Building on the extended model and using the five fundamental operations as tools, this part of the paper studies in depth the update-processing strategy for relational databases in the presence of null values, and gives the corresponding algorithms together with their analysis.
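The update semantics sketched in these abstracts rest on three-valued evaluation of predicates over relations containing nulls. As a rough illustration (not the papers' exact algebra), a selection under Codd-style maybe-semantics can split its result into tuples that definitely satisfy the predicate and tuples that only "maybe" satisfy it; the relation and attribute names below are invented:

```python
# Three-valued selection over tuples with nulls (None): tuples are split
# into a "true" set (definitely qualify) and a "maybe" set (qualification
# unknown because a null is involved). Illustrative sketch only.
TRUE, MAYBE, FALSE = 1, 0, -1

def compare_gt(value, threshold):
    """Any comparison involving a null evaluates to MAYBE."""
    if value is None:
        return MAYBE
    return TRUE if value > threshold else FALSE

def select_gt(relation, attr, threshold):
    true_set, maybe_set = [], []
    for tup in relation:
        result = compare_gt(tup[attr], threshold)
        if result == TRUE:
            true_set.append(tup)
        elif result == MAYBE:
            maybe_set.append(tup)
    return true_set, maybe_set

employees = [
    {"name": "a", "salary": 5000},
    {"name": "b", "salary": None},   # unknown salary
    {"name": "c", "salary": 2000},
]
sure, maybe = select_gt(employees, "salary", 3000)
```

An update policy can then treat the two sets differently, e.g. applying a modification only to the "true" set while flagging the "maybe" set for review.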
Abstract:
This dissertation starts from the observation that prestack time migration can be considered an approximation of prestack depth migration, and develops a wave-equation-based prestack time migration approach. The approach includes: obtaining travel times and amplitudes analytically from the one-way wave equation and stationary-phase theory; imaging with a 'spread' method that follows prestack depth migration; and updating the velocity model according to the flatness of events in CRP gathers. Based on this approach, we present a scheme that can image land seismic data without field static corrections. We determine the correct near-surface and stacking velocities by picking the residual corrections of events in the CRP gathers, obtain a reasonable migration section from the updated velocities, and convert the migration section from a floating datum plane to a universal datum plane. The migration aperture is determined adaptively according to the dips of the imaged structures, which not only speeds up processing but also suppresses the migration noise produced by an excessive aperture. We adopt the deconvolution imaging condition of wave-equation migration, which partially compensates for geometric divergence, and use a table-driven technique to improve computational efficiency. When the subsurface is very complicated, the DTS curve may be impossible to distinguish. To solve this problem, we propose a technique to determine an appropriate range for the DTS curve: we synthesize DTS panels over this range using different velocities and depths, stack the amplitude around zero time, and determine the correct velocity and location of the grid point under consideration by comparing the stacked values.
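The final step described above, scanning trial velocities and stacking amplitude around zero time, follows the same logic as a standard stack-power velocity scan. A minimal sketch with NumPy; the geometry, grid, and spike reflection model are invented for illustration:

```python
import numpy as np

# Toy velocity scan: for each trial velocity, predict the hyperbolic
# arrival time on every trace and stack the amplitude found there; the
# trial velocity maximizing the stack is picked as correct.
dt, nt = 0.004, 500
offsets = np.linspace(100.0, 1000.0, 10)
t0, v_true = 0.8, 2000.0

# Build a synthetic gather: a unit spike on each trace at the
# hyperbolic arrival time for the true velocity.
gather = np.zeros((len(offsets), nt))
for i, x in enumerate(offsets):
    tx = np.sqrt(t0**2 + (x / v_true) ** 2)
    gather[i, int(round(tx / dt))] = 1.0

def stack_power(v):
    """Stack amplitude along the moveout curve predicted by velocity v."""
    s = 0.0
    for i, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / v) ** 2)
        idx = int(round(tx / dt))
        if idx < nt:
            s += gather[i, idx]
    return s

trial = np.arange(1500.0, 2501.0, 50.0)
best = trial[np.argmax([stack_power(v) for v in trial])]
```

Only at the correct velocity do all traces contribute, so the stack peaks there; the DTS-panel version of the thesis applies the same comparison over both velocity and depth.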
Abstract:
On the issue of geological hazard evaluation (GHE), taking remote sensing and GIS systems as the experimental environment and assisted by some programming, this thesis combines knowledge of geo-hazard mechanisms, statistical learning, remote sensing (RS), hyperspectral recognition, spatial analysis, digital photogrammetry, and mineralogy, and uses geo-hazard samples from Hong Kong and the Three Parallel Rivers region as experimental data to study the two core questions of GHE: geo-hazard information acquisition and evaluation modeling. On acquiring landslide information by RS, three topics are presented: image enhancement for visual interpretation, automatic recognition of landslides, and quantitative mineral mapping. As to the evaluation model, a recent and powerful data-mining method, the support vector machine (SVM), is introduced to the GHE field, and a series of comparative experiments verify its feasibility and efficiency. Furthermore, this thesis proposes a method to forecast the distribution of landslides for a known future rainfall, based on historical rainfall and the corresponding landslide susceptibility map. The details are as follows: (a) Remote sensing image enhancement for geo-hazard visual interpretation. The effect of visual interpretation is determined by the RS data and the image enhancement method, the most common and effective technique being the fusion of a high-spatial-resolution image with a multispectral image; yet few studies address fusion methods for geo-hazard recognition. Through comparative experiments with six mainstream fusion methods and combinations of different RS data sources, this thesis presents the merits of each method and qualitatively analyzes the effects of spatial resolution, spectral resolution, and time phase on the fused image. (b) Automatic recognition of shallow landslides from RS imagery.
A landslide inventory is the basis of landslide forecasting and landslide study. If landslide events are collected persistently, the geo-hazard inventory is updated in time, and the prediction model is improved continually, forecast accuracy can be boosted step by step. RS is a feasible technique for obtaining landslide information, given how geo-hazards are distributed. An automatic hierarchical approach is proposed to identify shallow landslides in vegetated regions by combining multispectral RS imagery with DEM derivatives, and experiments are conducted to test its efficiency. (c) Obtaining hazard-causing factors. Accurate environmental factors are the key to analyzing and predicting regional geological hazard risk. For predicting large debris flows, the main challenge remains determining the source material and its volume in the debris-flow source region. Exploiting the merits of various RS techniques, this thesis presents methods to obtain two important hazard-causing factors, DEM and alteration minerals, and through spatial analysis finds a relationship between hydrothermal clay alteration minerals and geo-hazards in the arid-hot valleys of the Three Parallel Rivers region. (d) Applying the support vector machine (SVM) to landslide susceptibility mapping. The latest and powerful statistical learning theory, SVM, is introduced to regional GHE. SVM, a proven and efficient statistical learning method, can deal with both two-class and one-class samples, avoiding the production of 'pseudo' samples. Fifty-five years of historical samples from a natural terrain of Hong Kong are used to assess the method, and the susceptibility maps obtained by one-class and two-class SVM are compared with that obtained by logistic regression. Two-class SVM shows better predictive performance than logistic regression and one-class SVM.
However, one-class SVM, which requires only failure cases, has an advantage over the other two methods because only "failed" case information is usually available in landslide susceptibility mapping. (e) Predicting the distribution of rainfall-induced landslides by time-series analysis. Rainfall is the dominant landslide trigger; more than 90% of landslide losses and casualties are caused by rainfall, so predicting landslide sites under a given rainfall is an important geological evaluation issue. Fully considering the contributions of stable factors (the landslide susceptibility map) and dynamic factors (rainfall), a time-series linear regression between rainfall and the landslide risk map is presented, and experiments on real samples show that the method performs well in a natural region of Hong Kong. The following four practical or original results are obtained: 1) RS methods to enhance geo-hazard images, recognize shallow landslides automatically, and obtain DEM and mineral information are studied, with detailed operating steps given through examples; the conclusions are highly practical. 2) An exploratory study of the relationship between geo-hazards and alteration minerals in the arid-hot valley of the Jinsha River is presented. Based on the standard USGS mineral spectral library, the distribution of hydrothermal alteration minerals is mapped by the SAM method; statistical analysis of debris flows against hazard-causing factors reveals and validates a strong correlation between debris flows and clay minerals. 3) SVM theory (especially one-class SVM) is applied to landslide susceptibility mapping and systematically evaluated, demonstrating its advantages in this field. 4) A time-series prediction method for the distribution of rainfall-induced landslides is established.
In a natural study area, the distribution of landslides induced by a storm is successfully predicted under a real maximum 24-hour rainfall, based on the regression between four historical storms and the corresponding landslides.
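The time-series regression idea in item (e) can be sketched as follows. The numbers are entirely synthetic stand-ins for the susceptibility map and the four historical storms; the linear form (landslide density proportional to susceptibility times 24-hour rainfall) is an illustrative assumption, not the thesis's exact model:

```python
import numpy as np

# Toy version of the time-series idea: landslide density in a map cell
# is modeled as a linear function of a static susceptibility score and
# a storm's 24h rainfall, fitted on historical storms by least squares.
rng = np.random.default_rng(0)
n_cells = 200
susceptibility = rng.uniform(0, 1, n_cells)           # static factor per cell
storm_rain = np.array([120.0, 180.0, 250.0, 310.0])   # mm per historical storm

# Simulated "observed" landslide densities for the historical storms.
a_true = 0.01
obs = np.stack([a_true * susceptibility * r for r in storm_rain])

# Fit the coefficient by least squares over all storms and cells.
X = np.concatenate([susceptibility * r for r in storm_rain])
y = obs.ravel()
a_hat = float(X @ y / (X @ X))

# Predict the landslide distribution for a new storm with 400 mm of rain.
pred = a_hat * susceptibility * 400.0
```

The stable factor (susceptibility) is computed once, while the dynamic factor (rainfall) varies per storm, mirroring the decomposition described in the abstract.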
Abstract:
The ionospheric parameter M(3000)F2 (the so-called transmission factor or propagation factor) is important not only in practical applications such as frequency planning for radio communication but also in ionospheric modeling. This parameter is strongly anti-correlated with the ionospheric F2-layer peak height hmF2, a parameter often used as a key anchor point in widely used empirical models of the ionospheric electron density profile (e.g., the IRI and NeQuick models). Since hmF2 is not easy to obtain from measurements, whereas M(3000)F2 can be routinely scaled from ionograms recorded by globally distributed ionosonde/digisonde stations and has a long accumulated data record, hmF2 is usually calculated from M(3000)F2 using an empirical formula connecting the two. In practice, the CCIR M(3000)F2 model is widely used to obtain M(3000)F2 values. Recently, however, some authors have found that the CCIR M(3000)F2 model shows remarkable discrepancies from measured M(3000)F2, especially in low-latitude and equatorial regions. For this reason, the International Reference Ionosphere (IRI) research community has proposed improving or updating the currently used CCIR M(3000)F2 model, and any efforts toward improving or updating the current M(3000)F2 model, or newly developing a global hmF2 model, are encouraged. In this dissertation, an effort is made to construct empirical models of M(3000)F2 and hmF2 based on empirical orthogonal function (EOF) analysis combined with regression analysis. The main results are as follows: 1. A single-station model is constructed using monthly median hourly values of M(3000)F2 observed at Wuhan Ionospheric Observatory during 1957–1991 and compared with the IRI model. The results show that the EOF method can represent most of the variance of the original data set with only a few orders of EOF components; it is a powerful method for ionospheric modeling. 2.
Using the values of M(3000)F2 observed by globally distributed ionosondes, data on a uniform global grid were obtained by Kriging interpolation. The gridded data were then decomposed into EOF components in two different coordinate systems: (1) geographic longitude and latitude; (2) modified dip (Modip) and local time. Based on the EOF decompositions under these two coordinate systems, two types of global M(3000)F2 model are constructed. Statistical analysis showed that both constructed models agree better with observed M(3000)F2 than the model currently used by IRI, and represent the global variations of M(3000)F2 better. 3. The hmF2 data used to construct the hmF2 model were converted from observed M(3000)F2 using the empirical formula connecting them. We also constructed two types of global hmF2 model using a method similar to that used for M(3000)F2. Statistical analysis showed that the predictions of our models are more accurate than those of the IRI model, demonstrating that constructing a global hmF2 model directly by EOF analysis is feasible. The results in this thesis indicate that the modeling technique based on EOF expansion combined with regression analysis is very promising for constructing global models of M(3000)F2 and hmF2. It is worth further investigation and has the potential to be applied to the global modeling of other ionospheric parameters.
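The core property the abstract relies on, that a few EOF components capture most of a field's variance, is easy to demonstrate with an SVD-based EOF decomposition of a synthetic space-time matrix. The data here are invented (two coherent modes plus weak noise), not ionosonde values:

```python
import numpy as np

# EOF analysis of a space-time data matrix via SVD: the columns of V are
# spatial EOFs, U*S the corresponding time coefficients, and the leading
# components capture most of the variance.
rng = np.random.default_rng(1)
n_time, n_grid = 240, 72            # e.g. monthly epochs x grid points
t = np.arange(n_time)

# Synthetic field: an annual-like mode and a semiannual-like mode, each
# with its own fixed spatial pattern, plus weak noise.
mode1 = np.sin(2 * np.pi * t / 12.0)[:, None] * rng.uniform(0.5, 1.5, n_grid)
mode2 = np.cos(2 * np.pi * t / 6.0)[:, None] * rng.uniform(-0.5, 0.5, n_grid)
data = mode1 + mode2 + 0.01 * rng.standard_normal((n_time, n_grid))

# EOF decomposition of the anomaly field (time mean removed).
U, S, Vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
var_frac = np.cumsum(S**2) / np.sum(S**2)   # cumulative variance fraction
```

Here `var_frac[1]` is essentially 1: two EOF components reconstruct the field, which is why regressing only a few time coefficients against solar/seasonal drivers suffices for a compact empirical model.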
Abstract:
In practical seismic profiles, multiple reflections tend to impede even the experienced interpreter in deducing information from the reflection data. Surface multiples are usually much stronger, more broadband, and more of a problem than internal multiples, because the reflection coefficient at the water surface is much larger than the reflection coefficients found in the subsurface. For this reason most attempts to remove multiples from marine data focus on surface multiples, as does this work. A surface-related multiple attenuation method can be formulated as an iterative procedure. In this thesis a fully data-driven approach called MPI, multiple prediction through inversion (Wang, 2003), is applied to a real marine seismic data example. It is a promising scheme for predicting a relatively accurate multiple model by updating the multiple model iteratively, as is usually done in a linearized inverse problem. The prominent characteristic of the MPI method is that it eliminates the need for an explicit surface operator, which means it can model the multiple wavefield without any knowledge of the surface or subsurface structure, or even of the source signature. Another key feature of the scheme is that it predicts multiples not only in time but also in phase and amplitude. The real-data experiments show that multiple prediction can be made very efficient if a good initial estimate of the multiple-free data set is provided in the first iteration. For the other core step, multiple subtraction, we use an expanded multichannel matching (EMCM) filter. Compared with a normal multichannel matching filter, in which an original seismic trace is matched by a group of multiple-model traces, in the EMCM filter a seismic trace is matched not only by a group of ordinary multiple-model traces but also by their mathematically generated adjoints.
The adjoints of a multiple-model trace are its first derivative, its Hilbert transform, and the derivative of the Hilbert transform. The third chapter of the thesis applies the foregoing methods to the real data, from which their effectiveness and practical value are evident. For this specific case I have carried out three groups of experiments: testing the effectiveness of the MPI method, comparing subtraction results with a fixed filter length but different window lengths, and investigating the influence of the initial subtraction result on the MPI method. From the real-data application, we do find that the initial demultiple estimate has a great influence on the MPI method. Two approaches are then introduced to refine the initial demultiple estimate: a first-arrival approach and a masking filter. In the last part some conclusions are drawn from the preceding results.
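The EMCM construction just described, matching a data trace with a model trace plus its three adjoints, can be sketched on synthetic traces; the mixture coefficients and trace length below are invented for illustration, and a single model trace stands in for the multichannel case:

```python
import numpy as np
from scipy.signal import hilbert

# Expanded matching: augment a multiple-model trace with its adjoints
# (derivative, Hilbert transform, derivative of the Hilbert transform),
# then subtract the least-squares combination from the data trace.
def adjoints(trace):
    h = np.imag(hilbert(trace))       # Hilbert transform of the trace
    return np.stack([
        trace,
        np.gradient(trace),           # first derivative
        h,                            # Hilbert transform
        np.gradient(h),               # derivative of the Hilbert transform
    ])

rng = np.random.default_rng(2)
model = rng.standard_normal(256)
basis = adjoints(model)

# Data trace = a phase/amplitude-distorted multiple plus a weak "primary".
primary = 0.1 * rng.standard_normal(256)
data = 1.0 * basis[0] - 0.3 * basis[2] + primary

# Least-squares matching coefficients and adaptive subtraction.
coef, *_ = np.linalg.lstsq(basis.T, data, rcond=None)
residual = data - coef @ basis
```

Because the Hilbert transform supplies a 90-degree phase rotation and the derivatives supply spectral shaping, the expanded basis can absorb phase and amplitude errors in the predicted multiple that a plain matching filter cannot.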
Abstract:
Multi-wave, multi-component seismology is receiving more and more attention from the oil industry. Building on existing research results, my work focuses on several key steps of OBC four-component (4C) data processing. OBC data must be preprocessed well to obtain a good image. We show a preprocessing flow that includes noise attenuation on multi-component data, ghost elimination by summing the P and Z components, and rotation of the horizontal components; this is a good foundation for the subsequent OBC processing steps. Obtaining the exact conversion-point location and analyzing velocity are the key points in processing converted-wave reflection data. This work covers computing the conversion-point location and analyzing the velocity and nonhyperbolic moveout of converted waves. Anisotropy deeply affects the conversion-point location and the nonhyperbolic moveout; assuming VTI media, we study the anisotropic effect on both. Since Vp/Vs is important, we study methods of computing Vp/Vs from post-stack and pre-stack data. Inverting anisotropy parameters from traveltimes is another part of this work. Prestack time migration of converted waves is a further focus: using common-offset Kirchhoff migration, we study velocity-model updating in anisotropic media. I have achieved the following results: 1) Using continued fractions, we propose a new approximate equation for the conversion point; when the offset is long enough, Thomsen's second-order equation cannot approximate the exact conversion-point location, whereas our equation approximates it well. 2) Our new methods for scanning nonhyperbolic velocity and Vp/Vs yield a high-quality energy spectrum, and the new moveout fits middle- and long-offset events; processing field data gives good results. 3) A new moveout equation, of the same form as Alkhalifah's long-offset P-wave moveout equation, achieves the same accuracy as Thomsen's moveout equation in tests on model data.
4) Using C as a function of the offset-to-depth ratio, we can unify Li's and Thomsen's moveout equations into a single equation; model tests show that choosing a reasonable function C improves the accuracy of both equations. 5) Using traveltime inversion, we can obtain anisotropy parameters, which help to flatten large-offset events, and we propose an anisotropy-parameter model useful for converted-wave prestack time migration in anisotropic media. 6) Using our prestack time migration method and flow, we can update the velocity model and the anisotropy-parameter model and then obtain a good image. Key words: OBC, common conversion point (CCP), nonhyperbolic moveout equation, normal moveout correction, velocity analysis, anisotropy-parameter inversion, Kirchhoff anisotropic prestack time migration, migration velocity model updating
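For orientation, the classical asymptotic conversion-point formula (a textbook large-offset approximation, not the continued-fraction equation proposed in the thesis) shows how Vp/Vs shifts the P-S conversion point toward the receiver:

```python
# Asymptotic conversion point (ACP) for converted (P-S) waves: at large
# offset-to-depth ratios the conversion point tends to
#   x_c = offset * gamma / (1 + gamma),   gamma = Vp/Vs.
# Standard approximation; the thesis refines this for finite offsets.
def acp_offset(offset, vp_vs):
    gamma = vp_vs
    return offset * gamma / (1.0 + gamma)

# For gamma = 2 the conversion point sits 2/3 of the way to the receiver.
x_c = acp_offset(3000.0, 2.0)
```

Because gamma enters the binning directly, errors in Vp/Vs smear common-conversion-point gathers, which is why the abstract stresses accurate Vp/Vs scanning.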
Abstract:
In oil and gas exploration, migration is an effective technique for imaging subsurface structures. Wave-equation migration (WEM) dominates other migration methods in accuracy, despite a higher computational cost, and its advantages will grow with the progress of computer technology. However, WEM is more sensitive to the velocity model than other methods: small velocity perturbations result in great divergence in the image. Currently, the Kirchhoff method is still very popular in the exploration industry because precise velocity models are difficult to provide, so finding a way to build migration velocity models is urgent. This dissertation is mainly devoted to migration velocity analysis for WEM: 1. We categorize wave-equation prestack depth migration and introduce the concept of migration. Different kinds of extrapolation operators are then analyzed to demonstrate their accuracy and applicability, and we derive the DSR and SSR migration methods and apply both to a 2D model. 2. The output of prestack WEM takes the form of common image gathers (CIGs). Angle-domain common image gathers (ADCIGs) obtained by wave-equation methods are proved to be free of artifacts, and they are the most promising candidates for migration velocity analysis. We discuss how to obtain ADCIGs by DSR and SSR, before and after imaging separately. The quality of the post-stack image depends on the CIGs: only focused or flattened CIGs generate a correct image. Based on wave-equation migration, the image can be enhanced by special measures; in this dissertation we use both prestack depth residual migration and the time-shift imaging condition to improve image quality. 3. Inaccurate velocities lead to errors in imaging depth and to curvature of coherent events in CIGs. The ultimate goal of migration velocity analysis (MVA) is to focus scattered events at the correct depth and to flatten curving events by updating the velocities.
The kinematic information is implicitly represented by the focusing-depth aberration, and the dynamic information by the amplitude. The initial model for wave-equation migration velocity analysis (WEMVA) is the output of residual-moveout (RMO) velocity analysis, so for completeness we review the RMO method in this dissertation. The dissertation discusses the general idea of RMO velocity analysis for flat and dipping events and the corresponding velocity-update formulas. Migration velocity analysis is very time-consuming; for computational convenience, we discuss how RMO works for synthesized source-record migration. In some extreme situations the RMO method fails: in poorly illuminated areas or beneath steep structures it is very difficult to obtain enough angle information for RMO. WEMVA is based on wavefield extrapolation theory, which successfully overcomes this drawback of ray-based methods; it inverts residual velocities from residual images. Based on migration regression, we studied the linearized scattering operator and the linearized residual image, which is the key to WEMVA. Obtaining the residual image by prestack residual migration based on DSR is very inefficient, so in this dissertation we propose obtaining it via the time-shift imaging condition, allowing WEMVA to be implemented with SSR. This evidently reduces the computational cost of the method.
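The time-shift imaging condition invoked above (in the sense of Sava and Fomel's formulation, I(tau) = sum_t S(t - tau) R(t + tau)) can be illustrated on 1-D synthetic wavefields: with a correct velocity the correlation focuses at zero time shift, and a residual delay moves the focus, which is the signal WEMVA inverts. The wavelet and delays below are invented:

```python
import numpy as np

# Time-shift imaging condition on 1-D traces: correlate the source
# wavefield delayed by tau with the receiver wavefield advanced by tau.
def time_shift_image(src, rec, taus):
    img = []
    for tau in taus:
        s = np.roll(src, tau)    # S(t - tau)
        r = np.roll(rec, -tau)   # R(t + tau)
        img.append(float(np.dot(s, r)))
    return np.array(img)

t = np.arange(512)
wavelet = np.exp(-0.5 * ((t - 256) / 8.0) ** 2)   # Gaussian pulse

taus = np.arange(-20, 21)

# Correct velocity: source and receiver wavefields align -> focus at 0.
img_ok = time_shift_image(wavelet, wavelet, taus)

# Velocity error: receiver wavefield arrives 10 samples late -> the
# focus moves to tau = 5 (half the residual delay).
img_err = time_shift_image(wavelet, np.roll(wavelet, 10), taus)
```

The offset of the focus from tau = 0 is the residual image that drives the velocity update.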
Abstract:
In exploration seismology, imaging geologic targets of oil and gas reservoirs in complex media requires high-accuracy imaging of the structure and lithology of the medium, so prestack imaging and elastic inversion of seismic waves in complex media have come to the forefront. The seismic response measured at the surface carries two fundamental pieces of information: the propagation effects of the medium and the reflections from the layer boundaries within it. The propagation effects represent the low-wavenumber component of the medium, the so-called trend or macro layering, whereas the reflections represent the high-wavenumber component, the detailed or fine layering. Migration velocity analysis resolves the low-wavenumber component of the medium, while prestack elastic inversion resolves the high-wavenumber component. This dissertation studies both aspects: migration velocity estimation and elastic inversion. First, any migration velocity analysis method must include two basic elements: a criterion that tells us whether the model parameters are correct, and an updating procedure that tells us how to update the model parameters when they are incorrect; these determine the properties and efficiency of the velocity estimation method.
In this dissertation, a migration velocity analysis method based on the CFP technology is presented, in which a top-down layer-stripping strategy is adopted to avoid the difficulty of selection. The proposed method has the advantage that the traveltime errors obtained from the DTS panel are defined directly in time, unlike methods based on common image gathers, in which the residual curvature measured in depth must be converted to traveltime errors. In the proposed migration velocity analysis method, four aspects have been improved: 1) A new parameterization of the velocity model is provided, in which layer boundaries are interpolated with cubic splines through control locations, and the velocity within a layer may change laterally, computed as a piecewise-linear function of the velocities at lateral control points; this parameterization is well suited to the updating procedure. 2) Analytical formulas representing the traveltime errors and the model-parameter updates in the tau-p domain are derived under the assumption of local lateral homogeneity; the velocity estimates are computed iteratively as a parametric inversion, and a zero differential time shift in the DTS panel for each layer indicates convergence of the velocity estimation. 3) A method of building the initial model from prior information is provided to improve the efficiency of the velocity analysis: events of interest are picked in the stacked section to define the layer boundaries, and the results of conventional velocity analysis are used to define the layer velocities. 4) An interactive integrated software environment combining migration velocity analysis and prestack migration is built. The proposed method is first applied to synthetic data.
The velocity estimation results show that both the accuracy and the efficiency of the method are very good. The proposed method is also applied to field data, a marine data set. In this example, prestack and poststack depth migrations of the data are completed using velocity models built with different methods; the comparison shows that the model from the proposed method is better and clearly improves the quality of the migration. Following the theoretical method of expressing a multi-variable function by products of single-variable functions suggested by Song Jian (2001), the separable expression of the one-way wave operator has been studied. An optimized approximation with a separable expression of the one-way wave operator is presented, which easily handles lateral velocity variation in the space and wavenumber domains respectively and has good approximation accuracy. A new prestack depth migration algorithm based on this optimized separable approximation is developed and used to test the velocity estimation results. Second, according to the theory of seismic wave reflection and transmission, the change of amplitude with incident angle is related to the elasticity of the media on the two sides of an interface. Conventional inversion with poststack data can use only the information of the reflection operator at zero incident angle; if more robust solutions are required, the amplitudes at all incident angles should be used. A natural separable expression of the reflection/transmission operator is presented, as a sum of products of two groups of functions.
One group of functions varies with phase space, whereas the other group is related to the elastic parameters of the medium and the geological structure. By employing this natural separable expression of the reflection/transmission operator, a seismic modeling method based on the one-way wave equation is developed to model the primary reflected waves; it is adapted to moderately heterogeneous media and preserves the AVA accuracy of the reflections for incident angles up to 45°, with very high computational efficiency. The natural separable expression of the reflection/transmission operator is also used to construct a prestack elastic inversion algorithm. Unlike AVO analysis and inversion, which use the angle gathers formed during prestack migration, the proposed algorithm constructs a system of linear equations during prestack migration via the separable expression of the reflection/transmission operator. The unknowns of the linear equations are related to the elasticity of the medium, so their solutions provide the elastic information of the medium. The proposed inversion method has the same aim as AVO inversion; the difference lies only in how the amplitude variation with incident angle is processed and in the computational domain.
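As a point of comparison with the amplitude-versus-angle inversion discussed above, the standard two-term Shuey linearization (a common AVO approximation, not the dissertation's separable-operator algorithm) recovers intercept and gradient from angle-dependent amplitudes by least squares; the angle range and elastic contrasts below are invented:

```python
import numpy as np

# Two-term Shuey AVO: R(theta) ~ R0 + G * sin^2(theta). Given reflection
# amplitudes at several incidence angles, recover the intercept R0 and
# gradient G (which carry elastic information) by least squares.
angles = np.radians([5, 10, 15, 20, 25, 30, 35, 40])
R0_true, G_true = 0.08, -0.15
amps = R0_true + G_true * np.sin(angles) ** 2   # noise-free synthetic data

# Design matrix: a column of ones (intercept) and sin^2(theta) (gradient).
A = np.column_stack([np.ones_like(angles), np.sin(angles) ** 2])
(R0_est, G_est), *_ = np.linalg.lstsq(A, amps, rcond=None)
```

The dissertation's method solves an analogous linear system, but assembles it inside the migration itself via the separable reflection/transmission operator rather than from angle gathers.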
Abstract:
Eight experiments tested how object-array structure and learning location influence the establishment and use of self-to-object and object-to-object spatial representations during locomotion and reorientation. In Experiments 1 to 4, participants learned either at the periphery of or amid a regular or irregular object array, and then pointed to objects while blindfolded in three conditions: before turning (baseline), after rotating 240 degrees (updating), and after disorientation (disorientation). In Experiments 5 to 8, participants were instructed to keep track of self-to-object or object-to-object spatial representations before rotation. In each condition, the configuration error, defined as the standard deviation across target objects of the mean signed pointing error per object, was calculated as an index of the fidelity of the representation used. The results indicate that participants form both self-to-object and object-to-object spatial representations after learning an object array. Object-array structure influences which representation is selected during updating: by default, the object-to-object representation is updated when people learn a regular array, and the self-to-object representation is updated when people learn an irregular array, but people can also update the other representation when required to do so. The fidelity of the representations constrains this kind of "switch": people can switch only from a low-fidelity representation to a high-fidelity one, or between two representations of similar fidelity; they cannot switch from a high-fidelity representation to a low-fidelity one. Learning location may influence the fidelity of the representations. When people learned at the periphery of the object array, they acquired both self-to-object and object-to-object spatial representations of high fidelity.
But when people learned amid the object array, they acquired only a high-fidelity self-to-object representation; the fidelity of the object-to-object representation was low.
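The configuration-error index defined in this abstract can be computed directly. The pointing data below are invented, and the use of the population standard deviation is an assumption where the abstract does not specify the estimator:

```python
import numpy as np

# Configuration error: the standard deviation, across target objects,
# of each object's mean signed pointing error. Constant (bias-like)
# per-object errors cancel in the per-object means only if shared by
# all objects; object-specific distortions inflate the index.
def configuration_error(signed_errors_deg):
    per_object_mean = signed_errors_deg.mean(axis=0)   # mean per target
    return float(per_object_mean.std(ddof=0))          # SD across targets

# Rows = pointing trials, columns = target objects (degrees).
errors = np.array([
    [5.0, -3.0, 8.0, 0.0],
    [7.0, -5.0, 6.0, 2.0],
    [6.0, -4.0, 7.0, 1.0],
])
ce = configuration_error(errors)
```

A uniform heading error shifts every per-object mean by the same amount and leaves the index unchanged, which is why it measures the internal consistency (fidelity) of the represented configuration rather than overall pointing accuracy.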
Abstract:
In the present study, based on processing efficiency theory, we used event-related potential (ERP) and functional magnetic resonance imaging (fMRI) techniques to explore the neural mechanisms by which negative emotion influences the three subsystems of working memory (WM): the phonological loop, the visuospatial sketchpad, and the central executive. Modified DMTS (delayed matching-to-sample) and n-back tasks were adopted, and IAPS (International Affective Picture System) pictures were used to induce the intended emotional state in subjects. The main results and conclusions of the series of experiments are as follows: 1. In the DMTS tasks, P200 and P300 were reduced by negative emotion in both spatial and verbal tasks; however, an increased negative slow wave was observed only in the spatial tasks, not in the verbal tasks. 2. In the n-back tasks, the P300 associated with the updating function of WM was affected by negative emotion only in the spatial tasks, not in the verbal tasks; current density analysis likewise revealed strong current density in the fronto-parietal cortex only in the spatial tasks. 3. Using an fMRI block design and ROI analysis, we found significant emotion and task effects in the right superior parietal cortex associated with spatial WM; only an emotion effect in Broca's area, associated with verbal WM; and an interaction effect in the attention-associated medial prefrontal area and bilateral inferior parietal cortex. These results imply that negative emotion mainly disturbs the spatial WM-related areas, and that the attentional control system plays a key role in the interaction between spatial WM and negative emotion. 4. To further examine the effects of positive, negative, and neutral emotion on tasks with different cognitive loads: a selective effect of emotion on the ERP components of spatial WM was found only in the 2-back tasks, not in the visual search tasks.
Thus, first, positive as well as negative emotion selectively disturbs spatial WM, in line with an attentional-resource competition mechanism. Second, the selective influence depends on the different WM subsystems, not on the spatial versus verbal nature of the information. Finally, the manner in which emotion and cognition interact is correlated with cognitive load.
Abstract:
Currently, one important research area in spatial updating is the role of external (e.g., visual) and internal (e.g., proprioceptive or vestibular) information in the spatial updating of scene recognition. Our study uses the paradigm of classic spatial updating research and the experimental design of Burgess (2004). First, we explore the concrete influence of locomotion on scene recognition in the real world; next, using virtual reality technology, which can control many spatial learning parameters and exclude extraneous variables, we explore the influence of pure locomotion without visual cues on scene recognition; furthermore, we explore whether the ability of spatial updating can transfer to new situations within a short period of time, and compare the pattern of results in the real world with that in virtual reality to test the validity of virtual reality technology for research on spatial updating in scene recognition. The main results of this work can be summarized as follows: 1. In the real world, we found both a spatial updating effect and a viewpoint-dependent effect, indicating that the locomotion-based spatial updating effect does not eliminate the viewpoint-dependent effect during scene recognition in a physical environment. 2. In the virtual reality environment, we again found both effects, showing that the locomotion-based spatial updating effect does not eliminate the viewpoint-dependent effect during scene recognition in virtual reality either. 3.
The spatial updating effect based on locomotion plays a double role in scene recognition: when subjects were tested from a different viewpoint, spatial updating based on locomotion promoted scene recognition; when subjects were tested from the same viewpoint, it had a negative influence on scene recognition. These results show that spatial updating based on locomotion is automatic and cannot be ignored. 4. The ability of spatial updating can be transferred to new situations in a short period of time, and the experiment in the immersive virtual reality environment yielded the same result pattern as that in the physical environment, suggesting that VR technology is a very effective method for research on spatial updating in scene recognition. 5. This study of scene recognition provides evidence for the dual-system model of spatial updating in an immersive virtual reality environment.
Resumo:
The nature of individual differences among children is an important issue in the study of human intelligence, and there is a close relation between intelligence and executive functions. Traditional theories, based mainly on factor analysis, approach the problem only from the perspective of psychometrics; they do not study the relation between cognition and neurobiology. Some researchers try to explore the essential differences in intelligence at the basic cognitive level by studying the relationship between executive function and intelligence. The aims of this study were: 1) to delineate and separate executive function in children into measurable constructs; 2) to establish the relationship between executive function and intelligence in children; and 3) to find the difference, and its neural mechanism, between intellectually gifted and normal children's executive function. The participants were 188 children aged 7-12 years, and there were 6 executive-function tasks. The results were as follows: 1) The latent-variable analyses showed that there was no stable construct of executive function in 7-10-year-old children. The executive-function construct of 11-12-year-old children could be separated into updating, inhibition, and shifting, and these children had grown to be much the same as adults in executive function. There were only moderate correlations between the three types of executive function, so they were largely independent of each other. 2) The correlations between the indices of updating, inhibition, and shifting and intelligence differed across 7-12-year-old children: the older the age, the more the indices were related to intelligence. Updating and shifting were related to intelligence in 7-10-year-old children, and there were significant correlations between updating, inhibition, shifting, and intelligence in 11-12-year-old children. 
The correlation between updating and intelligence was higher than that between shifting and intelligence. Furthermore, in structural equation models controlling for the correlations among the three executive functions, updating was highly related to intelligence, whereas the relations of inhibition and shifting to intelligence were not significant. 3) Intellectually gifted children performed better than normal children on executive-function tasks, and the neural-mechanism differences between intellectually gifted and average children were indicated by the ERP component P3. The present study helps us understand the relationship between intelligence and executive function and throws light on the issue of individual differences in intelligence. The results can provide theoretical support for the development of a culture-free intelligence test and a method to promote the development of intelligence. Our study also lends support to the neural efficiency hypothesis.
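The "unique contribution" logic above — updating predicts intelligence even when the other two executive functions are controlled for — can be illustrated with a partial correlation, the simplest analogue of the structural-equation-model analysis. This is a minimal sketch on synthetic data invented for the example (the variable names and effect sizes are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores for 188 children (hypothetical data, illustration only):
# three executive-function composites and one intelligence measure.
n = 188
updating = rng.normal(size=n)
inhibition = 0.3 * updating + rng.normal(size=n)
shifting = 0.3 * updating + rng.normal(size=n)
iq = 0.6 * updating + 0.1 * inhibition + 0.1 * shifting + rng.normal(size=n)

def partial_corr(x, y, covariates):
    """Correlation of x and y after regressing out the covariates."""
    Z = np.column_stack([np.ones(len(x))] + covariates)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residualize x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residualize y
    return np.corrcoef(rx, ry)[0, 1]

# Unique association of updating with intelligence,
# controlling for inhibition and shifting.
r = partial_corr(updating, iq, [inhibition, shifting])
print(round(r, 2))
```

With data generated this way, the partial correlation for updating remains substantial after controlling for the other two functions, mirroring the pattern the abstract reports.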
Resumo:
Geoprocessing. Geographic information system (SIG) / SPRING. Operational scheme. Creation and manipulation of databases. Creation of the information plane (PI). Choice of the data model. Images. Digital Terrain Model (MNT). Regular grids. Slicing ("fatiamento"). Objects. Cadastral map. Networks. Thematic data.
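Among the operations listed, "fatiamento" (slicing) converts a Digital Terrain Model stored on a regular grid into a thematic map by assigning each cell to an elevation band. A minimal sketch of the idea, using a made-up grid and made-up class boundaries (not SPRING's actual API):

```python
import numpy as np

# Hypothetical elevations in metres on a small regular grid (MNT).
dtm = np.array([
    [120.0, 135.0, 150.0],
    [160.0, 175.0, 190.0],
    [205.0, 220.0, 235.0],
])

# Slicing boundaries in metres: three bands, <150, 150-200, >=200.
bounds = [150.0, 200.0]

# Each cell gets the index of the elevation band it falls into,
# producing a thematic (class) grid from the numeric terrain model.
classes = np.digitize(dtm, bounds)
print(classes)
```

The resulting integer grid is the thematic layer; in a GIS it would then be rendered with one colour per class.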