960 results for topological equivalence
Abstract:
The second round of oil and gas exploration requires more precise imaging methods, velocity-depth models, and geometric descriptions of complicated geological bodies. Prestack time migration in inhomogeneous media is the technical basis of velocity analysis, prestack time migration from rugged surfaces, angle gathers, and multi-domain noise suppression. To realize this technique, several critical problems must be solved, such as parallel computation, velocity algorithms on non-uniform grids, and visualization. The key problem is the organic combination of migration theory with computational geometry. Starting from the technical problems of 3-D prestack time migration in inhomogeneous media and the requirements of non-uniform grids, parallel processing, and visualization, this thesis systematically studies three aspects, combining integral migration theory with computational geometry: the computational infrastructure of Green-function traveltimes for laterally varying velocity on non-uniform grids, parallel computation of Kirchhoff integral migration, and 3-D visualization. The results provide powerful technical support for implementing prestack time migration, and a convenient computational infrastructure for wavenumber-domain simulation in inhomogeneous media. The main results are as follows. 1. The symbol of the one-way-wave Lie algebra integral and the phase and Green-function traveltime expressions were analyzed, and simple 2-D time-domain expressions for them in inhomogeneous media were derived using the exponential map of pseudo-differential operators and structure-preserving Lie group algorithms. An infrastructure calculation in five parts (derivatives, commutators, the Lie algebra root tree, the exponential-map root tree, and traveltime coefficients) was proposed for evaluating the 3-D asymmetric traveltime equation containing lateral derivatives. 2. By studying this infrastructure for the 3-D asymmetric traveltime based on lateral velocity derivatives and combining it with computational geometry, a method was obtained for building a velocity library and interpolating on it by triangulation, which fits the traveltime-calculation requirements of parallel time migration and velocity estimation. 3. Combining velocity-library triangulation with computational geometry, a structure was built that is convenient for computing horizontal derivatives, commutators, and vertical integrals. Furthermore, a recursive algorithm for evaluating the Lie algebra integral and the exponential-map root tree (the Magnus expansion in mathematics) was constructed, and the asymmetric-traveltime algorithm based on lateral derivatives was implemented. 4. Based on graph theory and computational geometry, a minimum-cycle method was proposed for decomposing an area into polygonal blocks, which can be used as a topological representation of migration results and provides a practical method for the block representation and study of migration interpretation results. 5. Based on the MPI library, a practical parallel migration scheme over traces in arbitrary order was realized, using the lateral-derivative asymmetric traveltime calculation and the Kirchhoff integral method. 6. Visualization of geological and seismic data was studied with OpenGL and Open Inventor on the basis of computational geometry, and a 3-D visualization system for seismic imaging data was designed.
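As a point of reference for the Kirchhoff integral migration in result 5, the following is a minimal sketch of diffraction-stack (Kirchhoff) time migration for a 2-D zero-offset section, assuming a laterally constant RMS velocity; the thesis's asymmetric, lateral-derivative traveltimes, amplitude weights, and MPI parallelization are not reproduced, and all names and parameters are illustrative.

    import numpy as np

    def kirchhoff_time_migration(data, dt, dx, v_rms):
        """Diffraction-stack time migration of a zero-offset section.
        data: (n_t, n_x) traces; v_rms: (n_t,) RMS velocity vs. two-way time.
        Amplitude and obliquity weighting are omitted for brevity."""
        n_t, n_x = data.shape
        image = np.zeros_like(data)
        t0 = np.arange(n_t) * dt                           # two-way image time
        for ix in range(n_x):                              # image position
            for jx in range(n_x):                          # input trace position
                h = (ix - jx) * dx                         # horizontal distance
                t = np.sqrt(t0**2 + (2.0 * h / v_rms)**2)  # diffraction curve
                it = np.rint(t / dt).astype(int)
                ok = it < n_t
                image[ok, ix] += data[it[ok], jx]          # sum along hyperbola
        return image

Because each input trace's contribution to the image is independent and partial images simply sum, the loop over input traces is exactly what a trace-order-independent MPI parallelization can distribute.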
Abstract:
The real earth is far from an ideal elastic body. The movement of structures or fluids and scattering by thin layers inevitably affect seismic wave propagation, manifesting mainly as non-geometrical energy attenuation. Today, most theoretical research and applications assume that the media studied are fully elastic. Ignoring viscoelasticity can, in some circumstances, lead to amplitude and phase distortion, which in turn affects the traveltimes and waveforms we extract for imaging and inversion. To investigate the response of propagating seismic waves and improve imaging and inversion quality in complex media, we must not only account for the attenuation of real media but also implement it through efficient numerical methods and imaging techniques. In numerical modeling, the most widely used methods, such as finite-difference, finite-element, and pseudospectral algorithms, have difficulty improving accuracy and efficiency simultaneously. To partially overcome this difficulty, this paper devises a matrix differentiator method and an optimal convolutional differentiator method based on staggered-grid Fourier pseudospectral differentiation, as well as a staggered-grid optimal Shannon singular-kernel convolutional differentiator derived from distribution theory, and applies them to seismic wave propagation in viscoelastic media. Comparisons and accuracy analysis demonstrate that the optimal convolutional differentiator methods resolve the incompatibility between accuracy and efficiency well and are almost twice as accurate as finite differences of the same operator length; they efficiently reduce dispersion and provide high-precision waveform data. On the basis of frequency-domain wavefield modeling, we discuss how to solve the linear equations directly and point out that, compared with time-domain methods, frequency-domain methods handle multi-source problems more conveniently and incorporate medium attenuation much more easily. We also demonstrate the equivalence of the time- and frequency-domain methods by numerical tests under assumptions on the non-relaxation modulus and quality factor, and analyze the causes of the waveform differences. In frequency-domain waveform inversion, experiments were conducted with transmission, crosshole, and reflection data. Using the relation between medium scales and characteristic frequencies, we analyze the capacity of frequency-domain sequential inversion to resist noise and to handle the non-uniqueness of nonlinear optimization. In the crosshole experiments, we identify the main sources of inversion error and determine how an incorrect quality factor affects the inverted results. For surface reflection data, several frequencies were chosen with an optimal frequency-selection strategy and used to carry out sequential and simultaneous inversions, verifying how important low-frequency data are to the inverted results and how simultaneous inversion resists noise. Finally, I draw conclusions about the work in this dissertation, discuss in detail its existing and potential problems, and point out directions and theories worth pursuing and deepening, which, to some extent, should provide a helpful reference for researchers interested in seismic wave propagation and imaging in complex media.
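The staggered-grid Fourier pseudospectral differentiation that the optimized convolutional differentiators approximate can be sketched in a few lines. This is a minimal version, assuming a periodic 1-D grid; it is not the dissertation's optimal Shannon singular-kernel operator.

    import numpy as np

    def staggered_fourier_derivative(f, dx):
        """First derivative evaluated on a grid staggered by half a cell:
        multiply by ik in the wavenumber domain and shift by dx/2."""
        n = f.size
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
        shift = np.exp(1j * k * dx / 2.0)           # half-cell stagger
        return np.real(np.fft.ifft(1j * k * shift * np.fft.fft(f)))

In a velocity-stress viscoelastic scheme, velocities would live on integer nodes and stresses on half nodes, with the half-cell phase shift above supplying the stagger; a convolutional differentiator replaces the global FFT with a short optimized stencil, trading a little accuracy for locality and speed.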
Abstract:
This paper analyzes the landsliding process with nonlinear theories, especially the mechanism by which external factors (such as rainfall and groundwater) influence slope evolution. The author treats a landslide as the catastrophic sliding of an initially stationary or creeping slope triggered by a small perturbation. A full catastrophe analysis is performed for all possible scenarios in which a continuous change is imposed on the control parameters. As slip along the surface continues and rainfall-induced erosion proceeds, the control parameters of the slip surface may evolve so that a previously stable slope becomes unstable (i.e., a catastrophe occurs) when a small perturbation is imposed. The present analysis thus offers a plausible explanation of why slope failure can occur at a particular rainfall that is not the largest in the history of the slope. Analysis of the nonlinear dynamical model built for the slope evolution process shows that the relationship between the action of external environmental factors and the response of the slope system is complicatedly nonlinear. When the nonlinear action of the slope itself is comparable to the forcing of the external environment, chaos appears in the evolution of the slope, and the route to chaos is realized through period-doubling bifurcations. On the basis of displacement time series of the slope, a nonlinear dynamic model is set up in this paper using an improved Backus generalized linear inversion theory. Owing to the equivalence between an autonomous gradient system and a catastrophe model, a standard cusp catastrophe model can be obtained through variable substitution. The method is applied to displacement data of the Huangci and Wolongsi landslides to show how slopes evolve before landsliding. There is convincing statistical evidence that the nonlinear dynamic model yields satisfactory predictions. Most important of all, we find a sudden fall of D that indicates the occurrence of catastrophe (when D = 0).
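The abstract does not define D, but in the standard cusp model it reads naturally as the discriminant-type quantity of the equilibrium cubic; the sketch below makes that assumption explicit, with a hypothetical control-parameter path standing in for rainfall-driven softening of the slip surface.

    import numpy as np

    def cusp_discriminant(u, v):
        """For the cusp potential V(x) = x**4/4 + u*x**2/2 + v*x,
        equilibria solve x**3 + u*x + v = 0. When D = 4u^3 + 27v^2
        falls to zero, two equilibria merge and the state can jump
        catastrophically to the remaining branch."""
        return 4.0 * u**3 + 27.0 * v**2

    u_path = np.linspace(0.5, -1.5, 200)    # stiffness-like control parameter
    v_path = np.full_like(u_path, 0.3)      # load-like control parameter
    D = cusp_discriminant(u_path, v_path)
    step = np.where((D[:-1] > 0) & (D[1:] <= 0))[0]
    print("catastrophe at step", step)      # where D crosses zero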
Abstract:
3D wave-equation prestack depth migration is an effective tool for obtaining exact images of complex geological structures. It is one part of 3D seismic data processing, which belongs to high-dimensional signal processing and poses several difficult problems: how to handle high-dimensional operators, how to improve focusing, and how to construct the deconvolution operator. The realization of 3D wave-equation prestack depth migration not only accomplishes the leap from poststack to prestack but also provides an important means of attacking these difficult problems. In this thesis, I carry out a series of studies on these problems around 3D wave-equation prestack depth migration, using it as both goal and tool: the thesis serves the realization of 3D wave-equation prestack depth migration on the one hand and improves the migration result on the other. The thesis is expounded in five parts, summarized as follows. In the first part, I complete the projection of the 3D data volume onto lower-dimensional areas using large-matrix transfer and trace rearrangement, realizing linear processing of the high-dimensional signal. First, I present the mathematical expression of 3D seismic data and its physical meaning, present the basic idea of large-matrix transfer, and describe the realization of five transfer models as examples; second, I present the basic idea of and rules for the rearrangement and parallel computation of 3D traces, with an example. Concerning the conventional DMO focusing method, I first recall the history of DMO processing, give its fundamentals, and derive the DMO equation and its impulse response; I also prove the equivalence between DMO and prestack time migration from the kinematic character of DMO, and derive the relationship between wave-equation-based DMO and prestack time migration; finally, I give an example DMO processing flow and synthetic data for theoretical models. Concerning wave-equation prestack depth migration, I first recall the history of migration from time to depth, from poststack to prestack, and from 2D to 3D, summarize the main migration methods, and point out their merits and shortcomings; finally, I obtain common-image-point gathers using the decomposed migration program code. Concerning residual moveout, I first describe the Viterbi algorithm, based on Markov processes and compound decision theory, and how it solves the shortest-path problem; on this idea I realize residual moveout correction after 3D wave-equation prestack depth migration, and give an example on real 3D seismic data. Concerning the migration Green function, I first give the concept of the migration Green function and the 2D Green-function migration equation under the far-field approximation; second, I prove the equivalence of wave-equation depth-extrapolation algorithms and then derive the Green-function migration equation; finally, I present the response and migration result of the Green function for a point source and analyze the effect of the migration aperture on the prestack migration result. This research helps clarify the effect of migration aperture on migration results, and the study of Green-function deconvolution helps improve the focusing of migration.
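The Viterbi step reduces to finding a minimum-cost path through a panel of costs, for example negative semblance as a function of depth and moveout shift in a common-image gather. The following generic sketch makes that assumption; it is not the thesis's code, and the one-step transition constraint is an illustrative smoothness choice.

    import numpy as np

    def viterbi_pick(cost):
        """Minimum-cost path through a (n_depth, n_shift) panel, moving
        one row down per step and changing the shift index by at most
        one (Viterbi dynamic programming with backtracking)."""
        n_z, n_s = cost.shape
        acc = cost.copy()                         # accumulated cost
        back = np.zeros((n_z, n_s), dtype=int)    # best predecessor
        for iz in range(1, n_z):
            for js in range(n_s):
                lo, hi = max(0, js - 1), min(n_s, js + 2)
                k = lo + int(np.argmin(acc[iz - 1, lo:hi]))
                back[iz, js] = k
                acc[iz, js] += acc[iz - 1, k]
        path = np.empty(n_z, dtype=int)
        path[-1] = int(np.argmin(acc[-1]))
        for iz in range(n_z - 2, -1, -1):         # backtrack
            path[iz] = back[iz + 1, path[iz + 1]]
        return path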
Abstract:
Spatial population data, obtained through the pixeling method, facilitate many related studies. However, limited methods of precision analysis prevent the spread of spatial-distribution methods and hinder the application of spatial population data. This paper systematically analyzes the different aspects of spatial population data precision and recalculates the data with a reformed method, which opens the way for the spread of the pixeling method and provides support and reference for the application of spatial population data. The paper consists of the following parts: (1) characteristics of the error; (2) origins of the error; (3) improvement of the methods for calculating spatial population data. In the first place, based on analysis of the error traits, two aspects of spatial population data precision are characterized and analyzed: the numerical character and the spatial-distribution character. The latter, which receives greater emphasis in this paper, is depicted at two spatial scales, county and town. Essential to this research is the view that spatial distribution is as important as numerical value in analyzing the error of spatially distributed data. The results illustrate that the spatial population data error appears spatially in clusters even though it is random in terms of data statistics, which shows that spatial systematic error exists. Secondly, this paper establishes and validates the linear correlation between residential land area (from the 1:50000 map, taken as the real area) and population. It also analyzes in detail the relationship between the residential land area obtained from the land use map and the population at three spatial scales (village, town, and county), quantitatively describes the variation of residential density in different topological environments, and then analyzes the residential distribution traits and precision. From the above, it concludes that the error of the spatially distributed population is caused by a series of factors, such as the compactness of residents, loss of residential land, and urban population density. Eventually, the paper improves the method of pixeling population data with the help of the analysis of error characteristics and causes. It tests a 2-class regionalization based on the 1-class regionalization of China and re-sorts the residential data from the land use map. With the aid of GIS and comprehensive analysis of various data sources, it constructs a model in each 2-class district to calculate spatial population data. Finally, the LinYi Region is selected as the study area; its spatially distributed population is calculated and its precision analyzed, showing that the new spatially distributed population data are much improved. This research is fundamental work: it adopts large amounts of data of different types and contains many figures to support convincing and detailed conclusions.
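The area-population relation at the core of the reformed method can be illustrated with an ordinary least-squares fit; the numbers below are hypothetical stand-ins for village-scale observations, not data from the paper.

    import numpy as np

    area = np.array([0.42, 0.88, 1.35, 2.10, 3.05])   # residential area, km^2
    pop = np.array([510, 980, 1490, 2300, 3350])      # census population

    slope, intercept = np.polyfit(area, pop, 1)       # least-squares line
    r = np.corrcoef(area, pop)[0, 1]                  # linear correlation
    print(f"pop ~ {slope:.0f} * area + {intercept:.0f}, r = {r:.3f}")

A separate fit per 2-class district, as the paper proposes, amounts to estimating such a model within each region before allocating population to pixels.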
Abstract:
Fe-B ultrafine amorphous alloy particles (UFAAP) were prepared by chemical reduction of Fe3+ with NaBH4 and confirmed to be ultrafine amorphous particles by transmission electron microscopy and X-ray diffraction. The specific heat of the sample was measured with a high-precision adiabatic calorimeter, and a differential scanning calorimeter was used for thermal stability analysis. A topological structure of the Fe-B atoms is proposed to explain the two crystallization peaks and one melting peak observed at T = 600, 868, and 1645 K, respectively.
Abstract:
Job burnout has been a focus of occupational stress research. As a typical helping occupation, teaching has attracted wide attention and research in pedagogy and psychology, but a special subgroup of teachers, headmasters, who are the elites of basic education, has been ignored: a review of the related literature shows that research on principals' job burnout is nearly blank. With the development of society, people pay ever more attention to education and place more demands on headmasters, especially middle-school principals, who are required to be not only good educators, equipped with all the inner qualities of a teacher, but also good managers. The main purpose of this research was therefore to compare the principal group with an ordinary teacher group and to reveal underlying factors, such as background variables and psychological protective variables. A representative sample of 192 Wenzhou middle-school principals and a sample of 302 middle-school teachers were drawn from various schools. The educational version of the burnout inventory, a self-consistency scale, and an interpersonal trust scale were administered to the two samples, together with some demographic variables of interest. The applicability and equivalence of the three instruments were checked. Given the well-established reliability and cross-sample congruence of the measures, the difference between principals and teachers was tested, and the contributing factors were then analyzed step by step. The five background variables were examined one by one in the two samples separately. A multivariate analysis of covariance tested whether any difference remained between the two samples on the variables of interest. Regression analysis was used to further control the effects of self-harmony and interpersonal trust in testing the difference between the two samples, and mediation analysis was conducted to establish the relationships among the three constructs. The main results were as follows. 1. The internal consistency coefficients of all scales were good, with no difference between the two groups; the measurement equivalence of the three instruments was well established, so the measures could be applied to, and used to compare, the two samples. 2. The self-harmony and interpersonal trust of principals were better than those of ordinary middle-school teachers, and the job burnout of principals was significantly lower. 3. Demographic variables such as gender, age group, income level, district, and type of school were important influencing factors; the patterns of differences on these five variables in the two samples showed both similarities and distinctions. 4. After controlling for the background variables, significant differences remained between principals and teachers on the variables of interest. 5. Job burnout correlated negatively with self-harmony and interpersonal trust; that is, the lower the degree of self-harmony and interpersonal trust, the more serious the job burnout. The correlation between self-harmony and interpersonal trust was positive. 6. After statistically controlling for the background variables and psychological variables, significant differences still existed between the two groups; self-harmony and interpersonal trust were significant protective predictors of different aspects of job burnout. 7. Mediation analysis was conducted on the residual scores of the three constructs after controlling for the five variables and group membership: self-harmony partially mediated the relationship between interpersonal trust and job burnout. That is, interpersonal trust had an indirect effect on burnout mediated by self-harmony, as well as a direct effect.
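The partial mediation in result 7 corresponds to the classic three-regression decomposition. The sketch below runs it on simulated scores, with X = interpersonal trust, M = self-harmony, and Y = job burnout; all data and effect sizes are hypothetical.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.normal(size=300)                       # interpersonal trust
    m = 0.5 * x + rng.normal(size=300)             # self-harmony
    y = -0.4 * m - 0.2 * x + rng.normal(size=300)  # job burnout

    c = sm.OLS(y, sm.add_constant(x)).fit().params[1]   # total effect X -> Y
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]   # path X -> M
    full = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
    c_prime, b = full.params[1], full.params[2]         # direct X -> Y, path M -> Y
    print(f"indirect a*b = {a * b:.3f}, direct c' = {c_prime:.3f}, total c = {c:.3f}")

Partial mediation shows up as an indirect product a*b clearly different from zero alongside a still-significant direct path c'.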
Abstract:
One of the great puzzles in the psychology of visual perception is that the visual world appears to be a coherent whole despite our viewing it through a temporally discontinuous series of eye fixations. Investigators have attempted to explain this puzzle from the perspective of sequential visual information integration. In recent years it has been hypothesized that information maintained in visual short-term memory (VSTM) can gradually become a visual mental image during the delay in the visual buffer and be integrated with currently perceived information. Some preliminary studies have investigated the integration between VSTM and visual percepts, but further research is required to answer several questions about the spatio-temporal characteristics, information representation, and mechanism of integrating sequential visual information. Based on the theory of similarity between visual mental images and visual perception, this research (comprising three studies) employed the temporal integration paradigm and the empty-cell localization task to further explore the spatio-temporal characteristics, information representation, and mechanism of integrating sequential visual information (sequential arrays). Study 1 further explored the temporal characteristics by examining the effects of the encoding time of sequential stimuli on integration. Study 2 further explored the spatial characteristics by investigating the effects of changes in spatial characteristics on integration. Study 3 explored the representation of information maintained in VSTM and the integration mechanism by combining behavioral experiments with eye tracking. The results indicated that: (1) sequential arrays could be integrated without strategic instruction; increasing the duration of the first array improved performance, whereas increasing the duration of the second array did not; the temporal correlation model could not explain sequential array integration under long-ISI conditions. (2) Stimulus complexity influenced not only the overall performance on sequential arrays but also the ISI values at which performance reached asymptote; sequential arrays could still be integrated when their spatial characteristics changed; during the ISI, constructing and manipulating the visual mental image of array 1 were two separate processing phases. (3) During integration of sequential arrays, people represented the pattern constituted by the object images maintained in VSTM, and the topological characteristics of those images affected fixation location; the image-perception integration hypothesis was supported when the number of dots in array 1 was smaller than the number of empty cells, and the convert-and-compare hypothesis was supported when it was equal or larger. These findings not only help us better understand the process of sequential visual information integration but also have significant practical application in the design of visual interfaces.
Abstract:
During the past 11 years, with the rapid development of the Internet, more and more psychologists have begun to realize and take advantage of it, leading to a growing number of psychological tests administered on the Internet for data collection. But there has been controversy about the reliability and representativeness of this new method. To examine the applicability of online surveys and how different types of scales behave on the Internet, we first revised the measurement instruments and then investigated the equivalence of online surveys and paper-and-pencil assessment at three levels: the sample level, the scale level, and the item level. Both classical test theory (CTT) and item response theory (IRT) were used to analyze the invariance of different types of scales administered on the Internet. The main conclusions are as follows. 1. In the sample-based study, a self-selected online survey sample was compared with a randomly sampled paper-and-pencil sample. There was no gender difference between them (p>0.05), but the online sample was characterized by higher education, higher income, and younger age (88% with post-secondary education or above, and 71% aged 20-29). There were significant differences in the scores of all scales between the online survey and the paper-and-pencil assessment (p<0.01); with demographics controlled, there was no significant difference in Neuroticism between the survey modes (p>0.05). 2. With a within-group design, equivalence was established in reliability, construct validity, and average scores for the BI (Attitude toward Brand Importance), BT (Attitude toward Brand Switching), Extraversion, and Conscientiousness scales. 3. At the item level, analysis based on IRT showed that the two-parameter logistic model (2PLM) is appropriate for personality and attitude scales. For the personality scale, some items showed DIF in the Openness-to-Experience and Agreeableness subscales; however, there were no significant differences in test functioning. 4. Exploring the psychometric properties of five-, six-, seven-, and ten-point answer formats showed that measurement validity differed between the online survey and the paper-and-pencil test, and that the six-point format had lower reliability and validity. In conclusion, the results support the online administration of personality scales, but attitude scales must be chosen prudently.
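The 2PLM in conclusion 3 models the probability of endorsing an item as a logistic function of the latent trait; a minimal sketch, with hypothetical item parameters:

    import numpy as np

    def icc_2pl(theta, a, b):
        """Two-parameter logistic item characteristic curve:
        discrimination a, difficulty b, latent trait theta."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)
    print(icc_2pl(theta, a=1.2, b=0.5))

In a DIF analysis, an item is flagged when its estimated a or b differs between groups (here, online versus paper-and-pencil respondents) at equal levels of the latent trait.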
Abstract:
In non-Western societies, research on social development and personality change has focused on economic development and social modernization. The present study explores the relationship between social transformation and personality change in Chinese people using the indigenous personality measure CPAI (Chinese Personality Assessment Inventory); the influence of the CPAI measurement itself and of measurement theory was also taken into consideration. In Study 1, two sets of CPAI data collected ten years apart were analyzed, and the CPAI-2 data were analyzed in terms of the modernization level of the various cities in which they were collected; this study, however, did not consider the importance of measurement equivalence of the CPAI. In Study 2, we detected DIF (differential item functioning) across the period groups to confirm whether the CPAI was equivalent for people of different periods, using both CTT and IRT methods; the outcome showed that there were some DIF items. In Study 3, to ensure that the personality measurement was fair to people of different periods, we retained only items with DIF effect sizes below 0.01 and used the IRT method to estimate test-takers' personality; cohort analysis was then used to explore the pattern of personality change of Chinese people. In Study 4, we factor-analyzed the DIF items to find the relation between social transformation and the latent personality variables composed of DIF items. From these four studies we draw the following conclusions. (1) The 22 CPAI traits can be divided into two categories. Type I traits did not change with age, period, or cohort: Logical vs Affective Orientation, Enterprise, Responsibility, Inferiority vs Self-Acceptance, Optimism vs Pessimism, Face, Family, Defensiveness, and Graciousness vs Meanness. Type II traits changed with age, period, and cohort: Leadership, Self vs Social Orientation, Veraciousness vs Slickness, Traditionalism vs Modernity, Harmony, Renqing, Meticulousness, Extraversion vs Introversion, Emotionality, Practical Mindedness, Internal vs External Locus of Control, Thrift vs Extravagance, and Discipline. Meanwhile, the DIF items measured five psychological characteristics that changed greatly with age, period, and cohort: a cynicism-realism attitude to life, psychological maladjustment, the Waiyuanneifang coping style, self-efficacy, and the value of individualism. (2) In sum, Chinese people in 1992 were more traditional than those in 2001; with ten years of rapid development and in line with the needs of the market economy, Chinese people became more individualistic. (3) The CTT and IRT DIF methods were comparable, but in general the IRT method was more accurate and valid both in detecting DIF and in estimating personality. (4) The DIF outcomes showed that the CPAI has good item validity, and that it is possible to develop subscales from CPAI items to assess particular psychological characteristics. In this study, personality traits and psychological characteristics could be divided into three categories according to their stability and variability, and the outcome supported the Six Factor Model hypothesis; these findings have theoretical significance. Meanwhile, exploring the relation between social development and personality change should help Chinese people cope with a rapidly changing society. We also found it possible to develop subscales from CPAI items to assess particular personality traits, which has practical use, and the use of different measurement theories together with cohort analysis embodies some methodological innovation.
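An effect-size screen like the 0.01 cutoff in Study 3 can be illustrated with a logistic-regression DIF check based on the change in pseudo-R-squared when a group term is added; the criterion, the data, and the cutoff's exact definition here are assumptions for illustration, not details taken from the study.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    theta = rng.normal(size=n)               # latent trait (total-score proxy)
    group = rng.integers(0, 2, size=n)       # 0 = earlier cohort, 1 = later
    p = 1.0 / (1.0 + np.exp(-(1.2 * theta - 0.5 + 0.6 * group)))
    y = rng.binomial(1, p)                   # simulated item with uniform DIF

    base = sm.Logit(y, sm.add_constant(theta)).fit(disp=0)
    full = sm.Logit(y, sm.add_constant(np.column_stack([theta, group]))).fit(disp=0)
    delta_r2 = full.prsquared - base.prsquared
    print(f"pseudo-R2 change = {delta_r2:.4f}")  # retain item only if below cutoff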
Abstract:
This paper introduces Denotational Proof Languages (DPLs). DPLs are languages for presenting, discovering, and checking formal proofs. In particular, in this paper we discuss type-alpha DPLs---a simple class of DPLs for which termination is guaranteed and proof checking can be performed in time linear in the size of the proof. Type-alpha DPLs allow for lucid proof presentation and for efficient proof checking, but not for proof search. Type-omega DPLs allow for search as well as simple presentation and checking, but termination is no longer guaranteed and proof checking may diverge. We do not study type-omega DPLs here. We start by listing some common characteristics of DPLs. We then illustrate with a particularly simple example: a toy type-alpha DPL called PAR, for deducing parities. We present the abstract syntax of PAR, followed by two different kinds of formal semantics: evaluation and denotational. We then relate the two semantics and show how proof checking becomes tantamount to evaluation. We proceed to develop the proof theory of PAR, formulating and studying certain key notions such as observational equivalence that pervade all DPLs. We then present NDL, a type-alpha DPL for classical zero-order natural deduction. Our presentation of NDL mirrors that of PAR, showing how every basic concept that was introduced in PAR resurfaces in NDL. We present sample proofs of several well-known tautologies of propositional logic that demonstrate our thesis that DPL proofs are readable, writable, and concise. Next we contrast DPLs to typed logics based on the Curry-Howard isomorphism, and discuss the distinction between pure and augmented DPLs. Finally we consider the issue of implementing DPLs, presenting an implementation of PAR in SML and one in Athena, and end with some concluding remarks.
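The paper defines PAR's syntax and semantics precisely; purely to give the flavor of how proof checking becomes tantamount to evaluation in a type-alpha DPL, here is a hypothetical toy checker whose encoding, rule set, and names are my own assumptions rather than the paper's.

    # Judgments assert the parity of named quantities, e.g. ("even", "a").
    # A proof is a nested tuple; check() evaluates it and returns the
    # judgment it establishes, or raises an error on a bogus step.
    def check(proof, assumptions):
        tag = proof[0]
        if tag == "claim":                      # cite an assumed judgment
            judgment = proof[1]
            if judgment in assumptions:
                return judgment
            raise ValueError(f"not assumed: {judgment}")
        if tag == "plus":                       # parity of a sum
            p1, x = check(proof[1], assumptions)
            p2, y = check(proof[2], assumptions)
            return ("even" if p1 == p2 else "odd", f"{x}+{y}")
        raise ValueError(f"unknown rule: {tag}")

    proof = ("plus", ("claim", ("even", "a")), ("claim", ("odd", "b")))
    print(check(proof, {("even", "a"), ("odd", "b")}))   # ('odd', 'a+b')

Since evaluation visits each proof node exactly once, checking runs in time linear in the size of the proof, matching the type-alpha guarantee.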
Abstract:
The goal of this work is to navigate through an office environment using only visual information gathered from four cameras placed onboard a mobile robot. The method is insensitive to physical changes within the room it is inspecting, such as moving objects. Forward and rotational motion vision are used to find doors and rooms, and these can be used to build topological maps. The map is built without the use of odometry or trajectory integration. The long-term goal of the project described here is for the robot to build simple maps of its environment and to localize itself within this framework.
Abstract:
This dissertation presents a model of the knowledge a person has about the spatial structure of a large-scale environment: the "cognitive map". The functions of the cognitive map are to assimilate new information about the environment, to represent the current position, and to answer route-finding and relative-position problems. This model (called the TOUR model) analyzes the cognitive map in terms of symbolic descriptions of the environment and operations on those descriptions. Knowledge about a particular environment is represented in terms of route descriptions, a topological network of paths and places, multiple frames of reference for relative positions, dividing boundaries, and a structure of containing regions. The current position is described by the "You Are Here" pointer, which acts as a working memory and a focus of attention. Operations on the cognitive map are performed by inference rules which act to transfer information among different descriptions and the "You Are Here" pointer. The TOUR model shows how the particular descriptions chosen to represent spatial knowledge support assimilation of new information from local observations into the cognitive map, and how the cognitive map solves route-finding and relative-position problems. A central theme of this research is that the states of partial knowledge supported by a representation are responsible for its ability to function with limited information or computational resources. The representations in the TOUR model provide a rich collection of states of partial knowledge, and therefore exhibit flexible, "common-sense" behavior.
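As a concrete rendering of route-finding over a topological network of places and paths, the following is a breadth-first search over a small hypothetical place graph; the TOUR model itself works with richer symbolic descriptions and inference rules, so this illustrates only the graph-search core.

    from collections import deque

    # Hypothetical place connectivity (undirected, listed both ways).
    network = {
        "office":   ["corridor"],
        "corridor": ["office", "lobby", "lab"],
        "lobby":    ["corridor", "street"],
        "lab":      ["corridor"],
        "street":   ["lobby"],
    }

    def find_route(start, goal):
        """Return a list of places from start to goal, or None."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            route = frontier.popleft()
            if route[-1] == goal:
                return route
            for nxt in network[route[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(route + [nxt])
        return None

    print(find_route("office", "street"))   # ['office', 'corridor', 'lobby', 'street']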
Abstract:
The equivalence of two ways of calculating overlap integrals, i.e., the Sharp-Rosenstock generating-function method and the Doktorov coherent-state method, has been proved. On the basis of the generating function of the overlap integrals, a new closed-form expression for the Franck-Condon integrals of overlapping multidimensional harmonic oscillators has been derived exactly. In addition, some useful analytical expressions for calculating multimode Franck-Condon factors are given.
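The closed form in the abstract covers the general multidimensional case. As a familiar point of reference, not taken from this paper, the one-dimensional special case of two displaced oscillators with equal frequency reduces to a Poisson distribution in the Huang-Rhys factor:

    % 1-D displaced oscillators, equal frequency \omega, displacement d
    S = \frac{m\,\omega\,d^{2}}{2\hbar},
    \qquad
    \left|\langle 0 \mid n \rangle\right|^{2} = e^{-S}\,\frac{S^{n}}{n!}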
Abstract:
Meng, Q., & Lee, M. (2005). Novelty and habituation: The driving forces in early stage learning for developmental robotics. In S. Wermter, G. Palm, & M. Elshaw (Eds.), Biomimetic neural learning for intelligent robots: Intelligent systems, cognitive robotics, and neuroscience (Lecture Notes in Computer Science, pp. 315-332). Springer Berlin Heidelberg.