878 results for positivity preserving
Abstract:
The second round of oil and gas exploration requires more precise imaging methods, velocity-depth models, and geometric descriptions of complicated geological bodies. Prestack time migration in inhomogeneous media is the technical basis for velocity analysis, prestack time migration from rugged topography, angle gathers, and multi-domain noise suppression. Realizing this technique requires solving several critical technical problems, such as parallel computation, velocity algorithms on non-uniform grids, and visualization; the key problem is the organic combination of migration theory with computational geometry. Starting from the technical problems of 3-D prestack time migration in inhomogeneous media and the requirements of non-uniform grids, parallel processing, and visualization, this thesis systematically studies three aspects: the computational infrastructure for Green-function traveltimes in laterally varying velocity fields on non-uniform grids, parallel computation of Kirchhoff integral migration, and 3-D visualization, combining integral migration theory with computational geometry. The results provide strong technical support for implementing prestack time migration and a convenient computational infrastructure for wavenumber-domain simulation in inhomogeneous media. The main results are as follows: 1. The symbol of the one-way wave Lie-algebra integral, its phase, and the Green-function traveltime expressions were analyzed, and simple 2-D time-domain expressions of the Lie-algebra integral symbol, phase, and Green-function traveltime in inhomogeneous media were derived using the exponential map of pseudo-differential operators and geometric-structure-preserving Lie-group algorithms. For the 3-D asymmetric traveltime equation containing lateral derivatives, a computational infrastructure of five parts was proposed: derivatives, commutators, the Lie-algebra rooted tree, the exponential-map rooted tree, and traveltime coefficients. 2. By studying this infrastructure for the 3-D asymmetric traveltime based on lateral velocity derivatives and combining it with computational geometry, a method was obtained for building a velocity library and interpolating on it by triangulation, which meets the traveltime-computation requirements of parallel time migration and velocity estimation. 3. Combining the triangulated velocity library with computational geometry, a structure was built that makes it convenient to compute horizontal derivatives, commutators, and vertical integrals; furthermore, a recursive algorithm for evaluating the Lie-algebra integral and the exponential-map rooted tree (the Magnus expansion in mathematics) was constructed, and the asymmetric-traveltime algorithm based on lateral derivatives was implemented. 4. Based on graph theory and computational geometry, a minimum-cycle method was proposed for decomposing an area into polygonal blocks, which can serve as a topological representation of migration results and provides a practical approach to block representation and interpretation of migration results. 5. Based on the MPI library, a parallel migration algorithm operating on traces in arbitrary order was implemented, using the asymmetric traveltime computation with lateral derivatives and the Kirchhoff integral method. 6. Visualization of geological and seismic data was studied with OpenGL and Open Inventor on the basis of computational geometry, and a 3-D visualization system for seismic imaging data was designed.
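The Kirchhoff prestack time migration that this thesis parallelizes can be illustrated with a minimal 2-D sketch in Python. This is not the thesis's asymmetric, Lie-algebra-derived traveltime: it assumes the standard double-square-root traveltime with an RMS velocity (for example, interpolated from a triangulated velocity library) and simply sums each input trace into the image along that traveltime. In the thesis's parallel scheme each MPI process would handle a subset of traces and the partial images would be summed; the loop here is serial for clarity.

```python
import numpy as np

def kirchhoff_pstm_2d(traces, src_x, rcv_x, dt, image_x, image_t, v_rms):
    """Minimal 2-D Kirchhoff prestack time migration sketch.

    traces  : (ntrace, nt) input seismic traces
    src_x   : (ntrace,) source surface positions
    rcv_x   : (ntrace,) receiver surface positions
    dt      : time sample interval [s]
    image_x : (nx,) image-point surface positions
    image_t : (nt0,) zero-offset two-way times of the image grid [s]
    v_rms   : (nt0, nx) RMS velocity at each image point (e.g. interpolated
              from a triangulated velocity library)
    """
    nt = traces.shape[1]
    image = np.zeros((len(image_t), len(image_x)))
    for itr in range(traces.shape[0]):
        for ix, x in enumerate(image_x):
            hs = x - src_x[itr]          # source-to-image horizontal distance
            hr = x - rcv_x[itr]          # receiver-to-image horizontal distance
            v = v_rms[:, ix]
            # double-square-root traveltime: source leg plus receiver leg
            t = (np.sqrt((image_t / 2) ** 2 + (hs / v) ** 2)
                 + np.sqrt((image_t / 2) ** 2 + (hr / v) ** 2))
            it = np.rint(t / dt).astype(int)
            ok = it < nt
            image[ok, ix] += traces[itr, it[ok]]
    return image
```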
Abstract:
In exploration geophysics, velocity analysis and all migration methods except reverse-time migration are based on ray theory or the one-way wave equation, so multiples are regarded as noise and must be attenuated. Attenuating multiples is very important for structural imaging and amplitude-preserving migration, so how to predict and attenuate internal multiples effectively is an interesting research topic in both theory and application. There are two wave-equation-based methods for predicting internal multiples in prestack data: the common focus point method and the inverse scattering series method. Comparing the two, we found four problems with the common focus point method: 1. it depends on a velocity model; 2. only the internal multiples related to a single layer can be predicted at a time; 3. the computing procedure is complex; 4. it is difficult to apply in complex media. To overcome these problems, we adopted the inverse scattering series method. However, this method also has problems: 1. the computing cost is high; 2. it is difficult to predict internal multiples at far offsets; 3. it cannot predict internal multiples in complex media. Among these, the high computing cost is the biggest barrier in field seismic processing, so I present improved 1D and 1.5D algorithms that reduce computing time. In addition, I propose a new algorithm to solve the problem that arises in subtraction, especially for surface-related multiples. The original contributions of this research are as follows: 1. An improved 1D inverse scattering series prediction algorithm was derived, with very high computing efficiency: about twelve times faster than the old algorithm in theory, and about eighty times faster in practice owing to its lower spatial complexity. 2. An improved 1.5D inverse scattering series prediction algorithm was derived, which moves the prediction from the pseudo-depth/wavenumber domain to the TX domain. The improved algorithm offers higher computing efficiency, applicability to many kinds of geometries, lower predictive noise, and independence from the wavelet. 3. A new subtraction algorithm was proposed. It does not try to overcome non-orthogonality; instead, it uses the distribution of the non-orthogonality in the TX domain to estimate the true wavelet by filtering. The method is highly effective in model tests. The improved 1D and 1.5D inverse scattering series algorithms can predict internal multiples, and after filtering and subtraction among seismic traces within a time window, internal multiples can be attenuated to some degree. The proposed 1D and 1.5D algorithms proved effective on both synthetic and field data, and the new subtraction algorithm is effective on complex synthetic models.
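For orientation, the 1D (normal-incidence) inverse scattering series internal multiple prediction that the improved algorithms accelerate combines three copies of the water-speed-migrated data b1(z) in a lower-higher-lower configuration of pseudo-depths. The sketch below is a direct, unoptimized discretization of that triple integral, not the thesis's improved algorithm; the data b1, the pseudo-depth grid, and the small separation epsilon are assumed inputs, and the cubic cost it exhibits is exactly what the improved 1D and 1.5D schemes are designed to avoid.

```python
import numpy as np

def iss_internal_multiples_1d(b1, dz, k, eps):
    """Brute-force 1D inverse scattering series internal multiple prediction.

    b1  : (nz,) constant-velocity (water-speed) migrated data in pseudo-depth
    dz  : pseudo-depth sample interval
    k   : (nk,) vertical wavenumbers at which to evaluate the prediction
    eps : small pseudo-depth separation between sub-events

    Evaluates b3(k) as a sum over z1 > z2 < z3 (lower-higher-lower) of
    exp(ik z1) b1(z1) * exp(-ik z2) b1(z2) * exp(ik z3) b1(z3).
    """
    nz = len(b1)
    z = np.arange(nz) * dz
    neps = int(round(eps / dz))
    b3 = np.zeros(len(k), dtype=complex)
    for i1 in range(nz):                     # z1: deeper sub-event
        for i2 in range(0, i1 - neps):       # z2 < z1 - eps: shallower sub-event
            for i3 in range(i2 + neps, nz):  # z3 > z2 + eps: deeper sub-event
                b3 += (b1[i1] * b1[i2] * b1[i3]
                       * np.exp(1j * k * (z[i1] - z[i2] + z[i3])))
    return b3 * dz ** 3
```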
Abstract:
Seismic surveying is the most effective geophysical method in oil and gas exploration and development. As a principal means of processing and interpreting seismic data, impedance inversion occupies a special position, because impedance is the link that connects seismic data with well-log and geological information, and it is essential for predicting reservoir properties and sand bodies. In practice, the result of traditional impedance inversion is not ideal: the mathematical inverse problem of impedance is ill-posed, so the result is unstable and non-unique, and regularization must be introduced. Most regularizations in the existing literature are simple ones that assume the image (or model) is globally smooth. An actual geological model, however, is not only composed of smooth regions but is also separated by distinct edges, and these edges are a very important attribute of the model; it is difficult to preserve such features and to keep edges from being smoothed away. In this paper we therefore propose an impedance inversion method controlled by hyperparameters with edge-preserving regularization, which improves both convergence speed and the inversion result. To preserve edges, the potential function of the regularization should satisfy nine conditions, including basic assumptions, edge preservation, and convergence assumptions; a model with a clean background and sharp anomalies can then be obtained. Several potential functions and their corresponding weight functions are presented in this paper; model calculations show that the potentials φ_L, φ_HL, and φ_GM meet the required inversion precision. For locally constant, planar, and quadric models we present the corresponding neighborhood systems of the Markov random field associated with the regularization term. We linearize the nonlinear regularization using half-quadratic regularization, which not only preserves edges but also simplifies the inversion and allows linear methods to be used. Two regularization parameters (hyperparameters), λ² and δ, are introduced in the regularization term: λ² balances the influence of the data term against the prior term, and δ is a calibration parameter used to adjust the gradient value at discontinuities (formation interfaces). In the inversion procedure, the choice of initial hyperparameter values and the way the hyperparameters are updated influence convergence speed and the inversion result. We give rough initial values of the hyperparameters from a trend curve of φ-(λ², δ) and from a method for computing an upper bound on the hyperparameters, and we update the hyperparameters either by a fixed coefficient or by the maximum likelihood method, simultaneously with the inversion. The inversion itself uses a Fast Simulated Annealing (FSA) algorithm, which overcomes local extrema without depending on the initial value and yields a globally optimal result. We also discuss in detail the convergence conditions of FSA, the Metropolis-Hastings acceptance probability, the thermal schedule based on Gibbs sampling, and other methods integrated with FSA; this material helps in understanding and improving FSA.
Calculations on a synthetic model and application to field data show that the proposed impedance inversion method offers high precision, practicality, and clear benefits.
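As a concrete illustration of the edge-preserving machinery described above, the sketch below evaluates two commonly used potential functions (the Hebert-Leahy and Geman-McClure potentials, which are plausible readings of φ_HL and φ_GM) together with the half-quadratic weights w(t) = φ'(t)/(2t) that turn the nonlinear penalty into a weighted quadratic one, and assembles an objective with the hyperparameters λ² and δ as the abstract describes. The exact functional forms and the linearized forward operator used in the paper are assumptions here.

```python
import numpy as np

# Edge-preserving potentials (assumed forms: Hebert-Leahy and Geman-McClure)
# and their half-quadratic weights w(t) = phi'(t) / (2 t).
def phi_HL(t):
    return np.log(1.0 + t ** 2)

def w_HL(t):
    return 1.0 / (1.0 + t ** 2)

def phi_GM(t):
    return t ** 2 / (1.0 + t ** 2)

def w_GM(t):
    return 1.0 / (1.0 + t ** 2) ** 2

def objective(m, d, G, lam2, delta, phi=phi_HL):
    """Data misfit plus edge-preserving prior on scaled model gradients.

    m     : (nm,) impedance model
    d     : (nd,) observed data
    G     : (nd, nm) linearized forward operator
    lam2  : hyperparameter balancing data term and prior term (lambda^2)
    delta : hyperparameter scaling the gradient at discontinuities
    """
    grad = np.diff(m) / delta                 # scaled vertical impedance gradient
    return np.sum((d - G @ m) ** 2) + lam2 * np.sum(phi(grad))
```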
Abstract:
According to the influential dual-route model of reading (Coltheart, Rastle et al., 2001), there are two routes to access the meaning of visual words: a direct route via orthography (orthography-semantics) and an indirect route via phonology (phonology-semantics). Because of the dramatic difference between written Chinese and alphabetic languages, it is still debated whether Chinese readers have the same semantic activation processes as readers of alphabetic languages. In this study, the semantic activation processes in alphabetic German and logographic Chinese were compared. Since the N450 for incongruent color words in Stroop tasks is induced by the semantic conflict between the meaning of the incongruent color word and the to-be-named color, this component can be taken as an index of semantic activation of incongruent color words in Stroop tasks. Two cross-script Stroop experiments were used to investigate the semantic activation processes in Chinese and German: the first focused on the role of phonology, the second on the relative importance of orthography. Cultural differences in cognitive processing between individuals in Western and Eastern countries have been reported (Nisbett & Miyamoto, 2005). To exclude potential differences in basic cognitive processes such as visual discrimination during reading, a visual oddball experiment with non-lexical materials was conducted with all participants; as indicated by the P300 elicited by deviant stimuli in both groups, no group difference was observed. In the first Stroop experiment, color words (e.g., "green"), color-word associates (e.g., "grass"), and homophones of color words were used. These words were embedded in color patches of either congruent color (e.g., the word "green" in a green patch) or incongruent colors (e.g., the word "green" in a red, yellow, or blue patch). The key question was whether homophones in both languages would induce behavioral and ERP Stroop effects similar to those induced by color words; it was also of interest to what extent the N450 is related to semantic conflict. Nineteen Chinese and twenty German adult readers were asked to respond to the background color of the words in the Stroop experiment in their native languages by pressing the corresponding keys. In the behavioral data, the incongruent conditions (incongruent color words, incongruent color-word associates, incongruent homophones) had significantly longer reaction times than the corresponding congruent conditions. All incongruent conditions in the German group elicited an N450 in the 400-500 ms time window. In the Chinese group, the N450 in the same time window was also observed for incongruent color words and incongruent color-word associates. These results indicate that the N450 is very sensitive to semantic conflict: even words merely associated with colors (e.g., "grass") elicited a similar N450. However, the N450 was absent for incongruent homophones of color words in the Chinese group. Instead, in a later time window (600-800 ms), incongruent homophones elicited a positivity over left posterior regions compared with congruent homophones. A similar positivity was also observed for color words in the 700-1000 ms time window in the Chinese group and for incongruent color words and homophones in the 600-1000 ms time window in the German group.
These results indicate that phonology plays an important role in semantic activation in German, but not in Chinese. In the second Stroop experiment, color words and pseudowords visually similar to color words in both languages were used as materials. Another group of eighteen Chinese and twenty German readers performed the Stroop experiment in their native languages while ERPs were recorded. In the behavioral data, strong and comparable Stroop effects (computed by subtracting the reaction times of the congruent conditions from those of the incongruent conditions) were observed. In the ERP data, both incongruent color words and incongruent pseudowords elicited an N450 over the whole scalp in both groups, indicating that orthography plays an equally important role in semantic activation in both languages. The results of the two Stroop experiments support the view that semantic activation in Chinese readers differs significantly from that in German readers: the former rely mainly on the direct route (orthography-semantics), while the latter use both the direct route and the indirect route (phonology-semantics). These findings also indicate that the characteristics of different languages shape the semantic activation processes.
Abstract:
The Zeigarnik effect refers to enhanced memory performance for unfinished tasks. Studies on insight using hemi-visual-field presentation have also found that, after failing to solve a problem, hints to the problem are received more effectively and more often lead to an insight experience when presented to the left visual field (right hemisphere) than to the right visual field, especially when the hints appear after a delay. Thus the right hemisphere may play an important role in preserving information about unsolved problems and in processing related cues. To examine this further, we used a Chinese character chunking task to investigate brain activity during the stage of failing to solve problems and during hint presentation, using event-related potentials (ERP) and functional MRI. The fMRI results showed that bilateral BA10 was more strongly activated when participants viewed hints for unsolved problems, which we propose reflects the processing of information about failed problems; however, there was no hemispheric difference. The ERP results after the attempt at the problems showed that unsolved problems elicited a more positive P150 over the right frontal cortex, whereas solved problems showed a left-hemispheric P150 advantage. When hints were presented, the P2 amplitudes to hints were modulated by problem status only in the right hemisphere, not in the left. Our results confirm the hypothesis that failure to solve a problem triggers perseverance processes in the right hemisphere, making it more sensitive to information related to the failed problem.
Abstract:
The purpose of this study was to examine the cognitive and neural mechanisms underlying serial position effects, using behavioral experiments and ERPs (event-related potentials), for 11-item lists of Chinese characters in a very-short-term paradigm and in the continuous-distractor paradigm. The results demonstrated that when the list length was 11 Chinese characters and the presentation time, inter-item interval, and retention interval were all 400 ms, the primacy effect and the recency effect reflected associative memory and absolute memory, respectively: retrieval of items from the primacy portion depended mainly on context cues, whereas retrieval of items from the recency portion depended mainly on the memory trace. The same result was obtained in the continuous-distractor paradigm (presentation time 1 s, inter-item interval 12 s, retention interval 30 s). The behavioral results revealed robust serial position effects in the continuous-distractor paradigm and a different retrieval process for items in the primacy portion versus the recency portion of the serial position curve. The behavioral data from the ERP experiment showed that responses to prime and recent items differed in both reaction time and accuracy: retrieval time for primacy items was longer than for recency items, and retrieval accuracy for primacy items was lower than for recency items, meaning that retrieval of primacy items required more cognitive processing. Recent items, compared with prime items, evoked more positive ERPs; this enhanced positivity occurred in a positive component peaking around 360 ms. For the same retrieval direction (forward or backward), a significant difference in this positive component was found between the retrieval of prime items and the retrieval of recent items, but there was no significant difference between forward and backward retrieval at either the primacy or the recency portion of the serial position curve, indicating that the two kinds of retrieval (forward and backward) within the same portion of the curve share the same properties. These findings fit closely with the notion of a distinction between associative memory and absolute memory.
Abstract:
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
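As an illustration of the regularization-network idea behind GRBF, the sketch below fits a plain radial basis function network by regularized least squares: one Gaussian unit per prototype, with strict interpolation relaxed by a ridge term. The choice of centers as a subset of the data, the kernel width, and the regularization weight are assumptions of this toy example, not the paper's construction.

```python
import numpy as np

def gaussian_design(X, centers, sigma):
    """Design matrix of Gaussian radial basis functions G(||x - t||)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, centers, sigma, lam):
    """Regularized least-squares fit of f(x) = sum_i c_i G(||x - t_i||)."""
    G = gaussian_design(X, centers, sigma)
    return np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)

def predict_rbf(Xnew, centers, sigma, c):
    return gaussian_design(Xnew, centers, sigma) @ c

# Toy usage: approximate a 1-D function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
centers = X[::4]                       # a subset of the data as prototypes
c = fit_rbf(X, y, centers, sigma=0.7, lam=1e-2)
y_hat = predict_rbf(X, centers, sigma=0.7, c=c)
```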
Abstract:
This report studies when and why two Hidden Markov Models (HMMs) may represent the same stochastic process. HMMs are characterized in terms of equivalence classes whose elements represent identical stochastic processes. This characterization yields polynomial time algorithms to detect equivalent HMMs. We also find fast algorithms to reduce HMMs to essentially unique and minimal canonical representations. The reduction to a canonical form leads to the definition of 'Generalized Markov Models' which are essentially HMMs without the positivity constraint on their parameters. We discuss how this generalization can yield more parsimonious representations of stochastic processes at the cost of the probabilistic interpretation of the model parameters.
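To make the notion of equivalence concrete, the sketch below compares two discrete HMMs by computing, with the standard forward recursion, the probability each assigns to every observation string up to a given length; two HMMs that represent the same stochastic process must agree on all of these. This brute-force check only illustrates the definition (its cost grows exponentially with the string length) and is not the report's polynomial-time algorithm.

```python
import itertools
import numpy as np

def sequence_prob(pi, A, B, obs):
    """Forward algorithm: probability of the observation sequence `obs`.

    pi : (n,) initial state distribution
    A  : (n, n) transition matrix, A[i, j] = P(state j | state i)
    B  : (n, m) emission matrix, B[i, o] = P(symbol o | state i)
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def agree_up_to(hmm1, hmm2, n_symbols, max_len, tol=1e-12):
    """Check that two HMMs assign equal probability to all strings up to max_len."""
    for length in range(1, max_len + 1):
        for obs in itertools.product(range(n_symbols), repeat=length):
            if abs(sequence_prob(*hmm1, obs) - sequence_prob(*hmm2, obs)) > tol:
                return False
    return True
```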
Abstract:
This work addresses two related questions. The first question is what joint time-frequency energy representations are most appropriate for auditory signals, in particular, for speech signals in sonorant regions. The quadratic transforms of the signal are examined, a large class that includes, for example, the spectrograms and the Wigner distribution. Quasi-stationarity is not assumed, since this would neglect dynamic regions. A set of desired properties is proposed for the representation: (1) shift-invariance, (2) positivity, (3) superposition, (4) locality, and (5) smoothness. Several relations among these properties are proved: shift-invariance and positivity imply the transform is a superposition of spectrograms; positivity and superposition are equivalent conditions when the transform is real; positivity limits the simultaneous time and frequency resolution (locality) possible for the transform, defining an uncertainty relation for joint time-frequency energy representations; and locality and smoothness trade off according to the 2-D generalization of the classical uncertainty relation. The transform that best meets these criteria is derived, which consists of two-dimensionally smoothed Wigner distributions with (possibly oriented) 2-D Gaussian kernels. These transforms are then related to time-frequency filtering, a method for estimating the time-varying 'transfer function' of the vocal tract, which is somewhat analogous to cepstral filtering generalized to the time-varying case. Natural speech examples are provided. The second question addressed is how to obtain a rich, symbolic description of the phonetically relevant features in these time-frequency energy surfaces, the so-called schematic spectrogram. Time-frequency ridges, the 2-D analog of spectral peaks, are one feature that is proposed. If non-oriented kernels are used for the energy representation, then the ridge tops can be identified via zero-crossings in the inner product of the gradient vector and the direction of greatest downward curvature. If oriented kernels are used, the method can be generalized to give better orientation selectivity (e.g., at intersecting ridges) at the cost of poorer time-frequency locality. Many speech examples are given showing the performance for some traditionally difficult cases: semi-vowels and glides, nasalized vowels, consonant-vowel transitions, female speech, and imperfect transmission channels.
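The representation the thesis settles on, a Wigner distribution smoothed by a 2-D Gaussian kernel, can be sketched directly. The snippet below computes a discrete Wigner distribution and smooths it with an isotropic Gaussian via scipy.ndimage; the test signal, sampling, and kernel widths are arbitrary assumptions, and oriented kernels are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wigner_distribution(x):
    """Discrete Wigner distribution W[n, k] of a real or analytic signal x."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)                 # largest symmetric lag at time n
        taus = np.arange(-tau_max, tau_max + 1)
        acf = np.zeros(N, dtype=complex)
        acf[taus % N] = x[n + taus] * np.conj(x[n - taus])  # instantaneous autocorrelation
        W[n, :] = np.fft.fft(acf).real              # Fourier transform over the lag variable
    return W

def smoothed_wigner(x, sigma_t, sigma_f):
    """2-D Gaussian-smoothed Wigner distribution (non-oriented kernel)."""
    return gaussian_filter(wigner_distribution(x), sigma=(sigma_t, sigma_f))

# Toy usage: a linear chirp.
t = np.linspace(0, 1, 256)
x = np.cos(2 * np.pi * (20 * t + 40 * t ** 2))
tf = smoothed_wigner(x, sigma_t=3, sigma_f=3)
```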
Abstract:
Malicious software (malware) has increased significantly in number and effectiveness over the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect: thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems cannot detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have also become very popular in commercial applications. However, attackers are now knowledgeable about these systems and have started preparing countermeasures, which has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that grow more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves, which means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed on a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (thereby anticipating the attackers' moves); second, developing novel systems that are robust to these attacks, or suggesting research guidelines along which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware; the idea is to show that a proactive approach can be applied both in the x86 and in the mobile world. The contributions provided by these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors; I then propose possible solutions with which the robustness of such detectors against known and novel attacks can be increased. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems; I then examine a possible strategy for building a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology for building a powerful mobile fingerprinting system and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library for performing optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system for fingerprinting mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing many empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained with these tools show that it is possible to proactively build systems that predict possible evasion attacks, suggesting that a proactive approach is crucial to building systems that provide concrete security against general and evasion attacks.
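One of the building blocks mentioned above, optimal evasion of a machine-learning detector, can be illustrated with a generic gradient-descent attack on a differentiable classifier. The sketch below is not AdversariaLib or any of the thesis's systems: it assumes a simple linear (logistic-style) malware score whose gradient is just the weight vector, and it only increases feature values, mimicking the constraint that content can be added to a PDF far more easily than it can be removed.

```python
import numpy as np

def evade(x, w, b, step=0.1, max_iter=100):
    """Gradient-descent evasion of a linear malware detector.

    x : (d,) feature vector of the malicious sample (e.g. keyword counts)
    w : (d,) learned weights; b : bias; score = w @ x + b > 0 means "malware"
    Only feature increments are allowed (content can be added, not removed),
    so the attack only moves along directions with negative weights.
    """
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if w @ x + b <= 0:                 # classified as benign: evasion succeeded
            break
        grad = w.copy()
        grad[grad > 0] = 0.0               # additions only: never decrease a feature
        if not np.any(grad):
            break                          # no admissible direction left
        x -= step * grad / np.linalg.norm(grad)   # descend the malicious score
    return x
```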
Abstract:
R.J. Douglas, Non-existence of polar factorisations and polar inclusion of a vector-valued mapping. International Journal of Pure and Applied Mathematics (IJPAM) 41, no. 3 (2007).
Abstract:
G.R. Burton and R.J. Douglas, Uniqueness of the polar factorisation and projection of a vector-valued mapping. Ann. I.H. Poincaré - A.N. 20 (2003), 405-418.
Abstract:
Q. Shen. Rough feature selection for intelligent classifiers. LNCS Transactions on Rough Sets, 7:244-255, 2007.
Abstract:
R. Jensen, Q. Shen, Data Reduction with Rough Sets, In: Encyclopedia of Data Warehousing and Mining - 2nd Edition, Vol. II, 2008.
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfillment of the requirements for the degree of Master in Dental Medicine.