10 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
In general, an inverse problem consists in finding an element x in a suitable vector space, given a vector y that measures it in some sense. When the problem is discretized, it usually boils down to solving a system of equations f(x) = y, where f : U ⊂ ℝ^m → ℝ^n represents the forward function on a suitable domain U of ℝ^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns by observing their effects through certain indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for a poorly conditioned linear problem, which we discuss in Chapter 1, focusing on the three most popular methods in the current literature of the area. The more specific focus of this dissertation is the set of simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by the addition of Gaussian i.i.d. noise. We chose a difference operator as the regularizer of the problem. The contribution we try to make in this dissertation consists mainly in the discussion of the numerical simulations we executed, as presented in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying anything definitive about the subject: partly because it is based on numerical experiments with no new mathematical results attached, and partly because those experiments were made with a single operator. On the other hand, we made some observations in the simulations that seemed interesting to us in light of the literature of the area. In particular, we highlight the observations, summarized in the conclusion of this work, about the different vocations of methods such as GCV and the L-curve and, also, about the tendency, observed with the L-curve method, of the optimal parameters to cluster in a narrow interval, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
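As a rough illustration of the setup this abstract describes, the sketch below applies Tikhonov regularization with a first-order difference operator as regularizer and traces the L-curve, one of the parameter-choice methods discussed. The toy ill-conditioned matrix merely stands in for a discretized Radon transform; all sizes, names, and the noise level are illustrative assumptions, not the dissertation's actual experiments.

```python
import numpy as np

def tikhonov_solve(A, y, L, lam):
    """Solve min ||A x - y||^2 + lam^2 ||L x||^2 via the normal equations."""
    lhs = A.T @ A + lam**2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ y)

def l_curve(A, y, L, lambdas):
    """Residual-norm / seminorm pairs; the 'corner' of this curve in
    log-log scale is a common heuristic for choosing the parameter."""
    pts = []
    for lam in lambdas:
        x = tikhonov_solve(A, y, L, lam)
        pts.append((np.linalg.norm(A @ x - y), np.linalg.norm(L @ x)))
    return pts

def diff_operator(n):
    """First-order difference operator, a common smoothness regularizer."""
    return np.diff(np.eye(n), axis=0)

# Toy ill-conditioned forward operator (stand-in for a discretized Radon transform).
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 30, increasing=True)
x_true = np.sin(np.linspace(0.0, 3.0, 30))
y = A @ x_true + 0.01 * rng.standard_normal(40)   # Gaussian i.i.d. noise
pts = l_curve(A, y, diff_operator(30), np.logspace(-6, 1, 30))
```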
Abstract:
The history matching procedure for an oil reservoir is of paramount importance in obtaining a characterization of the reservoir parameters (static and dynamic) that leads to more accurate production forecasts. Through this process one seeks reservoir model parameters that are able to reproduce the behaviour of the real reservoir. Such a reservoir model may then be used to forecast production and aid oil field management. During the history matching procedure the reservoir model parameters are modified and, for every new set of parameters found, a fluid flow simulation is performed, so that it is possible to evaluate whether or not this new set reproduces the observations from the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir fall below a certain tolerance. The determination of the model parameters via history matching requires the minimization of an objective function (the difference between observed and simulated production according to a chosen norm) in a parameter space populated by many local minima; in other words, more than one set of reservoir model parameters fits the observations. Owing to this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. In order to reduce this ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the regularization of the inverse problem associated with history matching was performed via the introduction of a smoothness constraint on the following parameters: permeability and porosity. This constraint carries the geological bias of asserting that these two properties vary smoothly in space. In this sense, it is necessary to find the relative weight of this constraint in the objective function that stabilizes the inversion and yet introduces minimum bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model. This method does not require derivatives when searching for the minimum of the objective function. Here, it is shown that the judicious introduction of the smoothness constraint into the objective function reduces the associated ambiguity and introduces minimum bias in the estimates of permeability and porosity of the semi-synthetic reservoir model.
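For concreteness, here is a minimal sketch of the kind of penalized objective the abstract describes: a production-data misfit plus a weighted smoothness term on a property field such as permeability. The function names, the neighbor-pair representation, and the linear "simulator" are hypothetical stand-ins; the real flow simulator and the COMPLEX search itself are not shown.

```python
import numpy as np

def objective(params, simulate, observed, weight, neighbors):
    """Hypothetical history-matching objective: data misfit plus a
    weighted smoothness penalty on a property field.

    params    : 1-D array of cell properties (e.g. permeability)
    simulate  : stand-in for the flow simulator, params -> predicted data
    observed  : observed production data
    weight    : relative weight of the smoothness constraint
    neighbors : (i, j) index pairs of spatially adjacent cells
    """
    misfit = np.sum((simulate(params) - observed) ** 2)             # data term
    roughness = sum((params[i] - params[j]) ** 2 for i, j in neighbors)
    return misfit + weight * roughness                              # penalized objective

# Toy usage: a linear "simulator" over a 4-cell model with a 1-D neighbor chain.
G = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.5]])
observed = np.array([2.0, 2.5, 3.0])
value = objective(np.ones(4), lambda p: G @ p, observed,
                  weight=0.1, neighbors=[(0, 1), (1, 2), (2, 3)])
```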
Abstract:
The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known but not so well exemplified result. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the properties of the Radon transform. Analytically, it is shown that if an interface has dips contained in the angular aperture limits at all points, it is correctly imaged in the tomogram. This result is confirmed by inversion of synthetic data, which also shows that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal interface, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region, it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly can also be an artifact. Jointly, these results state the dilemma of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination cannot be overcome by the use of mathematical constraints. It is shown that crosswell tomograms derived with sparsity constraints, using both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always take into consideration the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
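A minimal sketch of the illumination criterion stated above, assuming straight rays between the two wells: the angular aperture is the range of ray angles that the source/receiver geometry provides, and an interface dip is (ideally) imaged only if it falls within that range. The geometry, function names, and all numbers are illustrative assumptions.

```python
import numpy as np

def angular_aperture(src_depths, rec_depths, well_sep):
    """Ray angles from horizontal for straight rays between two wells
    separated horizontally by well_sep, over all source/receiver pairs."""
    angles = [np.degrees(np.arctan2(rz - sz, well_sep))
              for sz in src_depths for rz in rec_depths]
    return min(angles), max(angles)

def dip_illuminated(dip_deg, aperture):
    """The criterion the abstract states: a dip is correctly imaged
    (ideally) only if it lies within the angular aperture limits."""
    lo, hi = aperture
    return lo <= dip_deg <= hi

# Hypothetical geometry: sources and receivers from 0 to 500 m depth,
# wells 200 m apart; steep dips fall outside the aperture.
aperture = angular_aperture(np.linspace(0.0, 500.0, 26),
                            np.linspace(0.0, 500.0, 26), well_sep=200.0)
print(aperture, dip_illuminated(30.0, aperture))
```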
Abstract:
This study aimed to provide continuing education directed at engaging teachers in reflection and in effective sexual education within the school environment, as a possible route of self-education and training for elementary school teachers. More specifically, it aims to foster, through continuing education, discussion of knowledge of the body and of human sexuality, presenting them as core knowledge in the integral formation of individuals, and to promote discussion of a human-centered education in vocational and human teacher training. In this sense, we dialogue with the humane teaching advocated by Arroyo (2002, 2004), along with the humanization (hominization) of individuals through education, under Freire's perspective of "being more" (2003), as well as his ideas and those of Pineau (2003) and Josso (2004) about understanding educational practice as a way to build the autonomy of the individuals we intend to educate. We defend the inclusion of the body as an essential learning element according to the principles of corporeity presented by Assmann (2001), whose understanding is that every learning experience has a corporal inscription. Furthermore, knowledge about human sexuality cannot be excluded from this process, since sexuality is inherent to individuals and is constructed and reconstructed throughout their existence. Our view of the world and of man is supported by the knowledge of complexity (Morin, 2004), seeking to surpass the mechanistic view that sees them through duality, fragmenting them. For the discussion and construction of knowledge seeking the confluence of these understandings about the being and the educational practice, aiming at the integral formation of the individual starting from the process of self-formation/self-knowledge, we directed our research-action-formation taking as our compass the theoretical-methodological postulates of action research (Barbier, 2002; Morin, 2004; Thiollent, 2004), because it makes possible the participation of all the people involved in the process of solving or overcoming problems. We used continuing formation as the means of access for data collection, applying a questionnaire with open questions to those involved in the research. Based on the findings, it is possible to infer that teacher education must include the themes of human sexuality and corporeity, so that teachers can go beyond the purely biological view of sexuality and expand the mechanistic view of the body. To that end, we suggest that teacher education be supported by teacher training and formation in the sense of Maturana (2004), bringing in teaching knowledges (Tardif, 2002) that contribute effectively to the responsibility of educating people for life.
Abstract:
This study discusses the evaluation of English language learning developed in a public high school in Lajes-RN in 2011, starting from a qualitative evaluation proposal (SAUL, 1988; CANAN, 1996; DEMO, 2008) and aiming at the production of knowledge about the evaluation process developed in English language classes with contributions from the students. To diagnose and characterize the evaluation process of the English language in the researched school, identifying the representations that students attributed to evaluation, we implemented the evaluation instruments suggested by the students to perform the evaluation of language learning, which allowed a reflection on the students' participation in the construction of the evaluation process of the English language, a subject discussed by Sant'Anna (2002) and other theorists (CANAN, 1996; BRAZIL, 2002; PEREIRA, 2009). To conduct the research, we used a qualitative approach with an ethnographic basis, grounded in authors such as Bogdan and Biklen (1994), Mazzotti and Gewandsznajder (1998), and Strauss and Corbin (2008), among others. The methodology was action research (ANDRÉ, 1995; NUNAN, 2007; LANKSHEAR; KNOBEL, 2008), described as empirically based research that associates an action with the resolution of a collective problem, in which researchers and participants are engaged in a cooperative way (THIOLLENT, 1985). Regarding the evaluation of English language learning (ALMEIDA FILHO, 1993; SCARAMUCCI, 2009) as practiced before and after the contributions made by the second-year students of the school in question, the study finds that high school students have a more critical and reflective consciousness with regard to their evaluations, giving opinions not only on the assessment of English learning but also on the assessment of other subjects in their school curriculum. This research thus presents possibilities for performing the act of evaluation that consider the participation of students in decisions regarding the process, because we consider that when teachers share decisions with their students, they can add quality to the evaluation process.
Abstract:
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both methodologies, POPT and POP, students' skills in dealing with ill-defined problems must be developed from the first programming courses. In POPT, however, students are stimulated to clarify ill-defined problem specifications guided by the definition of test cases (in a table-like manner). This work presents POPT and TestBoot, a tool developed to support the methodology. In order to evaluate the approach, a case study and a controlled experiment (which adopted the Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering undergraduate programs at the Federal University of Rio Grande do Norte, Brazil. The study results have shown that, when compared to a blind-testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many (professor-defined) test cases as those of non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
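As an illustration of the test-cases-first style POPT advocates (not an example from the thesis), the sketch below pins down an intentionally ill-defined toy problem with a table of input/expected-output pairs before the implementation is trusted. The problem, names, and boundary decisions are invented for illustration.

```python
import unittest

def grade_letter(score):
    """Toy ill-defined problem: map a 0-100 score to a letter grade.
    The test cases below play the role of POPT's table of test cases,
    fixing the boundary behaviour before coding begins."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

class GradeLetterTests(unittest.TestCase):
    def test_boundaries(self):
        # Each row: (input, expected output) -- the table-like test cases.
        for score, expected in [(100, "A"), (90, "A"), (89, "B"),
                                (60, "D"), (59, "F"), (0, "F")]:
            self.assertEqual(grade_letter(score), expected)

    def test_invalid_input(self):
        # Specification clarified by a test: out-of-range scores are errors.
        with self.assertRaises(ValueError):
            grade_letter(101)

if __name__ == "__main__":
    unittest.main()
```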
Abstract:
Optimization techniques known as metaheuristics have achieved success in the resolution of many problems classified as NP-hard. These methods use non-deterministic approaches that find very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms during the search for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the direction of the search should be diversified. Accordingly, this work proposes the use of a reinforcement learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-random algorithm in its construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and it also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a determined number of generations, whenever the population diversity rate falls below a certain limit L, it is also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid Genetic Algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update to its matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm to those obtained using the proposed hybrid methods. Both algorithms were successfully applied to the symmetric Traveling Salesman Problem, modeled as a Markov decision process.
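A minimal sketch of the kind of Q-learning-driven construction step the thesis applies to the symmetric TSP: the state is the current city, actions are the unvisited cities, the reward is the negative travel distance, and the standard Q-learning update is applied along the tour. The toy instance, the epsilon-greedy rule, and all parameter values are assumptions; the full GRASP and Genetic Algorithm hybridizations are not shown.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cities = 5

# Hypothetical symmetric distance matrix for a toy TSP instance.
D = rng.uniform(1.0, 10.0, (n_cities, n_cities))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)

Q = np.zeros((n_cities, n_cities))  # state = current city, action = next city
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

def build_tour():
    """One construction episode: choose the next city epsilon-greedily
    from Q and apply the Q-learning update with reward = -distance."""
    tour = [0]
    unvisited = set(range(1, n_cities))
    while unvisited:
        s = tour[-1]
        choices = sorted(unvisited)
        if rng.random() < eps:                      # explore: random next city
            a = int(rng.choice(choices))
        else:                                       # exploit: best known Q-value
            a = max(choices, key=lambda c: Q[s, c])
        best_next = max((Q[a, c] for c in unvisited - {a}), default=0.0)
        Q[s, a] += alpha * (-D[s, a] + gamma * best_next - Q[s, a])
        tour.append(a)
        unvisited.remove(a)
    return tour

# Repeated constructions gradually refine the Q matrix, and hence the tours.
for _ in range(200):
    tour = build_tour()
print(tour)
```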
Abstract:
In this paper we propose a class for introducing the teaching of probability using a game of discs, which is based on the concept of geometric probability and asks for the probability that a randomly thrown disc does not intersect the lines of a gridded surface. The problem was posed to a 3rd-year group at the Federal Institute of Education, Science and Technology of Rio Grande do Norte - João Câmara. The students were asked to build a grid board for which the players' success percentage had been previously defined for them. Once the grid board was built, the students had to check whether that theoretically predetermined percentage corresponded to the reality obtained through experimentation. The results and the attitude of the students in later classes suggested their greater involvement with the subject, making the environment conducive to learning.
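The geometric probability behind the disc game can be made explicit: if the grid lines have negligible width and the cell side is L, a disc of diameter d ≤ L misses every line exactly when its center falls in an inner square of side L - d, so P = ((L - d)/L)². The sketch below checks this with a quick Monte Carlo simulation; the cell size and diameter are illustrative.

```python
import numpy as np

def p_no_hit(L, d):
    """Geometric probability that a disc of diameter d thrown at random
    on a grid of squares of side L crosses no line: its center must fall
    in an inner square of side L - d, so P = ((L - d) / L) ** 2."""
    assert 0 <= d <= L
    return ((L - d) / L) ** 2

def simulate(L, d, n=100_000, seed=0):
    """Monte Carlo check: throw n disc centers uniformly into one cell
    and count the throws that keep the whole disc inside the cell."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(0.0, L, (2, n))
    r = d / 2.0
    no_line = (x > r) & (x < L - r) & (y > r) & (y < L - r)
    return no_line.mean()

L_side, diam = 10.0, 4.0
print(p_no_hit(L_side, diam), simulate(L_side, diam))  # both close to 0.36
```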