973 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability


Relevance: 100.00%

Abstract:

This study discusses the evaluation of English language learning carried out in a public high school in Lajes-RN in 2011, based on a qualitative evaluation proposal (SAUL, 1988; CANAN, 1996; DEMO, 2008) and aimed at producing knowledge about the evaluation process developed in English language classes with contributions from the students. To diagnose and characterize the school's English language evaluation process, identifying the representations that students attributed to evaluation, we implemented the evaluation instruments suggested by the students, which allowed a reflection on student participation in the construction of the English language evaluation process, a subject discussed by Sant'anna (2002) and other theorists (CANAN, 1996; BRAZIL, 2002; PEREIRA, 2009). The research adopts a qualitative approach with an ethnographic basis, grounded in authors such as Bogdan and Biklen (1994), Mazzotti and Gewandsznajder (1998), and Strauss and Corbin (2008), among others. The methodology was action research (ANDRÉ, 1995; NUNAN, 2007; LANKSHEAR; KNOBEL, 2008), described as empirically based research that associates an action with the resolution of a collective problem, in which researchers and participants are engaged cooperatively (THIOLLENT, 1985). Examining the evaluation of English language learning (ALMEIDA FILHO, 1993; SCARAMUCCI, 2009) as practiced before and after the contributions made by the second-year students of the school, the study finds that high school students have a more critical and reflective awareness of their evaluations, expressing opinions not only on the assessment of English learning but also on the assessment of other subjects in their school curriculum. This research therefore presents possibilities for evaluation practices that include students in the decisions regarding the process, since we consider that when teachers share those decisions with their students, they can add quality to the evaluation process.

Relevance: 100.00%

Abstract:

Continuation methods have long been used in P-V curve tracing because of their efficiency in solving ill-conditioned cases with close-to-singular Jacobian matrices, such as the maximum loading point of power systems. Several parameterization techniques have been proposed to avoid matrix singularity and successfully solve those cases. This paper presents a simple geometric parameterization technique that overcomes the singularity of the Jacobian matrix by adding the equation of a line located in the plane determined by a bus voltage magnitude and the loading factor. This technique enlarges the set of voltage variables that can be used to trace the whole P-V curve, without ill-conditioning problems and without the need for parameter changes. Simulation results, obtained for large realistic Brazilian and American power systems, show that the robustness and efficiency of the conventional power flow are not only preserved but also improved.
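
The technique can be sketched with a toy model. Below is a minimal Python illustration, assuming a one-equation power-flow residual in place of a real network; the base point, the slope schedule, and the step count are illustrative choices, not the parameterization reported in the paper:

```python
import numpy as np

# Toy power-flow residual with a "nose": its solution curve lam = 4V(1 - V)
# mimics a P-V curve, and df/dV = 2V - 1 vanishes at the maximum loading
# point (V = 0.5, lam = 1), exactly where a conventional power flow fails.
def f(V, lam):
    return V * V - V + 0.25 * lam

# Geometric parameterization: augment f = 0 with a line in the (lam, V)
# plane through a fixed point O = (lam=0, V=0.5) chosen off the curve:
#     g(V, lam) = (V - 0.5) - alpha * lam = 0
# Sweeping the slope alpha rotates the line and walks the solution through
# the nose, because the augmented Jacobian stays nonsingular there.
def solve_point(V, lam, alpha, tol=1e-12, itmax=30):
    for _ in range(itmax):
        F = np.array([f(V, lam), (V - 0.5) - alpha * lam])
        if np.linalg.norm(F) < tol:
            break
        J = np.array([[2.0 * V - 1.0, 0.25],
                      [1.0, -alpha]])
        dV, dlam = np.linalg.solve(J, -F)
        V, lam = V + dV, lam + dlam
    return V, lam

V, lam = 1.0, 0.0                          # flat start (base case)
curve = []
for alpha in np.linspace(3.0, -3.0, 61):   # rotation schedule (illustrative)
    V, lam = solve_point(V, lam, alpha)
    curve.append((lam, V))
# 'curve' now holds the upper branch, the nose and the lower branch,
# obtained in a single sweep with no parameter switching.
```

At the nose the augmented Jacobian has determinant -0.25, so Newton's method remains well conditioned where the plain power-flow Jacobian is singular.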

Relevance: 100.00%

Abstract:

There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both methodologies, POPT and POP, students' skills in dealing with ill-defined problems must be developed from the first programming courses. In POPT, however, students are stimulated to clarify ill-defined problem specifications, guided by the definition of test cases (in a table-like manner), as in the sketch below. This paper presents POPT and TestBoot, a tool developed to support the methodology. In order to evaluate the approach, a case study and a controlled experiment (which adopted the Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering programs at the Federal University of Rio Grande do Norte, Brazil. The study results have shown that, when compared to a Blind Testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice the number of test cases (professor-defined ones) compared to non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
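
A hypothetical example of such a test-case table, written here as a runnable Python check; the triangle-classification problem and every case in it are illustrative assumptions, not material taken from POPT:

```python
# Hypothetical POPT-style test-case table for an ill-defined spec:
# "write a program that classifies a triangle given three sides".
# Writing the cases first forces students to decide the unclear points
# (zero or negative sides, degenerate triangles) before coding.
CASES = [
    # (a, b, c)     expected        what the case pins down
    ((3, 4, 5),    "scalene"),      # ordinary valid input
    ((2, 2, 2),    "equilateral"),
    ((2, 2, 3),    "isosceles"),
    ((1, 2, 3),    "invalid"),      # degenerate: a + b == c
    ((0, 4, 5),    "invalid"),      # zero side: spec is silent, so decide
    ((-3, 4, 5),   "invalid"),      # negative side: likewise
]

def classify(a, b, c):
    if min(a, b, c) <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    return "isosceles" if a == b or b == c or a == c else "scalene"

for sides, expected in CASES:
    assert classify(*sides) == expected
```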

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Among the research undertaken in the field of epidemiology, a specific group addresses pathologies of unknown or not fully understood etiology. Temporomandibular disorders (TMD) fall within this group. Three basic observational strategies from the epidemiological repertoire have been used to address the etiological role of malocclusion in the development of TMD: cross-sectional studies, case-control studies, and cohort studies. Some clinical experiments are carried out based on the removal of the suspected etiological factor. Based on a structured review of the literature and on the methodology employed in the selected studies, we conclude that defining the possible etiological factors related to specific TMD subgroups is essential so that the role of malocclusion in the development of these disorders, although it appears small when based on the available evidence, is not underestimated. It may be useful to characterize a normal occlusion as the one associated with the lowest risk for the development of TMD problems, but it is probably inappropriate to apply these parameters to reverse an already established intracapsular problem. The concept of a low-risk occlusion would imply a small deviation between centric relation (CR) and maximum intercuspation (MIH), a small overjet, a positive overbite, and the absence of posterior crossbite. This concept is compatible with the concept of normal occlusion advocated for decades, although a variation from normal rather than an absolute criterion should be allowed. While it is probably prudent to establish therapeutic morphological goals that aim at what is observed in untreated occlusions judged normal or ideal, establishing an occlusion that meets all gnathological criteria through orthodontic treatment may be impossible and is probably unnecessary.

Relevance: 100.00%

Abstract:

To ensure highly accurate results from GPS relative positioning, multipath effects have to be mitigated. Although careful selection of the antenna site and the use of special antennas and receivers can minimize multipath, it cannot always be eliminated, and the residual multipath disturbance frequently remains the major error in GPS results. High-frequency multipath from long delays can be attenuated by double-difference (DD) denoising methods, but low-frequency multipath from short delays is very difficult to reduce or model. In this paper, we propose a method based on wavelet regression (WR) that can effectively detect and reduce low-frequency multipath. The wavelet technique is first applied to decompose the DD residuals into low-frequency bias and high-frequency noise components. The bias components extracted by WR are then applied directly to the DD observations to remove the trend. The remaining terms, largely characterized by high-frequency measurement noise, are expected to give the best linear unbiased solutions from a least-squares (LS) adjustment. An experiment was carried out with objects placed close to the receiver antenna to cause mainly low-frequency multipath. Data were collected for two days to verify the multipath repeatability. The ground-truth coordinates were computed from data collected in the absence of the reflecting objects. The coordinates and ambiguity solutions were compared with and without multipath mitigation using WR. After mitigating the multipath, ambiguity resolution became more reliable and the coordinates were more accurate.
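
A minimal sketch of the decomposition step, assuming the PyWavelets package; the wavelet family, decomposition depth, and the synthetic residual series are illustrative assumptions, and the paper's wavelet-regression estimator is more elaborate than this plain low-pass split:

```python
import numpy as np
import pywt  # PyWavelets

def split_multipath(dd_residuals, wavelet="db8", level=6):
    """Split double-difference (DD) residuals into a low-frequency bias
    (the candidate multipath trend) and high-frequency noise by zeroing
    the detail coefficients of a discrete wavelet decomposition. The
    wavelet family and depth are illustrative choices, not the paper's."""
    coeffs = pywt.wavedec(dd_residuals, wavelet, level=level)
    lowpass = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(lowpass, wavelet)[: len(dd_residuals)]
    return trend, dd_residuals - trend

# Synthetic demo: a slow multipath-like oscillation buried in carrier-phase
# noise (amplitudes in metres are invented for illustration).
rng = np.random.default_rng(0)
t = np.arange(2048)                              # epochs at a 1 s rate
bias = 0.01 * np.sin(2 * np.pi * t / 600.0)      # low-frequency multipath
dd = bias + 0.003 * rng.standard_normal(t.size)  # observed DD residuals
trend, noise = split_multipath(dd)
# 'trend' would be removed from the DD observations before the final
# least-squares adjustment and ambiguity resolution.
```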

Relevance: 100.00%

Abstract:

Nowadays, L1 SBAS signals can be used in combined GPS+SBAS data processing. However, studies of this configuration over short baselines are still scarce. Besides increasing satellite availability, the orbit configuration of SBAS satellites differs from that of GPS. In order to analyze how these characteristics affect GPS positioning in the southeast of Brazil, experiments involving GPS-only and combined GPS+SBAS data were performed. Solutions using single-point and relative positioning were computed to show the impact on satellite geometry, positioning accuracy, and short-baseline ambiguity resolution. Results showed that the inclusion of SBAS satellites can improve positioning accuracy; nevertheless, the poor quality of the data broadcast by these satellites limits their usage.
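
A small sketch of the satellite-geometry side of such a comparison: position dilution of precision (PDOP) recomputed after appending one geostationary-style satellite to a GPS-only sky view. All azimuth/elevation values below are invented for illustration:

```python
import numpy as np

def pdop(az_el_deg):
    """PDOP from satellite azimuth/elevation pairs (degrees) for a
    single-point solution with unknowns (x, y, z, receiver clock)."""
    az, el = np.radians(np.asarray(az_el_deg)).T
    # Unit line-of-sight vectors in a local east-north-up frame.
    G = np.column_stack([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el),
                         np.ones_like(el)])
    Q = np.linalg.inv(G.T @ G)
    return np.sqrt(np.trace(Q[:3, :3]))

# Illustrative GPS constellation as seen from southeast Brazil (not real data).
gps = [(30, 55), (110, 20), (200, 70), (280, 35), (340, 15)]
# A geostationary SBAS satellite sits at a nearly fixed direction in the sky.
sbas = [(350, 60)]

print("GPS only  :", round(pdop(gps), 2))
print("GPS + SBAS:", round(pdop(gps + sbas), 2))
```

Adding a satellite can only add information to the normal matrix, so the PDOP never worsens; whether the extra measurements help in practice then depends on their signal quality, which is the limitation the abstract reports.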

Relevance: 100.00%

Abstract:

Pós-graduação em Ciências Cartográficas - FCT

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Pós-graduação em Ciências da Motricidade - IBRC

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Consider a nonparametric regression model Y = mu*(X) + e, where the explanatory variables X are endogenous and e satisfies the conditional moment restriction E[e | W] = 0 with probability one for instrumental variables W. It is well known that in these models the structural parameter mu* is 'ill-posed' in the sense that the map from the data to mu* is not continuous. In this paper, we derive the efficiency bounds for estimating the linear functionals E[p(X)mu*(X)] and ∫_{supp(X)} p(x)mu*(x)dx, where p is a known weight function and supp(X) is the support of X, without assuming mu* to be well-posed or even identified.
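
To make the setting concrete, here is a small simulation sketch of the model, with a just-identified sieve IV estimator and a plug-in estimate of E[p(X)mu*(X)]; the data-generating process, basis choice, and weight p(x) = x are all illustrative assumptions, and this naive plug-in is not the paper's efficient estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation of the setting (not the paper's estimator):
# W instruments an endogenous X, Y = mu*(X) + e, with E[e | W] = 0
# but E[e | X] != 0, so ordinary regression of Y on X is biased.
n = 20_000
W = rng.uniform(-1.0, 1.0, n)
u = rng.standard_normal(n)
X = 0.7 * W + 0.4 * u                        # endogeneity enters through u
e = 0.6 * u + 0.1 * rng.standard_normal(n)   # correlated with X, not with W

def mu_star(x):                              # true structural function,
    return 1.0 + x - x ** 2                  # kept inside the sieve span

Y = mu_star(X) + e

# Just-identified sieve IV: mu(x) ~ phi(x)'beta with instruments psi(W),
# solving the empirical moment conditions E[psi(W)(Y - phi(X)'beta)] = 0.
phi = np.column_stack([np.ones(n), X, X ** 2])
psi = np.column_stack([np.ones(n), W, W ** 2])
beta = np.linalg.solve(psi.T @ phi, psi.T @ Y)

# Plug-in estimate of the linear functional E[p(X) mu*(X)] with p(x) = x.
theta_hat = np.mean(X * (phi @ beta))
theta_mc = np.mean(X * mu_star(X))           # Monte Carlo benchmark
print(theta_hat, theta_mc)
```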

Relevance: 100.00%

Abstract:

In this work, we show how number-theoretical problems can be fruitfully approached with the tools of statistical physics. We focus on g-Sidon sets, which describe sequences of integers whose pairwise sums are different, and propose a random decision problem that addresses the probability of a random set of k integers being g-Sidon. First, we provide numerical evidence showing that there is a crossover between satisfiable and unsatisfiable phases, which becomes an abrupt phase transition in a properly defined thermodynamic limit. Initially assuming independence, we then develop a mean-field theory for the g-Sidon decision problem. We further improve the mean-field theory, which is only qualitatively correct, by incorporating deviations from independence, yielding results in good quantitative agreement with the numerics for both finite systems and the thermodynamic limit. Connections between the generalized birthday problem in probability theory, the number theory of Sidon sets, and the properties of q-Potts models in condensed matter physics are briefly discussed.
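
A minimal Monte Carlo sketch of the decision problem in Python, taking a g-Sidon set to be one in which every pairwise sum occurs at most g times (so g = 1 recovers classical Sidon sets); the values of N, k, and the trial count are illustrative:

```python
import random
from collections import Counter

def is_g_sidon(s, g):
    """True if every pairwise sum a + b (with a <= b, repeats allowed)
    occurs at most g times; g = 1 recovers the classical Sidon property."""
    sums = Counter(a + b for i, a in enumerate(s) for b in s[i:])
    return max(sums.values()) <= g

def p_g_sidon(N, k, g, trials=2000, seed=7):
    """Monte Carlo estimate of the probability that a uniformly random
    k-subset of {1, ..., N} is g-Sidon, i.e. the 'satisfiability'
    probability of the random decision problem."""
    rng = random.Random(seed)
    hits = sum(is_g_sidon(rng.sample(range(1, N + 1), k), g)
               for _ in range(trials))
    return hits / trials

# Sweeping k at fixed N and g exposes the crossover from a satisfiable
# phase (probability near 1) to an unsatisfiable one (near 0).
for k in (5, 10, 15, 20, 25):
    print(k, p_g_sidon(N=500, k=k, g=1))
```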

Relevance: 100.00%

Abstract:

The exhaustion, absolute absence, or simply the uncertainty about the amount of fossil fuel reserves, added to the variability of their prices and the increasing instability of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprehends strong public concerns about pollution and greenhouse gas emissions. Due to its excellent environmental impact, the public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel.

This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is given. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out the strengths, deficiencies, and suitability of each. This investigation concludes that the only viable strategy for combustion modeling under the existing constraints is the use of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, a model of the turbulent flame speed family. It also concludes that the most adequate solution for simulating the turbulence is to use different methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem.

Based on these findings, a combustion model is developed in the framework of the turbulent flame speed methodology that is able to overcome the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, pressure, and steam dilution. The formulation obtained is valid over a wider domain of temperature, pressure, and steam dilution than any of the previously available formulations. The turbulent burning velocity, in turn, can be obtained from correlations that express it as a function of several parameters; to select the most suitable one, the available expressions were compared with experimental results and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied.

Subsequently, the role of flame instabilities in the propagation of combustion fronts is assessed. Their significance is important for fuel-lean mixtures in which the turbulence intensity remains moderate; these conditions matter because they are typical of accidents in nuclear power plants. Therefore, a model is developed to estimate the effect of the instabilities, and concretely of the acoustic-parametric instability, on the flame propagation velocity. The modeling includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to complete the model of the acoustic-parametric instability.

The model developed was then applied to several problems relevant to industrial safety, and the results were analyzed and compared with the corresponding experimental data. Specifically, simulations of explosions in tunnels and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the model is validated, confirming its suitability for these problems.

As a final undertaking, an in-depth analysis of the Fukushima-Daiichi catastrophe was carried out. The aim of the analysis is to determine the amount of hydrogen that exploded in reactor number one, in contrast with other studies on the subject, which have centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause such significant damage; this is an indication of the importance of this type of investigation. The industrial branches for which the developed model is of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a special impact on the transport sector and on nuclear safety, for both fission and fusion technologies.
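
For reference, a sketch of the kind of turbulent flame speed closure the thesis works within, using the widely quoted Zimont correlation; the constant A = 0.52 and the lean hydrogen-air numbers are textbook-style assumptions, not the thesis' own improved model:

```python
import numpy as np

def s_t_zimont(u_prime, s_l, alpha, l_t, A=0.52):
    """Zimont-type turbulent flame speed closure:
        S_T = A * u'^(3/4) * S_L^(1/2) * alpha^(-1/4) * l_t^(1/4)
    with u' the turbulence intensity (m/s), S_L the laminar burning
    velocity (m/s), alpha the unburnt thermal diffusivity (m^2/s) and
    l_t the integral length scale (m). A = 0.52 is the commonly quoted
    calibration constant, taken here as an assumption."""
    return A * u_prime ** 0.75 * s_l ** 0.5 * alpha ** -0.25 * l_t ** 0.25

# Illustrative values for a lean hydrogen-air mixture (assumed numbers,
# not the thesis' improved laminar-velocity correlation).
s_l, alpha, l_t = 0.6, 2.2e-5, 0.1
for u_p in (0.5, 1.0, 2.0, 4.0):
    print(f"u' = {u_p:.1f} m/s -> S_T = {s_t_zimont(u_p, s_l, alpha, l_t):.2f} m/s")
```

In a CFD solver, S_T computed this way closes the transport equation for the combustion progress variable; the thesis' contribution is to control the flame-brush thickening and to supply better laminar and turbulent velocity inputs than this plain correlation.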