632 results for Penalty kick
Abstract:
We consider the a priori error analysis of hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form under weak assumptions on the mesh design and the local finite element spaces employed. In particular, we prove a priori hp-error bounds for linear target functionals of the solution, on (possibly) anisotropic computational meshes with anisotropic tensor-product polynomial basis functions. The theoretical results are illustrated by a numerical experiment.
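For orientation, the symmetric interior penalty bilinear form for the Poisson model problem (a special case of the class of equations considered here, with the standard hp penalty scaling $\sigma\sim p^2/h$) reads

$$
B_{\mathrm{DG}}(u,v)=\sum_{\kappa\in\mathcal{T}_h}\int_{\kappa}\nabla u\cdot\nabla v\,\mathrm{d}x
-\sum_{F\in\mathcal{F}_h}\int_{F}\Big(\{\!\!\{\nabla_h u\}\!\!\}\cdot[\![v]\!]+\{\!\!\{\nabla_h v\}\!\!\}\cdot[\![u]\!]\Big)\,\mathrm{d}s
+\sum_{F\in\mathcal{F}_h}\int_{F}\sigma\,[\![u]\!]\cdot[\![v]\!]\,\mathrm{d}s .
$$

The papers in this listing treat the more general setting with nonnegative characteristic form, where convective and reactive terms are added to this diffusion part.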
Abstract:
The relationships established between librarians and society directly influence the relationship that will be established between the users of an information unit and this information professional, leading the profession to acquire a defined image and representation in the collective consciousness of the population. The popular imagination, however, carries a distorted view of the librarian's real role. In this study we analyse the reproduction of the stereotypes attributed to these professionals in cartoons. The study was carried out through the analysis of two animated productions, the feature film "Universidade monstro" and an episode of the animated series "Kick Buttowski: Um Projeto de Dublê", both belonging to the mass-media company The Walt Disney Company. We reflect on the preconceptions constructed around the librarian's work and characteristics, and on how they can stagnate the professional profile and negatively affect the social standing of the librarian's profession. The research is characterized as a literature review, addressing the points of view of several authors. We conclude that librarians need to work to demystify their image before society, and especially within the imagination built during children's educational formation, since children absorb erroneous information and reproduce the stereotyped view of this professional.
Abstract:
We consider the a posteriori error analysis and hp-adaptation strategies for hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes with anisotropically enriched elemental polynomial degrees. In particular, we exploit duality-based hp-error estimates for linear target functionals of the solution and design and implement the corresponding adaptive algorithms to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement and isotropic and anisotropic polynomial degree enrichment. The superiority of the proposed algorithm in comparison with standard hp-isotropic mesh refinement algorithms and an h-anisotropic/p-isotropic adaptive procedure is illustrated by a series of numerical experiments.
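Schematically, the duality-based (dual-weighted-residual) estimate underlying this kind of functional error control takes the form

$$
J(u)-J(u_h)\;=\;R(u_h,\,z-z_h)\;\approx\;\sum_{\kappa\in\mathcal{T}_h}\eta_\kappa ,
$$

where $z$ solves the adjoint problem associated with the target functional $J$, $z_h$ is a computable approximation of $z$ in an enriched space, $R(u_h,\cdot)$ is the residual functional of the primal discretization, and the elementwise indicators $\eta_\kappa$ drive the (an)isotropic refinement decisions. This is only the generic template; the precise bounds proved in the work above are sharper and hp-explicit.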
Abstract:
This work is concerned with the design and analysis of hp-version discontinuous Galerkin (DG) finite element methods for boundary-value problems involving the biharmonic operator. The first part extends the unified approach of Arnold, Brezzi, Cockburn & Marini (SIAM J. Numer. Anal. 39, 5 (2001/02), 1749-1779), developed for the Poisson problem, to the design of DG methods via an appropriate choice of numerical flux functions for fourth-order problems; as an example we retrieve the interior penalty DG method developed by Suli & Mozolevski (Comput. Methods Appl. Mech. Engrg. 196, 13-16 (2007), 1851-1863). The second part of this work is concerned with a new a-priori error analysis of the hp-version interior penalty DG method, when the error is measured in terms of both the energy-norm and L2-norm, as well as certain linear functionals of the solution, for elemental polynomial degrees $p\ge 2$. Moreover, provided that the solution is piecewise analytic in an open neighbourhood of each element, exponential convergence is proven for the p-version of the DG method. The sharpness of the theoretical developments is illustrated by numerical experiments.
Abstract:
The size of online image datasets is constantly increasing. Considering an image dataset with millions of images, image retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their high search efficiency and low storage requirements. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm with the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases where new images are added to the database frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in binary codes by an imbalance penalty to obtain higher quality binary codes. We learn hash functions by an efficient algorithm where the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and SVMs are trained in a parallelized incremental manner. For modifications like adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating hash functions, achieving the same retrieval performance as hashing from scratch.
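As a rough illustration of one ingredient of this pipeline (learning one hash bit per SVM), the following sketch is not the thesis implementation: the target codes `B` here are random stand-ins for the codes the thesis obtains via cyclic coordinate descent, and all names are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))            # toy image descriptors
B = rng.normal(size=(1000, 16)) > 0        # toy 16-bit target binary codes (stand-in)

# One linear SVM per bit acts as the hash function for that bit.
svms = [LinearSVC(C=1.0, max_iter=10000).fit(X, B[:, k]) for k in range(B.shape[1])]

def hash_codes(Xq):
    """Map descriptors to binary codes via the per-bit SVM decision functions."""
    return np.stack([svm.decision_function(Xq) > 0 for svm in svms], axis=1)

query_codes = hash_codes(rng.normal(size=(5, 64)))
# Retrieval would then rank database items by Hamming distance to query_codes.
```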
Abstract:
The text analyses the intelligence activity against Poland in the period 1944-1989. The paper also contains a case study, i.e. an analysis of the American intelligence service activity conducted against Poland. While examining the research thesis, the author used the documents and analyses prepared by the Ministry of Internal Affairs. In order to best illustrate the point, the author presented a number of cases of persons who spied for the USA, which was possible thanks to the analysis of the training materials of the Ministry of Internal Affairs directed to the officers of the Security Service and the Citizens’ Militia. The text tackles the following issues: (1) to what extent did the character of the socio-political system influence the number of persons convicted for espionage against Poland in the period under examination?, (2) what was the level of interest of the foreign intelligence services in Poland before the year 1990?, (3) is it possible to indicate the specificity of the U.S. intelligence activity against Poland? 1) The analysis of the data indicates that the period 1946-1956 witnessed a great number of convictions for espionage, which is often associated with the peculiar political situation in Poland at that time. Up to 1953, the countries of the Eastern bloc had reproduced Stalin’s system, which ceased only with the death of Stalin himself. Since then, the communist systems gradually transformed into the system of nomenklatura. Irrespective of these changes, Poland still witnessed a wave of repressions, which resulted from the threats continuously looming over the communist authorities: combating the anti-communist underground movement, fighting the Ukrainian Insurgent Army, the Polish government-in-exile, possible revisionism of borders, and social discontent related to the socio-political reforms. Hence, a great number of convictions for espionage at that time could be ascribed to purely political sentences. Moreover, equally significant was the fact that the judicial practice of the time was preoccupied with assessing negatively any contacts and relations with foreigners. This excessive number of convictions could also ensue from other criminal-law provisions which applied to crimes against the State, including espionage. What is also important is the fact that in the Stalinist period the judiciary personnel acquired their skills and qualifications through intensive courses in law dominated by the spirit of Andrey Vyshinsky’s theory of evidence and law. Additionally, by the decree of 1944 the Penal Code of the Polish Armed Forces was introduced; the code increased the number of offences punishable by the death penalty, and high treason was placed under military jurisdiction (civilians were prosecuted in military courts until 1955; espionage, however, remained under military jurisdiction). In 1946, the Decree on particularly dangerous crimes in the period of the State’s recovery, later called the Small Penal Code, was introduced. 2) The interest that foreign intelligence services expressed in relation to Poland was similar to the one they had in all countries of Eastern and Central Europe. In the case of Poland, it should be noted that foreign intelligence services recruited Polish citizens who had previously stayed abroad and after WWII returned to their home country. The services also gathered information from Poles staying in immigrant camps (e.g. in the FRG).
The activity of the American intelligence service on the territory of the FRG and West Berlin played a key role. The documents of the Ministry of Internal Affairs pointed to the global range of this activity, e.g. through the recruitment of Polish sailors in the ports of the Netherlands, Japan, etc. In the 1970s, espionage, which had so far concentrated on the defence and strategic sectors, became focused on the science and technology of the People’s Republic of Poland. The acquisition of collaborators in academic circles became much easier as the PRL opened up to academic exchange. Owing to the system of visas, the process of candidate selection for intelligence services (e.g. the American one) began in embassies. In the 1980s, the activity of the foreign intelligence services concentrated on the specific political situation in Poland, i.e. the growing significance of the “Solidarity” social movement. 3) The specificity of the American intelligence activity against Poland was related to the composition of the residency staff, which was the largest in comparison with other Western countries. The wide range of these activities is reflected in the quantitative data on convictions for espionage in the years 1944-1984 (however, one has to bear in mind the factors mentioned earlier in the text, which can lead to the misinterpretation of these data). When analysing the data and the documents prepared by the Ministry of Internal Affairs, one should treat them with caution, as the Polish counter-intelligence service frequently classified ordinary diplomatic practice and any contacts with foreigners as espionage threats. This is clearly visible in the language of the training materials concerned with “secret service methods of the intelligence activity” as well as in the documents on operational activities of the Security Service in relation to foreigners. The level of interest the USA had in Poland was mirrored in the classification of diplomatic posts, according to which Warsaw occupied the second place (the so-called Group “B”) on a three-point scale. The CIA experienced spectacular defeats during its activity in Poland: supporting the Polish underground anti-communist organisation Freedom and Independence and the so-called Munich-Berg episode (both cases took place in the 1950s). The text focuses only on selected issues related to the espionage activities against Poland. Similarly, the analysis of the problem has been based on selected sources, which has limited the research scope; however, it was not the aim of the author to present the espionage activity against Poland in a comprehensive way. In order to assess the real threat posed by the espionage activity, one should analyse the cases of persons convicted for espionage in the period 1944-1989, as the available quantitative data, mentioned in the text, cannot constitute an explicit benchmark for the scale of espionage activity. The inaccuracies in the interpretation of data and variables, which can affect the evaluation of this phenomenon, have been pointed out in the text.
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Química, Programa de Pós-Graduação em Química, 2015.
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Química, Programa de Pós-Graduação em Química, 2015.
Abstract:
This article is concerned with the numerical detection of bifurcation points of nonlinear partial differential equations as some parameter of interest is varied. In particular, we study in detail the numerical approximation of the Bratu problem, based on exploiting the symmetric version of the interior penalty discontinuous Galerkin finite element method. A framework for a posteriori control of the discretization error in the computed critical parameter value is developed based upon the application of the dual weighted residual (DWR) approach. Numerical experiments are presented to highlight the practical performance of the proposed a posteriori error estimator.
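For readers unfamiliar with the model problem, the sketch below is a deliberately crude stand-in for the method of the article: it uses central finite differences and Newton iteration (not the interior penalty DG discretization or the DWR error control) to trace the lower solution branch of the 1D Bratu problem $-u'' = \lambda e^{u}$, $u(0)=u(1)=0$, and brackets the fold (critical) parameter, whose reference value is approximately 3.5138. All function names are illustrative.

```python
import numpy as np

def solve_bratu(lam, n=200, u0=None, tol=1e-10, max_iter=50):
    """Newton solve of -u'' = lam*exp(u), u(0)=u(1)=0, with central differences
    on n interior points. Returns the solution, or None if Newton fails."""
    h = 1.0 / (n + 1)
    u = np.zeros(n) if u0 is None else u0.copy()
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2           # discrete -d^2/dx^2
    for _ in range(max_iter):
        F = A @ u - lam * np.exp(u)
        J = A - lam * np.diag(np.exp(u))
        try:
            du = np.linalg.solve(J, -F)
        except np.linalg.LinAlgError:
            return None
        if not np.all(np.isfinite(du)):
            return None
        u += du
        if np.linalg.norm(du, np.inf) < tol:
            return u
    return None

# Bracket the critical parameter by bisection on Newton success/failure.
lam_lo, lam_hi, u_prev = 0.5, 6.0, None
while lam_hi - lam_lo > 1e-4:
    lam_mid = 0.5 * (lam_lo + lam_hi)
    u = solve_bratu(lam_mid, u0=u_prev)
    if u is not None:
        lam_lo, u_prev = lam_mid, u
    else:
        lam_hi = lam_mid
print(f"estimated critical parameter in [{lam_lo:.4f}, {lam_hi:.4f}]")
```

The article's contribution is precisely to replace this heuristic bracketing by a DG discretization with rigorous, adjoint-based control of the error in the computed critical value.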
Abstract:
Reconfigurable hardware can be used to build a multitasking system where tasks are assigned to HW resources at run-time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs, and their execution is typically controlled by an embedded processor that schedules the graph execution. In order to improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that greatly reduce the reconfiguration latencies. For an embedded processor, all these computations represent a heavy computational load that can significantly reduce the system performance. To overcome this problem we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented both prefetch and replacement techniques that achieve results as good as those of previous, complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider that the HW cost of the system (in our experiments 3% of a Virtex-II PRO xc2vp30 FPGA) is affordable taking into account the great efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
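To make the reuse/replacement idea concrete, here is a minimal software sketch, assuming an LRU-style policy over a fixed number of reconfigurable slots; it is not the authors' HW scheduler, and the latency figure, class and task names are hypothetical. Prefetching would additionally overlap each reconfiguration with the execution of the preceding task.

```python
from collections import OrderedDict

RECONFIG_LATENCY = 4_000_000  # clock cycles to load one bitstream (illustrative)

class SlotManager:
    """LRU-style reuse/replacement policy for reconfigurable slots (illustrative)."""
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.loaded = OrderedDict()   # bitstream id -> loaded flag, in LRU order

    def request(self, bitstream):
        """Return the reconfiguration cost of making `bitstream` available."""
        if bitstream in self.loaded:              # reuse: already on the fabric
            self.loaded.move_to_end(bitstream)
            return 0
        if len(self.loaded) >= self.num_slots:    # replacement: evict LRU bitstream
            self.loaded.popitem(last=False)
        self.loaded[bitstream] = True
        return RECONFIG_LATENCY

# Tasks of a DAG in a feasible (topological) execution order, labelled by bitstream id.
schedule = ["fft", "fir", "fft", "crc", "fir", "fft"]
mgr = SlotManager(num_slots=2)
total = sum(mgr.request(t) for t in schedule)
print(f"reconfiguration cycles spent: {total}")
```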
Abstract:
This lecture course covers the theory of so-called duality-based a posteriori error estimation of DG finite element methods. In particular, we formulate consistent and adjoint consistent DG methods for the numerical approximation of both the compressible Euler and Navier-Stokes equations; in the latter case, the viscous terms are discretized based on employing an interior penalty method. By exploiting a duality argument, adjoint-based a posteriori error indicators will be established. Moreover, application of these computable bounds within automatic adaptive finite element algorithms will be developed. Here, a variety of isotropic and anisotropic adaptive strategies, as well as $hp$-mesh refinement, will be investigated.
Abstract:
Purpose – Curve fitting from unordered noisy point samples is needed for surface reconstruction in many applications. In the literature, several approaches have been proposed to solve this problem. However, previous works lack a formal characterization of the curve fitting problem and an assessment of the effect of several parameters (i.e. scalars that remain constant in the optimization problem), such as the number of control points (m), curve degree (b), knot vector composition (U), norm degree (k), and point sample size (r), on the optimized curve reconstruction measured by a penalty function (f). The paper aims to discuss these issues. Design/methodology/approach – A numerical sensitivity analysis of the effect of m, b, k and r on f, and a characterization of the fitting procedure from the mathematical viewpoint, are performed. Also, the spectral (frequency) analysis of the derivative of the angle of the fitted curve with respect to u is explored as a means to detect spurious curls and peaks. Findings – It is more effective to find optimum values for m than for k or b in order to obtain good results, because the topological faithfulness of the resulting curve depends strongly on m. Furthermore, when an excessive number of control points is used, the resulting curve presents spurious curls and peaks. The authors were able to detect the presence of such spurious features with spectral analysis. Also, the authors found that the method for curve fitting is robust to significant decimation of the point sample. Research limitations/implications – The authors have addressed important voids of previous works in this field. The authors determined which of the curve fitting parameters m, b and k influenced the results the most, and how. Also, the authors performed a characterization of the curve fitting problem from the optimization perspective. Finally, the authors devised a method to detect spurious features in the fitted curve. Practical implications – This paper provides a methodology to select the important tuning parameters in a formal manner. Originality/value – To the best of the authors' knowledge, no previous work has been conducted on the formal mathematical evaluation of the sensitivity of the goodness of the curve fit with respect to different possible tuning parameters (curve degree, number of control points, norm degree, etc.).
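A small sketch of the kind of experiment described above (how the fitting penalty f responds to the number of control points m for a fixed degree b) is given below. It is not the authors' procedure: it assumes the samples are already ordered and parameterized, uses scipy's least-squares B-spline fit with a clamped uniform knot vector standing in for U, and measures f as the sum of squared residuals.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)

# Noisy samples of a planar parametric curve (assumed ordered and parameterized by u)
u = np.linspace(0.0, 1.0, 400)
x = np.cos(2 * np.pi * u) + rng.normal(0, 0.02, u.size)
y = np.sin(4 * np.pi * u) + rng.normal(0, 0.02, u.size)

def fit_penalty(m, b):
    """Least-squares B-spline fit with m control points and degree b;
    returns the sum-of-squared-residuals penalty f."""
    n_inner = m - b - 1                                  # interior knots for m control points
    inner = np.linspace(0.0, 1.0, n_inner + 2)[1:-1]
    t = np.r_[[0.0] * (b + 1), inner, [1.0] * (b + 1)]   # clamped uniform knot vector U
    sx = make_lsq_spline(u, x, t, k=b)
    sy = make_lsq_spline(u, y, t, k=b)
    return np.sum((sx(u) - x) ** 2 + (sy(u) - y) ** 2)

for m in (6, 10, 20, 40):
    print(f"m={m:3d}  f={fit_penalty(m, b=3):.4f}")
```

As in the paper's findings, f keeps decreasing with m, but an excessive m invites spurious curls that a residual-only penalty does not reveal, which is why the spectral analysis of the curve's angle derivative is proposed.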
Abstract:
This article analyses the interrelationship between educational mismatch, wages and job satisfaction in the Spanish tourism sector during the first years of the global economic crisis. It is shown that the incidence of over-education among workers in the Spanish tourism sector is much higher than in the rest of the economy, despite this sector recording lower educational levels. This study estimates two models to analyse the influence of educational mismatch on wages and job satisfaction, both for workers in the tourism industry and for the Spanish economy as a whole. The first model shows that in the tourism sector the wage penalty associated with over-education is approximately 10%. The second reveals that in the tourism sector the satisfaction levels of over-educated workers are considerably lower than those of correctly matched workers. With respect to the differences between tourism and the overall economy in both aspects, the wage penalty is substantially lower in the tourism industries, and the effect of over-education on job satisfaction is very similar to that in the economy as a whole, in a context where both wages and the private returns to education are considerably lower in the tourism sector.
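A wage penalty of this kind is typically read off a Mincer-type wage equation with an over-education indicator; the article's exact specification may differ, but a standard form is

$$
\ln w_i=\beta_0+\beta_1 S_i+\beta_2\,\mathrm{Exp}_i+\beta_3\,\mathrm{Exp}_i^{2}+\delta\,\mathrm{Over}_i+\varepsilon_i ,
$$

where $S_i$ is schooling, $\mathrm{Exp}_i$ experience and $\mathrm{Over}_i$ a dummy for over-educated workers; the over-education wage penalty is then approximately $e^{\delta}-1$ (about $-10\%$ in the tourism sector according to the article).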
Abstract:
Wingtip vortices are created by flying airplanes as a by-product of lift generation. The interaction of these vortices with trailing aircraft has sparked researchers’ interest in developing efficient techniques to destroy them. Different models have been used to describe the vortex dynamics, and they all show that, under real flight conditions, the most unstable modes produce only very weak amplification. Another linear instability mechanism that can produce high energy gains in short times is due to the non-normality of the system. Recently, it has been shown that these non-normal perturbations also produce this energy growth when they are excited with harmonic forcing functions. In this study, we analyze numerically the nonlinear evolution of a spatially pointwise, temporally forced perturbation generated by a synthetic jet at a given radial distance from the vortex core. This type of perturbation is able to produce high energy gains in the perturbed base flow (of order 10^3) and is also a suitable candidate for use in engineering applications. The flow field is computed by fully nonlinear three-dimensional direct numerical simulation with a spectral multidomain penalty method model. Our novel results show that the nonlinear effects are able to produce locally small bursts of instability that reduce the intensity of the primary vortex.
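For context only, the building block of spectral penalty methods is the weak enforcement of boundary and interface conditions through a penalty term; for a 1D advection model problem $u_t + a\,u_x = 0$ ($a>0$) on one subdomain this reads, schematically,

$$
\frac{\partial u_N}{\partial t}+a\,\frac{\partial u_N}{\partial x}
=-\tau\,Q(x)\,\bigl(u_N(x_{\mathrm{in}},t)-g(t)\bigr),
$$

where $u_N$ is the polynomial approximation, $g$ the inflow (or neighbouring-subdomain) data, $Q$ a function localized at the inflow boundary, and $\tau$ a penalty parameter chosen large enough for stability. The simulations summarized above use a full three-dimensional multidomain Navier-Stokes version of this idea.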
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to handling two- and multi-stage optimization problems under uncertainty is to use scenario analysis. To do so, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to follow the progress of the algorithm more closely. Numerical experiments on multistage stochastic linear problem instances suggest that most of the existing techniques may exhibit premature convergence to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review the existing techniques and propose the idea of replacing the quadratic term with a linear one. Although we have yet to test this method, we expect that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
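To show where the penalty parameter enters, the following is a minimal progressive hedging sketch on a toy problem with a scalar first-stage decision x and per-scenario cost (x - a_s)^2; it is not the thesis implementation, and rho below is the (fixed) penalty parameter whose adaptive choice the thesis studies.

```python
import numpy as np

a = np.array([1.0, 4.0, 7.0])         # scenario data
p = np.array([0.5, 0.3, 0.2])          # scenario probabilities
rho = 1.0                              # penalty parameter of the augmented Lagrangian

x = a.copy()                           # scenario-wise copies of the first-stage decision
w = np.zeros_like(a)                   # nonanticipativity multipliers
xbar = p @ x                           # implementable (averaged) decision
for it in range(200):
    # scenario subproblems: argmin_x (x - a_s)^2 + w_s*x + (rho/2)*(x - xbar)^2
    x = (2 * a - w + rho * xbar) / (2 + rho)
    xbar = p @ x                       # aggregation step
    w += rho * (x - xbar)              # multiplier update
    if np.max(np.abs(x - xbar)) < 1e-9:
        break
print(f"iterations: {it + 1}, consensus decision x = {xbar:.6f}")
```

Larger rho forces consensus among the scenario copies more aggressively but can slow progress on the original objective, which is exactly the trade-off the adaptive strategy proposed in the thesis tries to balance.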