906 results for EQUATION-ERROR MODELS


Relevance:

30.00%

Publisher:

Abstract:

With the objective of improving reactor physics calculations for 2D and 3D nuclear reactors via the diffusion equation, an adaptive automatic finite element remeshing method, based on elementary area (2D) or volume (3D) constraints, has been developed. The adaptive remeshing technique, guided by an a posteriori error estimator, makes use of two external mesh generator programs: Triangle and TetGen. These free external finite element mesh generators, combined with an adaptive remeshing technique based on current-field continuity, prove to be powerful tools for improving the neutron flux distribution calculation and, consequently, the power solution of the reactor core, even though they have only a minor influence on the criticality coefficient of the calculated reactor core examples. Two numerical examples are presented: the 2D IAEA reactor core numerical benchmark and a 3D model of the Argonauta research reactor, built in Brazil.
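
As a rough illustration of the adaptive loop described above, the Python sketch below marks the elements with the largest estimated errors and writes a per-triangle .area constraint file before invoking Triangle's refinement mode. The marking rule, the halving of area constraints and the solve_and_estimate hook are illustrative assumptions, not the authors' implementation.

```python
import subprocess
import numpy as np

def adapt_once(mesh_prefix, solve_and_estimate, theta=0.5):
    """One error-driven refinement pass around the external generator Triangle (2D).
    solve_and_estimate(mesh_prefix) is a user-supplied hook (hypothetical) that runs
    the diffusion solver on the current mesh and returns one a posteriori error
    indicator and one current area per triangle (two 1D numpy arrays)."""
    eta, areas = solve_and_estimate(mesh_prefix)
    # Halve the area constraint on elements whose indicator exceeds a fraction of
    # the largest one; -1 leaves the remaining elements unconstrained.
    new_area = np.where(eta > theta * eta.max(), 0.5 * areas, -1.0)
    with open(mesh_prefix + ".area", "w") as f:      # per-triangle area constraints
        f.write(f"{len(new_area)}\n")
        for i, a in enumerate(new_area, start=1):
            f.write(f"{i} {a}\n")
    # -r: refine the existing mesh, -q: quality constraints, -a: read the .area file.
    subprocess.run(["triangle", "-rqa", mesh_prefix], check=True)
```

Calling adapt_once repeatedly, with TetGen playing the analogous role in 3D, gives the kind of remeshing cycle the abstract describes.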

Relevance:

30.00%

Publisher:

Abstract:

In this paper we study the notion of degree for submanifolds embedded in an equiregular sub-Riemannian manifold and we provide the definition of their associated area functional. In this setting we prove that the Hausdorff dimension of a submanifold coincides with its degree, as stated by Gromov. Using these general definitions we compute the first variation for surfaces embedded in low-dimensional manifolds and we obtain the partial differential equation associated with minimal surfaces. These minimal surfaces have several applications in the neurogeometry of the visual cortex.

Relevance:

30.00%

Publisher:

Abstract:

My thesis focuses on health policies designed to encourage the supply of health-care services. Access to health care is a major problem undermining the health systems of most industrialized countries. In Quebec, the median waiting time between a referral by a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, compared with 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy-makers observing growing waiting times for health care, it is important to understand the structure of physicians' labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physicians' productivity is affected by their experience, through the mechanism of learning-by-doing, and I use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires while keeping the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring how physicians' productivity varies over their career), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided and their prices, and cover a period during which the Quebec government changed the relative prices of health services. I use a model-based approach to develop and estimate a structural model of labour supply in which physicians are allowed to multitask. In my model, physicians choose the number of hours worked and the allocation of those hours across the different services provided, while service prices are set by the government. The model generates an income equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services provided and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on Quebec physicians' remuneration merged with data on the same physicians' time use. I use the model to examine two dimensions of the supply of health services. First, I analyze the use of monetary incentives to induce physicians to change their production of the various services. Although previous studies have often compared physicians' behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services.
Current debates in Canadian health-policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services. My work contributes to this debate by identifying and estimating the substitution and income effects arising from changes in the relative prices of health services. Second, I analyze how experience affects physicians' productivity. This has important implications for the recruitment of physicians needed to meet the growing demand of an aging population, particularly as the most experienced (most productive) physicians retire. In the first essay, I estimate the income equation conditional on hours worked, using an instrumental-variables approach to control for the potential endogeneity of hours worked. As instruments I use indicator variables for physicians' ages, the marginal tax rate, the stock-market return, and its square and cube. I show that this yields a lower bound on the own-price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more working hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analyzing variations in physicians' hours worked, the volume of services provided, and physicians' income. To do so, I use the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting physicians' tendency to increase the volume of the service whose price has risen the most. The cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (average elasticity of -0.02) as well as their clinical hours worked (average elasticity of -0.07). They would also reduce the volume of services provided (average elasticity of -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his productivity to establish physicians' productivity profile. To do so, I modify the model specification to account for the relationship between a physician's productivity and his experience. I estimate the income equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions.
In addition, if a physician's experience increases by one year, his production of services rises by 1,003 Canadian dollars. I use the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
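
A bare-bones version of the instrumental-variables step in the first essay could look like the sketch below; the variable names and the manual two-stage least squares routine are illustrative assumptions, not the thesis's actual estimator or data.

```python
import numpy as np

def two_stage_least_squares(y, x_endog, X_exog, Z):
    """Generic 2SLS for an income equation with one endogenous regressor
    (hours worked), instrumented by Z, e.g. age dummies, the marginal tax
    rate, the stock-market return and its square and cube."""
    W = np.column_stack([X_exog, Z])
    # First stage: fitted values of the endogenous regressor.
    gamma, *_ = np.linalg.lstsq(W, x_endog, rcond=None)
    x_hat = W @ gamma
    # Second stage: regress income on the fitted regressor and the controls.
    X2 = np.column_stack([x_hat, X_exog])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta          # beta[0]: coefficient on (instrumented) hours worked
```

The own-price elasticity bounds discussed above would then be derived from such coefficient estimates together with the price-index parameters.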

Relevance:

30.00%

Publisher:

Abstract:

We develop the a posteriori error estimation of interior penalty discontinuous Galerkin discretizations for H(curl)-elliptic problems that arise in eddy current models. Computable upper and lower bounds on the error measured in terms of a natural (mesh-dependent) energy norm are derived. The proposed a posteriori error estimator is validated by numerical experiments, illustrating its reliability and efficiency for a range of test problems.
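
In symbols, "computable upper and lower bounds" means estimates of the following generic shape; the precise residual and jump terms and the constants are specific to the paper and are not reproduced here.

```latex
\[
  c\,\eta \;\le\; \|\mathbf{u}-\mathbf{u}_h\|_{\mathrm{DG}} \;\le\; C\,\eta ,
  \qquad
  \eta^2 \;=\; \sum_{K\in\mathcal{T}_h}\bigl(\eta_{R,K}^2+\eta_{J,K}^2\bigr),
\]
```

where eta_{R,K} collects element residuals, eta_{J,K} the face jumps of the discrete solution u_h, and the norm is the mesh-dependent energy norm; the upper bound expresses reliability and the lower bound efficiency.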

Relevance:

30.00%

Publisher:

Abstract:

Over the past few years, the number of wireless network users has been increasing. Until now, Radio Frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, calling for alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communications (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it is easy to modulate and offers high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability when operating over noisy channels, where the received data can be affected by errors. To ensure proper system operation, a Channel Encoder is usually employed. Its function is to encode the data to be transmitted so as to increase system performance, commonly using error-correcting codes (ECC), which append redundant information to the original data. At the receiver side, this redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a Channel Encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC and puncturing. A detailed analysis of the characteristics of each technique was carried out in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. The models were then implemented on an FPGA and simulations were performed; hardware co-simulations were also used to speed up the simulations. In the end, different techniques were combined to create a complete Channel Encoder capable of detecting and correcting random and burst errors, thanks to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data by means of the CRC-32 algorithm.
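
The role of the block interleaver and the CRC-32 check mentioned above can be sketched in a few lines of Python. The 8x8 matrix size and the byte-level framing are illustrative assumptions (the Reed-Solomon encoder itself is omitted), not the dissertation's Simulink/FPGA implementation.

```python
import zlib

def block_interleave(data: bytes, rows: int, cols: int) -> bytes:
    """Write the payload row by row into a rows x cols matrix and read it out
    column by column, so a burst of channel errors is spread across several
    Reed-Solomon codewords."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def block_deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    """Inverse permutation performed at the receiver."""
    assert len(data) == rows * cols
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

def append_crc32(payload: bytes) -> bytes:
    """Append a CRC-32 so that uncorrectable residual errors can still be
    detected after decoding, as described in the abstract."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

# Round-trip check on dummy data (60 payload bytes + 4 CRC bytes = 8 x 8).
msg = append_crc32(bytes(range(60)))
tx = block_interleave(msg, rows=8, cols=8)
assert block_deinterleave(tx, rows=8, cols=8) == msg
```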

Relevance:

30.00%

Publisher:

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, owing to noise and ambiguity in the underlying data or errors made by the extraction system, together with the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
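
A toy version of the convex inference step is sketched below: three candidate facts get soft truth values in [0,1], hinge-loss potentials penalize disagreement with extractor confidences and the violation of one ontological rule, and a convex solver finds the most probable assignment. The weights, the rule and the scipy-based solver are illustrative stand-ins, not the dissertation's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

# x = soft truth values of three candidate facts:
#   x[0] = spouse(A, B), x[1] = person(A), x[2] = person(B)
confidence = np.array([0.9, 0.3, 0.8])   # noisy extractor scores (illustrative)
w_extract, w_rule = 1.0, 2.0             # potential weights (illustrative)

def energy(x):
    # Hinge-loss potentials: penalize truth values that fall below the
    # extractor confidences ...
    extraction = w_extract * np.sum(np.maximum(0.0, confidence - x) ** 2)
    # ... and violations of the rule spouse(A,B) & person(B) -> person(A),
    # written with the Lukasiewicz relaxation used in hinge-loss MRFs.
    rule = w_rule * max(0.0, x[0] + x[2] - 1.0 - x[1]) ** 2
    return extraction + rule

result = minimize(energy, x0=np.full(3, 0.5), bounds=[(0.0, 1.0)] * 3)
print("most probable truth values:", np.round(result.x, 3))
```

Because every potential is a convex function of x and the domain is a box, the most probable knowledge graph is found by convex optimization rather than combinatorial search, which is what makes the approach scale.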

Relevance:

30.00%

Publisher:

Abstract:

Given the significance of econometric models in the foreign exchange market, the purpose of this research is to take a closer look at some important issues in this area. The research covers exchange rate pass-through into import prices, liquidity risk and expected returns in the currency market, and the common risk factors in currency markets. Firstly, given the importance of exchange rate pass-through in financial economics, the first empirical chapter studies the degree of exchange rate pass-through into import prices in emerging economies and developed countries, using panel evidence for comparison over the period 1970-2009. The pooled mean group estimation (PMGE) is used to investigate the short-run coefficients and error variance. In general, the results show that import prices are affected positively, though incompletely, by the exchange rate. Secondly, the following study addresses the question of whether there is a relationship between cross-sectional differences in foreign exchange returns and the sensitivities of those returns to fluctuations in liquidity, known as liquidity betas, using a unique dataset of weekly order flow. Finally, the last study follows Lustig, Roussanov and Verdelhan (2011), who show that the large co-movement among exchange rates of different currencies supports a risk-based view of exchange rate determination, and explores the identification of a slope factor in exchange rate changes. The study first constructs monthly portfolios of currencies sorted on the basis of their forward discounts: the lowest-interest-rate currencies are contained in the first portfolio and the highest-interest-rate currencies in the last. The results show that portfolios with higher forward discounts tend to contain currencies with higher real interest rates overall, comparing the first and the last portfolio, although fluctuations occur.
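
The portfolio construction described in the last study can be illustrated as follows; the column names (month, forward_discount, excess_return) and the choice of six portfolios are assumptions for the sketch, not the chapter's exact design.

```python
import pandas as pd

def sort_into_portfolios(df: pd.DataFrame, n_portfolios: int = 6) -> pd.Series:
    """Each month, rank currencies by their forward discount and split them into
    n_portfolios bins: portfolio 1 holds the lowest-forward-discount (lowest
    interest rate) currencies, the last portfolio the highest."""
    return df.groupby("month", group_keys=False)["forward_discount"].apply(
        lambda fd: pd.qcut(fd, n_portfolios, labels=range(1, n_portfolios + 1))
    )

# Usage sketch:
#   df["portfolio"] = sort_into_portfolios(df)
#   df.groupby("portfolio")["excess_return"].mean()   # average return per portfolio
```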

Relevance:

30.00%

Publisher:

Abstract:

The American woodcock (Scolopax minor) population index in North America has declined by 0.9% a year since 1968, prompting managers to identify priority information and management needs for the species (Sauer et al. 2008). Managers identified a need for a population model that better informs on the status of American woodcock populations (Case et al. 2010). Population reconstruction techniques use long-term age-at-harvest data and harvest effort to estimate abundances with error estimates. Four new models were successfully developed using survey data (1999 to 2013). The optimal model estimates sex-specific harvest probabilities of 0.148 (SE = 0.017) for adult females and 0.082 (SE = 0.008) for all other age-sex cohorts in the most recent year, 2013. The model estimated a yearly survival rate of 0.528 (SE = 0.008). Total abundance ranged from 5,206,000 woodcock in 2007 to 6,075,800 woodcock in 1999. This study provides the first such abundance estimates for woodcock populations.
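
The cohort bookkeeping behind such age-at-harvest reconstructions can be written, in a simplified and illustrative form that is not necessarily the exact likelihood used in this study, as

```latex
\[
  N_{a+1,\,t+1} \;=\; N_{a,\,t}\,\bigl(1 - h_{a,\,t}\bigr)\,S ,
  \qquad
  \mathbb{E}\bigl[C_{a,\,t}\bigr] \;=\; N_{a,\,t}\, h_{a,\,t},
\]
```

where N_{a,t} is the abundance of age class a in year t, h_{a,t} the harvest probability (reported above as 0.148 for adult females and 0.082 otherwise in 2013), S the annual survival rate (0.528), and C_{a,t} the observed age-at-harvest count.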

Relevance:

30.00%

Publisher:

Abstract:

This thesis aims to develop new numerical and computational tools to study electrochemical transport and diffuse charge dynamics at small scales. Previous efforts at modeling electrokinetic phenomena at scales where the noncontinuum effects become significant have included continuum models based on the Poisson-Nernst-Planck equations and atomic simulations using molecular dynamics algorithms. Neither of them is easy to use or conducive to electrokinetic transport modeling in strong confinement or over long time scales. This work introduces a new approach based on a Langevin equation for diffuse charge dynamics in nanofluidic devices, which incorporates features from both continuum and atomistic methods. The model is then extended to include steric effects resulting from finite ion size, and applied to the phenomenon of double layer charging in a symmetric binary electrolyte between parallel-plate blocking electrodes, between which a voltage is applied. Finally, the results of this approach are compared to those of the continuum model based on the Poisson-Nernst-Planck equations.
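
A minimal flavour of such a Langevin description, stripped of the steric term and of a self-consistent Poisson solve, is sketched below; all parameter values, the uniform applied field and the clamping at the walls are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def langevin_step(x, q, E_field, mobility, D, dt, L, rng):
    """One Euler-Maruyama step of an overdamped Langevin model for point ions
    between blocking electrodes at x = 0 and x = L: deterministic drift in the
    local electric field plus thermal noise."""
    drift = mobility * q * E_field(x)                     # electromigration
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
    x_new = x + drift * dt + noise
    return np.clip(x_new, 0.0, L)      # crude stand-in for blocking (no-flux) walls

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)                      # 1000 ions of charge +1
q = np.ones_like(x)
for _ in range(100):
    x = langevin_step(x, q, E_field=lambda x: -1.0,       # uniform applied field
                      mobility=1.0, D=1.0, dt=1e-4, L=1.0, rng=rng)
```

In the full model the field would be obtained from the instantaneous charge density (e.g. via a Poisson solve) and a steric repulsion would enforce finite ion size, which is what produces the double-layer charging dynamics studied in the thesis.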

Relevance:

30.00%

Publisher:

Abstract:

The development of accurate modeling techniques for nanoscale thermal transport is an active area of research. Modern day nanoscale devices have length scales of tens of nanometers and are prone to overheating, which reduces device performance and lifetime. Therefore, accurate temperature profiles are needed to predict the reliability of nanoscale devices. The majority of models that appear in the literature obtain temperature profiles through the solution of the Boltzmann transport equation (BTE). These models often make simplifying assumptions about the nature of the quantized energy carriers (phonons). Additionally, most previous work has focused on simulation of planar two-dimensional structures. This thesis presents a method which captures the full anisotropy of the Brillouin zone within a three-dimensional solution to the BTE. The anisotropy of the Brillouin zone is captured by solving the BTE for all vibrational modes allowed by the Born-von Karman boundary conditions.
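
For reference, the standard form of the phonon BTE and of the Born-von Karman wavevector grid referred to above is (the collision operator and dispersion details are model-specific and not spelled out in the abstract; the wavevector formula is written for a simple cubic lattice for illustration):

```latex
\[
  \frac{\partial f_{\mathbf{k},p}}{\partial t}
  + \mathbf{v}_{g}(\mathbf{k},p)\cdot\nabla_{\mathbf{r}} f_{\mathbf{k},p}
  = \left.\frac{\partial f_{\mathbf{k},p}}{\partial t}\right|_{\mathrm{coll}},
  \qquad
  \mathbf{k} = \frac{2\pi}{N a}\,(n_1, n_2, n_3),
  \quad n_i = 0,1,\dots,N-1,
\]
```

for each phonon branch p, where the discrete wavevectors k are those allowed by periodic (Born-von Karman) boundary conditions on an N x N x N supercell of lattice constant a, and v_g is the group velocity.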

Relevance:

30.00%

Publisher:

Abstract:

This study aims to model and forecast tourism demand for Mozambique for the period from January 2004 to December 2013 using artificial neural network models. The number of overnight stays in hotels was used as a proxy for tourism demand. A set of independent variables was tested as model inputs, namely the Consumer Price Index, Gross Domestic Product and exchange rates of the outbound tourism markets: South Africa, the United States of America, Mozambique, Portugal and the United Kingdom. The best model achieved a Mean Absolute Percentage Error of 6.5% and a Pearson correlation coefficient of 0.696. A model like this, with high forecast accuracy, is important for economic agents to anticipate the future growth of this activity sector, for stakeholders to provide products, services and infrastructure, and for hotel establishments to adjust their capacity to the tourism demand.
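
For illustration only, the sketch below fits a small feed-forward network to synthetic monthly data and reports the same two accuracy measures quoted above (MAPE and Pearson correlation); the architecture, the synthetic data and the 12-month hold-out are assumptions, not the study's actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in: X would hold the explanatory variables named above
# (CPI, GDP, exchange rates of the outbound markets), y the monthly overnight stays.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 15))                 # 120 months (2004-2013), 15 inputs
y = 50_000 + 10_000 * X[:, 0] + rng.normal(scale=2_000, size=120)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
model.fit(X[:-12], y[:-12])                    # hold out the last 12 months
pred = model.predict(X[-12:])

mape = np.mean(np.abs((y[-12:] - pred) / y[-12:])) * 100
r = np.corrcoef(y[-12:], pred)[0, 1]           # Pearson correlation, as reported
print(f"MAPE = {mape:.1f}%  Pearson r = {r:.3f}")
```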

Relevance:

30.00%

Publisher:

Abstract:

We introduce a residual-based a posteriori error indicator for discontinuous Galerkin discretizations of the biharmonic equation with essential boundary conditions. We show that the indicator is both reliable and efficient with respect to the approximation error measured in terms of a natural energy norm, under minimal regularity assumptions. We validate the performance of the indicator within an adaptive mesh refinement procedure and show its asymptotic exactness for a range of test problems.
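
In its standard form, the model problem with essential (clamped) boundary conditions reads as follows; the paper's precise setting (e.g. inhomogeneous boundary data) may differ.

```latex
\[
  \Delta^2 u = f \ \ \text{in } \Omega,
  \qquad
  u = \frac{\partial u}{\partial n} = 0 \ \ \text{on } \partial\Omega .
\]
```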

Relevance:

30.00%

Publisher:

Abstract:

We summarise the properties and the fundamental mathematical results associated with basic models which describe coagulation and fragmentation processes in a deterministic manner and in which cluster size is a discrete quantity (an integer multiple of some basic unit size). In particular, we discuss Smoluchowski's equation for aggregation, the Becker-Döring model of simultaneous aggregation and fragmentation, and more general models involving coagulation and fragmentation.
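
For reference, the standard discrete forms of the two named models are, in the usual notation with coagulation kernel a_{j,k}, Becker-Doring rate coefficients a_j, b_j and fluxes J_j:

```latex
\begin{align*}
  \text{Smoluchowski:}\quad
  \frac{dc_j}{dt} &= \frac{1}{2}\sum_{k=1}^{j-1} a_{j-k,k}\,c_{j-k}c_k
                     \;-\; c_j \sum_{k=1}^{\infty} a_{j,k}\,c_k, \\
  \text{Becker--D\"oring:}\quad
  \frac{dc_j}{dt} &= J_{j-1} - J_j \ \ (j\ge 2), \qquad
  \frac{dc_1}{dt} = -J_1 - \sum_{j=1}^{\infty} J_j, \\
  J_j &= a_j\,c_1 c_j - b_{j+1}\,c_{j+1},
\end{align*}
```

where c_j(t) is the concentration of clusters made of j basic units; the more general coagulation-fragmentation models mentioned above combine binary coalescence and cluster break-up terms of the same type.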

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Agronomia e Medicina Veterinária, Programa de Pós-Graduação em Agronegócios, 2016.

Relevance:

30.00%

Publisher:

Abstract:

We develop the energy-norm a posteriori error estimation for hp-version discontinuous Galerkin (DG) discretizations of elliptic boundary-value problems on 1-irregularly, isotropically refined affine hexahedral meshes in three dimensions. We derive a reliable and efficient indicator for the errors measured in terms of the natural energy norm. The ratio of the efficiency and reliability constants is independent of the local mesh sizes and depends only weakly on the polynomial degrees. In our analysis we make use of an hp-version averaging operator in three dimensions, which we explicitly construct and analyze. We use our error indicator in an hp-adaptive refinement algorithm and illustrate its practical performance in a series of numerical examples. Our numerical results indicate that exponential rates of convergence are achieved for problems with smooth solutions, as well as for problems with isotropic corner singularities.
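
One marking step of a generic hp-adaptive loop of the kind referred to above is sketched below; the Dörfler-type bulk criterion and the threshold-based smoothness test are illustrative generic choices, not the algorithm analyzed in the paper.

```python
import numpy as np

def mark_hp(indicators, smoothness, theta=0.25):
    """Mark the smallest set of elements carrying a theta-fraction of the total
    squared error, then split the marked set into p-refinement (locally smooth
    solution: raise the polynomial degree) and h-refinement (split the element)."""
    order = np.argsort(indicators)[::-1]              # largest indicators first
    cumulative = np.cumsum(indicators[order] ** 2)
    n_marked = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    marked = order[:n_marked]
    refine_p = [int(k) for k in marked if smoothness[k] > 0.5]
    refine_h = [int(k) for k in marked if smoothness[k] <= 0.5]
    return refine_h, refine_p

# Toy usage: 10 elements with random indicators and smoothness scores in [0, 1].
rng = np.random.default_rng(0)
print(mark_hp(rng.random(10), rng.random(10)))
```

Alternating such marking with local h- or p-refinement is what produces the exponential convergence observed in the numerical examples when the solution is smooth or has isolated singularities.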