931 results for ESTIMATING EQUATIONS METHOD


Relevance:

30.00%

Publisher:

Abstract:

Body size is a key determinant of metabolic rate, but logistical constraints have led to a paucity of energetics measurements from large water-breathing animals. As a result, estimating the energy requirements of large fish generally relies on extrapolating metabolic rate from individuals of lower body mass using allometric relationships that are notoriously variable. Swim-tunnel respirometry is the ‘gold standard’ for measuring active metabolic rates in water-breathing animals, yet previous data are derived entirely from body masses <10 kg – at least one order of magnitude lower than the body masses of many top-order marine predators. Here, we describe the design and testing of a new method for measuring metabolic rates of large water-breathing animals: a c. 26 000 L seagoing ‘mega-flume’ swim-tunnel respirometer. We measured the swimming metabolic rate of a 2.1-m, 36-kg zebra shark Stegostoma fasciatum within this new mega-flume and compared the results to data we collected from other S. fasciatum (3.8–47.7 kg body mass) swimming in static respirometers, and to previously published active metabolic rate measurements from other shark species. The mega-flume performed well during initial tests, with intra- and interspecific comparisons suggesting that accurate metabolic rate measurements can be obtained with this new tool. Including our data, the scaling exponent of active metabolic rate with mass for sharks ranging from 0.13 to 47.7 kg was 0.79, a value similar to previous estimates for resting metabolic rates in smaller fishes. We describe the operation and usefulness of this new method in the context of current uncertainties surrounding the energy requirements of large water-breathing animals. We also highlight the sensitivity of mass-extrapolated energetic estimates in large aquatic animals and discuss the consequences for predicting ecosystem impacts such as trophic cascades.
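
As a rough numerical illustration of the mass-extrapolation sensitivity highlighted above, the sketch below compares predictions under two candidate scaling exponents. Only the exponent 0.79 comes from the abstract; the reference measurement, its units and the alternative exponent are hypothetical.

```python
# Minimal sketch: sensitivity of mass-extrapolated metabolic rate under an
# allometric law MR = a * M**b. The reference measurement below is invented;
# b = 0.79 is the exponent reported in the abstract, b = 0.89 an arbitrary
# alternative to show how strongly the extrapolated value depends on b.
def extrapolate(mr_ref, m_ref, m_target, b):
    """Extrapolate metabolic rate from a reference mass to a target mass."""
    return mr_ref * (m_target / m_ref) ** b

mr_small = 50.0  # hypothetical measured rate (mg O2/h) for a 0.13 kg shark
for b in (0.79, 0.89):
    mr_large = extrapolate(mr_small, 0.13, 47.7, b)
    print(f"b = {b}: predicted rate at 47.7 kg = {mr_large:.0f} mg O2/h")
# A 0.10 change in b shifts the 47.7 kg prediction by a factor of
# (47.7 / 0.13) ** 0.10 ~= 1.8, illustrating the extrapolation risk noted above.
```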


Relevance:

30.00%

Publisher:

Abstract:

We introduce a hybrid method for dielectric-metal composites that describes the dynamics of the metallic system classically whilst retaining a quantum description of the dielectric. The time-dependent dipole moment of the classical system is mimicked by the introduction of projected equations of motion (PEOM) and the coupling between the two systems is achieved through an effective dipole-dipole interaction. To benchmark this method, we model a test system (semiconducting quantum dot-metal nanoparticle hybrid). We begin by examining the energy absorption rate, showing agreement between the PEOM method and the analytical rotating wave approximation (RWA) solution. We then investigate population inversion and show that the PEOM method provides an accurate model for the interaction under ultrashort pulse excitation where the traditional RWA breaks down.
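
The following minimal sketch is not the paper's PEOM scheme; it only reproduces the generic phenomenon referred to above, comparing the full dynamics of a two-level emitter with its rotating wave approximation (RWA) under an ultrashort pulse, where the RWA is expected to degrade. All parameters (transition frequency, pulse length, Rabi amplitude) are illustrative.

```python
# Minimal sketch (not the paper's PEOM): population inversion of a two-level
# system under an ultrashort pulse, full Schrodinger dynamics vs. the RWA.
# hbar = 1; all parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

w0 = 1.0                        # transition frequency
tau = 3 * 2 * np.pi / w0        # pulse length ~ 3 optical cycles (ultrashort)
rabi = 0.4 * w0                 # peak Rabi frequency mu*E0/hbar

def envelope(t):                # Gaussian pulse envelope centred at 5*tau
    return rabi * np.exp(-((t - 5 * tau) / tau) ** 2)

def full(t, c):                 # H = [[0, -W(t)cos(w0 t)], [-W(t)cos(w0 t), w0]]
    cg, ce = c[0] + 1j * c[1], c[2] + 1j * c[3]
    drive = envelope(t) * np.cos(w0 * t)
    dcg = 1j * drive * ce
    dce = 1j * drive * cg - 1j * w0 * ce
    return [dcg.real, dcg.imag, dce.real, dce.imag]

def rwa(t, c):                  # resonant rotating frame: H = -(W/2) sigma_x
    cg, ce = c[0] + 1j * c[1], c[2] + 1j * c[3]
    dcg = 1j * envelope(t) / 2 * ce
    dce = 1j * envelope(t) / 2 * cg
    return [dcg.real, dcg.imag, dce.real, dce.imag]

t_span, y0 = (0.0, 10 * tau), [1.0, 0.0, 0.0, 0.0]
for name, rhs in (("full", full), ("RWA", rwa)):
    sol = solve_ivp(rhs, t_span, y0, max_step=0.05)
    w = (sol.y[2] ** 2 + sol.y[3] ** 2) - (sol.y[0] ** 2 + sol.y[1] ** 2)
    print(f"{name}: final inversion w = {w[-1]:+.3f}")
# For few-cycle, strong pulses the two final inversions differ noticeably,
# which is the regime where an RWA-free treatment is needed.
```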

Relevance:

30.00%

Publisher:

Abstract:

Development of reliable methods for optimised energy storage and generation is one of the most pressing challenges in modern power systems. In this paper, an adaptive approach to the load levelling problem is proposed, using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve the inverse problem, taking into account both the time-dependent efficiencies and the availability of generation/storage of each energy storage technology. A direct numerical method is employed to find the least-cost dispatch of the available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties associated with the confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
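
A minimal sketch of a direct collocation-type scheme for a first-kind Volterra equation, the family of methods referred to above; the kernel and right-hand side below form a smooth toy problem (exact solution x(s) = cos s), not the paper's storage-dispatch models.

```python
# Midpoint collocation for a Volterra integral equation of the first kind,
#   int_0^t K(t, s) x(s) ds = f(t),
# solved as a lower-triangular forward substitution. Toy data, not the
# paper's models; for smooth kernels this scheme is second-order accurate.
import numpy as np

def solve_volterra_1st(K, f, T, n):
    h = T / n
    s = (np.arange(n) + 0.5) * h      # midpoint quadrature nodes
    t = (np.arange(n) + 1.0) * h      # collocation points
    x = np.zeros(n)
    for i in range(n):
        acc = h * np.dot(K(t[i], s[:i]), x[:i])      # known part of the sum
        x[i] = (f(t[i]) - acc) / (h * K(t[i], s[i]))  # solve for the new value
    return s, x

# Toy problem: K = 1, f(t) = sin t, exact solution x(s) = cos s.
s, x = solve_volterra_1st(lambda t, s: np.ones_like(s), np.sin, T=1.0, n=200)
print("max error:", np.abs(x - np.cos(s)).max())  # ~O(h^2) for this toy case
```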

Relevance:

30.00%

Publisher:

Abstract:

The chloride mass balance method was used to estimate the average diffuse groundwater recharge in northeastern Gran Canaria (Canary Islands), where the largest recharge to the volcanic island aquifer occurs. Rainwater was sampled monthly in ten rainwater collectors to determine the bulk deposition rate of chloride for the 2008–2014 period. Average chloride deposition decreases inland from more than 10 g·m⁻²·year⁻¹ to about 4 g·m⁻²·year⁻¹. The application of the chloride mass balance method resulted in an estimated average recharge of about 28 hm³/year, or 92 mm/year (24% of precipitation), in the study area after subtracting chloride loss with surface runoff.
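
For reference, the chloride mass balance underlying these estimates can be written as below; the recharge-water chloride concentration used in the worked comment is a hypothetical round value chosen only to reproduce the order of magnitude quoted.

```latex
% Chloride mass balance (CMB) in the form used for diffuse recharge; the
% recharge-water concentration [Cl]_R below is a hypothetical round value.
\[
  R \;=\; \frac{D - D_{\mathrm{runoff}}}{[\mathrm{Cl}]_R},
\]
% with R the recharge, D the bulk chloride deposition rate and D_runoff the
% chloride exported by surface runoff. Units: (g m^-2 yr^-1)/(g m^-3) = m/yr.
% E.g. a net deposition of 4.6 g m^-2 yr^-1 with [Cl]_R = 50 g m^-3 gives
% R = 0.092 m/yr = 92 mm/yr, the order of magnitude quoted in the abstract.
```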

Relevance:

30.00%

Publisher:

Abstract:

My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median waiting time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing rising waiting times for health care, it is important to understand the structure of physician labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives, and I use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through a learning-by-doing mechanism, and I use the estimated parameters to find how many inexperienced physicians must be recruited to replace an experienced physician who retires, so as to keep the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring how physician productivity varies over the career), using panel data on Quebec physicians from both survey and administrative sources. The data contain information on each physician's labour supply, the different types of services provided, and their prices. They cover a period during which the Quebec government changed the relative prices of health services. I develop and estimate a structural labour supply model in which physicians are multitasking: they choose the number of hours worked as well as the allocation of those hours across the different services, while the prices of services are set by the government. The model generates an earnings equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services provided and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on the compensation of Quebec physicians merged with time-use data on the same physicians. I use the model to examine two dimensions of the supply of health services. First, I analyze the use of monetary incentives to induce physicians to modify their production of the various services. Although previous studies have often compared physician behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services.
Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services. My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyze how experience affects physician productivity. This has important implications for the recruitment of physicians to meet the growing demand of an aging population, particularly as the most experienced (most productive) physicians retire. In the first essay, I estimate the earnings function conditional on hours worked, using the instrumental variables method to control for potential endogeneity of hours worked. As instruments, I use indicator variables for physician age, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the direct price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds of the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives: a change in relative prices leads physicians to allocate more working hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analyzing variation in physicians' hours worked, the volume of services provided, and physicians' incomes. To do so, I use the simulated method of moments estimator. The results show that the direct substitution price elasticities are large and significantly positive, reflecting a tendency of physicians to increase the volume of the service whose price has increased the most. The cross substitution price elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (mean elasticity of -0.02) as well as their clinical hours worked (mean elasticity of -0.07). They would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his productivity to establish the productivity profile of physicians. To do so, I modify the model specification to account for the relationship between a physician's productivity and his experience. I estimate the earnings equation using unbalanced panel data, correcting for the non-random character of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to dropping the parametric assumptions.
In addition, one more year of physician experience increases the production of services by 1,003 Canadian dollars. I use the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
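
As a minimal sketch of the instrumental-variables step described in the first essay, the following two-stage least squares (2SLS) example uses simulated data and a single stand-in instrument z; the thesis's actual instruments are physician-age indicators, the marginal tax rate and stock-market returns.

```python
# Minimal 2SLS sketch for an earnings equation with endogenous hours.
# Data are simulated; the true coefficient on hours is 1.5, and the error u
# is built to be correlated with hours so that plain OLS is biased.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                    # instrument (stand-in)
u = rng.normal(size=n)                    # error correlated with hours
hours = 0.8 * z + 0.6 * u + rng.normal(size=n)
log_income = 1.5 * hours + u

X = np.column_stack([np.ones(n), hours])  # regressors (const, endogenous)
Z = np.column_stack([np.ones(n), z])      # instruments (const, exogenous)

# Stage 1: project X on the instruments; Stage 2: OLS on the fitted values.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, log_income, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, log_income, rcond=None)[0]
print(f"OLS (biased): {beta_ols[1]:.2f}, 2SLS (consistent): {beta_2sls[1]:.2f}")
```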

Relevance:

30.00%

Publisher:

Abstract:

One of the most disputed matters in the theory of finance has been the theory of capital structure. The seminal contributions of Modigliani and Miller (1958, 1963) gave rise to a multitude of studies and debates. Since the initial spark, the financial literature has offered two competing theories of the financing decision: the trade-off theory and the pecking order theory. The trade-off theory suggests that firms have an optimal capital structure balancing the benefits and costs of debt. The pecking order theory approaches firm capital structure from an information asymmetry perspective and assumes a hierarchy of financing, with firms using internal funds first, followed by debt and, as a last resort, equity. This thesis analyses the trade-off and pecking order theories and their predictions on panel data consisting of 78 Finnish firms listed on the OMX Helsinki stock exchange. Estimations are performed for the period 2003–2012. The data are collected from the Datastream system and consist of financial statement data. A number of capital structure determinants are identified: firm size, profitability, firm growth opportunities, risk, asset tangibility and taxes, speed of adjustment, and financial deficit. A regression analysis is used to examine the effects of the firm characteristics on capital structure, with the regression models formed based on the relevant theories. The general capital structure model is estimated with a fixed effects estimator. Dynamic models play an important role in several areas of corporate finance, but with the combination of fixed effects and lagged dependent variables the model estimation is more complicated. A dynamic partial adjustment model is therefore estimated using the Arellano and Bond (1991) first-differencing generalized method of moments, as well as the ordinary least squares and fixed effects estimators. The results for Finnish listed firms show support for the predictions regarding profitability, firm size and non-debt tax shields. No conclusive support for the pecking order theory is found; however, its effect cannot be fully ignored, and it is concluded that, instead of being substitutes, the trade-off and pecking order theories appear to complement each other. For the partial adjustment model, the results show that Finnish listed firms adjust towards their target capital structure at a speed of 29% a year, using the book debt ratio.
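
For reference, a standard statement of the partial adjustment model estimated above (notation assumed, not taken from the thesis):

```latex
% Standard partial adjustment model: D_it is the observed (book) debt ratio,
% D*_it the target ratio implied by firm characteristics x_it, and lambda the
% speed of adjustment (about 0.29 per year in the results above).
\[
  D_{it} - D_{i,t-1} \;=\; \lambda\,\bigl(D^{*}_{it} - D_{i,t-1}\bigr) + \varepsilon_{it},
  \qquad
  D^{*}_{it} \;=\; \beta' x_{it},
\]
% which rearranges to D_it = (1 - lambda) D_{i,t-1} + lambda beta' x_it + e_it.
% The lagged dependent variable combined with firm fixed effects is what
% motivates the Arellano-Bond first-differencing GMM estimator cited above.
```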

Relevance:

30.00%

Publisher:

Abstract:

In this article we consider the application of the generalization of the symmetric version of the interior penalty discontinuous Galerkin finite element method to the numerical approximation of the compressible Navier–Stokes equations. In particular, we consider the a posteriori error analysis and adaptive mesh design for the underlying discretization method. Indeed, by employing a duality argument, (weighted) Type I a posteriori bounds are derived for the estimation of the error measured in terms of general target functionals of the solution; these error estimates involve the product of the finite element residuals with local weighting terms involving the solution of a certain dual problem that must be numerically approximated. This general approach leads to the design of economical finite element meshes specifically tailored to the computation of the target functional of interest, as well as providing efficient error estimation. Numerical experiments demonstrating the performance of the proposed approach will be presented.
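
Schematically, the weighted Type I bound described above takes the following form (generic notation, not the paper's exact statement):

```latex
% Schematic weighted (Type I) a posteriori bound obtained from a duality
% argument; the notation is generic rather than the paper's statement.
\[
  |J(u) - J(u_h)| \;\lesssim\; \sum_{\kappa \in \mathcal{T}_h} \eta_\kappa,
  \qquad
  \eta_\kappa \;=\; \bigl|\,\bigl(R_h(u_h),\, z - z_h\bigr)_\kappa\,\bigr|,
\]
% where R_h(u_h) denotes the finite element residuals on element kappa, z the
% solution of the dual (adjoint) problem associated with the target functional
% J (approximated numerically), and z_h its finite element projection; the
% local indicators eta_kappa then drive the adaptive mesh refinement.
```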

Relevance:

30.00%

Publisher:

Abstract:

The modelling of diffusive terms in particle methods is a delicate matter and several models were proposed in the literature to take such terms into account. The diffusion velocity method (DVM), originally designed for the diffusion of passive scalars, turns diffusive terms into convective ones by expressing them as a divergence involving a so-called diffusion velocity. In this paper, DVM is extended to the diffusion of vectorial quantities in the three-dimensional Navier–Stokes equations, in their incompressible, velocity–vorticity formulation. The integration of a large eddy simulation (LES) turbulence model is investigated and a DVM general formulation is proposed. Either with or without LES, a novel expression of the diffusion velocity is derived, which makes it easier to approximate and which highlights the analogy with the original formulation for scalar transport. From this statement, DVM is then analysed in one dimension, both analytically and numerically on test cases to point out its good behaviour.
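
The core identity behind the diffusion velocity method can be stated as follows for a scalar θ (generic form; the paper's contribution is the extension to vectorial quantities in the velocity–vorticity Navier–Stokes equations and to an LES setting):

```latex
% Core identity of the diffusion velocity method (DVM) for a scalar theta.
\[
  \nu\,\Delta\theta \;=\; -\,\nabla \cdot \bigl(u_d\,\theta\bigr),
  \qquad
  u_d \;=\; -\,\nu\,\frac{\nabla\theta}{\theta},
\]
% so the advection-diffusion equation becomes a pure transport equation,
%   \partial_t \theta + \nabla \cdot \bigl((u + u_d)\,\theta\bigr) = 0,
% and particles are simply convected with the velocity u + u_d, with no
% explicit diffusion operator left to discretize.
```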

Relevance:

30.00%

Publisher:

Abstract:

Kidney transplantation is the treatment of choice for end-stage renal disease. The evaluation of graft function is mandatory in the management of renal transplant recipients. Glomerular filtration rate (GFR) is generally considered the best index of graft function and also a predictor of graft and patient survival. However, measuring GFR with inulin clearance (the gold standard), with exogenous markers such as radiolabeled isotopes (51Cr-EDTA, 99mTc-DTPA or 125I-iothalamate), or with non-radioactive contrast agents (iothalamate or iohexol) is laborious as well as expensive, and is rarely done in clinical practice. Therefore, endogenous markers, such as serum creatinine or cystatin C, are used to estimate kidney function, and equations using these markers adjusted for other, mainly demographic, variables are an attempt to improve the accuracy of estimated GFR (eGFR). Nevertheless, there is some concern about the inability of the available eGFR equations to accurately identify changes in GFR in kidney transplant recipients. This article reviews and discusses the performance and limitations of these endogenous markers and their equations as estimators of GFR in kidney transplant recipients, and their ability to predict significant clinical outcomes.
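
As an example of the endogenous-marker equations discussed here, a sketch of one widely used creatinine-based estimator (the CKD-EPI 2009 equation, with its commonly published coefficients) is given below. It is shown for illustration only, not as a validated clinical implementation.

```python
# Sketch of the CKD-EPI 2009 creatinine-based eGFR equation, as commonly
# published; illustrative only, not a validated clinical implementation.
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool,
                      black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9          # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411    # exponent below the knot
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a 50-year-old male transplant recipient with Scr = 1.4 mg/dL.
print(round(egfr_ckd_epi_2009(1.4, 50, female=False, black=False)))
```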

Relevance:

30.00%

Publisher:

Abstract:

A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with High Area-to-Mass Ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the area-to-mass ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane, developed with two different methods. First, the debris is modelled with Finite Element Analysis (FEA) using Euler–Bernoulli beam theory (the “Bernoulli model”). The Bernoulli model is constructed with beam elements consisting of two nodes, each node having six degrees of freedom (DoF); the mass of the membrane is distributed over the beam elements. Second, the debris is modelled based on multibody dynamics theory (the “Multibody model”) as a series of lumped masses connected through flexible joints, representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account through lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rods, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation. Both flexible membrane models are then propagated, together with classical orbital and attitude equations of motion, near the GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth's gravity field, luni-solar gravitational fields and the self-shadowing effect. These results are then compared to two rigid body models (cannonball and flat rigid plate). In this investigation, compared with a rigid model, the evolution of the orbital elements of the flexible models shows differences in the inclination and secular eccentricity evolutions, rapid irregular attitude motion, and an unstable cross-sectional area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then investigated and compared with the rigid models over 100 days. The simulations show that different initial conditions produce unique orbital motions that differ significantly from the orbital motions of both rigid models. Furthermore, this thesis presents a methodology to determine the dynamic material properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high vacuum chamber (10⁻⁴ mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The first experiment, the free motion experiment, is a free vibration test to determine the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber, with the motion tracked and captured through high-velocity video frames. A Kalman filter technique is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion.
The last test, the forced motion experiment, is performed to determine the deformation characteristics of the object. A high-power spotlight (500–2000 W) is used to illuminate the MLI, and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used to validate the flexible model against the experimental displacements and natural frequencies.
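
A minimal sketch of the kind of Kalman filter used in the tracking algorithm: a generic constant-velocity filter smoothing noisy position measurements of a damped oscillation. All signal and noise parameters are invented, not the experiment's.

```python
# Generic constant-velocity Kalman filter on noisy 1-D position measurements
# of a damped oscillation; all parameters below are illustrative.
import numpy as np

dt, q, r = 1e-3, 1e6, 1e-2             # time step, process / measurement noise
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])             # only position is measured
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r]])

t = np.arange(0.0, 2.0, dt)
true = np.exp(-0.8 * t) * np.cos(2 * np.pi * 12 * t)  # damped oscillation
meas = true + np.random.default_rng(1).normal(0.0, np.sqrt(r), t.size)

x, P, est = np.zeros(2), np.eye(2), []
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)               # update
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

# With these settings the filtered track is noticeably smoother than the raw
# measurements while still following the oscillation.
print("RMS error raw: %.4f, filtered: %.4f"
      % (np.sqrt(np.mean((meas - true) ** 2)),
         np.sqrt(np.mean((np.array(est) - true) ** 2))))
```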

Relevance:

30.00%

Publisher:

Abstract:

In this article we propose a new symmetric version of the interior penalty discontinuous Galerkin finite element method for the numerical approximation of the compressible Navier-Stokes equations. Here, particular emphasis is devoted to the construction of an optimal numerical method for the evaluation of certain target functionals of practical interest, such as the lift and drag coefficients of a body immersed in a viscous fluid. With this in mind, the key ingredients in the construction of the method include: (i) An adjoint consistent imposition of the boundary conditions; (ii) An adjoint consistent reformulation of the underlying target functional of practical interest; (iii) Design of appropriate interior-penalty stabilization terms. Numerical experiments presented within this article clearly indicate the optimality of the proposed method when the error is measured in terms of both the L_2-norm, as well as for certain target functionals. Computational comparisons with other discontinuous Galerkin schemes proposed in the literature, including the second scheme of Bassi & Rebay, cf. [11], the standard SIPG method outlined in [25], and an NIPG variant of the new scheme will be undertaken.

Relevance:

30.00%

Publisher:

Abstract:

In this work, the relationship between diameter at breast height (d) and total height (h) of individual trees was modelled with the aim of establishing provisional height-diameter (h-d) equations for maritime pine (Pinus pinaster Ait.) stands in the Lomba ZIF, Northeast Portugal. Using data collected locally, several local and generalized h-d equations from the literature were tested, and adaptations were also considered. Model fitting was conducted using standard nonlinear least squares (nls) methods. The best local and generalized models selected were also tested as mixed models, applying a first-order conditional expectation (FOCE) approximation procedure and maximum likelihood methods to estimate fixed and random effects. For the calibration of the mixed models, and in order to be consistent with the fitting procedure, the FOCE method was also used to test different sampling designs. The results showed that the local h-d equations with two parameters performed better than the analogous models with three parameters. However, a unique set of parameter values for the local model cannot be applied to all maritime pine stands in the Lomba ZIF, and thus a generalized model including stand covariates, in addition to d, was necessary to obtain adequate predictive performance. No clear superiority of the generalized mixed model over the generalized model with nonlinear least squares parameter estimates was observed. On the other hand, in the case of the local model, the predictive performance greatly improved when random effects were included. The results showed that the mixed model based on the selected local h-d equation is a viable alternative for estimating h when stand variables are not available. Moreover, an adequate calibrated response can be obtained using only 2 to 5 additional h-d measurements in quantile (or random) trees from the distribution of d in the plot (stand). Balancing sampling effort, accuracy and straightforwardness in practical applications, the generalized model from the nls fit is recommended. Examples of applications of the selected generalized equation to forest management are presented, namely how to use it to complete missing information from forest inventories, and how such an equation can be incorporated in a stand-level decision support system that aims to optimize forest management for the maximization of wood volume production in Lomba ZIF maritime pine stands.
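
A minimal sketch of the nls fitting step for a two-parameter local h-d curve; the functional form and sample data below are illustrative, not the model or measurements selected in the study.

```python
# Fitting a two-parameter local height-diameter curve by nonlinear least
# squares. The form h = 1.3 + a * exp(-b / d) is a common local h-d model,
# not necessarily the one selected for Lomba ZIF; the data are invented.
import numpy as np
from scipy.optimize import curve_fit

def h_model(d, a, b):
    """Total height (m) from dbh d (cm); 1.3 m is breast height."""
    return 1.3 + a * np.exp(-b / d)

d = np.array([8, 12, 15, 20, 24, 30, 35, 40], dtype=float)    # dbh, cm
h = np.array([6.1, 9.0, 10.8, 13.2, 14.6, 16.3, 17.1, 17.8])  # height, m

(a, b), _ = curve_fit(h_model, d, h, p0=(20.0, 10.0))
print(f"a = {a:.2f}, b = {b:.2f}, "
      f"predicted h at d = 25 cm: {h_model(25.0, a, b):.1f} m")
```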

Relevance:

30.00%

Publisher:

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modelling approach is not only useful for modelling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modelling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modelling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics), or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
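
A minimal sketch of the dynamic programming step underlying such models: in a logit-based dynamic discrete choice (recursive logit) setting, the expected maximum utility ("value") at each network node satisfies a logsum fixed point. The small network and arc utilities below are invented for illustration.

```python
# Value iteration on the logsum equations of a logit-based dynamic discrete
# choice model: V(k) = log sum_{arcs k->j} exp(v(k,j) + V(j)), with V = 0 at
# the destination. The 4-node network and utilities are invented.
import numpy as np

n_dest = 3                        # node 3 is the destination (value fixed at 0)
v = np.full((4, 4), np.nan)       # v[k, j] = utility of arc k -> j (nan: none)
v[0, 1], v[0, 2] = -1.0, -1.5
v[1, 2], v[1, 3] = -0.5, -2.0
v[2, 3] = -1.0

V = np.zeros(4)
for _ in range(100):              # fixed-point (value) iteration
    V_new = np.zeros(4)
    for k in range(n_dest):
        arcs = ~np.isnan(v[k])
        V_new[k] = np.log(np.sum(np.exp(v[k, arcs] + V[arcs])))
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

# Link choice probabilities are then a logit on v + downstream value.
arcs = ~np.isnan(v[0])
p = np.exp(v[0, arcs] + V[arcs])
p /= p.sum()
print("V =", np.round(V, 3), " P(0->1), P(0->2) =", np.round(p, 3))
```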

Relevance:

30.00%

Publisher:

Abstract:

We introduce and analyze a discontinuous Galerkin method for the numerical discretization of a stationary incompressible magnetohydrodynamics model problem. The fluid unknowns are discretized with inf-sup stable discontinuous P^3_{k}-P_{k-1} elements whereas the magnetic part of the equations is approximated by discontinuous P^3_{k}-P_{k+1} elements. We carry out a complete a-priori error analysis and prove that the energy norm error is convergent of order O(h^k) in the mesh size h. We also show that the method is able to correctly capture and resolve the strongest magnetic singularities in non-convex polyhedral domains. These results are verified in a series of numerical experiments.