997 results for phase-error


Relevance: 30.00%

Abstract:

This thesis studies the evaluation of software development practices through error analysis. The work presents the software development process, software testing, software errors, error classification, and software process improvement methods. The practical part presents results from the error analysis of one software process and gives improvement ideas for the project. It was noticed that the classification of the error data in the project was inadequate, which made it impossible to use the error data effectively. The error analysis showed deficiencies in the design and analysis phases, the implementation phase, and the testing phase. The work gives ideas for improving error classification and software development practices.

Relevance: 30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.

In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
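As an illustration of the error-model idea described above (not taken from the thesis itself), the following sketch learns a regression between proxy and exact responses in a reduced space. Ordinary PCA stands in for the FPCA step, and the data, dimensions, and solver stand-ins are all invented for the example.

import numpy as np

# All data, dimensions and solver stand-ins below are invented for the example.
rng = np.random.default_rng(0)
n_real, n_train, n_t = 200, 20, 50            # realizations, training subset, time steps
t = np.linspace(0.0, 1.0, n_t)

# Synthetic stand-ins for the two solvers: the "exact" response is a
# breakthrough-like curve, the proxy is a biased, noisy version of it.
scale = rng.uniform(0.5, 2.0, n_real)
exact = 1.0 - np.exp(-5.0 * np.outer(scale, t))
proxy = 0.8 * exact + 0.05 * rng.standard_normal(exact.shape)

train, rest = np.arange(n_train), np.arange(n_train, n_real)

def pca(X, mean, k):
    # principal directions and scores of the centered curves (stand-in for FPCA)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:k].T, Vt[:k]

k = 3                                          # retained components (assumption)
mean_p, mean_e = proxy[train].mean(0), exact[train].mean(0)
Sp, basis_p = pca(proxy[train], mean_p, k)
Se, basis_e = pca(exact[train], mean_e, k)

# Least-squares regression between proxy scores and exact scores.
A = np.column_stack([Sp, np.ones(n_train)])
coef, *_ = np.linalg.lstsq(A, Se, rcond=None)

# Correct the remaining proxy responses: project, map scores, reconstruct.
Sp_rest = (proxy[rest] - mean_p) @ basis_p.T
exact_pred = mean_e + (np.column_stack([Sp_rest, np.ones(len(rest))]) @ coef) @ basis_e

print("mean abs error, raw proxy:", np.abs(proxy[rest] - exact[rest]).mean())
print("mean abs error, corrected:", np.abs(exact_pred - exact[rest]).mean())

The same corrected responses could serve as the cheap first-stage evaluation in a two-stage MCMC, as in the third part of the thesis.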

Relevance: 30.00%

Abstract:

Broadcasting systems are networks where the transmission is received by several terminals. Broadcast receivers are generally passive devices in the network, meaning that they do not interact with the transmitter. Providing a certain Quality of Service (QoS) for the receivers in a heterogeneous reception environment with no feedback is not an easy task. Forward error control coding can be used for protection against transmission errors to enhance the QoS of broadcast services. For good performance in terrestrial wireless networks, diversity should be utilized; it is exploited by applying interleaving together with the forward error correction codes. In this dissertation, the design and analysis of forward error control and control signaling for providing QoS in wireless broadcasting systems are studied. Control signaling is used in broadcasting networks to give the receiver the information necessary to connect to the network itself and to receive the services being transmitted. Control signaling is usually transmitted through a dedicated path in the system, so the relationship between the signaling and service data paths should be considered early in the design phase. Modeling and simulations are used in the case studies of this dissertation to study this relationship. The dissertation begins with a survey of the broadcasting environment and of mechanisms for providing QoS therein. Case studies then present the analysis and design of such mechanisms in real systems. The first case study analyzes mechanisms for providing QoS at the DVB-H link layer, considering the signaling and service data paths and their relationship; in particular, the performance of different service data decoding mechanisms and optimal signaling transmission parameter selection are presented. The second case study investigates the design of the signaling and service data paths for the more modern DVB-T2 physical layer. By comparing the performance of the signaling and service data paths in simulations, configuration guidelines for DVB-T2 physical layer signaling are given; these guidelines can prove useful when configuring DVB-T2 transmission networks. Finally, recommendations for the design of the data and signaling paths are given based on findings from the case studies. The requirements for the signaling design should be derived from the requirements for the main services and should generally be more demanding, as signaling is the enabler for service reception.
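As a side illustration of the interleaving idea mentioned in the abstract (the dissertation's own schemes are DVB-specific and are not reproduced here), a minimal block interleaver: symbols are written row-wise and read column-wise, so an error burst on the channel is spread across many codewords after deinterleaving.

def interleave(symbols, rows, cols):
    # write row-wise, read column-wise
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # inverse permutation: read back in the original row-wise order
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
assert deinterleave(tx, rows=3, cols=4) == data
# A burst of consecutive channel errors in tx lands in different rows of data,
# so each forward-error-correction codeword has to correct at most a few of them.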

Relevance: 30.00%

Abstract:

Shadow Moiré fringe patterns are level lines of equal depth generated by interference between a master grid and its shadow projected on the surface. In a simplistic approach, the minimum error is on the order of the master grid pitch, that is, always larger than 0.1 mm, resulting in an experimental technique of low precision. The use of phase shifting increases the accuracy of the Shadow Moiré technique. The current work uses the phase shifting method to determine the three-dimensional shape of surfaces using isothamic fringe patterns and digital image processing. The study presents the method and applies it to images obtained by simulation, for error evaluation, as well as to a buckled plate, obtaining excellent results. The method proves particularly useful for reducing errors in the interpretation of the Moiré fringes, which can adversely affect the calculation of displacements in pieces containing many concave and convex regions in relatively small areas.
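For readers unfamiliar with phase shifting, here is a minimal sketch of one common variant, the four-step algorithm, checked against synthetic fringe intensities. The paper's actual step count and depth calibration are not specified here, so this is illustrative only.

import numpy as np

def phase_from_four_steps(I1, I2, I3, I4):
    # Wrapped phase in (-pi, pi]; depth follows after phase unwrapping and
    # multiplication by the fringe-to-depth sensitivity of the optical setup.
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check: intensities built from a known phase map are recovered.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
images = [1.0 + 0.5 * np.cos(phi + d) for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
assert np.allclose(phase_from_four_steps(*images), phi)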

Relevance: 30.00%

Abstract:

The formal calibration procedure of a phase fraction meter is based on registering the outputs resulting from imposed phase fractions at known flow regimes. This can be done straightforwardly under laboratory conditions, but rarely under industrial conditions, particularly for on-site applications. Thus, there is a clear need for calibration methods that are less restrictive with regard to prior knowledge of the complete set of inlet conditions. A new procedure is proposed in this work for the on-site construction of the calibration curve from values of the total flowed mass of the homogeneously dispersed phase. The solution is obtained by minimizing a convenient error functional, assembled with data from redundant tests to handle the intrinsically ill-conditioned nature of the problem. Numerical simulations performed for increasing error levels demonstrate that acceptable calibration curves can be reconstructed, even from total mass values measured to within 2%. Consequently, the method can readily be applied, especially to on-site calibration problems in which classical procedures fail because strict control of all the input/output parameters is impossible.
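A hedged sketch of this kind of inverse problem follows: the calibration curve is discretized on a few nodes and its nodal values are recovered from total-mass data of redundant tests by Tikhonov-regularized least squares. Everything here (node grid, test design, noise level, regularization weight) is invented for illustration and is not the paper's actual formulation.

import numpy as np

# All names and numbers here are illustrative, not the paper's actual setup.
rng = np.random.default_rng(1)
nodes = np.linspace(0.0, 1.0, 8)              # meter-reading grid
f_true = nodes ** 1.5                         # "true" calibration curve to recover

def design_row(readings, dt, mass_flow):
    # Total mass of a test = sum of fraction * mass flow * dt over the samples,
    # with the fraction linearly interpolated between calibration nodes.
    row = np.zeros(len(nodes))
    for r in readings:
        i = int(np.clip(np.searchsorted(nodes, r) - 1, 0, len(nodes) - 2))
        w = (r - nodes[i]) / (nodes[i + 1] - nodes[i])
        row[i] += (1.0 - w) * mass_flow * dt
        row[i + 1] += w * mass_flow * dt
    return row

# Redundant tests covering different reading ranges (different "flow regimes").
tests = [rng.uniform(lo, lo + 0.5, 200) for lo in np.linspace(0.0, 0.5, 30)]
A = np.array([design_row(r, dt=1.0, mass_flow=1.0) for r in tests])
m = A @ f_true * (1.0 + 0.02 * rng.standard_normal(len(tests)))   # 2% mass error

# Tikhonov (second-difference) regularization handles the ill-conditioning.
D = np.diff(np.eye(len(nodes)), n=2, axis=0)
f_est = np.linalg.solve(A.T @ A + 1e-1 * D.T @ D, A.T @ m)
print("max calibration-curve error:", np.abs(f_est - f_true).max())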

Relevance: 30.00%

Abstract:

The general hypothesis of this project holds that the motor system must perform sensorimotor transformations in order to convert sensory inputs concerning the position of a target into motor commands that produce a movement of the arm and hand toward that target. This type of conversion is required both when planning the movement and when correcting a planning error or responding to an unexpected change in target position. The research question of this thesis concerns the mechanism or neural circuits involved in this type of transformation: is there a single neural circuit that produces all of the visuomotor transformations between sensory inputs and motor outputs, both before movement initiation and during real-time correction of the movement when an error or unexpected change occurs after initiation, or are these processes at least partially independent functionally? The working hypothesis supposes that a single circuit is responsible for the sensorimotor transformations; if so, the analysis of the participants' results should show identical changes in performance during the movement planning phase and the real-time correction phase after adaptation to arbitrary sensorimotor dissociations. Experimental approach: to examine this question and test our hypothesis, we combined two experimental paradigms. Reaching movements were subjected to a visuomotor dissociation (mirror: about the y-axis; or complete: inversion about both the x- and y-axes), together with rare trials containing a target jump. The visuomotor dissociation made it possible to assess the degree of adaptation of the mechanisms involved in the reaching movement, while the target jumps made it possible to examine the capacity of the mechanisms involved in movement correction to adapt to a visuomotor dissociation. The analyses carried out in this thesis bear exclusively on the participants' ability to adapt to the two visuomotor dissociations during the initial movement planning phase. The results suggest that the movement planning mechanisms have a large capacity to adapt to the two different visuomotor dissociations. The conclusions drawn from these analyses suggest that the mechanisms involved in the planning and initiation phase of the movement adapt relatively well to the mirror and inverse visuomotor dissociations. Although the results show some distinction between the two study groups in the time required for this adaptation, they also show a relatively similar final level of adaptation. The analysis of the responses to the target jumps can later be compared with the results presented here in order to address the working hypothesis set out in the study's initial objective.

Relevance: 30.00%

Abstract:

There is increasing interest in combining Phases II and III of clinical development into a single trial in which one of a small number of competing experimental treatments is ultimately selected and where a valid comparison is made between this treatment and the control treatment. Such a trial usually proceeds in stages, with the least promising experimental treatments dropped as soon as possible. In this paper we present a highly flexible design that uses adaptive group sequential methodology to monitor an order statistic. By using this approach, it is possible to design a trial which can have any number of stages, begins with any number of experimental treatments, and permits any number of these to continue at any stage. The test statistic used is based upon efficient scores, so the method can be easily applied to binary, ordinal, failure time, or normally distributed outcomes. The method is illustrated with an example, and simulations are conducted to investigate its type I error rate and power under a range of scenarios.
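To make the type I error question concrete, the following toy Monte Carlo (not the paper's group sequential design) estimates the familywise error of a naive two-stage "select the best of k arms, then test it against control" procedure under the global null; the adjusted critical value is illustrative only.

import numpy as np

# Toy stand-in for the paper's simulations: familywise type I error of a naive
# "select the best of k arms, then test it against control" design under the
# global null. The adjusted critical value is illustrative, not from the paper.
rng = np.random.default_rng(2)
k, n, reps = 3, 100, 20000
z_crit = 2.12                                  # selection-adjusted one-sided cutoff

rejections = 0
for _ in range(reps):
    control = rng.standard_normal(n)
    arms = rng.standard_normal((k, n))         # all arms null: no treatment effect
    best = arms[arms.mean(axis=1).argmax()]    # stage 1: drop all but the best arm
    z = (best.mean() - control.mean()) / np.sqrt(2.0 / n)
    rejections += z > z_crit                   # stage 2: final comparison
print("empirical type I error:", rejections / reps)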

Relevance: 30.00%

Abstract:

A relatively simple, selective, precise, and accurate high performance liquid chromatography (HPLC) method, based on the reaction of phenylisothiocyanate (PITC) with glucosamine (GL) in alkaline media, was developed and validated to determine glucosamine hydrochloride permeating through human skin in vitro. It is usually problematic to develop an accurate assay for chemicals traversing skin, because the excellent barrier properties of the tissue ensure that only low amounts of the material pass through the membrane, and skin components may leach out of the tissue and interfere with the analysis. In addition, in the case of glucosamine hydrochloride, chemical instability adds further complexity to assay development. The assay, utilising the PITC-GL reaction, was refined by optimizing the reaction temperature, reaction time, and PITC concentration. The reaction produces a phenylthiocarbamyl-glucosamine (PTC-GL) adduct, which was separated on a reverse-phase (RP) column packed with 5 μm ODS (C18) Hypersil particles using a diode array detector (DAD) at 245 nm. The mobile phase was methanol-water-glacial acetic acid (10:89.96:0.04 v/v/v, pH 3.5) delivered to the column at 1 ml min⁻¹, and the column temperature was maintained at 30 °C. Using a saturated aqueous solution of glucosamine hydrochloride, in vitro permeation studies were performed at 32 ± 1 °C over 48 h using human epidermal membranes prepared by a heat separation method and mounted in Franz-type diffusion cells with a diffusional area of 2.15 ± 0.1 cm². The optimum derivatisation conditions for reaction temperature, reaction time, and PITC concentration were found to be 80 °C, 30 min, and 1% v/v, respectively. The PTC-Gal and GL adducts eluted at 8.9 and 9.7 min, respectively. The detector response was found to be linear in the concentration range 0-1000 μg ml⁻¹. The assay was robust, with intra- and inter-day precisions (described as a percentage of relative standard deviation, %R.S.D.) < 12. Intra- and inter-day accuracy (as a percentage of the relative error, %RE) was ≤ -5.60 and ≤ -8.00, respectively. Using this assay, it was found that GL-HCl permeates through human skin with a flux of 1.497 ± 0.42 μg cm⁻² h⁻¹, a permeability coefficient of 5.66 ± 1.6 × 10⁻⁶ cm h⁻¹, and a lag time of 10.9 ± 4.6 h.
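The reported permeation parameters are related in a standard way: the steady-state flux is the slope of the cumulative permeated amount per unit area versus time, the lag time is the time-axis intercept, and the permeability coefficient is the flux divided by the donor concentration. A small worked sketch with invented data points follows; the donor concentration is an assumption chosen to be consistent with the reported flux and permeability coefficient.

import numpy as np

# Invented post-lag sampling times (h) and cumulative amounts (ug/cm^2).
t = np.array([12.0, 18, 24, 30, 36, 42, 48])
q = 1.5 * (t - 11.0)

slope, intercept = np.polyfit(t, q, 1)
flux = slope                                   # ug cm^-2 h^-1, steady-state slope
lag_time = -intercept / slope                  # h, time-axis intercept
c_donor = 264_000.0                            # ug/cm^3, assumed saturation level
kp = flux / c_donor                            # cm/h, permeability coefficient

print(f"flux = {flux:.2f} ug cm^-2 h^-1, lag = {lag_time:.1f} h, kp = {kp:.2e} cm/h")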

Relevance: 30.00%

Abstract:

This paper proposes a new iterative algorithm for OFDM joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Although this problem is essential to the joint approach, it has received relatively little attention in existing algorithms. In this paper, specifically, we apply a hard decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the phase noise, and a more robust and compact fast process based on Givens rotations is proposed to reduce the complexity to a practical level. Numerical simulations are given to verify the proposed algorithm.
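A simplified sketch of the iterate-and-slice idea follows. Real phase noise varies within an OFDM symbol, which is what the paper's Padé and Givens-rotation machinery handles; a single common phase term is used here to keep the illustration short, and all parameters are invented.

import numpy as np

# All parameters below are invented for the illustration.
rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def slice_qpsk(y):
    # hard decision to the nearest QPSK point
    return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)

tx = qpsk[rng.integers(0, 4, 256)]
noise = 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
rx = tx * np.exp(1j * 0.4) + noise             # unknown common phase rotation

y = rx
for _ in range(5):                             # a few iterations usually suffice
    d = slice_qpsk(y)                          # hard decision guards against overfitting
    phase = np.angle(np.vdot(d, y))            # LS estimate of the residual rotation
    y = y * np.exp(-1j * phase)
print("symbol errors after correction:", int(np.sum(slice_qpsk(y) != tx)))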

Relevance: 30.00%

Abstract:

This correspondence proposes a new algorithm for OFDM joint data detection and phase noise (PHN) cancellation for constant modulus modulations. We highlight that it is important to address the overfitting problem, since this is a major detrimental factor impairing the joint detection process. To attack the overfitting problem, we propose an iterative approach based on minimum mean square prediction error (MMSPE), subject to the constraint that the estimated data symbols have constant power. The proposed constrained MMSPE algorithm (C-MMSPE) significantly improves the performance of existing approaches with little extra complexity. Simulation results are given to verify the proposed algorithm.
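A one-function sketch of the constant modulus constraint (illustrative, not the paper's derivation): projecting the symbol estimates back onto the constant-power circle at each iteration removes the degree of freedom that would otherwise let the estimated "data" absorb the noise, i.e., the trivial overfitted solution.

import numpy as np

def project_constant_modulus(y, radius=1.0):
    # Rescale each symbol estimate to the constant-power circle |y| = radius.
    return radius * y / np.maximum(np.abs(y), 1e-12)

y = np.array([0.7 + 0.6j, -1.3 + 0.1j, 0.2 - 1.1j])
print(np.abs(project_constant_modulus(y)))     # all magnitudes are exactly 1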

Relevance: 30.00%

Abstract:

A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have best overall capability to model net all-wave radiation and least capability to model latent heat flux.
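For reference, the kind of per-flux statistics used in such intercomparisons can be computed in a few lines; the numbers below are invented placeholders, not values from the Vancouver dataset.

import numpy as np

def rmse(model, obs):
    # root-mean-square error of modeled vs observed flux
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def mbe(model, obs):
    # mean bias error (positive = model overestimates)
    return float(np.mean(model - obs))

obs = np.array([420.0, 180.0, 60.0])     # e.g. Q*, Q_H, Q_E in W m^-2 (invented)
model = np.array([400.0, 210.0, 20.0])
print("RMSE:", rmse(model, obs), "MBE:", mbe(model, obs))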

Relevance: 30.00%

Abstract:

Time-resolved kinetic studies of silylene, SiH2, generated by laser flash photolysis of phenylsilane, have been carried out to obtain rate constants for its bimolecular reactions with oxirane, oxetane, and tetrahydrofuran (THF). The reactions were studied in the gas phase over the pressure range 1-100 Torr in SF6 bath gas, at four or five temperatures in the range 294-605 K. All three reactions showed pressure dependences characteristic of third-body-assisted association reactions with, surprisingly, SiH2 + oxirane showing the least and SiH2 + THF showing the most pressure dependence. The second-order rate constants obtained by extrapolation to the high-pressure limits at each temperature fitted the following Arrhenius equations, where the error limits are single standard deviations:

log(k_oxirane(∞)/cm³ molecule⁻¹ s⁻¹) = (-11.03 ± 0.07) + (5.70 ± 0.51) kJ mol⁻¹/(RT ln 10)
log(k_oxetane(∞)/cm³ molecule⁻¹ s⁻¹) = (-11.17 ± 0.11) + (9.04 ± 0.78) kJ mol⁻¹/(RT ln 10)
log(k_THF(∞)/cm³ molecule⁻¹ s⁻¹) = (-10.59 ± 0.10) + (5.76 ± 0.65) kJ mol⁻¹/(RT ln 10)

Binding-energy values of 77, 97, and 92 kJ mol⁻¹ have been obtained for the donor-acceptor complexes of SiH2 with oxirane, oxetane, and THF, respectively, by means of quantum chemical (ab initio) calculations carried out at the G3 level. The use of these values to model the pressure dependences of these reactions, via RRKM theory, provided a good fit only in the case of SiH2 + THF. The lack of fit in the other two cases is attributed to further reaction pathways for the association complexes of SiH2 with oxirane and oxetane. The finding of ethene as a product of the SiH2 + oxirane reaction supports a pathway leading to H2Si=O + C2H4, predicted by the theoretical calculations of Apeloig and Sklenak.
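A quick worked check of how such fits are used, with the parameter values from the equations above and an arbitrary temperature choice: the rate coefficient follows from log10(k) = A + Ea'/(RT ln 10), the positive "activation energies" reflecting the negative temperature dependence of these association reactions.

import math

R = 8.314e-3  # gas constant in kJ mol^-1 K^-1

def k_inf(log10_a, ea_prime_kj, temp_k):
    # log10(k) = A + Ea'/(R T ln 10), per the fitted Arrhenius expressions above
    return 10.0 ** (log10_a + ea_prime_kj / (R * temp_k * math.log(10)))

for name, a, e in [("oxirane", -11.03, 5.70), ("oxetane", -11.17, 9.04), ("THF", -10.59, 5.76)]:
    print(f"k_inf(SiH2 + {name}, 294 K) = {k_inf(a, e, 294.0):.2e} cm^3 molecule^-1 s^-1")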

Relevance: 30.00%

Abstract:

Time-resolved kinetic studies of the reactions of silylene, SiH2, and dideuterosilylene, SiD2, generated by laser flash photolysis of phenylsilane and phenylsilane-d3, respectively, have been carried out to obtain rate coefficients for their bimolecular reactions with 2-butyne, CH3C≡CCH3. The reactions were studied in the gas phase over the pressure range 1-100 Torr in SF6 bath gas at five temperatures in the range 294-612 K. The second-order rate coefficients, obtained by extrapolation to the high-pressure limits at each temperature, fitted the following Arrhenius equations, where the error limits are single standard deviations:

log(k_H(∞)/cm³ molecule⁻¹ s⁻¹) = (-9.67 ± 0.04) + (1.71 ± 0.33) kJ mol⁻¹/(RT ln 10)
log(k_D(∞)/cm³ molecule⁻¹ s⁻¹) = (-9.65 ± 0.01) + (1.92 ± 0.13) kJ mol⁻¹/(RT ln 10)

Additionally, pressure-dependent rate coefficients for the reaction of SiH2 with 2-butyne in the presence of He (1-100 Torr) were obtained at 301, 429, and 613 K. Quantum chemical (ab initio) calculations of the SiC4H8 reaction system at the G3 level support the formation of 2,3-dimethylsilirene [cyclo-SiH2C(CH3)=C(CH3)-] as the sole end product. However, reversible formation of 2,3-dimethylvinylsilylene [CH3CH=C(CH3)SiH] is also an important process. The calculations also indicate the probable involvement of several other intermediates and possible products. RRKM calculations are in reasonable agreement with the pressure dependences at an enthalpy value for 2,3-dimethylsilirene fairly close to that suggested by the ab initio calculations. The experimental isotope effects deviate significantly from those predicted by RRKM theory. The differences can be explained by an isotopic scrambling mechanism involving H-D exchange between the hydrogens of the methyl groups and the D atoms in the ring of 2,3-dimethylsilirene-1,1-d2. A detailed mechanism involving several intermediate species, consistent with the G3 energy surface, is proposed to account for this.

Relevance: 30.00%

Abstract:

In 1997, the UK implemented the world's first commercial digital terrestrial television system. Under the ETS 300 744 standard, the chosen modulation method, COFDM, is assumed to be multipath resilient. Previous work has shown that this is not necessarily the case. It has been shown that the local oscillator required for demodulation from intermediate frequency to baseband must be very accurate. This paper shows that under multipath conditions, standard methods for obtaining local oscillator phase lock may not be adequate. It then demonstrates a set of algorithms, designed for use with a simple local oscillator circuit, that correct for local oscillator phase offset and maintain a low bit error rate in the presence of multipath.
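As a toy illustration of pilot-aided local oscillator phase correction (not the paper's algorithms, which also have to cope with multipath, something a single rotation cannot model), known pilot carriers give an estimate of the common phase rotation, which is then removed before demapping.

import numpy as np

# All parameters (pilot spacing, constellation, noise) are invented.
rng = np.random.default_rng(4)
n_carriers = 64
pilot_idx = np.arange(0, n_carriers, 8)
symbols = rng.choice(np.array([1 + 0j, -1 + 0j]), n_carriers)  # BPSK for simplicity
symbols[pilot_idx] = 1 + 0j                    # pilots transmitted as known 1+0j

lo_phase = 0.3                                 # unknown local oscillator phase (rad)
noise = 0.02 * (rng.standard_normal(n_carriers) + 1j * rng.standard_normal(n_carriers))
rx = symbols * np.exp(1j * lo_phase) + noise

est = np.angle(np.mean(rx[pilot_idx]))         # common rotation seen on the pilots
corrected = rx * np.exp(-1j * est)
errors = int(np.sum((corrected.real < 0) != (symbols.real < 0)))
print("estimated LO phase:", round(est, 3), "bit errors:", errors)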

Relevance: 30.00%

Abstract:

The Eyjafjallajökull volcano in Iceland erupted explosively on 14 April 2010, emitting a plume of ash into the atmosphere. The ash was transported from Iceland toward Europe where mostly cloud-free skies allowed ground-based lidars at Chilbolton in England and Leipzig in Germany to estimate the mass concentration in the ash cloud as it passed overhead. The UK Met Office's Numerical Atmospheric-dispersion Modeling Environment (NAME) has been used to simulate the evolution of the ash cloud from the Eyjafjallajökull volcano during the initial phase of the ash emissions, 14–16 April 2010. NAME captures the timing and sloped structure of the ash layer observed over Leipzig, close to the central axis of the ash cloud. Relatively small errors in the ash cloud position, probably caused by the cumulative effect of errors in the driving meteorology en route, result in a timing error at distances far from the central axis of the ash cloud. Taking the timing error into account, NAME is able to capture the sloped ash layer over the UK. Comparison of the lidar observations and NAME simulations has allowed an estimation of the plume height time series to be made. It is necessary to include in the model input the large variations in plume height in order to accurately predict the ash cloud structure at long range. Quantitative comparison with the mass concentrations at Leipzig and Chilbolton suggest that around 3% of the total emitted mass is transported as far as these sites by small (<100 μm diameter) ash particles.