998 results for solution accuracy


Relevance: 70.00%

Abstract:

The artificial dissipation effects in some solutions obtained with Navier-Stokes flow solvers are demonstrated. The solvers were used to calculate the flow of an artificially dissipative fluid, i.e. a fluid whose dissipative properties arise entirely from the solution method itself. This was done by setting the viscosity and heat conduction coefficients in the Navier-Stokes solvers to zero everywhere inside the flow, while still applying the usual no-slip and thermally conducting boundary conditions at solid boundaries. The result is an artificially dissipative flow solution whose dissipation depends entirely on the solver itself. If the difference between the solutions obtained with the viscosity and thermal conductivity set to zero and those obtained with their correct values is small, the artificial dissipation is dominating and the solutions are unreliable.
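The mechanism described above can be illustrated with a toy example: a first-order upwind scheme for the inviscid linear advection equation still smears a sharp front, because the scheme's truncation error acts as an artificial viscosity. This is a minimal sketch of that generic effect, not the Navier-Stokes solver used in the paper:

```python
import numpy as np

def advect_upwind(u, c, dx, dt, steps):
    """Advance u_t + c*u_x = 0 with first-order upwind differences
    on a periodic grid. The scheme's truncation error behaves like
    an artificial viscosity ~ c*dx*(1 - c*dt/dx)/2, even though the
    PDE itself is inviscid."""
    nu = c * dt / dx  # Courant number (must be <= 1 for stability)
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))
    return u

# Advect a sharp step exactly once around a periodic domain.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)
u = advect_upwind(u0.copy(), c=1.0, dx=1.0 / n, dt=0.5 / n, steps=2 * n)

# The exact inviscid solution is the initial step unchanged;
# the numerical one is visibly smeared.
print(np.abs(u - u0).max())
```

The exact solution after one period is the unchanged initial profile, so any difference is purely solver-generated dissipation.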

Relevance: 60.00%

Abstract:

The inverse problem in electroencephalography (EEG) is the localization of current sources in the brain using the surface potentials generated by those sources on the scalp. An inverse solution typically involves multiple calculations of scalp surface potentials, i.e. the EEG forward problem. To solve the forward problem, models are required both for the underlying source configuration (the source model) and for the surrounding tissues (the head model). This thesis treats two quite distinct approaches for solving the EEG forward and inverse problems using the boundary element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem computes the surface potentials starting from dipolar current sources. The reciprocal approach, on the other hand, first determines the electric field at the dipole source sites when the surface electrodes are used to inject and withdraw a unit current. The scalar product of this electric field with the dipole sources then yields the surface potentials. The reciprocal approach promises a number of advantages over the conventional approach, including the possibility of increasing the accuracy of the surface potentials and of reducing the computational requirements of inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common formulation, the weighted residual method. The numerical implementation of both approaches to the forward problem is described for a single dipole source model. A three-concentric-spheres head model, for which analytical solutions are available, is used. Surface potentials are calculated at the centroids or at the vertices of the BEM discretization elements used.
The performance of the conventional and reciprocal approaches to the forward problem is evaluated for radial and tangential dipoles of varying eccentricity and for two very different skull conductivity values. We then determine whether the potential advantages of the reciprocal approach suggested by the forward-problem simulations can be exploited to yield more accurate inverse solutions. Single-dipole inverse solutions are obtained using simplex minimization for both the conventional and reciprocal approaches, each with centroid and vertex versions. Again, numerical simulations are performed on a three-concentric-spheres model for radial and tangential dipoles of varying eccentricity. The accuracy of the inverse solutions of the two approaches is compared for the two different skull conductivities, and their relative sensitivities to skull conductivity errors and to noise are evaluated. While the conventional vertex approach yields the most accurate forward solutions for a presumably more realistic skull conductivity, both the conventional and reciprocal approaches produce large errors in the scalp potentials for highly eccentric dipoles. The reciprocal approaches show the least variation in forward-solution accuracy across different skull conductivity values. In terms of single-dipole inverse solutions, the conventional and reciprocal approaches are of comparable accuracy. Localization errors are small, even for highly eccentric dipoles that produce large errors in the scalp potentials, owing to the nonlinear nature of single-dipole inverse solutions. Both approaches also proved equally robust to skull conductivity errors in the presence of noise.
Finally, a more realistic head model is obtained using magnetic resonance images (MRI), from which the scalp, skull and brain/cerebrospinal fluid (CSF) surfaces are extracted. The two approaches are validated on this type of model using real somatosensory evoked potentials recorded following median nerve stimulation in healthy subjects. The accuracy of the inverse solutions for the conventional and reciprocal approaches and their variants, compared against anatomical sites known from MRI, is again evaluated for the two different skull conductivities. Their advantages and disadvantages, including their computational requirements, are also assessed. Once again, the conventional and reciprocal approaches produce small dipole position errors. Indeed, the position errors of single-dipole inverse solutions are inherently robust to inaccuracies in the forward solutions, but depend on the superimposed activity of other neural sources. Contrary to expectations, the reciprocal approaches do not improve dipole position accuracy compared with the conventional approaches. However, reduced computational requirements in both time and storage are the principal advantages of the reciprocal approaches. This type of localization is potentially useful in the planning of neurosurgical interventions, for example in patients with refractory focal epilepsy who often already have EEG and MRI data.
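The reciprocal relation at the heart of the second approach reduces, for each electrode pair, to a scalar product between the lead field (the electric field at the dipole site when a unit current is driven between the two surface electrodes) and the dipole moment. A minimal sketch of that final step, with made-up field values rather than a BEM solution:

```python
import numpy as np

def scalp_potential_reciprocal(e_lead, dipole_moment, i_unit=1.0):
    """Potential difference between two scalp electrodes due to a
    dipole. By reciprocity, V = (E . p) / I, where E is the electric
    field at the dipole location when a current I is driven between
    the two electrodes, and p is the dipole moment."""
    return float(np.dot(e_lead, dipole_moment)) / i_unit

# Hypothetical lead field (V/m per A) and dipole moment (A*m):
e_lead = np.array([2.0e-3, 0.0, 1.0e-3])
p = np.array([1.0e-8, 0.0, 3.0e-8])
print(scalp_potential_reciprocal(e_lead, p))
```

The actual computation of `e_lead` is the BEM part; once it is available for every electrode pair, each forward evaluation for a new dipole costs only a dot product, which is where the computational savings of the reciprocal approach come from.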

Relevance: 60.00%

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter when iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data, and can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that the sensitivities of both formulations are related to the error variance balance, the assimilation window length and the correlation length-scales.
We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
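The condition-number diagnostic used above can be sketched in a few lines: for a quadratic objective, the condition number of the Hessian bounds the convergence rate of gradient-based solvers, and a simple preconditioner can reduce it. The matrix and the Jacobi-style preconditioner below are illustrative only, not the wc4DVAR Hessian or the preconditioner developed in the thesis:

```python
import numpy as np

# A small symmetric positive-definite "Hessian" with poorly scaled
# diagonal entries, mimicking badly balanced error variances.
S = np.array([[100.0, 1.0, 0.0],
              [1.0, 1.0, 0.05],
              [0.0, 0.05, 0.01]])

kappa = np.linalg.cond(S)

# Symmetric Jacobi preconditioning: D^{-1/2} S D^{-1/2},
# which rescales every diagonal entry to 1.
d = 1.0 / np.sqrt(np.diag(S))
S_pre = S * np.outer(d, d)
kappa_pre = np.linalg.cond(S_pre)

print(kappa, kappa_pre)  # the preconditioned condition number is far smaller
```

A smaller condition number both tightens the sensitivity of the minimiser to perturbations in the data and speeds up iterative convergence, which is the motivation for bounding and preconditioning it.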

Relevance: 60.00%

Abstract:

In the present paper a study is made in order to find an algorithm that can calculate coplanar orbital maneuvers for an artificial satellite. The idea is to find a method that is fast enough to be combined with onboard orbit determination using GPS data collected by a receiver located in the satellite. After a search in the literature, three algorithms are selected to be tested. Preliminary studies show that one of them (the so-called Minimum Delta-V Lambert Problem) has several advantages over the other two, both in accuracy and in processing time. So, this algorithm is implemented and tested numerically in combination with the orbit determination procedure. Some adjustments are made to this algorithm in the present paper to allow its use in real-time onboard applications. Considering the whole maneuver, first a simplified and compact algorithm is used to estimate, in real time and onboard, the artificial satellite orbit using the GPS measurements. By using the estimated orbit as the initial one and the final desired orbit (from the specification of the mission) as the target, a coplanar bi-impulsive maneuver is calculated. This maneuver searches for the minimum fuel consumption. Two kinds of maneuvers are performed, one varying only the semi-major axis and the other varying the semi-major axis and the eccentricity of the orbit simultaneously. The possibility of restrictions on the locations where the impulses are applied is included, as well as the possibility of controlling the trade-off between processing time and solution accuracy. Those are the two main reasons to recommend this method for the proposed application.
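For the semi-major-axis-only case between circular orbits, the minimum-fuel coplanar bi-impulsive transfer reduces to the classical Hohmann transfer, which gives a quick sanity check for any such algorithm. This sketch uses the textbook Hohmann formulas, not the Minimum Delta-V Lambert solver tested in the paper:

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v (km/s) for a coplanar two-impulse Hohmann
    transfer between circular orbits of radii r1 and r2 (km)."""
    a_t = 0.5 * (r1 + r2)  # semi-major axis of the transfer ellipse
    # Impulse 1: circular speed at r1 -> transfer-ellipse speed at r1.
    dv1 = abs(math.sqrt(mu * (2.0 / r1 - 1.0 / a_t)) - math.sqrt(mu / r1))
    # Impulse 2: transfer-ellipse speed at r2 -> circular speed at r2.
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2.0 / r2 - 1.0 / a_t)))
    return dv1 + dv2

# LEO (300 km altitude) to GEO:
print(hohmann_delta_v(6678.0, 42164.0))  # ~3.9 km/s
```

A Lambert-based method must recover this cost in the limiting circular-to-circular case, so the formula doubles as a regression check.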

Relevance: 60.00%

Abstract:

In previous BEM Conferences, the concepts, developments and organisation of the p-adaptive philosophy have been presented by the authors, as well as some interesting features of the hierarchisation of the solution, accuracy estimates and the optimization of numerical computations. The current paper is devoted to presenting some new developments and applications in linear elastostatics, with emphasis on: a) efficient computation of influence coefficients; b) efficient evaluation of the residuals by taking advantage of the hierarchy of the interpolation functions; and c) new results regarding estimators and convergence ratios. In addition, several practical examples are shown and discussed in order to point out the advantages of the method.

Relevance: 60.00%

Abstract:

This thesis concerns mixed flows (which are characterized by the simultaneous occurrence of free-surface and pressurized flow in sewers, tunnels, culverts or under bridges), and contributes to the improvement of the existing numerical tools for modelling these phenomena. The classic Preissmann slot approach is selected due to its simplicity and capability of predicting results comparable to those of a more recent and complex two-equation model, as shown here with reference to a laboratory test case. In order to enhance the computational efficiency, a local time stepping strategy is implemented in a shock-capturing Godunov-type finite volume numerical scheme for the integration of the de Saint-Venant equations. The results of different numerical tests show that local time stepping reduces run time significantly (between −29% and −85% CPU time for the test cases considered) compared to the conventional global time stepping, especially when only a small region of the flow field is surcharged, while solution accuracy and mass conservation are not impaired. The second part of this thesis is devoted to the modelling of the hydraulic effects of potentially pressurized structures, such as bridges and culverts, inserted in open channel domains. To this aim, a two-dimensional mixed flow model is developed first. The classic conservative formulation of the 2D shallow water equations for free-surface flow is adapted by assuming that two fictitious vertical slots, normally intersecting, are added on the ceiling of each integration element. Numerical results show that this schematization is suitable for the prediction of 2D flooding phenomena in which the pressurization of crossing structures can be expected. Given that the Preissmann model does not allow for the possibility of bridge overtopping, a one-dimensional model is also presented in this thesis to handle this particular condition. 
The flows below and above the deck are considered as parallel, and linked to the upstream and downstream reaches of the channel by introducing suitable internal boundary conditions. The comparison with experimental data and with the results of HEC-RAS simulations shows that the proposed model can be a useful and effective tool for predicting overtopping and backwater effects induced by the presence of bridges and culverts.
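In the Preissmann slot approach, the conduit is topped by a narrow fictitious slot whose width is chosen so that the open-channel gravity wave celerity in the slot matches the pressure wave speed of the surcharged flow. A minimal sketch of the standard slot-width relation (the textbook form; the exact implementation in the thesis may differ):

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

def preissmann_slot_width(area, wave_speed):
    """Width (m) of the fictitious slot that reproduces a pressure
    wave celerity `wave_speed` (m/s) above a conduit of full-flow
    cross-sectional area `area` (m^2).
    From c = sqrt(g * A / T)  =>  T = g * A / c^2."""
    return G * area / wave_speed ** 2

# Circular pipe of 1 m diameter, target pressure wave celerity 100 m/s:
area = math.pi * 0.5 ** 2
print(preissmann_slot_width(area, 100.0))  # a slot well under 1 mm wide
```

Because the slot is so narrow, the extra storage it adds is negligible, which is why the single free-surface equation set can represent both flow regimes.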

Relevance: 60.00%

Abstract:

The analysis and prediction of the dynamic behaviour of structural components plays an important role in modern engineering design. In this work, the so-called "mixed" finite element models based on Reissner's variational principle are applied to the solution of free and forced vibration problems for beam and plate structures. The mixed beam models are obtained by using elements with various shape functions, ranging from simple linear to more complex cubic and quadratic functions. The elements were in general capable of predicting the natural frequencies and dynamic responses with good accuracy. An isoparametric quadrilateral element with 8 nodes was developed for application to thin plate problems. The element has 32 degrees of freedom (one deflection, two bending moments and one twisting moment per node), which is suitable for the discretization of plates with arbitrary geometry. A linear isoparametric element and two non-conforming displacement elements (4-node and 8-node quadrilaterals) were extended to the solution of dynamic problems. An auto-mesh generation program was used to facilitate the preparation of the input data required by the 8-node quadrilateral elements of mixed and displacement type. Numerical examples were solved using both the mixed beam and plate elements for predicting a structure's natural frequencies and dynamic response to a variety of forcing functions. The solutions were compared with the available analytical and displacement model solutions. The mixed elements developed have been found to have significant advantages over the conventional displacement elements in the solution of plate-type problems: a dramatic saving in computational time is possible without any loss in solution accuracy. With beam-type problems, there appear to be no significant advantages in using mixed models.

Relevance: 60.00%

Abstract:

Methods of solving the neuro-electromagnetic inverse problem are examined and developed, with specific reference to the human visual cortex. The anatomy, physiology and function of the human visual system are first reviewed. Mechanisms by which the visual cortex gives rise to external electric and magnetic fields are then discussed, and the forward problem is described mathematically, first for the case of an isotropic, piecewise homogeneous volume conductor and then for an anisotropic, concentric, spherical volume conductor. Methods of solving the inverse problem are reviewed, before a new technique is presented. This technique combines prior anatomical information gained from stereotaxic studies with a probabilistic distributed-source algorithm to yield accurate, realistic inverse solutions. The solution accuracy is enhanced by using both the visual evoked electric and magnetic responses simultaneously. The numerical algorithm is then modified to perform equivalent current dipole fitting and minimum norm estimation, and these three techniques are implemented on a transputer array for fast computation. Due to the linear nature of the techniques, they can be executed on up to 22 transputers with close to linear speedup. The latter part of the thesis describes the application of the inverse methods to the analysis of visual evoked electric and magnetic responses. The CIIm peak of the pattern onset evoked magnetic response is deduced to be a product of current flowing away from the surface in areas 17, 18 and 19, while the pattern reversal P100m response originates in the same areas, but from oppositely directed current. Cortical retinotopy is examined using sectorial stimuli; the CI and CIm peaks of the pattern onset electric and magnetic responses are found to originate from areas V1 and V2 simultaneously, and they therefore do not conform to a simple cruciform model of the primary visual cortex.
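Of the three inverse techniques mentioned, minimum norm estimation has the most compact linear form: given a lead-field matrix L mapping source amplitudes to measurements v, the estimate is the pseudoinverse solution s = L⁺v. The sketch below uses a random lead field purely for illustration, not the thesis's anatomically constrained model or its transputer implementation:

```python
import numpy as np

def minimum_norm_estimate(L, v):
    """Smallest-norm source vector s satisfying L @ s = v in the
    least-squares sense, via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(L) @ v

rng = np.random.default_rng(0)
L = rng.standard_normal((8, 20))   # 8 sensors, 20 candidate sources
s_true = np.zeros(20)
s_true[3] = 1.0                    # one active source
v = L @ s_true                     # simulated measurements

s_hat = minimum_norm_estimate(L, v)
# The estimate reproduces the data essentially exactly, but spreads
# energy across sources, so its norm never exceeds the true source's.
print(np.linalg.norm(L @ s_hat - v))
```

The underdetermined geometry (far more sources than sensors) is exactly why prior anatomical information is valuable: it restricts where the spread-out energy is allowed to go.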

Relevance: 40.00%

Abstract:

Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choice of the initial sample number (N0), the number of replicates (M), and the number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H², is a useful measure of the accuracy of the Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H² are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = aM·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. The bin number n must be chosen to balance accuracy and resolution. For fixed n, the product M × N0 determines the accuracy of the MC prediction; if b > 1, the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
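The squared Hellinger distance used as the accuracy measure has a simple discrete form over the n probability bins: H² = ½ Σᵢ (√pᵢ − √qᵢ)². A minimal sketch with hypothetical distributions, not the PBE simulation itself:

```python
import numpy as np

def squared_hellinger(p, q):
    """Squared Hellinger distance between two discrete probability
    distributions over the same bins. Ranges from 0 (identical
    distributions) to 1 (disjoint support)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * float(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

p = np.array([0.5, 0.3, 0.2])   # e.g. a reference distribution
q = np.array([0.4, 0.4, 0.2])   # e.g. an MC-reconstructed one
print(squared_hellinger(p, q))
```

Its bounded range and symmetry are what make it convenient for relating simulation accuracy directly to N0, M and n.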

Relevance: 30.00%

Abstract:

The present study compared the accuracy of three electronic apex locators (EALs) - Elements Diagnostic®, Root ZX® and Apex DSP® - in the presence of different irrigating solutions (0.9% saline solution and 1% sodium hypochlorite). The electronic measurements were carried out by three examiners, using twenty extracted human permanent maxillary central incisors. A size 10 K file was introduced into the root canals until reaching the 0.0 mark, and was subsequently retracted to the 1.0 mark. The gold standard (GS) measurement was obtained by combining visual and radiographic methods, and was set 1 mm short of the apical foramen. Electronic length values closer to the GS (± 0.5 mm) were considered as accurate measures. Intraclass correlation coefficients (ICCs) were used to verify inter-examiner agreement. The comparison among the EALs was performed using the McNemar and Kruskal-Wallis tests (p < 0.05). The ICCs were generally high, ranging from 0.8859 to 0.9657. Similar results were observed for the percentage of electronic measurements closer to the GS obtained with the Elements Diagnostic® and the Root ZX® EALs (p > 0.05), independent of the irrigating solutions used. The measurements taken with these two EALs were more accurate than those taken with Apex DSP®, regardless of the irrigating solution used (p < 0.05). It was concluded that Elements Diagnostic® and Root ZX® apex locators are able to locate the cementum-dentine junction more precisely than Apex DSP®. The presence of irrigating solutions does not interfere with the performance of the EALs.

Relevance: 30.00%

Abstract:

This work presents the study and development of a combined fault location scheme for three-terminal transmission lines using wavelet transforms (WTs). The methodology is based on the low- and high-frequency components of the transient signals originating from fault situations recorded at the terminals of a system. By processing these signals with the WT, it is possible to determine the travel times of the voltage and/or current travelling waves from the fault point to the terminals, as well as to estimate the fundamental frequency components. The new approach provides a reliable and accurate fault location scheme by combining several different solutions: the main idea is to have a decision routine that selects which method should be used in each situation presented to the algorithm. The combined algorithm was tested for different fault conditions by simulations using the ATP (Alternative Transients Program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability of the proposed method.
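For the high-frequency (travelling-wave) part of such a scheme, the classical two-terminal formula locates a fault from the difference in wavefront arrival times at the line ends. This is a sketch of that basic formula only, not the combined three-terminal algorithm of the paper:

```python
def fault_distance(line_length, t_a, t_b, wave_speed):
    """Distance (km) from terminal A to the fault on a two-terminal
    line, given the first wavefront arrival times t_a and t_b (s) at
    the two ends and the propagation speed (km/s):
        d_A = (L + v * (t_a - t_b)) / 2
    """
    return 0.5 * (line_length + wave_speed * (t_a - t_b))

# 100 km line, wave speed ~2.9e5 km/s, fault 30 km from terminal A:
v = 2.9e5
t_a = 30.0 / v          # wavefront arrival at A
t_b = 70.0 / v          # wavefront arrival at B
print(fault_distance(100.0, t_a, t_b, v))  # recovers ~30 km
```

In practice the arrival times come from the modulus maxima of the high-frequency wavelet coefficients, which is what the WT processing in the paper extracts.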

Relevance: 30.00%

Abstract:

Despite the increasing prevalence of salinity world-wide, the measurement of exchangeable cation concentrations in saline soils remains problematic. Two soil types (Mollisol and Vertisol) were equilibrated with a range of sodium adsorption ratio (SAR) solutions at various ionic strengths. The concentrations of exchangeable cations were then determined using several different types of methods, and the measured exchangeable cation concentrations compared to reference values. At low ionic strength (low salinity), the concentration of exchangeable cations can be accurately estimated from the total soil extractable cations. In saline soils, however, the presence of soluble salts in the soil solution precludes the use of this method. Leaching of the soil with a pre-wash solution (such as alcohol) was found to effectively remove the soluble salts from the soil, thus allowing the accurate measurement of the effective cation exchange capacity (ECEC). However, the dilution associated with this pre-washing increased the exchangeable Ca concentrations while simultaneously decreasing exchangeable Na. In contrast, when calculated as the difference between the total extractable cations and the soil solution cations, good correlations were found between the calculated exchangeable cation concentrations and the reference values for both Na (Mollisol: y=0.873x and Vertisol: y=0.960x) and Ca (Mollisol: y=0.901x and Vertisol: y=1.05x). Therefore, for soils with a soil solution ionic strength greater than 50 mM (electrical conductivity of 4 dS/m) (in which exchangeable cation concentrations are overestimated by the assumption they can be estimated as the total extractable cations), concentrations can be calculated as the difference between total extractable cations and soluble cations.
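The calculation recommended above for saline soils is a straightforward difference, illustrated below with hypothetical concentrations (units cmolc/kg; the conversion of soil-solution concentrations to a per-mass basis is assumed to have been done already):

```python
def exchangeable_by_difference(total_extractable, soluble):
    """Exchangeable cation concentration estimated as total
    extractable cations minus the cations held in the soil solution
    (both in the same units, e.g. cmolc/kg)."""
    if soluble > total_extractable:
        raise ValueError("soluble cations cannot exceed total extractable")
    return total_extractable - soluble

# Hypothetical saline soil, Na concentrations in cmolc/kg:
print(exchangeable_by_difference(12.4, 7.1))  # ~5.3 cmolc/kg exchangeable Na
```

The guard clause reflects the physical constraint that soluble cations are a subset of the total extractable pool; violating it would indicate an extraction or measurement error.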

Relevance: 30.00%

Abstract:

To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) the use of a point-searching strategy allows a point in one mesh to be accurately related to the element containing it in another mesh, so that, to translate/transfer the solution at any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped; this minimizes the number of elements to which the inverse mapping is applied, making the present algorithm very effective and efficient; (2) analytical solutions for the local coordinates of any point in a four-node quadrilateral element, derived in a rigorous mathematical manner in the context of this paper, make it possible to carry out the inverse mapping process very effectively and efficiently; (3) the use of consistent interpolation enables the interpolated solution to be compatible with the original solution and therefore guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated through a challenging problem. The results from the test problem demonstrate the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
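The inverse mapping at the core of the algorithm finds the local coordinates (ξ, η) of a global point inside a 4-node bilinear quadrilateral. The paper derives analytical expressions for this; the sketch below instead solves the same problem with a short Newton iteration, which is the common alternative:

```python
import numpy as np

def shape_functions(xi, eta):
    """Bilinear shape functions of the 4-node quadrilateral."""
    return 0.25 * np.array([(1 - xi) * (1 - eta),
                            (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta),
                            (1 - xi) * (1 + eta)])

def inverse_map(nodes, point, tol=1e-12, max_iter=20):
    """Local coordinates (xi, eta) of `point` in the element whose
    corner coordinates are the rows of `nodes` (counter-clockwise)."""
    xi = eta = 0.0
    for _ in range(max_iter):
        N = shape_functions(xi, eta)
        r = N @ nodes - point  # residual in global coordinates
        # Derivatives of the shape functions w.r.t. xi and eta:
        dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
        dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
        J = np.column_stack((dN_dxi @ nodes, dN_deta @ nodes))  # 2x2 Jacobian
        dxi, deta = np.linalg.solve(J, -r)  # Newton step
        xi, eta = xi + dxi, eta + deta
        if abs(dxi) + abs(deta) < tol:
            break
    return xi, eta

# Distorted quadrilateral; map a known local point forward, then invert.
nodes = np.array([[0.0, 0.0], [2.0, 0.2], [2.3, 1.8], [-0.1, 1.5]])
p = shape_functions(0.3, -0.2) @ nodes
print(inverse_map(nodes, p))  # recovers approximately (0.3, -0.2)
```

Once (ξ, η) are known, evaluating the old mesh's shape functions at that point gives the consistently interpolated solution value, exactly as advantage (3) describes.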

Relevance: 30.00%

Abstract:

Surge flow phenomena, e.g. as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain, together with the discontinuities involved, renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, which underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon.
Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.