986 results for Boundary Inhomogeneity Method


Relevance: 30.00%

Publisher:

Abstract:

In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝd, d = 2, 3. For the a posteriori analysis, computable upper and lower bounds on the error are derived in terms of a natural energy norm, which are explicit in the local mesh size and local polynomial degree of the approximating finite element method. A series of numerical experiments illustrates the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
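The marking step of such an hp-adaptive loop can be sketched as follows. This is an illustrative Python fragment, not the paper's algorithm: the `mark_and_refine` helper, its smoothness flags and the marking fraction are all assumptions made for the example.

```python
# Sketch of one marking step in an hp-adaptive loop driven by a
# posteriori error indicators (illustrative; the paper's indicators
# come from its DG energy-norm bounds).

def mark_and_refine(indicators, smoothness, frac=0.25):
    """Mark the elements carrying the largest indicators and decide,
    per marked element, between h-refinement (split the element) and
    p-refinement (raise the local polynomial degree).

    indicators : dict element -> estimated local error
    smoothness : dict element -> True if the solution is locally smooth
    frac       : fraction of elements to refine each pass
    """
    ranked = sorted(indicators, key=indicators.get, reverse=True)
    marked = ranked[: max(1, int(frac * len(ranked)))]
    # Smooth local solution: raising p converges rapidly; non-smooth
    # (e.g. near a corner singularity): split the element in h instead.
    return {e: ("p" if smoothness[e] else "h") for e in marked}

eta = {"K1": 0.9, "K2": 0.1, "K3": 0.5, "K4": 0.05}
smooth = {"K1": True, "K2": True, "K3": False, "K4": False}
print(mark_and_refine(eta, smooth, frac=0.5))  # K1 refined in p, K3 in h
```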


We develop a modulus method for surface families inside a domain in the Heisenberg group and we prove that the stretch map between two Heisenberg spherical rings is a minimiser for the mean distortion among the class of contact quasiconformal maps between these rings which satisfy certain boundary conditions.


PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods that match different PET/CT systems by eliminating this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established, together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems makes it possible to determine a Transconvolution function that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating 68Ge/68Ga-filled spheres was developed. To iteratively determine and represent these point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET. The Hann window's apodization suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution offers a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
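The core Fourier-space operation can be illustrated in one dimension. The Gaussian point spread functions, grid size and variable names below are stand-ins chosen for this sketch, not the measured PSFs, Hann-windowed virtual system or regularization used in the paper.

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

def gauss(sigma):
    """Unit-area Gaussian kernel centred on the grid."""
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

# Stand-in point spread functions of two tomographs (A sharper than B);
# ifftshift moves the kernel centre to index 0 for circular convolution.
psf_a = np.fft.ifftshift(gauss(2.0))
psf_b = np.fft.ifftshift(gauss(4.0))

# Transconvolution kernel in Fourier space: H = F(psf_b) / F(psf_a).
# Stable here because psf_b is broader than psf_a, so the ratio decays;
# real systems need regularization of the small high-frequency values.
H = np.fft.fft(psf_b) / np.fft.fft(psf_a)

# A point source imaged by system A is just psf_a; transconvolution
# should turn it into the same source as imaged by system B.
img_a = psf_a.copy()
img_b = np.real(np.fft.ifft(np.fft.fft(img_a) * H))

print(np.max(np.abs(img_b - psf_b)))  # tiny residual
```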


To improve our understanding of the Asian monsoon system, we developed a hydroclimate reconstruction in a marginal monsoon shoulder region for the period prior to the industrial era. Here, we present the first moisture-sensitive tree-ring chronology, spanning 501 years, for the Dieshan Mountain area, a boundary region of the Asian summer monsoon in the northeastern Tibetan Plateau. This reconstruction was derived from 101 cores of 68 old-growth Chinese pine (Pinus tabulaeformis) trees. We introduce a Hilbert-Huang Transform (HHT) based standardization method to develop the tree-ring chronology, which has the advantage of excluding non-climatic disturbances from individual tree-ring series. Based on the reliable portion of the chronology, we reconstructed the annual (prior July to current June) precipitation history since 1637 for the Dieshan Mountain area; the reconstruction explains 41.3% of the variance. The extremely dry years in this reconstruction were also found in historical documents and are also associated with El Niño episodes. Dry periods were reconstructed for 1718-1725, 1766-1770 and 1920-1933, whereas 1782-1788 and 1979-1985 were wet periods. The spatial signatures of these events were supported by data from other marginal regions of the Asian summer monsoon. Over the past four centuries, out-of-phase relationships between hydroclimate variations in the Dieshan Mountain area and far western Mongolia were observed during the 1718-1725 and 1766-1770 dry periods and the 1979-1985 wet period.
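The final calibration step, regressing instrumental precipitation on the chronology and quoting explained variance, can be sketched on synthetic data. All numbers below are invented for illustration, and the HHT-based standardization itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a ring-width chronology and overlapping
# instrumental precipitation (the paper uses the real 1637-present
# chronology and station data; these values are illustrative only).
chron = rng.normal(1.0, 0.2, 60)
precip = 420.0 + 180.0 * (chron - 1.0) + rng.normal(0.0, 15.0, 60)

# Ordinary least-squares calibration: precip ~ a * chron + b
a, b = np.polyfit(chron, precip, 1)
fitted = a * chron + b

# Explained variance (R^2) over the calibration period
r2 = 1.0 - np.sum((precip - fitted) ** 2) / np.sum((precip - precip.mean()) ** 2)
print(f"explained variance: {r2:.1%}")
```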


Immersed boundary simulations have been under development for physiological flows: they handle fluid-structure interaction with large deformations elegantly because domain-specific meshes are retained. We couple a structural system in Lagrangian representation, formulated in weak form, with a Navier-Stokes system discretized by a finite difference scheme. We build upon a proven, highly scalable incompressible flow solver, which we extend to handle fluid-structure interaction (FSI). We aim to apply our method to investigating the hemodynamics of aortic valves. The code will be extended to run on the new hybrid-node supercomputers.
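A minimal sketch of the immersed-boundary coupling idea, assuming Peskin's classical 4-point regularized delta for spreading a Lagrangian point force onto the Eulerian grid; the solver described in the abstract is of course far more elaborate.

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta (argument in grid units)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_force(X, F, n, h):
    """Spread a Lagrangian point force F at position X onto n grid nodes."""
    f = np.zeros(n)
    for i in range(n):
        f[i] += F * peskin_delta((i * h - X) / h) / h
    return f

n, h = 64, 1.0 / 64
f = spread_force(X=0.37, F=2.5, n=n, h=h)
# The 4-point kernel sums to one over the grid, so total force is conserved:
print(np.sum(f) * h)  # ~2.5
```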


We study the effects of a finite cubic volume with twisted boundary conditions on pseudoscalar mesons. We apply Chiral Perturbation Theory in the p-regime and introduce the twist by means of a constant vector field. The corrections to masses, decay constants, pseudoscalar coupling constants and form factors are calculated at next-to-leading order. We detail the derivations and compare with results available in the literature. In some cases there is disagreement, due to a different treatment of the new extra terms generated by the breaking of cubic invariance. We advocate treating such terms as renormalization terms of the twisting angles and reabsorbing them in the on-shell conditions. We confirm that the corrections to masses, decay constants and pseudoscalar coupling constants are related by chiral Ward identities. Furthermore, we show that the matrix elements of the scalar (resp. vector) form factor satisfy the Feynman-Hellmann theorem (resp. the Ward-Takahashi identity). To show the Ward-Takahashi identity, we construct an effective field theory for charged pions which is invariant under electromagnetic gauge transformations and which reproduces the results obtained with Chiral Perturbation Theory at vanishing momentum transfer. This generalizes considerations previously published for periodic boundary conditions to twisted boundary conditions. Another way to estimate the corrections in finite volume is provided by asymptotic formulae. Asymptotic formulae were introduced by Lüscher and relate the corrections of a given physical quantity to an integral of a specific amplitude, evaluated in infinite volume. Here, we revise the original derivation of Lüscher and generalize it to finite volume with twisted boundary conditions. In some cases, the derivation involves complications due to extra terms generated by the breaking of cubic invariance. We isolate such terms and treat them as renormalization terms, just as before.
In that way, we derive asymptotic formulae for masses, decay constants, pseudoscalar coupling constants and scalar form factors. At the same time, we also derive asymptotic formulae for the renormalization terms. We apply all these formulae in combination with Chiral Perturbation Theory and estimate the corrections beyond next-to-leading order. We show that the asymptotic formulae for masses, decay constants and pseudoscalar coupling constants are related by chiral Ward identities. A similar relation independently connects the asymptotic formulae for the renormalization terms. We check these relations for charged pions through a direct calculation. To conclude, a numerical analysis quantifies the importance of finite-volume corrections at next-to-leading order and beyond. We perform a generic analysis and illustrate two possible applications to real simulations.
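For reference, the twisted boundary conditions in question and the momentum quantization they induce can be written as:

```latex
% Twisted boundary conditions on a field in a cubic box of side L:
\psi(x + L\,\hat{e}_i) = e^{i\theta_i}\,\psi(x), \qquad i = 1, 2, 3,
% which shift the allowed spatial momenta away from the periodic values:
p_i = \frac{2\pi n_i + \theta_i}{L}, \qquad n_i \in \mathbb{Z}.
% Periodic boundary conditions are recovered for \theta_i = 0.
```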


We analyzed observations of interstellar neutral helium (ISN He) obtained from the Interstellar Boundary Explorer (IBEX) satellite during its first six years of operation. We used a refined version of the ISN He simulation model, presented in the companion paper by Sokol et al. (2015b), along with a sophisticated data correlation and uncertainty system and parameter fitting method, described in the companion paper by Swaczyna et al. We analyzed the entire data set together and the yearly subsets, and found the temperature and velocity vector of ISN He in front of the heliosphere. As seen in previous studies, the allowable parameters are highly correlated and form a four-dimensional tube in the parameter space. The inflow longitudes obtained from the yearly data subsets show a spread of ~6°, with the other parameters varying accordingly along the parameter tube, and the minimum χ² value is larger than expected. We found, however, that the Mach number of the ISN He flow shows very little scatter and is thus very tightly constrained. It is in excellent agreement with the original analysis of ISN He observations from IBEX and recent reanalyses of observations from Ulysses. We identify a possible inaccuracy in the Warm Breeze parameters as the likely cause of the scatter in the ISN He parameters obtained from the yearly subsets, and we suppose that another component may exist in the signal, or a process that is not accounted for in the current physical model of ISN He in front of the heliosphere. From our analysis, the inflow velocity vector, temperature, and Mach number of the flow are λ_ISNHe = 255.8° ± 0.5°, β_ISNHe = 5.16° ± 0.10°, T_ISNHe = 7440 ± 260 K, v_ISNHe = 25.8 ± 0.4 km s⁻¹, and M_ISNHe = 5.079 ± 0.028, with uncertainties strongly correlated along the parameter tube.
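As a consistency check, the quoted Mach number follows from the bulk speed and an adiabatic sound speed with γ = 5/3; the choice of γ is our assumption, since the paper defines its own Mach-number convention.

```python
import math

# Check that the quoted Mach number matches bulk speed over an adiabatic
# sound speed sqrt(gamma * k_B * T / m) with gamma = 5/3 (our assumption).
k_B = 1.380649e-23           # Boltzmann constant, J/K
m_He = 4.0026 * 1.6605e-27   # helium atomic mass, kg
T = 7440.0                   # quoted ISN He temperature, K
v = 25.8e3                   # quoted bulk speed, m/s

c_s = math.sqrt((5.0 / 3.0) * k_B * T / m_He)
M = v / c_s
print(f"M = {M:.3f}")  # close to the quoted 5.079 +/- 0.028
```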


The precise cause and timing of the Cretaceous-Paleocene (K-P) mass extinction 65 Ma remain a matter of debate. Many advocate that the extinction was caused by a meteorite impact at Chicxulub, Mexico, and a number of potential kill mechanisms have been proposed for this. Although we now have good constraints on the size of this impact and the chemistry of the target rocks, estimates of its environmental consequences are hindered by a lack of knowledge about the obliquity of the impact. An oblique impact is likely to have been far more catastrophic than a sub-vertical one, because greater volumes of volatiles would have been released into the atmosphere. The principal purpose of this study was to characterize shocked quartz within distal K-P ejecta, to investigate whether the quartz distribution carried a signature of the direction and angle of impact. Our analyses show that the total number, maximum size and average size of shocked quartz grains all decrease gradually with paleodistance from Chicxulub. We do not find particularly high abundances in Pacific sites relative to Atlantic and European sites, as has been previously reported, and the size distribution around Chicxulub is relatively symmetric. Ejecta samples at any one site display features that are indicative of a wide range of shock pressures, but the mean degree of shock increases with paleodistance. These shock and size distributions are both consistent with the K-P layer having been formed by a single impact at Chicxulub. One site in the South Atlantic contains quartz indicating an anomalously high average shock degree, which may be indicative of an oblique impact with an uprange direction to the southeast ± 45°. The apparent continuous coverage of proximal ejecta in this quadrant of the crater, however, suggests a relatively high impact angle of >45°. We conclude that some of the more extreme predictions of the environmental consequences of a low-angle impact at Chicxulub are probably not applicable.


Calving is a major mechanism of ice discharge of the Antarctic and Greenland ice sheets, and a change in calving front position affects the entire stress regime of marine terminating glaciers. The representation of calving front dynamics in a 2-D or 3-D ice sheet model remains non-trivial. Here, we present the theoretical and technical framework for a level-set method, an implicit boundary tracking scheme, which we implement into the Ice Sheet System Model (ISSM). This scheme allows us to study the dynamic response of a drainage basin to user-defined calving rates. We apply the method to Jakobshavn Isbræ, a major marine terminating outlet glacier of the West Greenland Ice Sheet. The model robustly reproduces the high sensitivity of the glacier to calving, and we find that enhanced calving triggers significant acceleration of the ice stream. Upstream acceleration is sustained through a combination of mechanisms. However, both lateral stress and ice influx stabilize the ice stream. This study provides new insights into the ongoing changes occurring at Jakobshavn Isbræ and emphasizes that the incorporation of moving boundaries and dynamic lateral effects, not captured in flow-line models, is key for realistic model projections of sea level rise on centennial timescales.
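The front-tracking idea can be illustrated in one dimension: advect a signed-distance function with the net front speed (ice velocity minus calving rate) and read off its zero level set. Everything below (grid, speeds, first-order upwind scheme) is a toy stand-in for the 2-D/3-D ISSM implementation.

```python
import numpy as np

# 1-D toy of implicit front tracking with a level-set function: the
# calving front is the zero of a signed-distance function phi, advected
# with the net front speed (ice velocity v minus calving rate c).
n = 400
dx = 10.0 / n
x = np.arange(n) * dx
v, c = 1.0, 0.4              # ice speed and calving rate (net speed 0.6)
phi = x - 3.0                # front initially at x = 3 (phi < 0 is ice)

speed = v - c
dt = 0.5 * dx / abs(speed)   # CFL-limited time step
t = 0.0
while t < 2.0:
    # first-order upwind advection: phi_t + speed * phi_x = 0 (speed > 0)
    phi[1:] -= dt * speed * np.diff(phi) / dx
    t += dt

front = x[np.argmin(np.abs(phi))]
print(front)  # front advected to ~ 3 + 0.6 * 2 = 4.2
```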


In the last decade, the aquatic eddy correlation (EC) technique has proven to be a powerful approach for non-invasive measurements of oxygen fluxes across the sediment-water interface. Fundamental to the EC approach is the correlation of turbulent velocity and oxygen concentration fluctuations measured at high frequency in the same sampling volume. Oxygen concentrations are commonly measured with fast-responding electrochemical microsensors. However, due to their own oxygen consumption, electrochemical microsensors are sensitive to changes in the diffusive boundary layer surrounding the probe and thus to changes in the ambient flow velocity. This so-called stirring sensitivity of microsensors constitutes an inherent correlation of flow velocity and oxygen sensing, and thus an artificial flux which can confound the benthic flux determination. To assess the artificial flux, we measured the correlation between the turbulent flow velocity and the signal of oxygen microsensors in a sealed annular flume without any oxygen sinks or sources. The experiments revealed significant correlations, even for sensors designed to have low stirring sensitivities of ~0.7%. The artificial fluxes depended on the ambient flow conditions and, counterintuitively, increased at higher velocities because of the nonlinear contribution of turbulent velocity fluctuations. The measured artificial fluxes ranged from 2 to 70 mmol m⁻² d⁻¹ for weak and very strong turbulent flow, respectively. Furthermore, the stirring sensitivity depended on the sensor orientation towards the flow. Optical microsensors (optodes), which should not exhibit a stirring sensitivity, were tested in parallel and did not show any significant correlation between O2 signals and turbulent flow. In conclusion, EC data obtained with electrochemical sensors can be affected by artificial flux, and we recommend using optical microsensors in future EC studies.
Flume experiments were conducted in February 2013 at the Institute for Environmental Sciences, University of Koblenz-Landau in Landau. Experiments were performed in a closed, oval-shaped acrylic glass flume with a cross-sectional width of 4 cm, a height of 10 cm and a total length of 54 cm. The fluid flow was induced by a propeller driven by a motor, and mean flow velocities of up to 20 cm s⁻¹ were generated by applying voltages between 0 V and 4 V DC. The flume was completely sealed with an acrylic glass cover. Oxygen sensors were inserted through rubber seal fittings, which allowed the sensors to be positioned with inclinations to the main flow direction of ~60°, ~95° and ~135°. A Clark-type electrochemical O2 microsensor with a low stirring sensitivity (0.7%) was tested, and a fast-responding needle-type O2 optode (PyroScience GmbH, Germany) was used as a reference, as optodes should not be stirring-sensitive. Instantaneous three-dimensional flow velocities were measured at 7.4 Hz using stereoscopic particle image velocimetry (PIV), and the velocity at the sensor tip was extracted. The correlation of the fluctuating O2 sensor signals and the fluctuating velocities was quantified with a cross-correlation analysis; a significant cross-correlation is equivalent to a significant artificial flux. For a total of 18 experiments, the flow velocity was adjusted between 1.7 and 19.2 cm s⁻¹, and three different orientations of the electrochemical sensor were tested, with inclination angles of ~60°, ~95° and ~135° with respect to the main flow direction. In experiments 16-18, wavelike flow was induced, whereas in all other experiments the motor was driven by constant voltages. In 7 experiments, O2 was additionally measured by optodes. Although performed simultaneously with the electrochemical sensor, the optode measurements are listed as separate experiments (denoted by the attached 'op' in the filename), because the velocity time series was extracted at the optode tip, located at a different position in the flume.
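The heart of the analysis, the covariance of velocity and O2 fluctuations and the artificial flux from a stirring-sensitive electrode, can be sketched on synthetic series. The 0.7% sensitivity scale is taken from the text, but the noise levels and the linear sensitivity model are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8192
w = rng.normal(0.0, 0.5, n)        # vertical velocity fluctuations (cm/s)

# Flow-independent O2 signal (optode-like): pure sensor noise, no flux.
o2_optode = 250.0 + rng.normal(0.0, 0.1, n)

# Stirring-sensitive electrode: reading rises slightly with velocity
# (0.7%-scale sensitivity as in the text; the linear model is assumed).
o2_electrode = 250.0 + 0.007 * 250.0 * w + rng.normal(0.0, 0.1, n)

def ec_flux(w, c):
    """Eddy-correlation flux: covariance of the fluctuating parts."""
    return np.mean((w - w.mean()) * (c - c.mean()))

print(ec_flux(w, o2_optode))     # ~0: no artificial flux
print(ec_flux(w, o2_electrode))  # clearly nonzero artificial flux
```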


Among the classical operators of mathematical physics, the Laplacian plays an important role due to the number of different situations that can be modelled by it. Because of this, a great effort has been made by mathematicians as well as engineers to master its properties, to the point that nearly everything has been said about them from a qualitative viewpoint. Quantitative results have also been obtained through the use of new numerical techniques supported by the computer. Finite element methods and boundary techniques have been successfully applied to engineering problems, as can be seen in the technical literature (for instance [1], [2], [3]). Boundary techniques are especially advantageous in those cases in which the main interest is concentrated on what is happening at the boundary. This situation is very usual in potential problems due to the properties of harmonic functions. In this paper we intend to show how a boundary condition different from the classical ones, but physically sound, is introduced naturally within the discretization framework of the Boundary Integral Equation Method. The idea will be developed in the context of heat conduction in axisymmetric problems, but it is hoped that its extension to other situations is straightforward. After the presentation of the method, several examples will show its capability of modelling a physical problem.
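The direct boundary integral identity on which the method rests can be stated, in standard form, as:

```latex
% Direct boundary integral equation for a harmonic u in \Omega,
% with fundamental solution u^* and its normal derivative q^*:
c(\xi)\,u(\xi) + \int_{\Gamma} q^*(\xi, x)\,u(x)\,\mathrm{d}\Gamma
  = \int_{\Gamma} u^*(\xi, x)\,q(x)\,\mathrm{d}\Gamma ,
\qquad q = \frac{\partial u}{\partial n},
% where c(\xi) = 1/2 at a smooth boundary point and c(\xi) = 1 inside \Omega.
```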


In recent decades, meshless methods (MMs), like the element-free Galerkin method (EFGM), have been widely studied, and interesting results have been reached when solving partial differential equations. However, such solutions show a problem near the boundaries, where adequate accuracy is not achieved. This is caused by the use of moving least squares or reproducing kernel particle methods to obtain the shape functions needed in MMs: such methods are accurate enough in the interior of the integration domains, but not at their boundaries. Bernstein curves, which themselves form a partition of unity, can solve this problem with the same accuracy in the interior of the domain and at its boundaries.
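The partition-of-unity property invoked for the Bernstein basis is easy to verify numerically. This minimal sketch evaluates the degree-n basis and checks that it sums to one at interior and boundary points alike.

```python
from math import comb

def bernstein(n, t):
    """All degree-n Bernstein basis polynomials evaluated at t in [0, 1]."""
    return [comb(n, k) * t**k * (1.0 - t) ** (n - k) for k in range(n + 1)]

# Partition of unity: the basis sums to 1 everywhere on [0, 1],
# at interior points and at the boundary alike.
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, sum(bernstein(5, t)))  # always 1.0
```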


Reverberation chambers are well known for providing a random-like electric field distribution. Detecting the directivity or gain of a device in such a field requires an adequate procedure and smart post-processing. In this paper, a new method is proposed for estimating the directivity of radiating devices in a reverberation chamber (RC). The method is based on the Rician K-factor, whose estimation in an RC benefits from recent improvements. Directivity estimation relies on the accurate determination of the K-factor with respect to a reference antenna. Good agreement is reported with measurements carried out in a near-field anechoic chamber (AC) using a near-field to far-field transformation.
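A moment-based K-factor estimate can be sketched on synthetic Rician samples. The estimator below is the textbook |mean|²/variance form, not the paper's improved estimator, and the link from K to directivity is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic Rician samples, as collected while stirring an RC: a fixed
# unstirred component nu plus a diffuse complex-Gaussian part of unit
# mean power. True K = |nu|^2 / E|diffuse|^2 = 4.
nu = 2.0
n = 200_000
diffuse = (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)) / np.sqrt(2)
s21 = nu + diffuse

# Moment-based estimate: |sample mean|^2 over the sample variance.
K = np.abs(s21.mean()) ** 2 / np.var(s21)
print(K)  # close to 4
```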


The paper presents the possibility of implementing a p-adaptive process with the B.E.M. Although the examples show that good results can be obtained with a limited amount of storage and with the simple ideas explained above, more research is needed to address the two main open problems of the method, i.e., the criteria of where to refine and to what degree. Mathematically based reasoning is still lacking and would be useful to simplify the decision making. Nevertheless, the method seems promising and, we hope, opens a path for a series of research lines of maximum interest. Although the paper has dealt only with the plane potential problem, the extension to plane elasticity as well as to the 3-D potential problem is straightforward.


In previous BEM conferences, the concepts, developments and organisation of the p-adaptive philosophy have been presented by the authors, as well as some interesting features of the hierarchisation of the solution, accuracy estimates and the optimization of numerical computations. The current paper is devoted to presenting some new developments and applications in linear elastostatics, with emphasis on: a) efficient computation of influence coefficients, b) efficient evaluation of the residuals by taking advantage of the hierarchy of the interpolation functions, and c) new results regarding estimators and convergence ratios. In addition, several practical examples will be shown and discussed in order to point out the advantages of the method.