926 results for "no conforming mesh"
Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the $p$-version
Abstract:
Plane wave discontinuous Galerkin (PWDG) methods are a class of Trefftz-type methods for the spatial discretization of boundary value problems for the Helmholtz operator $-\Delta-\omega^2$, $\omega>0$. They include the so-called ultra weak variational formulation from [O. Cessenat and B. Després, SIAM J. Numer. Anal., 35 (1998), pp. 255–299]. This paper is concerned with the a priori convergence analysis of PWDG in the case of $p$-refinement, that is, the study of the asymptotic behavior of relevant error norms as the number of plane wave directions in the local trial spaces is increased. For convex domains in two space dimensions, we derive convergence rates, employing mesh skeleton-based norms, duality techniques from [P. Monk and D. Wang, Comput. Methods Appl. Mech. Engrg., 175 (1999), pp. 121–136], and plane wave approximation theory.
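To illustrate the Trefftz property underlying PWDG (a standard construction sketched in our own notation, not a formula quoted from the paper): on each mesh element $K$ the local trial space is spanned by $p$ plane waves with distinct unit propagation directions, each of which solves the homogeneous Helmholtz equation exactly,
\[
V_p(K) = \operatorname{span}\bigl\{\, e^{i\omega d_k\cdot x} \,:\, d_k=(\cos\theta_k,\sin\theta_k),\ k=1,\dots,p \,\bigr\},
\qquad
(-\Delta-\omega^2)\, e^{i\omega d_k\cdot x} = 0 .
\]
$p$-refinement in this setting means enriching each $V_p(K)$ with more directions rather than refining the mesh.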
Abstract:
In this paper, we extend to the time-harmonic Maxwell equations the p-version analysis technique developed in [R. Hiptmair, A. Moiola and I. Perugia, Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the p-version, SIAM J. Numer. Anal., 49 (2011), 264-284] for Trefftz-discontinuous Galerkin approximations of the Helmholtz problem. While error estimates in a mesh-skeleton norm are derived parallel to the Helmholtz case, the derivation of estimates in a mesh-independent norm requires new twists in the duality argument. The particular case where the local Trefftz approximation spaces are built of vector-valued plane wave functions is considered, and convergence rates are derived.
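As a concrete instance (our own sketch, assuming unit material coefficients; notation not taken from the paper), the vector-valued plane waves used as local Trefftz functions are transverse fields
\[
\mathbf{E}(x) = \mathbf{p}\, e^{i\omega \mathbf{d}\cdot x},\qquad |\mathbf{d}|=1,\quad \mathbf{p}\cdot\mathbf{d}=0,
\]
which satisfy $\nabla\times\nabla\times\mathbf{E} - \omega^2\mathbf{E} = 0$ element-wise, so each basis field is an exact solution of the homogeneous time-harmonic Maxwell equations.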
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Because numerical weather prediction models must solve the governing finite difference equations on such a grid lattice, objective analysis is a three-dimensional (or, most often, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with its separated data-sparse and data-dense areas, four-dimensional analysis has in fact been in intensive use for many years: weather services have based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well: we have fairly good coverage of surface observations 8 times a day, and several upper air stations make radiosonde and radiowind observations 4 times a day. With a 3-hour step in the analysis-forecasting cycle instead of the most commonly applied 12 hours, we could without any difficulty treat all observations as synoptic. No observation would then be more than 90 minutes off time, and even during strong transient motion the observations would fall within a horizontal mesh of 500 km × 500 km.
Abstract:
Brain activity can be measured with several non-invasive neuroimaging modalities, but each modality has inherent limitations with respect to resolution, contrast and interpretability. It is hoped that multimodal integration will address these limitations by using the complementary features of already available data. However, purely statistical integration can prove problematic owing to the disparate signal sources. As an alternative, we propose here an advanced neural population model implemented on an anatomically sound cortical mesh with freely adjustable connectivity, which features proper signal expression through a realistic head model for the electroencephalogram (EEG), as well as a haemodynamic model for functional magnetic resonance imaging based on blood oxygen level dependent contrast (fMRI BOLD). It hence allows simultaneous and realistic predictions of EEG and fMRI BOLD from the same underlying model of neural activity. As proof of principle, we investigate here the influence on simulated brain activity of strengthening visual connectivity. In the future we plan to fit multimodal data with this neural population model. This promises novel, model-based insights into the brain's activity in sleep, rest and task conditions.
Abstract:
We propose a numerical method to approximate the solution of second order elliptic problems in nonvariational form. The method is of Galerkin type using conforming finite elements and applied directly to the nonvariational (nondivergence) form of a second order linear elliptic problem. The key tools are an appropriate concept of “finite element Hessian” and a Schur complement approach to solving the resulting linear algebra problem. The method is illustrated with computational experiments on three linear and one quasi-linear PDE, all in nonvariational form.
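As a sketch of what a "finite element Hessian" can look like (the weak, integration-by-parts definition is assumed here; the paper may differ in details): for a conforming finite element function $u_h\in V_h$, define the matrix-valued finite element function $H[u_h]$ by
\[
\int_\Omega H[u_h]\,\phi \,\mathrm{d}x
= -\int_\Omega \nabla u_h \otimes \nabla\phi \,\mathrm{d}x
+ \int_{\partial\Omega} \bigl(\nabla u_h \otimes \mathbf{n}\bigr)\,\phi \,\mathrm{d}s
\qquad\text{for all } \phi\in V_h,
\]
mimicking integration by parts of $D^2u$ against a scalar test function; the nondivergence-form equation $A:D^2u=f$ is then discretised by imposing $A:H[u_h]=f$ in a suitable Galerkin sense, which leads to the Schur complement solve mentioned above.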
Abstract:
As the integration of vertical axis wind turbines in the built environment is a promising alternative to horizontal axis wind turbines, a 2D computational investigation of an augmented wind turbine is proposed and analysed. In the initial CFD analysis, three parameters are carefully investigated: mesh resolution, turbulence model, and time step size. It appears that the mesh resolution and the turbulence model affect result accuracy, while the time step size examined, given the unsteady nature of the flow, has little impact on the numerical results. In the CFD validation of the open rotor against secondary data, the numerical results are in good agreement in terms of shape; however, a discrepancy by a factor of 2 between numerical and experimental data is observed. Subsequently, the introduction of an omnidirectional stator around the wind turbine increases the power and torque coefficients by around 30–35% compared to the open case, but attention needs to be given to the orientation of the stator blades for optimum performance. It is found that the power and torque coefficients of the augmented wind turbine are independent of the incident wind speed considered.
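For reference, the power and torque coefficients reported for wind turbines are conventionally normalised as (standard definitions assumed here, not quoted from the paper)
\[
C_P = \frac{P}{\tfrac12\,\rho A U_\infty^{3}},
\qquad
C_T = \frac{T}{\tfrac12\,\rho A R\, U_\infty^{2}},
\]
with $\rho$ the air density, $A$ the rotor swept area, $R$ the rotor radius and $U_\infty$ the incident wind speed; independence from the incident wind speed then means these ratios remain essentially constant as $U_\infty$ varies.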
Abstract:
This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived from data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach in a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach to real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with the results of a similar experimental study. However, with our approach, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point instead of the 1 to 2 h it takes using the conventional finite element modeling approach.
Abstract:
We investigated the plume structure of a piezo-electric sprayer system, set up to release ethanol in a wind tunnel, using a fast-response mini-photoionization detector. We recorded the plume structure of four different piezo-sprayer configurations: the sprayer alone; with a 1.6-mm steel mesh shield; with a 3.2-mm steel mesh shield; and with a 5-cm circular upwind baffle. We measured a 12 × 12-mm core at the center of the plume, and both a horizontal and a vertical cross-section of the plume, all at 100, 200, and 400 mm downwind of the odor source. Significant differences in plume structure were found among all configurations in terms of conditional relative mean concentration, intermittency, ratio of peak concentration to conditional mean concentration, and cross-sectional area of the plume. We then measured the flight responses of the almond moth, Cadra cautella, to odor plumes generated with the sprayer alone and with the upwind-baffle piezo-sprayer configuration, releasing a 13:1 ratio of (9Z,12E)-tetradecadienyl acetate and (Z)-9-tetradecenyl acetate diluted in ethanol at release rates of 1, 10, 100, and 1,000 pg/min. For each configuration, differences in pheromone release rate resulted in significant differences in the proportions of moths performing oriented flight and landing behaviors. Additionally, there were apparent differences in the moths’ behaviors between the two sprayer configurations, although this requires confirmation with further experiments. This study provides evidence that both pheromone concentration and plume structure affect moth orientation behavior, and it demonstrates that care is needed when setting up experiments that use a piezo-electric release system to ensure optimal conditions for behavioral observations.
Abstract:
We use the elliptic reconstruction technique in combination with a duality approach to prove a posteriori error estimates for the fully discrete backward Euler scheme for linear parabolic equations. As an application, we combine our result with residual-based estimators from the a posteriori estimation of elliptic problems to derive space-error indicators and thus a fully practical version of the estimators bounding the error in the $\mathrm{L}_{\infty}(0,T;\mathrm{L}_2(\varOmega))$ norm. These estimators, which are of optimal order, extend those introduced by Eriksson and Johnson in 1991 by taking into account the error induced by mesh changes and by allowing for a more flexible use of the elliptic estimators. For comparison with previous results we also derive an energy-based a posteriori estimate for the $\mathrm{L}_{\infty}(0,T;\mathrm{L}_2(\varOmega))$-error which simplifies a previous one given by Lakkis and Makridakis in 2006. We then compare both estimators (duality vs. energy) in practical situations and draw conclusions.
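For orientation (generic notation assumed here, not quoted from the paper), the fully discrete backward Euler scheme and the norm in which the error is bounded take the form
\[
\frac{U^{n}-U^{n-1}}{\tau_n} + \mathcal{A}_n U^{n} = f^{n},
\qquad
\|u-U\|_{\mathrm{L}_\infty(0,T;\mathrm{L}_2(\varOmega))} = \sup_{0\le t\le T}\ \|u(t)-U(t)\|_{\mathrm{L}_2(\varOmega)},
\]
where $\tau_n$ is the $n$-th time step, $\mathcal{A}_n$ is a discrete elliptic operator on the (possibly changed) mesh at step $n$, and $U$ interpolates the $U^n$ in time; the mesh-change terms in the estimators account for transferring $U^{n-1}$ between successive meshes.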
Abstract:
In order to move the nodes in a moving mesh method, a time-stepping scheme is required that is ideally explicit and non-tangling (non-overtaking in one dimension (1-D)). Such a scheme is discussed in this paper, together with its drawbacks, and illustrated in 1-D in the context of a velocity-based Lagrangian conservation method applied to first-order and second-order examples which exhibit a regime change after node compression. An implementation in multiple dimensions is also described in some detail.
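A minimal sketch of what "non-tangling" means in 1-D (our own illustration, using plain forward Euler with a step-size restriction rather than the scheme discussed in the paper):

# Hypothetical illustration: move 1-D mesh nodes x_i with nodal velocities v_i
# using forward Euler, restricting the step so no node overtakes its neighbour,
# i.e. the new gaps x_{i+1} - x_i + dt*(v_{i+1} - v_i) stay positive.
import numpy as np

def move_nodes(x, v, dt_target, safety=0.9):
    """One explicit node-moving step with a non-tangling step restriction."""
    gaps = np.diff(x)        # current element widths (assumed > 0)
    closing = np.diff(v)     # v_{i+1} - v_i; negative where nodes approach
    with np.errstate(divide="ignore"):
        limits = np.where(closing < 0, gaps / (-closing), np.inf)
    dt = min(dt_target, safety * limits.min())   # largest safe step
    return x + dt * v, dt

# Usage: uniform nodes with a velocity field that compresses part of the mesh.
x = np.linspace(0.0, 1.0, 11)
v = -np.sin(2.0 * np.pi * x)             # hypothetical nodal velocities
x_new, dt_used = move_nodes(x, v, dt_target=0.1)
assert np.all(np.diff(x_new) > 0)        # mesh stays untangled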
Abstract:
We present and analyse a space–time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method, namely that trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space–time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell’s equations in one space dimension, we prove stability of the method, quasi-optimality, best approximation estimates for polynomial Trefftz spaces and (fully explicit) error bounds of high order in the meshwidth and in the polynomial degree. The analysis framework also applies to scalar wave problems and to Maxwell’s equations in higher space dimensions. Numerical experiments demonstrate the theoretical results and show faster convergence than the non-Trefftz version of the scheme.
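To make the Trefftz property concrete (an illustration with the simplest scaling, unit material coefficients, rather than the paper's notation): for Maxwell's equations in one space dimension, the discrete trial and test functions satisfy, on each space–time element $K$,
\[
\partial_t E + \partial_x H = 0,
\qquad
\partial_t H + \partial_x E = 0
\quad\text{in } K,
\]
so, for example, any pair $E = f(x-t)+g(x+t)$, $H = f(x-t)-g(x+t)$ with polynomials $f,g$ belongs to a polynomial Trefftz space on $K$.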
Abstract:
This guide has been co-authored by Naomi Flynn, an Associate Professor at The University of Reading, working with Chris Pim and Sarah Coles, who are specialist advisory teachers with Hampshire’s Ethnic Minority and Traveller Achievement Service (EMTAS). It was constructed with the support of teachers in primary and secondary schools in Hampshire, selected for their existing expertise in teaching EAL learners, who used the guidance for action research during the spring and summer of 2015. The guide is written principally to support teachers and learning support assistants working, in any educational setting, with EAL learners who are at any stage of fluency in the learning of English. It will also support senior leaders in their strategic response to the EAL learners in their schools. As with all MESH guides, it seeks to share knowledge with professionals in order to support the growth of evidence-informed practice that works in promoting the best in pupil outcomes.
Abstract:
We give an a priori analysis of a semi-discrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (SIAM J Math Anal 46(5):3518–3539, 2014). The estimate we derive is optimal in the $\mathrm{L}_\infty(0,T;\mathrm{dG})$ norm for the strain and the $\mathrm{L}_2(0,T;\mathrm{dG})$ norm for the velocity, where dG is an appropriate mesh-dependent $H^1$-like space.
Abstract:
In this work, we prove a weak Noether-type theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether’s first theorem (1918). We summarise extensive numerical tests illustrating the conservation of the discrete Noether law, using the p-Laplacian as an example, and derive a geometry-based adaptive algorithm in which an appropriate Noether quantity is the goal functional.
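For reference (a standard formulation assumed here, not quoted from the paper), the p-Laplacian model problem is the Euler–Lagrange equation of the functional
\[
J(u) = \int_\Omega \frac{1}{p}\,|\nabla u|^{p}\,\mathrm{d}x - \int_\Omega f\,u\,\mathrm{d}x,
\qquad\text{i.e.}\qquad
-\nabla\cdot\bigl(|\nabla u|^{p-2}\nabla u\bigr) = f \ \text{ in } \Omega,
\]
and Noether's first theorem associates a divergence-free (conserved) quantity with each continuous invariance of the functional, for instance with spatial translations when $f$ is constant.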
Abstract:
In this study we report detailed information on the internal structure of PNIPAM-b-PEG-b-PNIPAM nanoparticles formed by self-assembly in aqueous solutions upon an increase in temperature. NMR spectroscopy, light scattering and small-angle neutron scattering (SANS) were used to monitor different stages of nanoparticle formation as a function of temperature, providing insight into the fundamental processes involved. The presence of PEG in the copolymer structure significantly affects the formation of nanoparticles, making the transition occur over a broader temperature range. The crucial parameter that controls the transition is the PEG/PNIPAM ratio: for pure PNIPAM the transition is sharp, and the higher the PEG/PNIPAM ratio, the broader the transition. This behavior is explained by different mechanisms of PNIPAM block incorporation during nanoparticle formation at different PEG/PNIPAM ratios. Contrast variation experiments using SANS show that the structure of nanoparticles above cloud point temperatures for PNIPAM-b-PEG-b-PNIPAM copolymers is drastically different from the structure of PNIPAM mesoglobules. In contrast with pure PNIPAM mesoglobules, where solid-like particles and a chain network with a mesh size of 1-3 nm are present, nanoparticles formed from PNIPAM-b-PEG-b-PNIPAM copolymers have a non-uniform structure with “frozen” areas interconnected by single chains in Gaussian conformation. SANS data with deuterated “invisible” PEG blocks imply that PEG is uniformly distributed inside a nanoparticle. It is the kinetically flexible PEG blocks that affect nanoparticle formation by preventing PNIPAM microphase separation.