940 results for MESH


Relevance: 10.00%

Abstract:

Transport and deposition of charged inhaled aerosols in a double planar bifurcation representing generations three to five of the human respiratory system have been studied under a light-activity breathing condition. Both steady and oscillatory laminar inhalation airflows are considered. Particle trajectories are calculated in a Lagrangian reference frame, with particle motion governed by the fluid force exerted by the airflow, gravity, and electrostatic forces (both space-charge and image-charge forces). The particle-mesh method is selected to calculate the space-charge force. This numerical study investigates the deposition efficiency in the three-dimensional model for various particle sizes, charge levels, and inlet particle distributions. Numerical results indicate that particles carrying an adequate level of charge can improve deposition efficiency in the airway model.
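
A minimal sketch of the Lagrangian force balance described above, assuming Stokes drag and a precomputed electrostatic field; the parameter values and the fluid_velocity and efield callables are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def advance_particle(x, v, dt, fluid_velocity, efield,
                     d_p=1e-6, rho_p=1000.0, q=1.6e-17,
                     mu=1.8e-5, g=np.array([0.0, 0.0, -9.81])):
    """One explicit Euler step of the Lagrangian force balance:
    Stokes drag + gravity + electrostatic force (illustrative values only)."""
    m_p = rho_p * np.pi * d_p**3 / 6.0       # particle mass
    tau = rho_p * d_p**2 / (18.0 * mu)       # Stokes relaxation time
    a_drag = (fluid_velocity(x) - v) / tau   # drag towards local fluid velocity
    a_elec = q * efield(x) / m_p             # combined space/image-charge field
    a = a_drag + g + a_elec
    return x + dt * v, v + dt * a
```

In the study itself the drag law, charge model and time integrator would follow the paper's own choices; this only illustrates how the three forces enter the particle equation of motion.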

Relevance: 10.00%

Abstract:

A new numerical model of inhaled charged aerosols has been developed, based on a modified Weibel lung model. Both the velocity profiles (slug and parabolic flows) and the particle distributions (uniform and parabolic distributions) have been considered. Inhaled particles are modeled as a dilute dispersed-phase flow in which the particle motion is controlled by the fluid force and by external forces acting on the particles. This numerical study extends previous numerical studies by considering both space- and image-charge forces. Because the interaction forces arising from the space-charge effect are expensive to compute directly, the particle-mesh (PM) method is selected to calculate them. In the PM technique, the charges of all particles are assigned to a mesh to obtain the charge density; Poisson's equation for the electrostatic potential is then solved, and the electrostatic force acting on each individual particle is interpolated back from the mesh. It is assumed that humidity has no effect on the charged particles. The results show that several other significant factors also affect the deposition, such as the volume of the particle cloud, the velocity profile and the particle distribution. This study allows a better understanding of the electrostatic mechanism of aerosol transport and deposition in human airways.
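
A compact sketch of the particle-mesh cycle just described, written in 1-D with cloud-in-cell charge assignment and a spectral Poisson solve; the periodic boundaries, grid size and solver choice are simplifying assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def particle_mesh_force(xp, qp, L=1.0, n=64, eps0=8.854e-12):
    """Illustrative 1-D particle-mesh cycle: charge assignment (cloud-in-cell),
    Poisson solve for the potential, and force interpolation back to particles."""
    h = L / n
    # 1) assign particle charges to the mesh to obtain the charge density
    rho = np.zeros(n)
    g = xp / h
    i0 = np.floor(g).astype(int) % n
    w1 = g - np.floor(g)
    np.add.at(rho, i0, qp * (1.0 - w1) / h)
    np.add.at(rho, (i0 + 1) % n, qp * w1 / h)
    # 2) solve Poisson's equation  d2(phi)/dx2 = -rho/eps0  (spectral, periodic;
    #    the k = 0 mode, i.e. the mean charge, is dropped as in standard PM codes)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
    rho_hat = np.fft.fft(rho)
    phi_hat = np.zeros_like(rho_hat)
    phi_hat[1:] = rho_hat[1:] / (eps0 * k[1:]**2)
    phi = np.real(np.fft.ifft(phi_hat))
    # 3) electric field on the mesh, interpolated back to each particle
    E = -np.gradient(phi, h)
    Ep = E[i0] * (1.0 - w1) + E[(i0 + 1) % n] * w1
    return qp * Ep                      # electrostatic (space-charge) force
```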

Relevance: 10.00%

Abstract:

Tropical cyclones (TCs) are normally not studied at the individual level with Global Climate Models (GCMs), because the coarse grid spacing is often deemed insufficient for a realistic representation of the basic underlying processes. GCMs are indeed routinely deployed at low resolution in order to enable sufficiently long integrations, which means that only large-scale TC proxies are diagnosed. A new class of GCMs is emerging, however, which is capable of simulating TC-type vortices while retaining a horizontal resolution similar to that of operational NWP GCMs; their integration on the latest supercomputers enables the completion of long-term integrations. The UK-Japan Climate Collaboration (UJCC) and the UK-HiGEM projects have developed climate GCMs which can be run routinely for decades (with a grid spacing of 60 km) or centuries (with a grid spacing of 90 km); when coupled to the ocean GCM, a mesh of 1/3 degree provides eddy-permitting resolution. The 90 km resolution model has been developed entirely by the UK-HiGEM consortium (together with its 1/3 degree ocean component); the 60 km atmospheric GCM has been developed by UJCC, in collaboration with the Met Office Hadley Centre.

Relevance: 10.00%

Abstract:

In basic network transactions, datagrams traveling from source to destination are routed through numerous routers and paths, depending on which paths are free and uncongested; the resulting transmission routes can be excessively long, incurring greater delay, jitter and congestion and reducing throughput. One of the major problems of packet-switched networks is cell delay variation, or jitter, which arises from queuing delay and depends on the applied loading conditions. Delay, jitter accumulation across the nodes along a transmission route, and dropped packets add further complexity for multimedia traffic, because there is no guarantee that each traffic stream will be delivered within its own jitter constraints; the effects of jitter therefore need to be analyzed. IP routers forward all packets along a single path. Multi-Protocol Label Switching (MPLS), by contrast, separates packet forwarding from routing, enabling packets to use appropriate routes and allowing the behavior of transmission paths to be optimized and controlled, thereby correcting some of the shortfalls associated with IP routing. MPLS is therefore used in the analysis of effective transmission through the various networks. This paper analyzes the effects of delay, congestion, interference, jitter and packet loss on the transmission of signals from source to destination, and the impact of link failures and repair paths in various physical topologies, namely bus, star, mesh and hybrid, is analyzed under standard network conditions.
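
A minimal illustration, not taken from the paper, of how jitter can be quantified from per-packet delay samples using the RFC 3550 interarrival-jitter estimator; the packet timestamps in the example are hypothetical.

```python
def interarrival_jitter(send_times, recv_times):
    """RFC 3550-style smoothed jitter estimate from packet timestamps.
    Lost packets should simply be omitted from the two lists."""
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s                      # one-way delay of this packet
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # delay variation vs. previous packet
            jitter += (d - jitter) / 16.0    # exponential smoothing (gain 1/16)
        prev_transit = transit
    return jitter

# hypothetical timestamps (seconds): steady sending, variable queuing delay
send = [0.00, 0.02, 0.04, 0.06, 0.08]
recv = [0.050, 0.072, 0.095, 0.111, 0.135]
print(interarrival_jitter(send, recv))
```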

Relevance: 10.00%

Abstract:

Adaptive methods which “equidistribute” a given positive weight function are now used fairly widely for selecting discrete meshes. The disadvantage of such schemes is that the resulting mesh may not be smoothly varying. In this paper a technique is developed for equidistributing a function subject to constraints on the ratios of adjacent steps in the mesh. Given a weight function $f \ge 0$ on an interval $[a,b]$ and constants $c$ and $K$, the method produces a mesh with points $x_0 = a$, $x_{j+1} = x_j + h_j$ for $j = 0,1,\cdots,n-1$, and $x_n = b$ such that
\[ \int_{x_j}^{x_{j+1}} f \le c \quad\text{and}\quad \frac{1}{K} \le \frac{h_{j+1}}{h_j} \le K \quad \text{for } j = 0,1,\cdots,n-1. \]
A theoretical analysis of the procedure is presented, and numerical algorithms for implementing the method are given. Examples show that the procedure is effective in practice. Other types of constraints on equidistributing meshes are also discussed. The principal application of the procedure is to the solution of boundary value problems, where the weight function is generally some error indicator, and accuracy and convergence properties may depend on the smoothness of the mesh. Other practical applications include the regrading of statistical data.
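
A greedy sketch of the idea, under assumptions of ours rather than the paper's algorithm: march from $a$ to $b$, keeping the integral of the weight over each step below $c$ and never letting the step grow by more than a factor $K$. Enforcing the lower ratio bound $1/K \le h_{j+1}/h_j$ as well requires the fuller procedure analysed in the paper and is not attempted here.

```python
import numpy as np
from scipy.integrate import quad

def equidistribute(f, a, b, c, K, h0):
    """Greedy constrained equidistribution sketch (growth-limited steps only)."""
    x, h = [a], h0
    while b - x[-1] > 1e-12 * (b - a):
        h_new = min(K * h, b - x[-1])       # never grow faster than factor K
        while quad(f, x[-1], x[-1] + h_new)[0] > c:
            h_new *= 0.5                    # shrink until the integral fits
        x.append(x[-1] + h_new)
        h = h_new
    x[-1] = b                               # snap the last node onto b
    return np.array(x)

# example: a weight concentrated near x = 0 forces small steps there
mesh = equidistribute(lambda t: 1.0 / (0.01 + t), 0.0, 1.0, c=0.5, K=1.3, h0=1e-3)
print(len(mesh), mesh[:5])
```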

Relevance: 10.00%

Abstract:

Plane wave discontinuous Galerkin (PWDG) methods are a class of Trefftz-type methods for the spatial discretization of boundary value problems for the Helmholtz operator $-\Delta-\omega^2$, $\omega>0$. They include the so-called ultra weak variational formulation from [O. Cessenat and B. Després, SIAM J. Numer. Anal., 35 (1998), pp. 255–299]. This paper is concerned with the a priori convergence analysis of PWDG in the case of $p$-refinement, that is, the study of the asymptotic behavior of relevant error norms as the number of plane wave directions in the local trial spaces is increased. For convex domains in two space dimensions, we derive convergence rates, employing mesh skeleton-based norms, duality techniques from [P. Monk and D. Wang, Comput. Methods Appl. Mech. Engrg., 175 (1999), pp. 121–136], and plane wave approximation theory.
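
For concreteness (this is the standard construction of plane wave trial spaces, stated here for orientation rather than quoted from the paper), each local trial space is spanned by functions
\[ \psi_k(\mathbf{x}) = e^{i\omega \mathbf{d}_k\cdot \mathbf{x}}, \qquad |\mathbf{d}_k| = 1, \quad k = 1,\dots,p, \]
each of which satisfies $-\Delta\psi_k - \omega^2\psi_k = 0$ exactly. In two dimensions the directions are typically $\mathbf{d}_k = (\cos\theta_k,\sin\theta_k)$ with equispaced angles $\theta_k = 2\pi k/p$, and $p$-refinement means increasing the number $p$ of directions per element while keeping the mesh fixed.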

Relevance: 10.00%

Abstract:

In this paper, we extend to the time-harmonic Maxwell equations the p-version analysis technique developed in [R. Hiptmair, A. Moiola and I. Perugia, Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the p-version, SIAM J. Numer. Anal., 49 (2011), 264-284] for Trefftz-discontinuous Galerkin approximations of the Helmholtz problem. While error estimates in a mesh-skeleton norm are derived parallel to the Helmholtz case, the derivation of estimates in a mesh-independent norm requires new twists in the duality argument. The particular case where the local Trefftz approximation spaces are built of vector-valued plane wave functions is considered, and convergence rates are derived.
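
For orientation (a standard construction, not a quotation from the paper), the vector-valued plane waves referred to in the last sentence are fields of the form
\[ \mathbf{E}(\mathbf{x}) = \mathbf{p}\, e^{i\omega \mathbf{d}\cdot \mathbf{x}}, \qquad |\mathbf{d}| = 1, \quad \mathbf{p}\cdot\mathbf{d} = 0, \]
which satisfy the time-harmonic Maxwell equation $\nabla\times(\nabla\times\mathbf{E}) - \omega^2\mathbf{E} = \mathbf{0}$ exactly (with unit material coefficients), so the local approximation spaces are again genuine Trefftz spaces.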

Relevance: 10.00%

Abstract:

With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Because numerical weather prediction models must solve the governing finite-difference equations on such a grid lattice, objective analysis is a three-dimensional (or, more often, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with its separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years. Weather services have thus based their analysis not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified also for the conventional observations: we have fairly good coverage of surface observations eight times a day, and several upper-air stations make radiosonde and radiowind observations four times a day. If a 3-hour step is used in the analysis-forecasting cycle, instead of the 12 hours applied most often, all observations may without any difficulty be treated as synoptic. No observation would then be more than 90 minutes off time, and even during strong transient motion the observations would fall within a horizontal mesh of 500 km × 500 km.
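
To make the last claim concrete, with an assumed advection speed that is ours rather than the text's: a feature moving at $50\ \mathrm{m\,s^{-1}}$ is displaced by
\[ 50\ \mathrm{m\,s^{-1}} \times (90 \times 60)\ \mathrm{s} = 270\ \mathrm{km} \]
over 90 minutes, which is still well within a single 500 km × 500 km mesh cell.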

Relevance: 10.00%

Abstract:

Brain activity can be measured with several non-invasive neuroimaging modalities, but each modality has inherent limitations with respect to resolution, contrast and interpretability. It is hoped that multimodal integration will address these limitations by using the complementary features of already available data. However, purely statistical integration can prove problematic owing to the disparate signal sources. As an alternative, we propose here an advanced neural population model implemented on an anatomically sound cortical mesh with freely adjustable connectivity, which features proper signal expression through a realistic head model for the electroencephalogram (EEG), as well as a haemodynamic model for functional magnetic resonance imaging based on blood oxygen level dependent contrast (fMRI BOLD). It hence allows simultaneous and realistic predictions of EEG and fMRI BOLD from the same underlying model of neural activity. As proof of principle, we investigate here the influence on simulated brain activity of strengthening visual connectivity. In the future we plan to fit multimodal data with this neural population model. This promises novel, model-based insights into the brain's activity in sleep, rest and task conditions.
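
A highly simplified sketch of the forward-modelling idea described above, in which the same neural activity drives both measurement predictions; the lead-field matrix, the canonical haemodynamic response function and all dimensions are placeholders, not the model used in the paper.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, a1=6.0, a2=16.0, ratio=1/6.0):
    """Double-gamma haemodynamic response function (a common approximation)."""
    return gamma.pdf(t, a1) - ratio * gamma.pdf(t, a2)

def predict_eeg_and_bold(neural_activity, leadfield, dt=0.1):
    """Map one matrix of neural activity (sources x time) to both
    EEG (instantaneous linear mixing) and BOLD (HRF convolution)."""
    eeg = leadfield @ neural_activity                      # channels x time
    hrf = canonical_hrf(np.arange(0, 32, dt))
    bold = np.array([np.convolve(src, hrf)[:neural_activity.shape[1]]
                     for src in neural_activity])          # sources x time
    return eeg, bold

# toy example: 3 cortical sources, 5 EEG channels, 60 s of activity at 10 Hz
rng = np.random.default_rng(0)
activity = rng.standard_normal((3, 600))
L = rng.standard_normal((5, 3))                            # placeholder lead field
eeg, bold = predict_eeg_and_bold(activity, L)
```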

Relevance: 10.00%

Abstract:

As the integration of vertical axis wind turbines in the built environment is a promising alternative to horizontal axis wind turbines, a 2D computational investigation of an augmented wind turbine is proposed and analysed. In the initial CFD analysis, three parameters are carefully investigated: mesh resolution, turbulence model and time-step size. The mesh resolution and turbulence model affect the accuracy of the results, while the time-step sizes examined for the unsteady flow have little impact on the numerical results. In the CFD validation of the open rotor against secondary data, the numerical results agree well in terms of shape; however, a discrepancy of a factor of 2 is observed between numerical and experimental data. Subsequently, introducing an omnidirectional stator around the wind turbine increases the power and torque coefficients by around 30–35% compared with the open case, although attention must be paid to the orientation of the stator blades for optimum performance. The power and torque coefficients of the augmented wind turbine are found to be independent of the incident wind speeds considered.
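
For reference, the power and torque coefficients discussed above are normally defined as follows (standard wind-turbine definitions, not specific to this paper); the numerical values in the example are made up.

```python
def power_coefficient(torque, omega, rho, area, v):
    """Cp = P / (0.5 * rho * A * V^3), with mechanical power P = torque * omega."""
    return torque * omega / (0.5 * rho * area * v**3)

def torque_coefficient(torque, rho, area, radius, v):
    """Ct = T / (0.5 * rho * A * R * V^2)."""
    return torque / (0.5 * rho * area * radius * v**2)

# illustrative values only
print(power_coefficient(torque=1.2, omega=30.0, rho=1.225, area=0.6, v=10.0))
print(torque_coefficient(torque=1.2, rho=1.225, area=0.6, radius=0.3, v=10.0))
```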

Relevance: 10.00%

Abstract:

This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived from data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach using a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach to real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with the results of a similar experimental study. However, using our approach, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point instead of the 1 to 2 h it takes with the conventional finite element modeling approach.
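
A schematic illustration of the reconstruction idea, under assumptions of ours rather than the paper's actual reduced-order model: fit a fast surrogate from parameters to simulated detector readings offline, then invert it by least squares online. The forward simulator, surrogate form and dimensions are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 16))

def reference_model(p):
    """Placeholder standing in for the full finite element forward solver."""
    return np.tanh(p @ W)

# offline stage: sample the reference model and fit a linear surrogate y ~ A p + b
P_train = rng.uniform(0.0, 1.0, (200, 4))
Y_train = np.array([reference_model(p) for p in P_train])
X = np.hstack([P_train, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(X, Y_train, rcond=None)
A, b = coef[:-1], coef[-1]

# online stage: recover parameters from a measurement via least squares on the surrogate
def reconstruct(y_measured, p0=None):
    p0 = np.full(4, 0.5) if p0 is None else p0
    residual = lambda p: p @ A + b - y_measured
    return least_squares(residual, p0).x

p_true = rng.uniform(0.0, 1.0, 4)
print(reconstruct(reference_model(p_true)))
print(p_true)
```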

Relevance: 10.00%

Abstract:

We investigated the plume structure of a piezo-electric sprayer system, set up to release ethanol in a wind tunnel, using a fast-response mini-photoionization detector. We recorded the plume structure of four different piezo-sprayer configurations: the sprayer alone; with a 1.6-mm steel mesh shield; with a 3.2-mm steel mesh shield; and with a 5-cm circular upwind baffle. We measured a 12 × 12 mm core at the center of the plume, and both a horizontal and a vertical cross-section of the plume, all at 100, 200, and 400 mm downwind of the odor source. Significant differences in plume structure were found among all configurations in terms of conditional relative mean concentration, intermittency, ratio of peak concentration to conditional mean concentration, and cross-sectional area of the plume. We then measured the flight responses of the almond moth, Cadra cautella, to odor plumes generated with the sprayer alone and with the upwind-baffle piezo-sprayer configuration, releasing a 13:1 ratio of (9Z,12E)-tetradecadienyl acetate and (Z)-9-tetradecenyl acetate diluted in ethanol at release rates of 1, 10, 100, and 1,000 pg/min. For each configuration, differences in pheromone release rate resulted in significant differences in the proportions of moths performing oriented flight and landing behaviors. Additionally, there were apparent differences in the moths’ behaviors between the two sprayer configurations, although this requires confirmation with further experiments. This study provides evidence that both pheromone concentration and plume structure affect moth orientation behavior, and demonstrates that care is needed when setting up experiments that use a piezo-electric release system to ensure optimal conditions for behavioral observations.

Relevance: 10.00%

Abstract:

We use the elliptic reconstruction technique in combination with a duality approach to prove a posteriori error estimates for the fully discrete backward Euler scheme for linear parabolic equations. As an application, we combine our result with residual-based estimators from the a posteriori error estimation of elliptic problems to derive space-error indicators, and thus a fully practical version of the estimators bounding the error in the $\mathrm{L}_{\infty}(0,T;\mathrm{L}_2(\varOmega))$ norm. These estimators, which are of optimal order, extend those introduced by Eriksson and Johnson in 1991 by taking into account the error induced by mesh changes and by allowing a more flexible use of the elliptic estimators. For comparison with previous results we also derive an energy-based a posteriori estimate for the $\mathrm{L}_{\infty}(0,T;\mathrm{L}_2(\varOmega))$-error, which simplifies a previous one given by Lakkis and Makridakis in 2006. We then compare the two estimators (duality vs. energy) in practical situations and draw conclusions.
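
For orientation (this is the generic form of such a discretisation, not quoted from the paper): with time steps $k_n = t_n - t_{n-1}$ and $A_n$ denoting the discrete elliptic operator on the finite element mesh used at time $t_n$, the fully discrete backward Euler approximation $U^n \approx u(t_n)$ satisfies
\[ \frac{U^n - U^{n-1}}{k_n} + A_n U^n = f^n, \qquad n = 1,2,\dots, \]
and the estimators discussed above bound the error in the $\mathrm{L}_{\infty}(0,T;\mathrm{L}_2(\varOmega))$ norm, including the contribution that arises when the mesh changes between $t_{n-1}$ and $t_n$.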

Relevance: 10.00%

Abstract:

In order to move the nodes in a moving mesh method, a time-stepping scheme is required that is ideally explicit and non-tangling (non-overtaking in one dimension (1-D)). Such a scheme is discussed in this paper, together with its drawbacks, and illustrated in 1-D in the context of a velocity-based Lagrangian conservation method applied to first-order and second-order examples that exhibit a regime change after node compression. An implementation in multiple dimensions is also described in some detail.
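
A minimal sketch, illustrative rather than the paper's scheme, of one way to keep an explicit 1-D node update non-overtaking: limit the time step so that no pair of adjacent nodes can cross.

```python
import numpy as np

def safe_timestep(x, v, dt_max, safety=0.9):
    """Largest explicit step dt <= dt_max such that x + dt*v stays ordered.
    Adjacent nodes can only cross if the left node moves faster than the right."""
    dx = np.diff(x)                     # current node spacing (positive)
    dv = v[:-1] - v[1:]                 # closing speed of each node pair
    closing = dv > 0.0
    if not np.any(closing):
        return dt_max
    return min(dt_max, safety * np.min(dx[closing] / dv[closing]))

x = np.array([0.0, 0.3, 0.55, 1.0])
v = np.array([0.0, 1.0, 0.2, 0.0])      # the two middle nodes converge
dt = safe_timestep(x, v, dt_max=0.5)
x_new = x + dt * v                      # explicit, non-tangling update
print(dt, x_new)
```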

Relevance: 10.00%

Abstract:

We present and analyse a space–time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method, namely that the trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space–time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell’s equations in one space dimension, we prove stability of the method, quasi-optimality, best approximation estimates for polynomial Trefftz spaces, and (fully explicit) error bounds of high order in the meshwidth and in the polynomial degree. The analysis framework also applies to scalar wave problems and to Maxwell’s equations in higher space dimensions. Some numerical experiments demonstrate the theoretical results and the faster convergence compared to the non-Trefftz version of the scheme.
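
As a concrete illustration of the Trefftz property (a standard observation, not a statement taken from the paper): for the one-dimensional wave equation $\partial_{tt}u - c^2\partial_{xx}u = 0$, a local polynomial Trefftz space of degree $p$ on a space–time element can be taken as
\[ \mathbb{T}^p = \operatorname{span}\{\,(x-ct)^k,\ (x+ct)^k : 0 \le k \le p\,\}, \]
since every function in this span solves the equation exactly; the trial and test spaces of the method consist of such exact local solutions, coupled only weakly across the space–time mesh skeleton.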