956 results for finite difference methods


Relevance: 80.00%

Abstract:

For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed, in terms of side-lobe level, efficiency, and related metrics. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots. Moreover, that which has been presented features a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture which mitigates these problems, together with a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free, broadside radiation, high aperture efficiency, and good return loss. A traditional X-Slot CP element is employed with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated that this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices. Rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally.
A unit-cell, S-parameter based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit-cell and array performance are calculated via finite element method simulations. Results are confirmed via good agreement with finite-difference time-domain calculations. Array performance exhibiting grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency is presented.
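As a side note on the grating-lobe-free broadside claim above: a uniform linear array with uniform phase radiates a single broadside main lobe when the element spacing stays below one wavelength. The sketch below evaluates the array factor for an illustrative X-band configuration; the element count, spacing and frequency are assumptions, not values from the thesis.

```python
import numpy as np

# Array factor of a uniform linear array: with element spacing d < lambda,
# uniform-phase excitation produces a single broadside main lobe and no
# grating lobes. N, d and the frequency are illustrative X-band choices.
c0 = 3.0e8
f = 10.0e9                        # 10 GHz
wl = c0 / f
d = 0.7 * wl                      # spacing below one wavelength
N = 16                            # elements, uniform amplitude and phase

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)   # angle from broadside
psi = 2.0 * np.pi * d / wl * np.sin(theta)         # inter-element phase
AF = np.abs(np.exp(1j * np.outer(np.arange(N), psi)).sum(axis=0)) / N

peak_angle = theta[np.argmax(AF)]                  # main lobe at broadside
```

Because the visible range of the inter-element phase stays below 2π for d < λ, no second unity-level lobe (grating lobe) appears anywhere in real space.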

Relevance: 80.00%

Abstract:

A method for the introduction of strong discontinuities into a mesh will be developed. This method, applicable to a number of eXtended Finite Element Methods (XFEM) with intra-element strong discontinuities, will be demonstrated with one specific method: the Generalized Cohesive Element (GCE) method. The algorithm utilizes a subgraph mesh representation which may insert the GCE either adaptively during the course of the analysis or a priori. Using this subgraphing algorithm, the insertion time is O(n) in the number of insertions. Numerical examples are presented demonstrating the advantages of the subgraph insertion method.

Relevance: 80.00%

Abstract:

Heat transfer is one of the most critical issues in the design and implementation of large-scale microwave heating systems, in which improvement of the microwave absorption of materials and suppression of uneven temperature distribution are the two main objectives. The present work focuses on the analysis of heat transfer in microwave heating for achieving highly efficient microwave-assisted steelmaking through investigations of the following aspects: (1) characterization of microwave dissipation using the derived equations, (2) quantification of magnetic loss, (3) determination of microwave absorption properties of materials, (4) modeling of microwave propagation, (5) simulation of heat transfer, and (6) improvement of microwave absorption and heating uniformity. Microwave heating is attributed to the heat generation in materials, which depends on the microwave dissipation. To theoretically characterize microwave heating, simplified equations were derived for determining the transverse electromagnetic mode (TEM) power penetration depth, microwave field attenuation length, and half-power depth of microwaves in materials having both magnetic and dielectric responses. This was followed by the development of a simplified equation for quantifying magnetic loss in materials under microwave irradiation, demonstrating the importance of magnetic loss in microwave heating. The permittivity and permeability of various materials, namely hematite, magnetite concentrate, wüstite, and coal, were measured, and microwave loss calculations for these materials were carried out. The results suggest that magnetic loss can play a major role in the heating of magnetic dielectrics. Microwave propagation in various media was predicted using the finite-difference time-domain method. For lossy magnetic dielectrics, the dissipation of microwaves in the medium is ascribed to the decay of both electric and magnetic fields.
The heat transfer process in microwave heating of magnetite, a typical magnetic dielectric, was simulated using an explicit finite-difference approach. It is demonstrated that the heat generation due to microwave irradiation dominates the initial temperature rise in the heating, and that heat radiation heavily affects the temperature distribution, giving rise to a hot spot in the predicted temperature profile. Microwave heating at 915 MHz exhibits better heating homogeneity than that at 2450 MHz due to the larger microwave penetration depth. To minimize or avoid temperature nonuniformity during microwave heating, the optimization of object dimension should be considered. The calculated reflection loss over the temperature range of heating is found to be useful for rapid optimization of the absorber dimension, which increases microwave absorption and achieves relatively uniform heating. To further improve the heating effectiveness, a function for evaluating absorber impedance matching in microwave heating was proposed. It is found that maximum absorption is associated with perfect impedance matching, which can be achieved either by selecting a reasonable sample dimension or by modifying the microwave parameters of the sample.
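The combination of an explicit finite-difference conduction scheme with a volumetric microwave source decaying over the power penetration depth can be sketched in 1D as follows. All material constants, the slab geometry and the source magnitude are illustrative assumptions, not the values used in the work.

```python
import numpy as np

# Explicit finite-difference heat conduction in a 1D slab with a volumetric
# microwave source decaying over the power penetration depth Dp.
# All material constants and the source magnitude are illustrative.
L, nx = 0.1, 101                # slab thickness [m], grid points
dx = L / (nx - 1)
alpha = 1.0e-6                  # thermal diffusivity [m^2/s]
rho_cp = 3.0e6                  # volumetric heat capacity [J/(m^3 K)]
Dp = 0.02                       # power penetration depth [m]
q0 = 5.0e5                      # volumetric heat generation at surface [W/m^3]
dt = 0.4 * dx**2 / alpha        # below the explicit stability limit of 0.5

x = np.linspace(0.0, L, nx)
q = q0 * np.exp(-x / Dp)        # dissipated power decays into the slab
T = np.full(nx, 300.0)          # initial temperature [K]

for _ in range(2000):           # ~800 s of heating
    Tn = T.copy()
    T[1:-1] = (Tn[1:-1]
               + alpha * dt / dx**2 * (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
               + dt * q[1:-1] / rho_cp)
    T[0], T[-1] = T[1], T[-2]   # insulated boundaries
# The irradiated face heats most, producing a nonuniform profile.
```

The exponential source term is the 1D analogue of Lambert-law power dissipation; a deeper penetration depth flattens the resulting temperature profile, consistent with the better homogeneity reported at 915 MHz.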

Relevance: 80.00%

Abstract:

The maximum principle is an important property of solutions to partial differential equations (PDEs), so there is great interest in designing high-order numerical schemes that maintain this property. In this thesis, our particular interest is solving the convection-dominated diffusion equation. We first review a nonconventional maximum principle preserving (MPP) high-order finite volume (FV) WENO scheme, and then propose a new parametrized MPP high-order finite difference (FD) WENO framework, generalized from the one solving hyperbolic conservation laws. A formal analysis is presented to show that a third-order finite difference scheme with these parametrized MPP flux limiters maintains third-order accuracy without extra CFL constraint when the low-order monotone flux is chosen appropriately. Numerical tests in both one- and two-dimensional cases are performed on the simulation of the incompressible Navier-Stokes equations in vorticity stream-function formulation and on several other problems to show the effectiveness of the proposed method.

Relevance: 80.00%

Abstract:

Biogeochemical processes in the coastal region, including the coastal area of the Great Lakes, are of great importance due to complex physical, chemical and biological characteristics that differ from those of either the adjoining land or open-water systems. Particle-reactive radioisotopes, both naturally occurring (210Pb, 210Po and 7Be) and man-made (137Cs), have proven to be useful tracers for these processes in many systems. However, a systematic isotope study on the northwest coast of the Keweenaw Peninsula in Lake Superior has not yet been performed. In this dissertation research, field sampling, laboratory measurements and numerical modeling were conducted to understand the biogeochemistry of the radioisotope tracers and some particle-related coastal processes. In the first part of the dissertation, radioisotope activities of 210Po and 210Pb were measured in a variety of samples (dissolved, suspended particles, sediment trap materials, surficial sediment). A complete picture of the distribution and disequilibrium of this pair of isotopes was drawn. The application of a simple box model utilizing these field observations reveals short isotope residence times in the water column and a significant contribution of sediment resuspension (for both particles and isotopes). The results imply a highly dynamic coastal region. In the second part of this dissertation, this conclusion is examined further. Based on intensive sediment coring, the spatial distribution of isotope inventories (mainly 210Pb, 137Cs and 7Be) in the nearshore region was determined. Isotope-based focusing factors categorized most of the sampling sites as non-depositional or temporarily depositional zones. A two-dimensional steady-state box-in-series model was developed and applied to individual transects with the 210Pb inventories as model input. The modeling framework included both the water column and the upper sediments down to the depth of unsupported 210Pb penetration.
The model was used to predict isotope residence times and cross-margin fluxes of sediments and isotopes at different locations along each transect. The time scale for sediment focusing from the nearshore to offshore regions of the transect was on the order of 10 years. The possibility of longshore sediment movement was indicated by high inventory ratios of 137Cs:210Pb. Local deposition of fine particles, including fresh organic carbon, may explain the observed distribution of benthic organisms such as Diporeia. In the last part of this dissertation, the isotope tracers 210Pb and 210Po were coupled into a hydrodynamic model for Lake Superior. The model was modified from an existing 2-D finite difference physical-biological model which has previously been applied successfully to Lake Superior. Using the field results from part one of this dissertation as initial conditions, the model was used to predict the isotope distribution in the water column; reasonable results were achieved. The modeling experiments demonstrated the potential for using a hydrodynamic model to study radioisotope biogeochemistry in the lake, although further refinements are necessary.
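At steady state, the box-model estimate of an isotope residence time reduces to the water-column inventory divided by the total removal flux. A minimal single-box sketch, with illustrative numbers rather than the dissertation's field data:

```python
# Steady-state single-box estimate of a particle-reactive isotope's
# residence time in the water column: tau = inventory / total removal flux.
# The fluxes and inventory below are illustrative, not the field data.
lambda_pb210 = 0.0311      # 210Pb decay constant [1/yr] (half-life ~22.3 yr)
atm_flux = 0.5             # atmospheric deposition [dpm/cm^2/yr]
resusp_flux = 0.3          # input from sediment resuspension [dpm/cm^2/yr]
inventory = 0.08           # water-column inventory [dpm/cm^2]

# At steady state, inputs balance decay plus settling removal.
settling_flux = atm_flux + resusp_flux - lambda_pb210 * inventory
tau_yr = inventory / (settling_flux + lambda_pb210 * inventory)
tau_days = 365.0 * tau_yr  # a short residence time implies a dynamic system
```

With these numbers the residence time comes out around a month, i.e. scavenging and settling remove the isotope far faster than radioactive decay, which is the kind of short residence time the field observations indicate.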

Relevance: 80.00%

Abstract:

We introduce and analyze hp-version discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems in three-dimensional polyhedral domains. To resolve possible corner-, edge- and corner-edge singularities, we consider hexahedral meshes that are geometrically and anisotropically refined toward the corresponding neighborhoods. Similarly, the local polynomial degrees are increased linearly and possibly anisotropically away from singularities. We design interior penalty hp-dG methods and prove that they are well-defined for problems with singular solutions and stable under the proposed hp-refinements. We establish (abstract) error bounds that will allow us to prove exponential rates of convergence in the second part of this work.
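In 1D, a geometric mesh graded toward a singularity at the origin, paired with a linearly increasing polynomial degree vector, can be sketched as follows; the grading factor σ, the layer count and the degree slope s are illustrative choices, not prescriptions from the paper.

```python
import math

# sigma-geometric grading toward a corner singularity at x = 0 in 1D:
# layer edges x_k = sigma**(n-k), k = 1..n, with the local polynomial
# degree increasing s-linearly with distance from the singularity.
sigma = 0.5            # geometric grading factor, 0 < sigma < 1
n = 6                  # number of refinement layers
s = 1.0                # degree slope

edges = [0.0] + [sigma ** (n - k) for k in range(1, n + 1)]
degrees = [max(1, math.ceil(s * k)) for k in range(1, n + 1)]
# Layers shrink geometrically toward 0; degrees grow linearly away from it.
```

Each layer is a fixed factor 1/σ larger than its inner neighbor, so the smallest element scales like σ^(n-1); combining this grading with the linear degree increase is what yields the exponential convergence rates proved in the second part of the work.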

Relevance: 80.00%

Abstract:

The goal of this paper is to establish exponential convergence of $hp$-version interior penalty (IP) discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems with homogeneous Dirichlet boundary conditions and piecewise analytic data in three-dimensional polyhedral domains. More precisely, we shall analyze the convergence of the $hp$-IP dG methods considered in [D. Schötzau, C. Schwab, T. P. Wihler, SIAM J. Numer. Anal., 51 (2013), pp. 1610--1633] based on axiparallel $\sigma$-geometric anisotropic meshes and $s$-linear anisotropic polynomial degree distributions.

Relevance: 80.00%

Abstract:

BACKGROUND Morbidity and mortality in T1DM depend on metabolic control, which is assessed by HbA1c measurements every 3-4 months. Patients' self-perception of glycemic control depends on daily blood glucose monitoring. Little is known about the congruence of patients' and professionals' perception of metabolic control in T1DM. OBJECTIVE To assess patients' actual self-perception and the objective assessment (HbA1c) of metabolic control in children and adolescents with T1DM, and to investigate possible factors involved in any difference. METHODS Patients with T1DM aged 8-18 years were recruited into a cross-sectional, retrospective and prospective cohort study. Data collection consisted of clinical details, measured HbA1c, self-monitored blood glucose values and questionnaires assessing patients' and professionals' judgment of metabolic control. RESULTS 91 patients participated. Mean HbA1c was 8.03%. HbA1c was higher in patients with a diabetes duration > 2 years (p = 0.025) and in patients of lower socioeconomic level (p = 0.032). No significant correlation was found for self-perception of metabolic control in well and poorly controlled patients. We found a trend towards false-positive memory of the last HbA1c in patients with an HbA1c > 8.5% (p = 0.069), but no difference between well and poorly controlled patients in knowledge of the target HbA1c. CONCLUSIONS T1DM patients are aware of a target HbA1c representing good metabolic control. Poorly controlled patients appear to have a poorer recollection of their HbA1c. Self-perception of actual metabolic control is similar in well and poorly controlled children and adolescents with T1DM. Therefore, professionals should pay special attention to ensuring that poorly controlled T1DM patients perceive their HbA1c correctly.

Relevance: 80.00%

Abstract:

Contraction, strike-slip, and extension displacements along the Hikurangi margin northeast of the North Island of New Zealand coincide with large lateral gradients in material properties. We use a finite-difference code utilizing elastic and elastic-plastic rheologies to build large-scale, three-dimensional numerical models which investigate the influence of material properties on velocity partitioning within oblique subduction zones. Rheological variation in the oblique models is constrained by seismic velocity and attenuation information available for the Hikurangi margin. We compare the effect of weakly versus strongly coupled subduction interfaces on the development of extension and the partitioning of velocity components for orthogonal and oblique convergence and include the effect of ponded sediments beneath the Raukumara Peninsula. Extension and velocity partitioning occur if the subduction interface is weak, but neither develops if the subduction interface is strong. The simple mechanical model incorporating rheological variation based on seismic observations produces kinematics that closely match those published from the Hikurangi margin. These include extension within the Taupo Volcanic Zone, uplift over ponded sediments, and dextral contraction to the south.

Relevance: 80.00%

Abstract:

Patients suffering from cystic fibrosis (CF) show thick secretions, mucus plugging and bronchiectasis in bronchial and alveolar ducts. This results in substantial structural changes of the airway morphology and heterogeneous ventilation. Disease progression and treatment effects are monitored by so-called gas washout tests, where the change in concentration of an inert gas is measured over a single breath or multiple breaths. The test outcome, based on the profile of the measured concentration, is a marker for the severity of the ventilation inhomogeneity, which is strongly affected by the airway morphology. However, it is hard to localize underlying obstructions to specific parts of the airways, especially if they occur in the lung periphery. In order to support the analysis of lung function tests (e.g. multi-breath washout), we developed a numerical model of the entire airway tree, coupling a lumped parameter model for the lung ventilation with a 4th-order accurate finite difference model of a 1D advection-diffusion equation for the transport of an inert gas. The boundary conditions for the flow problem comprise the pressure and flow profile at the mouth, which is typically known from clinical washout tests. The natural asymmetry of the lung morphology is approximated by a generic, fractal, asymmetric branching scheme applied to the conducting airways. A conducting airway ends when its dimension falls below a predefined limit; a model acinus is then connected to each terminal airway. The morphology of an acinus unit comprises a network of expandable cells. A regional, linear constitutive law describes the pressure-volume relation between the pleural gap and the acinus. The cyclic expansion (breathing) of each acinus unit depends on the resistance of the feeding airway and on the flow resistance and stiffness of the cells themselves.
Special care was taken in the development of a conservative numerical scheme for the gas transport across bifurcations, handling spatially and temporally varying advective and diffusive fluxes over a wide range of scales. Implicit time integration was applied to account for the numerical stiffness resulting from the discretized transport equation. Local or regional modifications of the airway dimension, resistance or tissue stiffness are introduced to mimic pathological airway restrictions typical for CF. This leads to a more heterogeneous ventilation of the model lung. As a result, the concentration in some distal parts of the lung model remains elevated for a longer duration. The inert gas concentration at the mouth towards the end of the expirations is composed of gas from regions with very different washout efficiency. This results in a steeper slope of the corresponding part of the washout profile.
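A minimal sketch of implicit (backward Euler) time integration for 1D advection-diffusion transport on a single airway segment, which sidesteps the stiffness of the discretized operator; the geometry, velocity and diffusivity below are illustrative, not the model's values, and a second-order stencil stands in for the model's 4th-order scheme.

```python
import numpy as np

# Backward-Euler (implicit) discretization of c_t + v c_x = D c_xx on one
# airway segment: Dirichlet inlet, zero-gradient outlet. Implicit time
# integration absorbs the stiffness of the discretized transport operator.
nx, L = 101, 0.05               # grid points, segment length [m]
dx = L / (nx - 1)
v, D = 0.2, 7.0e-5              # advection speed [m/s], gas diffusivity [m^2/s]
dt = 1.0e-3                     # time step [s]

# (I + dt*A) c^{n+1} = c^n, with A = v d/dx - D d2/dx2 (central differences).
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = -v / (2 * dx) - D / dx**2
    A[i, i]     = 2 * D / dx**2
    A[i, i + 1] =  v / (2 * dx) - D / dx**2
M = np.eye(nx) + dt * A
M[0, :] = 0.0;  M[0, 0] = 1.0                      # Dirichlet inlet
M[-1, :] = 0.0; M[-1, -1] = 1.0; M[-1, -2] = -1.0  # zero-gradient outlet

c = np.zeros(nx)                 # inert gas initially absent in the segment
for _ in range(100):             # 0.1 s of washin at the inlet
    rhs = c.copy()
    rhs[0], rhs[-1] = 1.0, 0.0   # unit inlet concentration, flat outlet
    c = np.linalg.solve(M, rhs)
```

The implicit system matrix is assembled once and reused every step; in a full airway-tree model the same idea applies per segment, with conservative flux coupling at the bifurcations.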

Relevance: 80.00%

Abstract:

BACKGROUND Meta-analyses of continuous outcomes typically provide enough information for decision-makers to evaluate the extent to which chance can explain apparent differences between interventions. The interpretation of the magnitude of these differences (from trivial to large) can, however, be challenging. We investigated clinicians' understanding and perceptions of usefulness of 6 statistical formats for presenting continuous outcomes from meta-analyses (standardized mean difference, minimal important difference units, mean difference in natural units, ratio of means, relative risk and risk difference). METHODS We invited 610 staff and trainees in internal medicine and family medicine programs in 8 countries to participate. Paper-based, self-administered questionnaires presented summary estimates of hypothetical interventions versus placebo for chronic pain. The estimates showed either a small or a large effect for each of the 6 statistical formats for presenting continuous outcomes. Questions addressed participants' understanding of the magnitude of treatment effects and their perception of the usefulness of the presentation format. We randomly assigned participants to 1 of 4 versions of the questionnaire, each with a different effect size (large or small) and presentation order for the 6 formats (1 to 6, or 6 to 1). RESULTS Overall, 531 (87.0%) of the clinicians responded. Respondents best understood risk difference, followed by relative risk and ratio of means. Similarly, they perceived the dichotomous presentation of continuous outcomes (relative risk and risk difference) to be most useful. Presenting results as a standardized mean difference, the longest standing and most widely used approach, was poorly understood and perceived as least useful. INTERPRETATION None of the presentation formats were well understood or perceived as extremely useful.
Clinicians best understood the dichotomous presentations of continuous outcomes and perceived them to be the most useful. Further initiatives to help clinicians better grasp the magnitude of the treatment effect are needed.
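For concreteness, all six formats can be computed from a single set of summary statistics. The numbers below (pain-score means, SD, MID, responder cut-point) are hypothetical, chosen only to show how each format presents the same underlying effect.

```python
import math

# The six formats from one hypothetical summary: pain score (0-10),
# treatment mean 3.5, placebo mean 5.0, pooled SD 2.0, minimal important
# difference (MID) 1.0, responder cut-point < 4. All values are invented.
mean_t, mean_p, sd, mid = 3.5, 5.0, 2.0, 1.0

mean_diff = mean_p - mean_t            # mean difference in natural units
smd = mean_diff / sd                   # standardized mean difference
mid_units = mean_diff / mid            # difference in MID units
rom = mean_t / mean_p                  # ratio of means

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Dichotomization assuming normally distributed scores in each arm.
p_t = norm_cdf((4.0 - mean_t) / sd)    # responder proportion, treatment
p_p = norm_cdf((4.0 - mean_p) / sd)    # responder proportion, placebo
rr = p_t / p_p                         # relative risk (of responding)
rd = p_t - p_p                         # risk difference
```

The dichotomized quantities (relative risk and risk difference), which the surveyed clinicians understood best, discard the continuous scale but are derived directly from the same means and SD.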

Relevance: 80.00%

Abstract:

We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L2-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.

Relevance: 80.00%

Abstract:

We have recently demonstrated a biosensor based on a lattice of SU8 pillars on a 1 μm SiO2/Si wafer, measuring vertical reflectivity as a function of wavelength. Biodetection has been proven with the combination of Bovine Serum Albumin (BSA) protein and its antibody (antiBSA). A BSA layer is attached to the pillars; the biorecognition of antiBSA involves a shift in the reflectivity curve, related to the concentration of antiBSA. A detection limit on the order of 2 ng/ml is achieved for a rhombic lattice of pillars with a lattice parameter (a) of 800 nm, a height (h) of 420 nm and a diameter (d) of 200 nm. These results correlate with calculations using the 3D finite-difference time-domain method. A simplified 2D model is proposed, consisting of a multilayer model where the pillars are turned into a 420 nm layer with an effective refractive index obtained using the Beam Propagation Method (BPM) algorithm. Results provided by this model are in good correlation with experimental data, reducing the computation time from one day to 15 minutes and giving a fast but accurate tool to optimize the design, maximize sensitivity, and analyze the influence of different variables (diameter, height and lattice parameter). Sensitivity is obtained for a variety of configurations, reaching a limit of detection under 1 ng/ml. The optimum design is chosen not only for its sensitivity but also for its feasibility, from both the fabrication (limited by aspect ratio and proximity of the pillars) and the fluidic point of view. (© 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
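The simplified multilayer model can be sketched with a standard transfer-matrix reflectivity calculation. The effective index, the constant (dispersion-free) SiO2 and Si indices, and the 5 nm protein-layer perturbation below are illustrative assumptions; the BPM extraction of the effective index is not reproduced here.

```python
import numpy as np

# Normal-incidence reflectance of the simplified stack
# air / effective pillar layer (420 nm) / SiO2 (1 um) / Si substrate,
# via the standard characteristic (transfer) matrix method. The constant
# indices (n_eff = 1.25, SiO2 = 1.45, Si = 3.7) are illustrative.
def reflectivity(wl_nm, n_layers, d_layers_nm, n_in=1.0, n_sub=3.7):
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers_nm):
        delta = 2.0 * np.pi * n * d / wl_nm         # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub], dtype=complex)
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

wls = np.linspace(500.0, 900.0, 401)
bare = [reflectivity(w, [1.25, 1.45], [420.0, 1000.0]) for w in wls]
# A thin adsorbed protein layer (here 5 nm, n = 1.45, placed on top as a
# stand-in for binding on the pillars) perturbs the interference curve.
coated = [reflectivity(w, [1.45, 1.25, 1.45], [5.0, 420.0, 1000.0]) for w in wls]
dmax = max(abs(cb - b) for cb, b in zip(coated, bare))
```

Sweeping such a model over geometry parameters is what makes the minutes-scale design optimization mentioned above possible, compared with full 3D FDTD runs.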

Relevance: 80.00%

Abstract:

The aim of this paper was to accurately estimate the local truncation error of partial differential equations that are numerically solved using a finite difference or finite volume approach on structured and unstructured meshes. In this work, we approximated the local truncation error using the τ-estimation procedure, which compares the residuals on a sequence of grids with different spacing. First, we focused the analysis on one-dimensional scalar linear and non-linear test cases to examine the accuracy of the estimation of the truncation error for both finite difference and finite volume approaches on different grid topologies. Then, we extended the analysis to two-dimensional problems: first on linear and non-linear scalar equations and finally on the Euler equations. We demonstrated that this approach yields a highly accurate estimation of the truncation error if certain conditions are fulfilled. These conditions are related to the accuracy of the restriction operators, the choice of the boundary conditions, the distortion of the grids and the magnitude of the iteration error.
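A minimal sketch of the τ-estimation idea on a 1D Poisson model problem: the fine-grid solution is restricted (by injection) to the coarse grid, and the coarse-grid residual estimates the truncation error. For a second-order scheme this recovers about 3/4 of the true coarse-grid truncation error, since τ_2h - τ_h = (3/4)τ_2h. The model problem and grid sizes are illustrative choices, not the paper's test cases.

```python
import numpy as np

# tau-estimation for -u'' = f on (0,1), u(0)=u(1)=0,
# with f = pi^2 sin(pi x) and exact solution u = sin(pi x).
def solve_poisson(n):
    """Second-order central-difference solve on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f)

def apply_L(u, h):
    """Apply the central operator to interior values (zero boundaries)."""
    up = np.pad(u, 1)
    return (2.0 * up[1:-1] - up[2:] - up[:-2]) / h**2

n_f = 63                                  # fine grid, h = 1/64
x_f, u_f = solve_poisson(n_f)
n_c = 31                                  # coarse grid, H = 2h = 1/32
H = 1.0 / (n_c + 1)
u_r = u_f[1::2]                           # restriction by injection
x_c = x_f[1::2]
f_c = np.pi**2 * np.sin(np.pi * x_c)

tau_est = apply_L(u_r, H) - f_c                    # tau-estimate
tau_exact = apply_L(np.sin(np.pi * x_c), H) - f_c  # true coarse truncation error
ratio = np.max(np.abs(tau_est)) / np.max(np.abs(tau_exact))
```

The ratio of estimated to true truncation error lands near 0.75, reflecting that the procedure measures the difference between the coarse- and fine-grid truncation errors; accurate restriction and converged fine-grid solutions are exactly the conditions the paper identifies.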

Relevance: 80.00%

Abstract:

Global linear instability theory is concerned with the temporal or spatial development of small-amplitude perturbations superposed upon laminar steady or time-periodic three-dimensional flows, which are inhomogeneous in two (and periodic in one) or all three spatial directions [1]. The theory addresses flows developing in complex geometries, in which the parallel or weakly nonparallel basic flow approximation invoked by classic linear stability theory does not hold. As such, global linear theory is called to fill the gap in research into stability and transition in flows over or through complex geometries. Historically, global linear instability has been (and still is) concerned with the solution of multi-dimensional eigenvalue problems; the maturing of non-modal linear instability ideas in simple parallel flows during the last decade of the last century [2-4] has given rise to the investigation of transient growth scenarios in an ever increasing variety of complex flows. After a brief exposition of the theory, connections are sought with established approaches for structure identification in flows, such as proper orthogonal decomposition and topology theory in the laminar regime, and open areas for future research, mainly concerning turbulent and three-dimensional flows, are highlighted. Recent results obtained in our group are reported in both the time-stepping and the matrix-forming approaches to global linear theory. In the first context, progress has been made in implementing a Jacobian-free Newton-Krylov method into a standard finite-volume aerodynamic code, such that global linear instability results may now be obtained in compressible flows of aeronautical interest.
In the second context, a new stable, very high-order finite difference method is implemented for the spatial discretization of the operators describing the spatial BiGlobal EVP, the PSE-3D and the TriGlobal EVP; combined with sparse matrix treatment, all of these problems may now be solved on standard desktop computers.
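The combination of finite-difference discretization with sparse matrix treatment for an eigenvalue problem can be sketched on a 1D model problem (assuming SciPy is available). A second-order stencil stands in for the paper's stable very high-order scheme; shift-invert targets the smallest eigenvalues, as is typical for stability EVPs.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Sparse finite-difference discretization of the 1D model EVP
# -u'' = lambda * u on (0, pi), u(0) = u(pi) = 0, exact lambda_k = k^2.
n = 400
h = np.pi / (n + 1)
main = np.full(n, 2.0) / h**2
off = np.full(n - 1, -1.0) / h**2
A = diags([off, main, off], [-1, 0, 1], format='csc')

# Shift-invert about sigma = 0 recovers the leading (smallest) eigenvalues.
vals = eigsh(A, k=3, sigma=0.0, which='LM', return_eigenvectors=False)
lam = np.sort(vals)             # approximately [1, 4, 9]
```

The sparse banded structure is what keeps memory and factorization costs low enough for desktop hardware; the BiGlobal and TriGlobal operators of the paper are much larger but share this sparsity.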