40 results for High-Order Accurate Scheme

at University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipeline implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound on the highest order of stable filters is given. The minimum-order stable linear predictor is then obtained by solving an optimization problem. The popular genetic algorithm approach is adopted since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
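
The abstract describes the search only at a high level; a minimal sketch of the idea in Python might look as follows. Stability is enforced by checking that all poles of the predictor lie inside the unit circle; the fitness function, genetic operators and parameters are illustrative assumptions, not the authors' exact design.

    import numpy as np

    rng = np.random.default_rng(0)

    def is_stable(a):
        # Poles of 1/A(z), with A(z) = 1 + a1 z^-1 + ... + ap z^-p,
        # must lie strictly inside the unit circle.
        return np.all(np.abs(np.roots(np.concatenate(([1.0], a)))) < 1.0)

    def fitness(a, x):
        # Mean squared one-step linear prediction error of signal x,
        # with a heavy penalty for unstable coefficient sets.
        p = len(a)
        pred = -np.array([a @ x[t - p:t][::-1] for t in range(p, len(x))])
        err = np.mean((x[p:] - pred) ** 2)
        return err if is_stable(a) else err + 1e6

    def ga_search(x, order, pop=40, gens=200, sigma=0.1):
        P = rng.normal(0.0, 0.3, (pop, order))
        for _ in range(gens):
            f = np.array([fitness(a, x) for a in P])
            elite = P[np.argsort(f)[: pop // 2]]          # truncation selection
            P = np.vstack([elite, elite + rng.normal(0.0, sigma, elite.shape)])
        return min(P, key=lambda a: fitness(a, x))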

Relevance:

100.00%

Publisher:

Abstract:

CULTURE is an Artificial Life simulation that aims to provide primary school children with opportunities to become actively engaged in the high-order thinking processes of problem solving and critical thinking. A preliminary evaluation of CULTURE has found that it offers the freedom for children to take part in process-oriented learning experiences. Through providing children with opportunities to make inferences, validate results, explain discoveries and analyse situations, CULTURE encourages the development of high-order thinking skills. The evaluation found that CULTURE allows users to autonomously explore the important scientific concepts of life and living, and energy and change within a software environment that children find enjoyable and easy to use.

Relevance:

100.00%

Publisher:

Abstract:

Nitric oxide (NO) plays a controversial role in the pathophysiology of sepsis and septic shock. Its vasodilatory effects are well known, but it also has pro- and anti-inflammatory properties, assumes crucial importance in antimicrobial host defense, may act as an oxidant as well as an antioxidant, and is said to be a vital poison for the immune and inflammatory network. Large amounts of NO and peroxynitrite are responsible for hypotension, vasoplegia, cellular suffocation, apoptosis, lactic acidosis, and ultimately multiorgan failure. Therefore, NO synthase (NOS) inhibitors were developed to reverse the deleterious effects of NO. Studies using these compounds have not met with uniform success, however, and a trial using the nonselective NOS inhibitor N(G)-methyl-L-arginine hydrochloride was terminated prematurely because of increased mortality in the treatment arm despite improved shock resolution. Thus, the issue of NOS inhibition in sepsis remains a matter of debate. Several publications have emphasized the differences in clinical applicability between data obtained from unresuscitated, hypodynamic rodent models using a pretreatment approach and resuscitated, hyperdynamic models in high-order species using posttreatment approaches. Therefore, the present review focuses on clinically relevant large-animal studies of endotoxin- or living bacteria-induced, hyperdynamic models of sepsis that integrate standard day-to-day care resuscitative measures.

Relevance:

100.00%

Publisher:

Abstract:

Results are presented of a benchmark test comparing numerical schemes for a shock wave of Mach number M_s = 2.38 in nitrogen and argon interacting with a cone of 43 degrees semi-apex angle, together with the corresponding experiments. The benchmark test was announced in Shock Waves Vol. 12, No. 4, in which we tried to clarify the effects of viscosity and heat conductivity on shock reflection in conical flows. This paper summarizes the results of ten numerical and two experimental contributions. The state of the art in studies of the shock/cone interaction is clarified.
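
For context, the incident-shock state in such a benchmark follows from the inviscid normal-shock (Rankine-Hugoniot) jump conditions; a minimal sketch, assuming the standard ideal-gas specific-heat ratios for the two test gases (these values are textbook constants, not parameters quoted in the paper):

    def shock_jump(M, gamma):
        # Pressure and density ratios across a normal shock of Mach number M.
        p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M**2 - 1.0)
        rho_ratio = (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)
        return p_ratio, rho_ratio

    Ms = 2.38
    for gas, g in [("nitrogen", 1.4), ("argon", 5.0 / 3.0)]:
        p, r = shock_jump(Ms, g)
        print(gas, round(p, 3), round(r, 3))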

Relevance:

100.00%

Publisher:

Abstract:

Numerical solutions of the sediment conservation law are reviewed in terms of their application to bed update schemes in coastal morphological models. It is demonstrated that inadequately formulated numerical techniques lead to the introduction of diffusion, dispersion and the bed elevation oscillations previously reported in the literature. Four different bed update schemes are then reviewed and tested against benchmark analytical solutions. These include a first-order upwind scheme, two Lax-Wendroff schemes and a non-oscillating centred scheme (NOCS) recently applied to morphological modelling by Saint-Cast [Saint-Cast, F., 2002. Modelisation de la morphodynamique des corps sableux en milieu littoral (Modelling of coastal sand bank morphodynamics), University Bordeaux 1, Bordeaux, 245 pp.]. It is shown that NOCS limits and controls numerical errors while including all the sediment flux gradients that control morphological change. Further, no post-solution filtering is required, which avoids difficulties with selecting the filter strength. Finally, NOCS is compared to a recent Lax-Wendroff scheme with post-solution filtering for a longer-term simulation of the morphological evolution around a trained river entrance.
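
As an illustration of why scheme choice matters here, a minimal sketch of the simplest of the four schemes, a first-order upwind update of the sediment conservation (Exner-type) equation; the flux array q, the porosity value and the periodic boundary implied by np.roll are assumptions made for the sketch:

    import numpy as np

    def upwind_bed_update(zb, q, dx, dt, porosity=0.4):
        # dzb/dt = -1/(1 - porosity) * dq/dx, with a backward (upwind)
        # difference for sediment flux directed in +x; periodic boundaries.
        dqdx = (q - np.roll(q, 1)) / dx
        return zb - dt * dqdx / (1.0 - porosity)

The first-order truncation error of this scheme acts as artificial diffusion that smears bed features, which is exactly the kind of numerical error NOCS is designed to limit.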

Relevance:

100.00%

Publisher:

Abstract:

Background: The identification and characterization of genes that influence the risk of common, complex multifactorial disease primarily through interactions with other genes and environmental factors remains a statistical and computational challenge in genetic epidemiology. We have previously introduced a genetic programming optimized neural network (GPNN) as a method for optimizing the architecture of a neural network to improve the identification of gene combinations associated with disease risk. The goal of this study was to evaluate the power of GPNN for identifying high-order gene-gene interactions. We were also interested in applying GPNN to a real data analysis in Parkinson's disease. Results: We show that GPNN has high power to detect even relatively small genetic effects (2-3% heritability) in simulated data models involving two and three locus interactions. The limits of detection were reached under conditions with very small heritability (
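
A minimal sketch of the evaluation step inside such an architecture search: score a candidate feed-forward network on case/control genotype data by cross-validated classification accuracy. scikit-learn's MLPClassifier stands in for the evolved network here, so this is an illustration of the idea, not the authors' implementation (the real GPNN evolves the topology itself).

    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    def score_candidate(hidden_layers, X, y):
        # X: genotypes coded 0/1/2 per locus; y: case/control labels.
        net = MLPClassifier(hidden_layer_sizes=hidden_layers, max_iter=2000)
        return cross_val_score(net, X, y, cv=10).mean()  # 10-fold CV accuracy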

Relevance:

100.00%

Publisher:

Abstract:

The elastic net and related algorithms, such as generative topographic mapping, are key methods for discretized dimension-reduction problems. At their heart are priors that specify the expected topological and geometric properties of the maps. However, up to now, only a very small subset of possible priors has been considered. Here we study a much more general family originating from discrete, high-order derivative operators. We show theoretically that the form of the discrete approximation to the derivative used has a crucial influence on the resulting map. Using a new and more powerful iterative elastic net algorithm, we confirm these results empirically, and illustrate how different priors affect the form of simulated ocular dominance columns.
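
A minimal sketch of one iteration of an elastic-net-style update with a pluggable discrete derivative prior: D below is the usual second-difference operator, and substituting higher-order difference matrices is the kind of prior family studied above. The parameters alpha, beta and kappa are illustrative, not values from the paper.

    import numpy as np

    def second_difference(m):
        # Discrete approximation to the second derivative along the map.
        return -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)

    def elastic_net_step(Y, X, kappa, alpha=0.2, beta=2.0, D=None):
        # Y: (m, d) map nodes; X: (n, d) data points.
        if D is None:
            D = second_difference(len(Y))
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared distances
        W = np.exp(-d2 / (2.0 * kappa**2))
        W /= W.sum(axis=1, keepdims=True)                     # soft assignments
        fit = (W[:, :, None] * (X[:, None, :] - Y[None, :, :])).sum(axis=0)
        return Y + alpha * fit + beta * kappa * (D @ Y)       # data term + prior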

Relevance:

40.00%

Publisher:

Abstract:

The truncation errors associated with finite difference solutions of the advection-dispersion equation with first-order reaction are formulated from a Taylor analysis. The error expressions are based on a general form of the corresponding difference equation, and a temporally and spatially weighted parametric approach is used for differentiating among the various finite difference schemes. The numerical truncation errors are defined using Peclet and Courant numbers and a new Sink/Source dimensionless number. It is shown that all of the finite difference schemes suffer from truncation errors. In particular, it is shown that the Crank-Nicolson approximation scheme does not have second-order accuracy for this case. The effects of these truncation errors on the solution of an advection-dispersion equation with a first-order reaction term are demonstrated by comparison with an analytical solution. The results show that these errors are not negligible and that correcting the finite difference scheme for them results in a more accurate solution.
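
A minimal sketch of the dimensionless grid numbers that such truncation-error expressions are written in; Pe and Cr are standard, while the exact form of the paper's Sink/Source number is not given in the abstract, so the definition of Kr below is an assumption:

    def grid_numbers(v, D, k, dx, dt):
        Pe = v * dx / D   # grid Peclet number: advection vs dispersion
        Cr = v * dt / dx  # Courant number: advection vs grid speed
        Kr = k * dt       # assumed sink/source number: first-order rate vs time step
        return Pe, Cr, Kr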

Relevance:

40.00%

Publisher:

Abstract:

The isotope composition of Pb is difficult to determine accurately due to the lack of a stable normalisation ratio. Double- and triple-spike addition techniques provide one solution and presently yield the most accurate measurements. A number of recent studies have claimed that improved accuracy and precision could also be achieved by multi-collector ICP-MS (MC-ICP-MS) Pb-isotope analysis using the addition of Tl of known isotope composition to Pb samples. In this paper, we verify whether the known isotope composition of Tl can be used for correction of mass discrimination of Pb, with an extensive dataset for the NIST standard SRM 981, comparison of MC-ICP-MS with TIMS data, and comparison with three isochrons from different geological environments. When all our NIST SRM 981 data are normalised with one constant 205Tl/203Tl of 2.38869, the following averages and reproducibilities were obtained: 207Pb/206Pb = 0.91461 +/- 18; 208Pb/206Pb = 2.1674 +/- 7; and 206Pb/204Pb = 16.941 +/- 6. These two-sigma standard deviations of the mean correspond to 149, 330, and 374 ppm, respectively. Accuracies relative to triple-spike values are 149, 157, and 52 ppm, respectively, and thus well within uncertainties. The largest component of the uncertainties stems from the Pb data alone and is not caused by differential mass discrimination behaviour of Pb and Tl. In routine operation, variation of sample introduction memory and production of isobaric molecular interferences in the spectrometer's collision cell currently appear to be the ultimate limitation to better reproducibility. A comparative study of five different datasets from actual samples (bullets, international rock standards, carbonates, metamorphic minerals, and sulphide minerals) demonstrates that in most cases geological scatter of the sample exceeds the achieved analytical reproducibility. We observe good agreement between TIMS and MC-ICP-MS data for international rock standards but find that such comparison does not constitute the ultimate test for the validity of the MC-ICP-MS technique. Two attempted isochrons resulted in geological scatter (in one case small) in excess of analytical reproducibility. However, in one case (leached Great Dyke sulphides) we obtained a true isochron (MSWD = 0.63) age of 2578.3 +/- 0.9 Ma, which is identical to, and more precise than, a recently published U-Pb zircon age (2579 +/- 3 Ma) for a Great Dyke websterite [Earth Planet. Sci. Lett. 180 (2000) 1-12]. Reproducibility of this age by means of an isochron we regard as a robust test of accuracy over a wide dynamic range. We show that reliable and accurate Pb-isotope data can be obtained by careful operation of second-generation MC-ICP magnetic sector mass spectrometers.
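
For context, a minimal sketch of the standard exponential-law mass-bias correction that underlies Tl-doped Pb measurements: beta is derived from the measured 205Tl/203Tl against the adopted true value (2.38869 above), then applied to each Pb ratio. The atomic masses are standard values; treat this as an illustration of the principle rather than the authors' exact data reduction.

    import math

    MASS = {"Pb204": 203.97304, "Pb206": 205.97446, "Pb207": 206.97590,
            "Pb208": 207.97664, "Tl203": 202.97234, "Tl205": 204.97443}

    def mass_bias_beta(r_meas_tl, r_true_tl=2.38869):
        # Exponential law: R_true = R_meas * (m1 / m2) ** beta, solved for beta.
        return math.log(r_true_tl / r_meas_tl) / math.log(MASS["Tl205"] / MASS["Tl203"])

    def correct_ratio(r_meas, num, den, beta):
        # Apply the Tl-derived beta to a measured Pb isotope ratio.
        return r_meas * (MASS[num] / MASS[den]) ** beta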

Relevance:

40.00%

Publisher:

Abstract:

New data on the settling velocity of artificial sediments and natural sands at high concentrations are presented. The data are compared with the widely used semi-empirical Richardson and Zaki equation (Trans. Inst. Chem. Eng. 32 (1954) 35), which gives an accurate measure of the reduction in velocity as a function of concentration and an experimentally determined empirical power n. Here, a simple method of determining n is presented using standard equations for the clear-water settling velocity and the seepage flow within fixed sediment beds. The resulting values for n are compared against values derived from new and existing laboratory data for beach and filter sands. For sands, the appropriate values of n are found to differ significantly from those suggested by Richardson and Zaki for spheres, and are typically larger, corresponding to a greater reduction in settling velocity at high concentrations. For fine and medium sands at concentrations of order 0.4, the hindered settling velocity reduces to about 70% of that expected using values of n derived for spheres. At concentrations of order 0.15, the hindered settling velocity reduces to less than half of the settling velocity in clear water. These reduced settling velocities have important implications for sediment transport modelling close to, and within, sheet flow layers and in the swash zone.
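
The relation being tested, in a minimal sketch: w0 is the clear-water settling velocity and c the volumetric concentration; the exponent n = 5 used in the example is illustrative of the larger sand values reported above, not a fitted result from the paper.

    def hindered_settling(w0, c, n):
        # Richardson and Zaki: ws = w0 * (1 - c) ** n
        return w0 * (1.0 - c) ** n

    # At c = 0.15 with n = 5, ws/w0 = 0.85**5 = 0.44, i.e. less than half
    # the clear-water settling velocity, consistent with the abstract.
    print(hindered_settling(1.0, 0.15, 5.0))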

Relevance:

30.00%

Publisher:

Abstract:

Discrete element method (DEM) modeling is used in parallel with a model for coalescence of deformable surface wet granules. This produces a method capable of predicting both collision rates and coalescence efficiencies for use in derivation of an overall coalescence kernel. These coalescence kernels can then be used in computationally efficient meso-scale models such as population balance equation (PBE) models. A soft-sphere DEM model using periodic boundary conditions and a unique boxing scheme was utilized to simulate particle flow inside a high-shear mixer. Analysis of the simulation results provided collision frequency, aggregation frequency, kinetic energy, coalescence efficiency and compaction rates for the granulation process. This information can be used to bridge the gap in multi-scale modeling of granulation processes between the micro-scale DEM/coalescence modeling approach and a meso-scale PBE modeling approach.
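
A minimal sketch of how the DEM outputs feed a coalescence kernel for a population balance: the kernel is taken as collision frequency times coalescence efficiency, both of which the DEM simulation above is used to estimate. The size dependence shown is a generic placeholder, not the form derived in the paper.

    def coalescence_kernel(v1, v2, collision_rate_const, efficiency):
        # beta(v1, v2) = (collision frequency) * (coalescence efficiency);
        # generic cross-section-like size dependence as a placeholder.
        size_term = (v1 ** (1.0 / 3.0) + v2 ** (1.0 / 3.0)) ** 2
        return collision_rate_const * size_term * efficiency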