900 results for Extended Duhamel Principle


Relevance:

20.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix consisting of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, since it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
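The extended Gram-Schmidt step builds on the conventional orthogonal least squares procedure. As a point of reference, here is a minimal Python sketch of that conventional algorithm (the generic baseline, not the paper's rule-base subspace variant): the regression matrix is decomposed as P = WA with orthogonal columns in W and unit upper triangular A, after which the parameter vector follows from a triangular solve.

```python
import numpy as np

def gram_schmidt_ols(P, y):
    """Conventional Gram-Schmidt orthogonal least squares.

    Decomposes the regression matrix P into P = W @ A, where W has
    mutually orthogonal columns and A is unit upper triangular, then
    solves the triangular system A @ theta = g for the parameters.
    """
    n, m = P.shape
    W = np.zeros((n, m))
    A = np.eye(m)
    for k in range(m):
        w = P[:, k].copy()
        for j in range(k):
            # Project column k onto each earlier orthogonal column.
            A[j, k] = W[:, j] @ P[:, k] / (W[:, j] @ W[:, j])
            w -= A[j, k] * W[:, j]
        W[:, k] = w
    # Orthogonality lets each auxiliary coefficient be computed independently.
    g = np.array([W[:, k] @ y / (W[:, k] @ W[:, k]) for k in range(m)])
    theta = np.linalg.solve(A, g)  # back-substitution on triangular A
    return theta, W, A

# Usage: recover the parameters of a linear-in-parameters model.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = P @ theta_true
theta, W, A = gram_schmidt_ols(P, y)
```

Decomposing the rule-base into orthogonal subspaces, as the paper proposes, replaces the column-by-column loop above with a block (submodel) decomposition, which is what avoids the curse of dimensionality for large input dimensions.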

Relevance:

20.00%

Publisher:

Abstract:

We consider a non-local version of the NJL model, based on a separable quark-quark interaction. The interaction is extended to include terms that bind vector and axial-vector mesons. The non-locality means that no further regulator is required. Moreover, the model is able to confine the quarks by generating a quark propagator without poles at real energies. Working in the ladder approximation, we calculate amplitudes in Euclidean space and discuss features of their continuation to Minkowski energies. Conserved currents are constructed and we demonstrate their consistency with various Ward identities. Various meson masses are calculated, along with their strong and electromagnetic decay amplitudes. We also calculate the electromagnetic form factor of the pion, as well as the form factors associated with the processes γγ* → π0 and ω → π0γ*. The results are found to lead to a satisfactory phenomenology and lend some dynamical support to the idea of vector-meson dominance.

Relevance:

20.00%

Publisher:

Abstract:

The nanostructure of a peptide amphiphile in commercial use in anti-wrinkle creams is investigated. The peptide contains a matrikine, collagen-stimulating, pentapeptide sequence. Self-assembly into giant nanotapes is observed, and the internal structure is found to comprise bilayers parallel to the flat tape surfaces.

Relevance:

20.00%

Publisher:

Abstract:

A nonlocal version of the NJL model is investigated. It is based on a separable quark-quark interaction, as suggested by the instanton liquid picture of the QCD vacuum. The interaction is extended to include terms that bind vector and axial-vector mesons. The nonlocality means that no further regulator is required. Moreover, the model is able to confine the quarks by generating a quark propagator without poles at real energies. Features of the continuation of amplitudes from Euclidean space to Minkowski energies are discussed. These features lead to restrictions on the model parameters as well as on the range of applicability of the model. Conserved currents are constructed, and their consistency with various Ward identities is demonstrated. In particular, the Gell-Mann-Oakes-Renner relation is derived both in the ladder approximation and at meson loop level. The importance of maintaining chiral symmetry in the calculations is stressed throughout. Calculations with the model are performed to all orders in momentum. Meson masses are determined, along with their strong and electromagnetic decay amplitudes. Also calculated are the electromagnetic form factor of the pion and the form factors associated with the processes γγ* → π0 and ω → π0γ*. The results are found to lead to a satisfactory phenomenology and demonstrate a possible dynamical origin for vector-meson dominance. In addition, the results produced at meson loop level validate the use of 1/Nc as an expansion parameter and indicate that a light and broad scalar state is inherent in models of the NJL type.

Relevance:

20.00%

Publisher:

Abstract:

The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow these properties to be defined as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale by resorting only to well-selected simulations, taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
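The Lorenz 96 test bed is simple enough to sketch directly. Below is a minimal Python integration of the standard Lorenz 96 equations (dx_i/dt = (x_{i+1} − x_{i−2})x_{i−1} − x_i + F), computing the average energy mentioned above as an unperturbed property; the grid size, forcing and time step are illustrative choices, not the paper's configuration.

```python
import numpy as np

def lorenz96_tendency(x, F):
    """Lorenz 96 tendency: advection, linear dissipation, constant forcing.
    np.roll implements the periodic (circular) boundary conditions."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, F, dt=0.01, steps=5000):
    """Fourth-order Runge-Kutta integration of the Lorenz 96 model."""
    x = x0.copy()
    traj = np.empty((steps, x.size))
    for n in range(steps):
        k1 = lorenz96_tendency(x, F)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
        k4 = lorenz96_tendency(x + dt * k3, F)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        traj[n] = x
    return traj

# Average energy e = mean(x_i^2)/2, one of the unperturbed properties in
# terms of which the response coefficients can be expressed.
N, F = 40, 8.0                 # F = 8 puts the model in a chaotic regime
x0 = F * np.ones(N)
x0[0] += 0.01                  # small perturbation of the fixed point
traj = integrate(x0, F)
energy = 0.5 * np.mean(traj[1000:] ** 2)   # discard the transient
```

Estimating the response itself would then amount to repeating such runs with a perturbed forcing F + δF and comparing the expectation values of the chosen observable, which is the ensemble strategy the abstract advocates.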

Relevance:

20.00%

Publisher:

Abstract:

The climatology of a stratosphere-resolving version of the Met Office’s climate model is studied and validated against ECMWF reanalysis data. Ensemble integrations are carried out at two different horizontal resolutions. Along with a realistic climatology and annual cycle in zonal mean zonal wind and temperature, several physical effects are noted in the model. The time of final warming of the winter polar vortex is found to descend monotonically in the Southern Hemisphere, as would be expected for purely radiative forcing. In the Northern Hemisphere, however, the time of final warming is driven largely by dynamical effects in the lower stratosphere and radiative effects in the upper stratosphere, leading to the earliest transition to westward winds being seen in the midstratosphere. A realistic annual cycle in stratospheric water vapor concentrations—the tropical “tape recorder”—is captured. Tropical variability in the zonal mean zonal wind is found to be in better agreement with the reanalysis for the model run at higher horizontal resolution because the simulated quasi-biennial oscillation has a more realistic amplitude. Unexpectedly, variability in the extratropics becomes less realistic under increased resolution because of reduced resolved wave drag and increased orographic gravity wave drag. Overall, the differences in climatology between the simulations at high and moderate horizontal resolution are found to be small.

Relevance:

20.00%

Publisher:

Abstract:

Dense deployments of wireless local area networks (WLANs) are becoming the norm in many cities around the world. However, increased interference and traffic demands can severely limit the aggregate throughput achievable unless an effective channel assignment scheme is used. In this work, a simple and effective distributed channel assignment (DCA) scheme is proposed. It is shown that, in order to maximise throughput, each access point (AP) should simply choose the channel with the minimum number of active neighbour nodes (i.e. nodes associated with neighbouring APs that have packets to send). However, applying such a scheme in practice depends critically on the ability to estimate the number of neighbour nodes in each channel, for which no practical estimator has previously been proposed. In view of this, an extended Kalman filter (EKF) estimator and an estimator of the number of nodes per AP are proposed. These not only provide fast and accurate estimates but can also exploit the channel-switching information of neighbouring APs. Extensive packet-level simulation results show that the proposed minimum neighbour and EKF estimator (MINEK) scheme is highly scalable and can provide significant throughput improvement over other channel assignment schemes.
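The abstract does not give the MINEK estimator's equations, so the role an EKF can play here is illustrated with a toy scalar filter. The measurement model below (observed channel-busy fraction h(n) = 1 − (1 − p)^n for n neighbour nodes each transmitting with probability p), the random-walk state model, and all parameter values are assumptions for illustration only, not the paper's scheme.

```python
import math
import random

def ekf_node_estimate(measurements, p=0.1, q=0.5, r=0.01):
    """Scalar extended Kalman filter tracking the number of active
    neighbour nodes n from noisy channel-busy-fraction readings.

    Illustrative model (an assumption, not MINEK's actual model):
      state:       n_{k+1} = n_k + process noise (variance q)
      measurement: z_k = 1 - (1 - p)^{n_k} + noise (variance r)
    """
    n_hat, P = 1.0, 10.0          # initial estimate and its variance
    a = 1.0 - p
    estimates = []
    for z in measurements:
        P += q                    # predict: random walk, variance grows
        H = -math.log(a) * a ** n_hat   # linearise h(n): H = dh/dn at n_hat
        K = P * H / (H * P * H + r)     # Kalman gain
        n_hat += K * (z - (1.0 - a ** n_hat))   # innovation update
        P *= (1.0 - K * H)                      # variance update
        estimates.append(n_hat)
    return estimates

# Usage: noisy busy-fraction readings generated for 8 true neighbour nodes.
random.seed(1)
true_n, p = 8, 0.1
z = [1 - (1 - p) ** true_n + random.gauss(0, 0.02) for _ in range(200)]
est = ekf_node_estimate(z, p=p, q=0.01, r=0.02 ** 2)
```

In a DCA setting each AP would run one such filter per channel and pick the channel whose estimated neighbour count is smallest, which is the minimum-neighbour rule the abstract describes.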

Relevance:

20.00%

Publisher:

Abstract:

With continually increasing demands on atmospheric and planetary remote-sensing instrumentation, for both high optical system performance and extended operational lifetimes, an investigation to assess the effects of prolonged exposure to the space environment on a series of infrared interference filters and optical materials was carried out on the NASA LDEF mission. The NASA Long Duration Exposure Facility (LDEF) was launched by the Space Shuttle to transport various science and technology experiments both to and from space, providing investigators with the opportunity to study the effects of the space environment on materials and systems used in space-flight applications. Preliminary results to be discussed consist of transmission measurements obtained and processed from an infrared spectrophotometer both before (1983) and after (1990) exposure, compared with unexposed control specimens, together with the results of detailed microscopic and general visual examinations performed on the experiment. The principal lead telluride (PbTe) and zinc sulphide (ZnS) based multilayer filters selected for this preliminary investigation consist of: an 8-12 µm low-pass edge filter, a 10.6 µm 2.5% half-bandwidth (HBW) double half-wave narrow bandpass filter, and a 10% HBW triple half-wave wide bandpass filter at 15 µm. Optical substrates of MgF2 and KRS-5 (TlBrI) will also be discussed.

Relevance:

20.00%

Publisher:

Abstract:

Pollen-mediated gene flow is one of the main concerns associated with the introduction of genetically modified (GM) crops. Should a premium for non-GM varieties emerge on the market, ‘contamination’ by GM pollen would generate a revenue loss for growers of non-GM varieties. This paper analyses the problem of pollen-mediated gene flow as a particular type of production externality. The model, although simple, provides useful insights into coexistence policies. Following on from this and taking GM herbicide-tolerant oilseed rape (Brassica napus) as a model crop, a Monte Carlo simulation is used to generate data and then estimate the effect of several important policy variables (including width of buffer zones and spatial aggregation) on the magnitude of the externality associated with pollen-mediated gene flow.
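As a sketch of how such a Monte Carlo exercise can link a policy variable (buffer-zone width) to the magnitude of the externality, consider the toy model below. The exponential dispersal kernel, the 0.9% tolerance threshold, the premium, and the field-layout distribution are all illustrative assumptions, not the paper's fitted model for oilseed rape.

```python
import math
import random

def contamination_rate(distance, k=0.1):
    """Cross-pollination rate as a function of separation distance (m).
    The exponential dispersal kernel is an illustrative assumption; real
    pollen-dispersal kernels are fitted to field data."""
    return 0.01 * math.exp(-k * distance)

def expected_loss(buffer_width, premium=50.0, threshold=0.009,
                  n_draws=10000, seed=0):
    """Monte Carlo estimate of the expected revenue loss (per ha) for a
    non-GM grower: random extra separation beyond the mandated buffer,
    with the premium lost only when contamination exceeds the tolerance
    threshold (0.9%, in the spirit of labelling rules)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        extra = rng.uniform(0, 100)          # field-layout variability (m)
        c = contamination_rate(buffer_width + extra)
        if c > threshold:                    # above tolerance: premium lost
            total += premium
    return total / n_draws

loss_no_buffer = expected_loss(0.0)
loss_with_buffer = expected_loss(50.0)
```

Regressing such simulated losses on buffer width and a spatial-aggregation index is then a direct way to estimate the marginal effect of each coexistence policy variable on the externality.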

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the theoretical development of a nonlinear adaptive filter based on the concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints; thus, the densities created are of an exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters. The update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove to be superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.

Relevance:

20.00%

Publisher:

Abstract:

This paper describes a method for the state estimation of nonlinear systems described by a class of differential-algebraic equation models using the extended Kalman filter. The method involves the use of a time-varying linearisation of a semi-explicit, index-one differential-algebraic equation. The estimation technique consists of a simplified extended Kalman filter that is integrated with the differential-algebraic equation model. The paper describes a simulation study using a model of a batch chemical reactor. It also reports a study based on experimental data obtained from a mixing process, where the model of the system is solved using the sequential modular method and the estimation involves a bank of extended Kalman filters.

Relevance:

20.00%

Publisher:

Abstract:

Pluronic F127 diacrylate (F127DA) is a bifunctional acrylate and as such should in principle produce macroscopically cross-linked materials; however, its photopolymerization in water does not lead to 3D-extended hydrogels. The main species present after photopolymerization appear to be cross-linked micelles, which indicates that the micellar morphology of F127DA has a template effect on the polymerization. This structural analogy causes the physical states of the precursor and the polymerized materials to be very similar over a wide range of concentrations (5–25% wt) and temperatures (10–37 °C). The long-range morphology of F127DA also appears to have a template effect: samples photopolymerized in a micellar gel state and redispersed at high concentration (25% wt) show a long-range organization that depends on the concentration, and therefore on the order, of the precursor.

Relevance:

20.00%

Publisher:

Abstract:

Assessment of the risk to human health posed by contaminated land may be seriously overestimated if reliant on total pollutant concentration. In vitro extraction tests, such as the physiologically based extraction test (PBET), imitate the physicochemical conditions of the human gastro-intestinal tract and offer a more practicable alternative for routine testing purposes. However, even though passage through the colon accounts for approximately 80% of the transit time through the human digestive tract, and the typical contents of the colon in vivo are a carbohydrate-rich aqueous medium with the potential to promote desorption of organic pollutants, PBET comprises stomach and small intestine compartments only. Through addition of an eight-hour colon compartment to PBET and use of a carbohydrate-rich fed-state medium, we demonstrated that colon-extended PBET (CE-PBET) increased assessments of soil-bound PAH bioaccessibility by up to 50% in laboratory soils and by a factor of 4 in field soils. We attribute this increased bioaccessibility to a combination of the additional extraction time and the presence of carbohydrates in the colon compartment, both of which favor PAH desorption from soil. We propose that future assessments of the bioaccessibility of organic pollutants in soils using physiologically based extraction tests should include a colon compartment, as in CE-PBET.