214 results for Scaled Models.


Relevance:

20.00%

Publisher:

Abstract:

This paper studies the ultrasonic wave dispersion characteristics of a nanorod. Nonlocal strain gradient models (both second and fourth order) are introduced to analyze the ultrasonic wave behavior of the nanorod. Explicit expressions are derived for the wave numbers and wave speeds of the nanorod. The analysis shows that the fourth order strain gradient model gives only approximate results compared with the second order strain gradient model for dynamic analysis. The second order strain gradient model gives a critical wave number at a certain wave frequency, where the wave speeds are zero. A relation among the number of waves along the nanorod, the nonlocal scaling parameter (e0a), and the length of the nanorod is obtained from the nonlocal second order strain gradient model. The ultrasonic wave characteristics of the nanorod obtained from the nonlocal strain gradient models are compared with those of the classical continuum model. The dynamic response behavior of nanorods is explained from both strain gradient models. The effect of e0a on the ultrasonic wave behavior of the nanorods is also observed. (C) 2010 American Institute of Physics.

Abstract:

We examine the stability of hadron resonance gas models by extending them to include undiscovered resonances through the Hagedorn formula. We find that the influence of unknown resonances on thermodynamics is large but bounded. We model the decays of resonances and investigate the ratios of particle yields in heavy-ion collisions. We find that observables such as hydrodynamic quantities and hadron yield ratios change little upon extending the model. As a result, heavy-ion collisions at RHIC and the LHC are insensitive to a possible exponential rise in the hadronic density of states, thus increasing the stability of the predictions of hadron resonance gas models in this context. Hadron resonance gases are internally consistent up to a temperature higher than the crossover temperature in QCD, but by examining quark number susceptibilities we find that their region of applicability ends below the QCD crossover.
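The "large but bounded" influence of undiscovered states can be sketched with a toy integral, assuming a Hagedorn-type spectrum rho(m) proportional to exp(m/T_H) and Boltzmann weights; every constant below is an illustrative placeholder, not a fitted value.

```python
import numpy as np

# Toy sketch: contribution of undiscovered resonances with a Hagedorn-type
# spectrum rho(m) ~ A * exp(m / T_H) to a Boltzmann-gas density integral.
T_H = 0.180            # assumed Hagedorn temperature (GeV)
A = 1.0                # arbitrary normalisation
m_lo, m_hi = 2.0, 6.0  # mass window of "unknown" resonances (GeV)

def extra_density(T, n=4000):
    """Number-density-like integral of rho(m) * m^2 * exp(-m/T):
    finite for T < T_H because the Boltzmann factor wins."""
    m = np.linspace(m_lo, m_hi, n)
    integrand = A * np.exp(m / T_H) * m**2 * np.exp(-m / T)
    dm = m[1] - m[0]
    # trapezoidal rule
    return float((integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dm)

# The contribution is bounded below T_H but grows rapidly as T -> T_H,
# mirroring the "large but bounded" influence noted in the abstract.
low, high = extra_density(0.120), extra_density(0.160)
```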

Abstract:

In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells, each of which is assumed to carry an uncertainty value. The UAVs have to search these cells cooperatively while taking limited endurance, sensor, and communication range constraints into account. Due to limited endurance, the UAVs need to return to a base station for refuelling, and must also select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretic strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent can return to one of the available bases. A set of paths is formed using these cells, from which the game theoretic strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative, and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations show the superiority of the game theoretic strategies over a greedy strategy for different look-ahead step length paths. Within the game theoretic strategies, the non-cooperative Nash and cooperative strategies perform similarly in the ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information differs between the UAVs. We also propose a heuristic based on partitioning the search space into sectors to reduce computational overhead without performance degradation.
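The strategy comparison can be made concrete on a toy one-shot game; the payoff matrix below is invented for illustration and is not from the article.

```python
import itertools
import numpy as np

# Toy sketch: each UAV picks one of three candidate paths; U[i, j] is the
# joint uncertainty reduction when UAV1 takes path i and UAV2 takes path j
# (overlapping paths reduce less). Values are made up for illustration.
U = np.array([[3.0, 5.0, 4.0],
              [6.0, 2.0, 5.0],
              [4.0, 6.0, 1.0]])

# Cooperative strategy: jointly maximise the common payoff.
coop = max(itertools.product(range(3), range(3)), key=lambda ij: U[ij])

# Security (maximin) strategy: each UAV guards against the worst the
# other's choice can do to its payoff.
sec1 = int(np.argmax(U.min(axis=1)))   # UAV1's row choice
sec2 = int(np.argmax(U.min(axis=0)))   # UAV2's column choice

# In this identical-interest game the cooperative pair is also a pure
# Nash equilibrium: no UAV gains by unilaterally switching paths.
i, j = coop
is_nash = U[i, j] >= U[:, j].max() and U[i, j] >= U[i, :].max()
```

With a common payoff, Nash and cooperative solutions coincide, matching the "ideal case" observation in the abstract; they diverge only when the two UAVs perceive different payoff matrices.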

Abstract:

Non-Gaussianity of signals/noise often results in significant performance degradation for systems designed under the Gaussian assumption, so non-Gaussian signals/noise require a different modelling and processing approach. In this paper, we discuss a new Bayesian estimation technique for non-Gaussian signals corrupted by colored non-Gaussian noise. The method is based on using zero-mean finite Gaussian Mixture Models (GMMs) for both signal and noise. The estimation is done using an adaptive non-causal nonlinear filtering technique. The method involves deriving an estimator in terms of the GMM parameters, which are in turn estimated using the EM algorithm. The proposed filter is of finite length and is computationally feasible. The simulations show that the proposed method gives a significant improvement over the linear filter for a wide variety of noise conditions, including impulsive noise. We also find that estimating the signal using its correlation with both past and future samples leads to a lower mean squared error than estimation based on past samples only.
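The GMM-based estimator can be sketched in its simplest scalar form. The code below simplifies the paper's setting (colored non-Gaussian noise, non-causal filter) to a single sample in white Gaussian noise, where the MMSE estimate is a posterior-weighted mixture of per-component Wiener gains; all numbers are illustrative.

```python
import numpy as np

def gmm_mmse_estimate(y, weights, sig2, noise_var):
    """MMSE estimate of a zero-mean GMM signal observed in white Gaussian
    noise: a posterior-weighted mixture of per-component Wiener gains."""
    weights = np.asarray(weights, float)
    sig2 = np.asarray(sig2, float)
    tot = sig2 + noise_var                       # per-component variance of y
    lik = weights * np.exp(-y**2 / (2*tot)) / np.sqrt(2*np.pi*tot)
    post = lik / lik.sum()                       # p(component | y)
    gains = sig2 / tot                           # Wiener gain per component
    return float((post * gains).sum() * y)

# Two-component GMM: mostly small signal with occasional large bursts.
s_hat = gmm_mmse_estimate(2.0, weights=[0.9, 0.1], sig2=[0.1, 4.0], noise_var=0.5)
# The estimate shrinks y toward zero, less so when the burst component
# better explains the observation.
```

The effective gain grows with |y|, which is exactly the nonlinearity a single linear (Wiener) filter cannot reproduce and the reason for the improvement under impulsive conditions.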

Abstract:

We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of the decoupling approach of Bianchi ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that, even when analysing WLANs with unsaturated nodes, the following state dependent service model works well: when a certain set of nodes is nonempty, their channel attempt behaviour is obtained from the corresponding fixed point analysis of the saturated system. We present our experiences in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced DCF channel access (EDCA) mechanism. We have found that we can model TCP controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting with this simple approximation.
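In the single-class saturated case, the decoupling approach reduces to Bianchi's well-known fixed point between the attempt probability tau and the conditional collision probability p; a minimal solver (backoff parameters W and m are illustrative):

```python
# Sketch of the Bianchi-style decoupling fixed point for a saturated
# single-class 802.11 DCF network.
def bianchi_fixed_point(n, W=32, m=5, iters=200):
    """Solve tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)**m)) together with
    p = 1 - (1-tau)**(n-1) by damped fixed-point iteration."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau)**(n - 1)
        tau_new = 2*(1 - 2*p) / ((1 - 2*p)*(W + 1) + p*W*(1 - (2*p)**m))
        tau = 0.5*tau + 0.5*tau_new        # damping for robust convergence
    return tau, p

tau, p = bianchi_fixed_point(n=10)
# More contending nodes -> higher collision probability p and a smaller
# per-node attempt probability tau.
```

The state dependent service model in the survey applies this same fixed point, restricted to whichever set of nodes is currently nonempty.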

Abstract:

Processor architects face the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. In order to assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architecture parameters, using simulation based experiments. We obtain good approximate models through an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering on all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
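The AIC-based selection step can be sketched on synthetic data; the "CPI" response below is generated from two of four hypothetical parameters, and the D-optimal design step is omitted for brevity.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of AIC-guided linear model selection: a CPI-like response
# generated from 2 of 4 candidate micro-architectural parameters.
n, n_params = 120, 4
X = rng.uniform(-1, 1, size=(n, n_params))
cpi = 1.5 + 0.8*X[:, 0] - 0.5*X[:, 2] + rng.normal(0, 0.05, n)

def aic(cols):
    """AIC = n*log(RSS/n) + 2k for an OLS fit on the chosen columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, cpi, rcond=None)
    rss = float(((cpi - A @ beta)**2).sum())
    return n*np.log(rss/n) + 2*A.shape[1]

scores = {cols: aic(cols) for r in range(n_params + 1)
          for cols in itertools.combinations(range(n_params), r)}
best = min(scores, key=scores.get)
# The selected model contains the truly significant parameters 0 and 2.
```

In the paper's procedure this ranking would then steer which further simulations (via a D-optimal design) are worth running.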

Abstract:

The momentum balance of the linear-combination integral model for the transition zone is investigated for constant pressure flows. The imbalance is found to be small enough to be negligible for all practical purposes. [S0889-504X(00)00703-0].

Abstract:

This paper reviews computational reliability, computer algebra, stochastic stability and rotating frame turbulence (RFT) in the context of predicting the blade inplane mode stability, a mode which is at best weakly damped. Computational reliability can be built into routine Floquet analysis involving trim analysis and eigenanalysis, and a highly portable special purpose processor restricted to rotorcraft dynamics analysis is found to be more economical than a multipurpose processor. While the RFT effects are dominant in turbulence modeling, the finding that turbulence stabilizes the inplane mode is based on the assumption that turbulence is white noise.
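At its core, Floquet stability analysis asks whether the eigenvalues of the monodromy matrix (the state transition matrix over one period) lie inside the unit circle. The sketch below applies this to a damped Mathieu-type oscillator as a simple stand-in; the full rotorcraft pipeline (trim analysis plus eigenanalysis) is far richer.

```python
import numpy as np

# Minimal Floquet sketch (illustrative, not the rotorcraft model):
# integrate x'' + c*x' + (delta + eps*cos t)*x = 0 over one period T = 2*pi
# and test whether the monodromy-matrix eigenvalues lie in the unit circle.
def monodromy(delta, eps, c, steps=2000):
    T = 2*np.pi
    h = T / steps
    def f(t, y):
        return np.array([y[1], -c*y[1] - (delta + eps*np.cos(t))*y[0]])
    cols = []
    for y0 in np.eye(2):              # propagate both unit initial conditions
        y, t = y0.copy(), 0.0
        for _ in range(steps):        # classical RK4 step
            k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
            k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
            y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
        cols.append(y)
    return np.array(cols).T

def is_stable(delta, eps, c):
    return bool(np.all(np.abs(np.linalg.eigvals(monodromy(delta, eps, c))) < 1.0))

# A weakly damped mode is stable away from parametric resonance, but
# destabilises near delta ~ 0.25 (the principal Mathieu instability tongue).
```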

Abstract:

A conceptually unifying and flexible approach to the ABC and FGH segments of the nortriterpenoid rubrifloradilactone C, each embodying a furo[3,2-b]furanone moiety, from the appropriate Morita-Baylis-Hillman adducts is delineated. (C) 2010 Elsevier Ltd. All rights reserved.

Abstract:

The problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor's discretization schemes. The possibility of using conditional simulation strategies, when applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data.
The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
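For the linear branch, the measurement-updating idea can be sketched with a discrete Kalman filter on a scalar AR(1) surrogate of a structural response; the model and all numbers below are invented for illustration, and the resulting posterior variance is the quantity a level-crossing calculation would then consume.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar AR(1) surrogate: x_{k+1} = a*x_k + w_k, measured as y_k = x_k + v_k.
a, q, r = 0.95, 0.1, 0.5      # dynamics, process and measurement variances
x, P = 0.0, 1.0               # prior state estimate and variance
truth = 0.0
for _ in range(200):
    truth = a*truth + rng.normal(0, np.sqrt(q))
    y = truth + rng.normal(0, np.sqrt(r))
    # Kalman predict
    x, P = a*x, a*a*P + q
    # Kalman update
    K = P / (P + r)
    x, P = x + K*(y - x), (1 - K)*P

# The posterior variance P feeds the updated reliability estimate: with
# measurements assimilated it sits well below the stationary prior
# variance q / (1 - a^2) of the unmeasured response.
prior_var = q / (1 - a*a)
```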

Abstract:

Regional impacts of climate change remain subject to large uncertainties accumulating from various sources, including those due to the choice of general circulation models (GCMs), scenarios, and downscaling methods. Objective constraints to reduce the uncertainty in regional predictions have proven elusive. In most studies to date, the nature of the downscaling relationship (DSR) used for such regional predictions has been assumed to remain unchanged in a future climate. However, studies have shown that climate change may manifest in terms of changes in frequencies of occurrence of the leading modes of variability, and hence stationarity of DSRs is not really a valid assumption in regional climate impact assessment. This work presents an uncertainty modeling framework where, in addition to GCM and scenario uncertainty, uncertainty in the nature of the DSR is explored by linking downscaling with changes in frequencies of such modes of natural variability. Future projections of the regional hydrologic variable obtained by training a conditional random field (CRF) model on each natural cluster are combined using the weighted Dempster-Shafer (D-S) theory of evidence combination. Each projection is weighted with the future projected frequency of occurrence of that cluster ("cluster linking") and scaled by the GCM performance with respect to the associated cluster for the present period ("frequency scaling"). The D-S theory was chosen for its ability to express beliefs in some hypotheses, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The methodology is tested for predicting monsoon streamflow of the Mahanadi River at Hirakud Reservoir in Orissa, India. The results show an increasing probability of extreme, severe, and moderate droughts due to climate change.
Significantly improved agreement between GCM predictions owing to cluster linking and frequency scaling is seen, suggesting that by linking regional impacts to natural regime frequencies, uncertainty in regional predictions can be realistically quantified. Additionally, by using a measure of GCM performance in simulating natural regimes, this uncertainty can be effectively constrained.
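Dempster's rule of combination, the mechanism used to fuse the cluster-conditioned projections, works as follows on a toy two-outcome frame (the mass values are invented for illustration):

```python
from itertools import product

# Minimal Dempster-Shafer combination sketch over the frame
# {"drought", "normal"}; a mass on the whole frame expresses ignorance.
frame = frozenset(["drought", "normal"])

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalise away conflict."""
    out, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            out[C] = out.get(C, 0.0) + a*b
        else:
            conflict += a*b
    return {C: v / (1.0 - conflict) for C, v in out.items()}

m1 = {frozenset(["drought"]): 0.6, frame: 0.4}   # hypothetical projection 1
m2 = {frozenset(["drought"]): 0.5, frozenset(["normal"]): 0.2, frame: 0.3}
m = combine(m1, m2)
belief = m.get(frozenset(["drought"]), 0.0)       # belief in drought
plausibility = belief + m.get(frame, 0.0)         # belief + residual ignorance
```

Belief and plausibility bracket the probability of the hypothesis, which is how the framework reports both a quantitative result and the remaining ignorance.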

Abstract:

Deterministic models have been widely used to predict water quality in distribution systems, but their calibration requires extensive and accurate data sets for numerous parameters. In this study, alternative data-driven modeling approaches based on artificial neural networks (ANNs) were used to predict temporal variations of two important characteristics of water quality: chlorine residual and biomass concentrations. The authors considered three types of ANN algorithms. Of these, the Levenberg-Marquardt algorithm provided the best results in predicting residual chlorine and biomass with error-free and "noisy" data. The ANN models developed here can generate water quality scenarios of piped systems in real time to help utilities determine weak points of low chlorine residual and high biomass concentration and select optimum remedial strategies.
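The Levenberg-Marquardt iteration that proved most effective can be sketched on a toy first-order chlorine decay curve C(t) = C0*exp(-k*t) rather than a full ANN; the data, constants, and starting point below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic chlorine residual observations over 24 hours.
t = np.linspace(0, 24, 25)
y = 1.2*np.exp(-0.15*t) + rng.normal(0, 0.01, t.size)

def residuals(p):
    C0, k = p
    return y - C0*np.exp(-k*t)

def jacobian(p):
    C0, k = p
    e = np.exp(-k*t)
    return np.column_stack([-e, C0*t*e])   # d(resid)/dC0, d(resid)/dk

# Levenberg-Marquardt: Gauss-Newton step damped by an adaptive lambda.
p, lam = np.array([1.0, 0.3]), 1e-2
for _ in range(100):
    r, J = residuals(p), jacobian(p)
    step = np.linalg.solve(J.T @ J + lam*np.eye(2), -J.T @ r)
    if (residuals(p + step)**2).sum() < (r**2).sum():
        p, lam = p + step, lam*0.5         # accept step, trust the model more
    else:
        lam *= 2.0                         # reject step, damp toward gradient
C0_fit, k_fit = p
```

The same accept/reject damping logic is what makes LM robust for ANN weight training, where the Jacobian is taken with respect to the network weights instead.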

Abstract:

Polarizabilities and hyperpolarizabilities of conjugated organic chains are calculated using correlated model Hamiltonians. While correlations reduce the polarizabilities and extend the range of linear response, the hyperpolarizabilities are essentially unaffected by the same. This explains the apparently large hyperpolarizabilities of conjugated electronic systems.

Abstract:

Randomly diluted quantum boson and spin models in two dimensions combine the physics of classical percolation with the well-known dimensionality dependence of ordering in quantum lattice models. This combination is rather subtle for models that order in two dimensions but have no true order in one dimension, as the percolation cluster near threshold is a fractal of dimension between 1 and 2: two experimentally relevant examples are the O(2) quantum rotor and the Heisenberg antiferromagnet. We study two analytic descriptions of the O(2) quantum rotor near the percolation threshold. First, a spin-wave expansion is shown to predict long-ranged order, but there are statistically rare points on the cluster that violate the standard assumptions of spin-wave theory. A real-space renormalization group (RSRG) approach is then used to understand how these rare points modify ordering of the O(2) rotor. A new class of fixed points of the RSRG equations for disordered one-dimensional bosons is identified and shown to support the existence of long-range order on the percolation backbone in two dimensions. These results are relevant to experiments on bosons in optical lattices and superconducting arrays, and also (qualitatively) for the diluted Heisenberg antiferromagnet La2(Zn,Mg)xCu1-xO4.
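The classical-percolation substrate of the problem is easy to simulate; the sketch below checks for a spanning occupied cluster under site dilution (the quantum rotor physics living on top of the cluster is of course not captured).

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(3)

# Site-dilution sketch: does the occupied cluster span a 2D square lattice?
# The classical site-percolation threshold there is p_c ~ 0.5927.
def spans(p, L=60):
    occ = rng.random((L, L)) < p
    seen = np.zeros_like(occ)
    q = deque((0, j) for j in range(L) if occ[0, j])
    for cell in q:
        seen[cell] = True
    while q:                               # breadth-first search from top row
        i, j = q.popleft()
        if i == L - 1:
            return True                    # top row connected to bottom row
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < L and 0 <= b < L and occ[a, b] and not seen[a, b]:
                seen[a, b] = True
                q.append((a, b))
    return False

# Well below p_c dilution disconnects the lattice; well above it, a spanning
# cluster (the candidate backbone for quantum order) exists.
```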

Abstract:

A study is presented which is aimed at developing techniques suitable for effective planning and efficient operation of fleets of aircraft typical of the air force of a developing country. An important aspect of fleet management, the problem of resource allocation for achieving prescribed operational effectiveness of the fleet, is considered. For analysis purposes, it is assumed that the planes operate in a single flying-base repair-depot environment. The perennial problem of resource allocation for fleet and facility buildup that faces planners is modeled and solved as an optimal control problem. These models contain two "policy" variables representing investments in aircraft and repair facilities. The feasibility of decentralized control is explored by assuming the two policy variables are under the control of two independent decision-makers guided by different and often poorly coordinated objectives.
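A toy discrete-time version of the two-policy-variable buildup model shows why coordinating the two investments matters; the dynamics, rates, and budget split below are all invented for illustration and are not the paper's optimal control formulation.

```python
# Toy sketch: n = aircraft fleet, f = repair-facility capacity; serviceable
# aircraft each period are limited by min(n, f). u1 and u2 are the two
# "policy" variables: investment rates in aircraft and repair facilities.
def simulate(u1, u2, T=50, budget=1.0):
    assert abs(u1 + u2 - budget) < 1e-9
    n = f = 0.0
    effectiveness = 0.0
    for _ in range(T):
        n += 2.0*u1                  # aircraft bought per period (assumed rate)
        f += 3.0*u2                  # repair capacity added per period
        effectiveness += min(n, f)   # serviceable aircraft this period
    return effectiveness

# A coordinated planner balances the investments so the two assets grow in
# step; a decision-maker who heavily favours one asset starves the other.
balanced = simulate(0.6, 0.4)
skewed = simulate(0.95, 0.05)
```

With these assumed rates the balanced split grows fleet and facilities at equal speed, which is exactly the kind of coordination that independent, poorly aligned decision-makers fail to achieve.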