151 results for Constraint based modeling


Relevance: 30.00%

Abstract:

Analytical models of IEEE 802.11-based WLANs are invariably based on approximations, such as the well-known mean-field approximations proposed by Bianchi for saturated nodes. In this paper, we provide a new approach for modeling the situation when the nodes are not saturated. We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the CSMA/CA protocol as standardized in the IEEE 802.11 DCF. The approximation is that, when n of the M queues are non-empty, the attempt probability of the n non-empty nodes is given by the long-term attempt probability of n saturated nodes as provided by Bianchi's model. This yields a coupled queue system. When packets arrive to the M queues according to independent Poisson processes, we provide an exact model for the coupled queue system with SDAR service. The main contribution of this paper is an analysis of the coupled queue process obtained by studying a lower-dimensional process and by introducing a certain conditional independence approximation. We show that the numerical results obtained from our finite-buffer analysis are in excellent agreement with the corresponding results obtained from ns-2 simulations. We replace the CSMA/CA protocol as implemented in the ns-2 simulator with the SDAR service model to show that the SDAR approximation provides an accurate model for the CSMA/CA protocol. We also report the simulation speed-ups obtained with this model-based simulation.
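
The following is a minimal slotted-simulation sketch of the SDAR idea, assuming Bernoulli arrivals as a stand-in for Poisson arrivals and a hypothetical placeholder `bianchi_attempt_prob(n)` in place of the attempt probability from Bianchi's fixed-point model; it is an illustration of the coupled-queue dynamics, not the paper's exact analysis.

```python
import random

def bianchi_attempt_prob(n):
    # Hypothetical stand-in for the long-term attempt probability of
    # n saturated nodes from Bianchi's model (illustrative only).
    return 2.0 / (n + 15.0)

def simulate_sdar(M=10, lam=0.02, slots=100_000, seed=1):
    rng = random.Random(seed)
    q = [0] * M                      # per-node queue lengths
    served = 0
    for _ in range(slots):
        for i in range(M):           # Bernoulli arrivals per slot
            if rng.random() < lam:
                q[i] += 1
        nonempty = [i for i in range(M) if q[i] > 0]
        n = len(nonempty)
        if n == 0:
            continue
        p = bianchi_attempt_prob(n)  # SDAR: rate depends only on n
        attempts = [i for i in nonempty if rng.random() < p]
        if len(attempts) == 1:       # exactly one attempt -> success
            q[attempts[0]] -= 1
            served += 1
    return served / slots            # aggregate throughput (pkts/slot)

print(simulate_sdar())
```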

Relevance: 30.00%

Abstract:

A time series, from a narrow point of view, is a sequence of observations on a stochastic process made at discrete and equally spaced time intervals. Its future behavior can be predicted by identifying, fitting, and confirming a mathematical model. In this paper, time series analysis is applied to problems concerning runway-induced vibrations of an aircraft. A simple mathematical model based on this technique is fitted to obtain the impulse response coefficients of the aircraft system, considered as a whole, for a particular type of operation. Using this model, the output, which is the aircraft response, can be obtained with less computation time for any runway profile given as the input.
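
Once the impulse response coefficients have been identified, the response to any runway profile is a discrete convolution. A minimal sketch, with hypothetical coefficients in place of fitted values:

```python
import numpy as np

# h[k]: identified impulse response coefficients (hypothetical here)
h = np.array([0.0, 0.8, 0.5, 0.2, 0.05])
# u: runway elevation profile used as the input (synthetic here)
u = np.random.default_rng(0).normal(size=200)
# y: aircraft response as the convolution of input and impulse response
y = np.convolve(u, h)[: len(u)]
print(y[:5])
```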

Relevance: 30.00%

Abstract:

This paper presents a chance-constrained programming approach for constructing maximum-margin classifiers which are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program by using large deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions on the underlying uncertainty. Classifiers built using the proposed approach are less conservative and yield higher margins, and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
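
A hedged sketch of a robust maximum-margin classifier with second-order cone constraints, using cvxpy. The constraint form below (margin of at least 1 plus a norm term scaled by the interval half-widths R and a confidence parameter kappa) is a generic robust-margin relaxation for illustration, not the paper's exact Bernstein bound; all data and parameters are synthetic.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
# Two separable classes with bounded noise; R holds interval half-widths.
X = np.repeat([[2.0, 2.0], [-2.0, -2.0]], 20, axis=0) \
    + rng.uniform(-1, 1, size=(40, 2))
y = np.repeat([1.0, -1.0], 20)
R = 0.3 * np.ones_like(X)
kappa = 1.5                          # hypothetical confidence parameter

w, b = cp.Variable(2), cp.Variable()
cons = []
for i in range(len(y)):
    # SOC constraint: correct classification despite interval uncertainty
    cons.append(y[i] * (X[i] @ w + b)
                >= 1 + kappa * cp.norm(cp.multiply(R[i], w)))
prob = cp.Problem(cp.Minimize(cp.norm(w)), cons)
prob.solve()
print(w.value, b.value)
```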

Relevance: 30.00%

Abstract:

Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging, with weights assigned to GCMs based on model evaluation, is one method of addressing such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with weights assigned to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the non-availability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with the available GCMs. An imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
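
A minimal sketch of the envelope idea: compute the empirical CDF of each GCM's downscaled rainfall and take pointwise lower and upper bounds, so every individual CDF lies inside the band. The gamma-distributed samples are synthetic placeholders for downscaled monsoon rainfall from individual GCMs.

```python
import numpy as np

rng = np.random.default_rng(0)
# One synthetic rainfall sample per GCM (placeholder data, in mm)
gcm_rainfall = [rng.gamma(shape=k, scale=100.0, size=500)
                for k in (8.0, 9.0, 10.5)]

grid = np.linspace(0, 2500, 200)               # rainfall grid (mm)
cdfs = np.array([[(s <= x).mean() for x in grid] for s in gcm_rainfall])
lower_cdf = cdfs.min(axis=0)                   # pointwise envelope bounds:
upper_cdf = cdfs.max(axis=0)                   # every GCM CDF lies within
```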

Relevance: 30.00%

Abstract:

Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
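
A compact sketch of the decoding step for a linear-chain model over dry/wet (0/1) states. This is the standard Viterbi recursion, not the paper's modified n-best variant; the unary scores, which in the paper would come from learned CRF feature weights applied to the daily atmospheric variables, are hypothetical placeholders here.

```python
import numpy as np

def viterbi(unary, trans):
    """Most likely state sequence for per-step scores and a transition matrix."""
    T, S = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        # cand[i, j]: best score ending in state j via previous state i
        cand = score[:, None] + trans + unary[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):     # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
unary = rng.normal(size=(10, 2))      # per-day dry/wet scores (synthetic)
trans = np.array([[0.5, -0.5], [-0.5, 0.5]])  # persistence-favouring
print(viterbi(unary, trans))
```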

Relevance: 30.00%

Abstract:

Instrumented indentation experiments on a Zr-based bulk metallic glass (BMG) in as-cast, shot-peened and structurally relaxed conditions were conducted to examine the dependence of plastic deformation on its structural state. Results show significant differences in hardness, H, with structural relaxation increasing it and shot peening markedly reducing it, as well as slight changes in the morphology of shear bands around the indents. This is in contrast to the uniaxial compressive yield strength, σ_y, which remains invariant with the change in the structural state of the alloys investigated. The plastic constraint factor, C = H/σ_y, of the relaxed BMG increases compared with that of the as-cast glass, indicating enhanced pressure sensitivity upon annealing. In contrast, C of the shot-peened layer was found to be similar to that observed in crystalline metals, indicating that severe plastic deformation could eliminate pressure sensitivity. Microscopic origins for this result, in terms of shear transformation zones and free volume, are discussed.

Relevance: 30.00%

Abstract:

The availability of a significant number of structures of helical membrane proteins has prompted us to investigate the mode of helix-helix packing. In the present study, we have considered a dataset of alpha-helical membrane proteins representing structures solved from all the known superfamilies. We have described the geometry of all the helical residues in terms of a local coordinate axis at the backbone level. Significant inter-helical interactions have been considered as contacts by weighting the number of atom-atom contacts, including all the side-chain atoms. Such a definition of the local axis and the contact criterion has allowed us to investigate inter-helical interactions in a systematic and quantitative manner. We show that a single parameter (designated alpha), derived from the parameters representing the mutual orientation of local axes, is able to accurately capture the details of helix-helix interaction. The analysis has been carried out by dividing the dataset into parallel, anti-parallel, and perpendicular orientations of helices. The study indicates that a specific range of alpha values is preferred for interactions among anti-parallel helices. Such a preference is also seen among interacting residues of parallel helices, though to a lesser extent. No such preference is seen in the case of perpendicular helices, whose contacts arise mainly from the interaction of surface helices with the ends of the trans-membrane helices. The study supports the prevailing view that anti-parallel helices are well packed. The interactions between helices of parallel orientation, however, are non-trivial. The packing in alpha-helical membrane proteins, which is systematically and rigorously investigated in this study, may prove useful in the modeling of helical membrane proteins.
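
As a minimal geometric sketch, the mutual orientation of two helices can be summarized by the angle between their axis vectors. This dot-product angle is a generic descriptor for illustration only; the paper's alpha parameter is derived from per-residue local axes, not whole-helix axes.

```python
import numpy as np

def inter_helix_angle(axis_a, axis_b):
    """Angle (degrees) between two helix axis vectors."""
    a = axis_a / np.linalg.norm(axis_a)
    b = axis_b / np.linalg.norm(axis_b)
    cos_t = np.clip(np.dot(a, b), -1.0, 1.0)
    return np.degrees(np.arccos(cos_t))

# Roughly anti-parallel pair -> angle near 180 degrees
print(inter_helix_angle(np.array([0.0, 0.1, 1.0]),
                        np.array([0.2, 0.0, -1.0])))
```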

Relevance: 30.00%

Abstract:

Magnetorheological (MR) dampers are intrinsically nonlinear devices, which makes the modeling and design of a suitable control algorithm an interesting and challenging task. To evaluate the potential of MR dampers in control applications and to take full advantage of their unique features, a mathematical model that accurately reproduces their dynamic behavior has to be developed, and then a proper control strategy has to be adopted that is implementable and can fully utilize their capabilities as a semi-active control device. The present paper focuses on both aspects. First, the paper reports the testing of an MR damper with a universal testing machine for a set of frequencies, amplitudes, and currents. A modified Bouc-Wen model considering the amplitude and input-current dependence of the damper parameters has been proposed. It has been shown that the damper response can be satisfactorily predicted with this model. Second, a backstepping-based nonlinear current monitoring scheme for MR dampers has been developed for semi-active control of structures under earthquakes. It provides stable nonlinear MR damper current monitoring directly based on system feedback, such that the current change in the MR damper is gradual. Unlike other MR damper control techniques available in the literature, the main advantage of the proposed technique lies in its current input prediction directly based on system feedback and the smooth update of the input current. Furthermore, while developing the proposed semi-active algorithm, the dynamics of the supplied and commanded current to the damper has been considered. The efficiency of the proposed technique has been shown for a base-isolated three-story building under a set of seismic excitations. A comparison with the widely used clipped-optimal strategy has also been shown.
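
A minimal sketch of a plain (not the paper's modified) Bouc-Wen hysteresis model under sinusoidal displacement, integrated with scipy. All parameter values are hypothetical placeholders; the paper additionally makes the parameters depend on excitation amplitude and input current.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, beta, gamma, n = 1.0, 0.5, 0.5, 2.0   # Bouc-Wen shape parameters
c0, alpha = 2.0, 50.0                    # viscous and hysteretic gains
x = lambda t: 0.01 * np.sin(2 * np.pi * t)             # displacement (m)
xdot = lambda t: 0.01 * 2 * np.pi * np.cos(2 * np.pi * t)

def bouc_wen(t, z):
    # Evolution of the hysteretic variable z
    v = xdot(t)
    return [A * v - beta * abs(v) * abs(z[0]) ** (n - 1) * z[0]
            - gamma * v * abs(z[0]) ** n]

sol = solve_ivp(bouc_wen, (0.0, 2.0), [0.0], dense_output=True)
t = np.linspace(0, 2, 200)
force = c0 * xdot(t) + alpha * sol.sol(t)[0]            # damper force
print(force[:5])
```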

Relevance: 30.00%

Abstract:

We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter, while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single-cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path-loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
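
A tiny numeric illustration of the reported scaling d_opt ∝ P̄_t^(1/η): doubling the average power budget stretches the optimal hop length by 2^(1/η). Θ_opt and the proportionality constant depend on the contention mechanism and fading, and are hypothetical here.

```python
eta = 4.0      # path-loss exponent
c = 10.0       # hypothetical proportionality constant (m per W^(1/eta))
for p_bar in (0.1, 0.2, 0.4):                  # average power (W)
    d_opt = c * p_bar ** (1.0 / eta)           # optimal hop length scaling
    print(f"P_bar_t = {p_bar:.1f} W -> d_opt = {d_opt:.2f} m")
```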

Relevance: 30.00%

Abstract:

Considering a general linear model of signal degradation, we derive the minimum mean square error (MMSE) estimator by modeling the probability density function (PDF) of the clean signal with a Gaussian mixture model (GMM) and the additive noise with a Gaussian PDF. The derived MMSE estimator is non-linear, and the linear MMSE estimator is shown to be a special case. For a speech signal corrupted by independent additive noise, by modeling the joint PDF of the time-domain speech samples of a speech frame using a GMM, we propose a speech enhancement method based on the derived MMSE estimator. We also show that the same estimator can be used for transform-domain speech enhancement.
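
A minimal scalar sketch of the non-linear MMSE estimator for y = x + n with a GMM prior on x and Gaussian noise n: each mixture component contributes a Wiener (linear MMSE) estimate, weighted by its posterior responsibility, which is the standard closed form for this setting. The mixture parameters below are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

w  = np.array([0.6, 0.4])        # mixture weights
mu = np.array([0.0, 3.0])        # component means
s2 = np.array([1.0, 0.25])       # component variances
n2 = 0.5                         # noise variance

def mmse(y):
    # Marginal likelihood of y under each component: p(k) p(y | k)
    like = w * norm.pdf(y, mu, np.sqrt(s2 + n2))
    resp = like / like.sum()                       # posterior p(k | y)
    wiener = mu + s2 / (s2 + n2) * (y - mu)        # E[x | y, k]
    return resp @ wiener                           # E[x | y]

print(mmse(1.2))
```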

Relevance: 30.00%

Abstract:

In this article, a new flame extinction model based on the k/ε turbulence time scale concept is proposed to predict flame liftoff heights over a wide range of coflow temperatures and oxygen (O2) mass fractions. The flame is assumed to be quenched when the fluid time scale is less than the chemical time scale (Da < 1). The chemical time scale is derived as a function of temperature, oxidizer mass fraction, fuel dilution, velocity of the jet and fuel type. The present extinction model has been tested for a variety of conditions: (a) ambient coflow conditions (1 atm and 300 K) for propane, methane and hydrogen jet flames, (b) highly preheated coflow, and (c) high-temperature, low-oxidizer-concentration coflow. Predicted flame liftoff heights of jet diffusion and partially premixed flames are in excellent agreement with the experimental data for all the simulated conditions and fuels. It is observed that flame stabilization occurs at a point near the stoichiometric mixture fraction surface, where the local flow velocity is equal to the local flame propagation speed. The present method is used to determine the chemical time scale for the conditions existing in the mild/flameless combustion burners investigated by the authors earlier. This model has successfully predicted the initial premixing of the fuel with combustion products before the combustion reaction initiates. It has been inferred from these numerical simulations that fuel injection is followed by intense premixing with hot combustion products in the primary zone, and the combustion reaction follows further downstream. Reaction rate contours suggest that the reaction takes place over a large volume and that the magnitude of the combustion reaction is lower compared with the conventional combustion mode. The appearance of attached flames in the mild combustion burners at low thermal inputs is also predicted, which is due to the lower average jet velocity and larger residence times in the near-injection zone.
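
A minimal sketch of the quenching criterion: a location is treated as extinguished when the local Damköhler number Da = t_fluid / t_chem falls below 1, with t_fluid taken as the k/ε turbulence time scale. The constant chemical time scale below is a hypothetical placeholder; in the paper it depends on temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type.

```python
def is_quenched(k, eps, t_chem):
    """Da < 1 extinction test with the k/eps turbulence time scale."""
    t_fluid = k / eps                # fluid (turbulence) time scale
    return t_fluid / t_chem < 1.0    # Da < 1 -> local extinction

# t_fluid = 0.005 s, t_chem = 0.01 s -> Da = 0.5 -> quenched (True)
print(is_quenched(k=0.5, eps=100.0, t_chem=0.01))
```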

Relevance: 30.00%

Abstract:

A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating the variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for 65 nm gate-length technology. The proposed methodology is demonstrated with a multiplier circuit built using the NAND gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical technique of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), is employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs with an error of less than 1%, and with significant reductions in uncertainty, by at least 6x on a normalized basis, against worst-case design.
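
A minimal sketch of the response-surface step: fit a second-order polynomial model of gate power over a small factorial design in two normalized process parameters (here hypothetically, gate-length and threshold-voltage deviations) by least squares. The power values are synthetic placeholders, not mixed-mode simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)
# 3-level factorial design over two normalized process parameters
L, Vt = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
L, Vt = L.ravel(), Vt.ravel()
# Synthetic "measured" gate power at each design point
power = 1.0 + 0.12 * L - 0.08 * Vt + 0.03 * L * Vt \
        + 0.01 * rng.normal(size=9)

# Design matrix: intercept, linear, interaction and quadratic terms
X = np.column_stack([np.ones_like(L), L, Vt, L * Vt, L**2, Vt**2])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
print(coef)               # fitted response-surface coefficients
```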

Relevance: 30.00%

Abstract:

This paper presents an algorithm for solid model reconstruction from 2D sectional views based on a volume-based approach. None of the existing work in automatic reconstruction from 2D orthographic views has addressed sectional views in detail. It is believed that the volume-based approach is better suited to handle different types of sectional views. The volume-based approach constructs the 3D solid by a boolean combination of elementary solids. The elementary solids are formed by a sweep operation on loops identified in the input views. The only adjustment to be made for the presence of sectional views is in the identification of the loops that form the elemental solids. In the algorithm, the conventions of engineering drawing for sectional views are used to identify the loops correctly. The algorithm is simple and intuitive in nature. Results have been obtained for full sections, offset sections and half sections. Future work will address other types of sectional views, such as removed, revolved and broken-out sections.
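
A hedged voxel sketch of the volume-based idea: loops identified in the views are swept into elementary solids, which are then combined with boolean operations. Real implementations use a B-rep/CSG kernel; boolean numpy arrays stand in for solids here purely for illustration.

```python
import numpy as np

N = 32
solid = np.zeros((N, N, N), dtype=bool)

def sweep(loop_mask, z0, z1):
    """Extrude a 2D loop (boolean mask) along Z into an elementary solid."""
    s = np.zeros((N, N, N), dtype=bool)
    s[:, :, z0:z1] = loop_mask[:, :, None]
    return s

# Two loops as they might be identified from a sectional view
outer = np.zeros((N, N), dtype=bool); outer[4:28, 4:28] = True
hole = np.zeros((N, N), dtype=bool); hole[12:20, 12:20] = True

solid |= sweep(outer, 2, 30)      # union with the swept outer loop
solid &= ~sweep(hole, 2, 30)      # subtract the swept inner loop
print(solid.sum(), "voxels in the reconstructed solid")
```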

Relevance: 30.00%

Abstract:

Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values have been corrected (N_c) for different parameters such as overburden stress, size of borehole, type of sampler, and length of connecting rod. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an N_c value, is to be approximated, so that the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization.
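
Since LSSVM is related to a ridge-regression type of support vector machine, kernel ridge regression can serve as a minimal stand-in sketch for mapping borehole coordinates (X, Y, Z) to corrected SPT values N_c. The training data below are synthetic placeholders, not the Bangalore dataset, and the hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
# Synthetic normalized borehole coordinates and N_c values
coords = rng.uniform(0, 1, size=(300, 3))          # (X, Y, Z)
n_c = 20 + 15 * coords[:, 0] - 10 * coords[:, 2] \
      + rng.normal(0, 2, 300)

# RBF kernel ridge fit: N_c = N_c(X, Y, Z)
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=3.0).fit(coords, n_c)
print(model.predict([[0.5, 0.5, 0.1]]))            # N_c at a new point
```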

Relevance: 30.00%

Abstract:

We develop an alternate characterization of the statistical distribution of the inter-cell interference power observed in the uplink of CDMA systems. We show that the lognormal distribution better matches the cumulative distribution and complementary cumulative distribution functions of the uplink interference than the conventionally assumed Gaussian distribution and variants based on it. This is in spite of the fact that many users together contribute to uplink interference, with the number of users and their locations both being random. Our observations hold even in the presence of power control and cell selection, which have hitherto been used to justify the Gaussian distribution approximation. The parameters of the lognormal are obtained by matching moments, for which detailed analytical expressions that incorporate wireless propagation, cellular layout, power control, and cell selection parameters are developed. The moment-matched lognormal model, while not perfect, is an order of magnitude better in modeling the interference power distribution.
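
A minimal sketch of the moment-matching step: given the mean m and variance v of the inter-cell interference power (which the paper derives analytically from propagation, layout, power control and cell selection parameters), the lognormal parameters follow in closed form from the standard lognormal moment relations. The values of m and v below are hypothetical.

```python
import numpy as np

m, v = 2.0e-13, 1.5e-26            # interference mean (W) and variance
sigma2 = np.log(1.0 + v / m**2)    # lognormal sigma^2 from moments
mu = np.log(m) - 0.5 * sigma2      # lognormal mu from moments
print(mu, np.sqrt(sigma2))
```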