71 results for Individual-based modeling


Relevance:

30.00%

Abstract:

A time series, narrowly defined, is a sequence of observations of a stochastic process made at discrete, equally spaced time intervals. Its future behavior can be predicted by identifying, fitting, and confirming a mathematical model. In this paper, time series analysis is applied to problems concerning runway-induced vibrations of an aircraft. A simple mathematical model based on this technique is fitted to obtain the impulse response coefficients of the aircraft system, considered as a whole, for a particular type of operation. Using this model, the output, i.e., the aircraft response, can be obtained in less computation time for any runway profile given as the input.
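
A minimal sketch of the idea, not the paper's procedure: here the impulse response coefficients are estimated by ordinary least squares from an input-output record, and the response to a new runway profile is then obtained by discrete convolution. All data and the coefficient count are synthetic placeholders.

```python
import numpy as np

def fit_impulse_response(u, y, n_coeffs):
    """Fit h in y[t] ~ sum_k h[k] * u[t-k] by ordinary least squares."""
    T = len(y)
    U = np.zeros((T, n_coeffs))          # matrix of lagged inputs (zero initial state)
    for k in range(n_coeffs):
        U[k:, k] = u[:T - k]
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h

def predict_response(u_new, h):
    """Aircraft response to a new runway profile via discrete convolution."""
    return np.convolve(u_new, h)[:len(u_new)]

# Synthetic stand-ins for a measured runway profile and aircraft response.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)                       # runway elevation profile
h_true = np.exp(-0.3 * np.arange(20))              # hypothetical true response
y = np.convolve(u, h_true)[:500] + 0.01 * rng.standard_normal(500)
h_est = fit_impulse_response(u, y, n_coeffs=20)
y_pred = predict_response(rng.standard_normal(500), h_est)   # any new profile
```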

Relevance:

30.00%

Abstract:

A simple and efficient method for the spontaneous organization of gold nanoparticles into long assemblies is described. This is achieved in a molten mixture of acetamide, urea and ammonium nitrate that acts as both solvent and stabilizer; no external aggregating or stabilizing agent is added to the system. Depending on the concentration of the metal salt in the ternary melt, either chain-like assemblies or individual nanoparticles can be obtained. The amine groups present in the components of the melt (acetamide and urea) help stabilize the nanoparticles, while the ammonium ions present in the eutectic mixture are likely to assist in the organization of the particles. The method is simple, highly reproducible and requires no templating agent for the formation of chain-like assemblies.

Relevance:

30.00%

Abstract:

The implementation of a three-phase sinusoidal pulse-width-modulated inverter control strategy using a microprocessor is discussed in this paper. To save CPU time, the DMA technique is used to transfer the switching pattern from memory to the pulse amplifier and isolation circuits of the individual thyristors in the inverter bridge. The method of controlling both voltage and frequency is also discussed.
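
The switching pattern that the DMA controller streams out can be precomputed and stored in memory. The sketch below is an illustration rather than the paper's implementation: it builds a three-phase sinusoidal PWM table by comparing phase-shifted sine references against a triangular carrier; the table length, carrier ratio and modulation index are assumed values.

```python
import numpy as np

def spwm_table(n_samples=360, carrier_ratio=36, m=0.8):
    """Return a (n_samples, 3) on/off table for one fundamental cycle."""
    t = np.arange(n_samples) / n_samples                       # normalized time in [0, 1)
    carrier = 4 * np.abs((carrier_ratio * t) % 1 - 0.5) - 1    # triangle wave in [-1, 1]
    table = np.zeros((n_samples, 3), dtype=np.uint8)
    for phase in range(3):
        ref = m * np.sin(2 * np.pi * t - phase * 2 * np.pi / 3)   # 120 degrees apart
        table[:, phase] = (ref > carrier).astype(np.uint8)        # 1 = upper device on
    return table   # stored in memory; the DMA controller streams it at a fixed rate

pattern = spwm_table()   # voltage is set via m, frequency via the output rate
```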

Relevance:

30.00%

Abstract:

Hydrologic impacts of climate change are usually assessed by downscaling General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one method of addressing such uncertainty, and it is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the cumulative distribution functions (CDFs) generated from the downscaled GCM output (for both the 20th-century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is itself characterized by uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for the Fourth Assessment Report [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs by a single-valued weighted-mean CDF may be misleading; such a band can only be represented by an envelope that contains all the CDFs generated with the available GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
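
A minimal sketch of the envelope construction described above, with synthetic rainfall samples and illustrative weights standing in for the downscaled GCM output and the performance/convergence weights: the weighted-mean CDF is the single-valued summary, while the pointwise minimum and maximum across the GCM CDFs form the imprecise (envelope) CDF.

```python
import numpy as np

def empirical_cdf(samples, grid):
    """P(X <= x) evaluated on a grid, from a sample."""
    return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)

rng = np.random.default_rng(1)
grid = np.linspace(0, 2500, 200)                          # monsoon rainfall (mm)
gcm_rainfall = [rng.gamma(8, 150, 90) for _ in range(5)]  # stand-ins for 5 GCMs
weights = np.array([0.3, 0.25, 0.2, 0.15, 0.1])           # model-evaluation weights

cdfs = np.array([empirical_cdf(s, grid) for s in gcm_rainfall])
mean_cdf = weights @ cdfs                           # single-valued weighted-mean CDF
lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)   # envelope: an imprecise CDF
```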

Relevance:

30.00%

Abstract:

Downscaling from large-scale atmospheric variables simulated by general circulation models (GCMs) to station-scale hydrologic variables is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear-chain CRF. CRFs make no independence assumptions on the observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization, and maximum a posteriori estimation with the Viterbi algorithm is used to determine the most likely precipitation sequence for a given set of atmospheric input variables. Direct classification of dry/wet days, as well as prediction of precipitation amounts, is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied to downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days and in wet-day precipitation amounts. A comparison of the current and future predicted probability density functions for daily precipitation shows a change in the shape of the density function, with decreasing probability of lower precipitation and increasing probability of higher precipitation.
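
For the decoding step, a minimal linear-chain Viterbi sketch is given below; the emission and transition scores are placeholders rather than fitted CRF potentials, and the modified n-best variant mentioned above is not shown.

```python
import numpy as np

def viterbi(emit, trans):
    """emit: (T, S) per-day label scores; trans: (S, S) transition scores.
    Returns the highest-scoring label sequence."""
    T, S = emit.shape
    score = emit[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans + emit[t][None, :]   # (prev, next) scores
        back[t] = cand.argmax(axis=0)                      # best predecessor per state
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                          # trace back the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Tiny example: 2 states (dry=0, wet=1) over 5 days with placeholder scores.
rng = np.random.default_rng(3)
labels = viterbi(rng.standard_normal((5, 2)), rng.standard_normal((2, 2)))
```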

Relevance:

30.00%

Abstract:

Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. The field is currently witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets (individually and in combination), and the design of specific and safer drugs. Computational modeling and simulation are important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion of levels of abstraction of biological systems and describes the different modeling methodologies available for this purpose. It then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and for considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches, and perspectives for future development, will also be obtained. Take home message: Systems thinking has now come of age, enabling a 'bird's eye view' of the biological systems under study, while at the same time allowing us to 'zoom in', where necessary, for a detailed description of individual components. A number of different methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.

Relevance:

30.00%

Abstract:

Pricing is an effective tool for controlling congestion and achieving quality-of-service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in a network of nodes with a single service class and multiple queues, and present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state-dependent price levels for the individual queues at each node. The pricing policy depends on a weighted average queue length at each node; this helps reduce frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. Our numerical results show a considerable improvement over a recently proposed related scheme in terms of both throughput and delay. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent over that scheme in all cases studied (over all routes).
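
A minimal sketch of the RED-style element of the scheme, with hypothetical thresholds and price levels: the price charged at a queue follows an exponentially weighted average of the queue length, so price levels change gradually rather than with every arrival.

```python
def update_price(q_inst, q_avg, thresholds, prices, w=0.02):
    """One pricing step for a queue: returns (new average, price level)."""
    q_avg = (1 - w) * q_avg + w * q_inst     # exponentially weighted average, as in RED
    for th, p in zip(thresholds, prices):
        if q_avg <= th:
            return q_avg, p                  # price tier for the current congestion state
    return q_avg, prices[-1]                 # highest tier once all thresholds are crossed

q_avg, price = 0.0, 0.0
for q in [3, 8, 15, 40, 60]:                 # sampled instantaneous queue lengths
    q_avg, price = update_price(q, q_avg,
                                thresholds=[10, 30, 50],
                                prices=[0.1, 0.5, 1.0, 2.0])
```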

Relevance:

30.00%

Abstract:

The availability of a significant number of structures of helical membrane proteins has prompted us to investigate the mode of helix-helix packing. In the present study, we have considered a dataset of alpha-helical membrane proteins representing structures solved from all the known superfamilies. We describe the geometry of all helical residues in terms of a local coordinate axis defined at the backbone level. Significant inter-helical interactions are treated as contacts by weighing the number of atom-atom contacts, including all side-chain atoms. This definition of the local axis and the contact criterion allows us to investigate inter-helical interactions in a systematic and quantitative manner. We show that a single parameter (designated alpha), derived from the parameters representing the mutual orientation of the local axes, is able to accurately capture the details of helix-helix interaction. The analysis has been carried out by dividing the dataset into parallel, anti-parallel, and perpendicular orientations of helices. The study indicates that a specific range of alpha values is preferred for interactions among anti-parallel helices. Such a preference is also seen among interacting residues of parallel helices, though to a lesser extent. No such preference is seen in the case of perpendicular helices, whose contacts arise mainly from the interaction of surface helices with the ends of the trans-membrane helices. The study supports the prevailing view that anti-parallel helices are well packed, whereas the interactions between helices of parallel orientation are non-trivial. The packing in alpha-helical membrane proteins, systematically and rigorously investigated in this study, may prove useful in the modeling of helical membrane proteins.
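
A minimal sketch of the kind of mutual-orientation computation involved, not the authors' exact parameterization of alpha: each helix axis is approximated by the principal component of its C-alpha coordinates, and the inter-helical crossing angle is computed from the two axis vectors.

```python
import numpy as np

def helix_axis(ca_coords):
    """Approximate the helix axis as the first principal component of the
    (n_residues, 3) array of C-alpha coordinates."""
    centered = ca_coords - ca_coords.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[0]                              # unit vector along the helix

def crossing_angle(axis1, axis2):
    """Inter-helical angle in degrees: near 0 or 180 for (anti)parallel pairs,
    near 90 for perpendicular pairs (sign depends on chosen axis directions)."""
    cosang = np.clip(np.dot(axis1, axis2), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```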

Relevance:

30.00%

Abstract:

Magnetorheological (MR) dampers are intrinsically nonlinear devices, which makes the modeling and design of a suitable control algorithm an interesting and challenging task. To evaluate the potential of MR dampers in control applications and to take full advantage of their unique features, a mathematical model that accurately reproduces their dynamic behavior has to be developed, and a proper control strategy then has to be adopted that is implementable and can fully utilize their capabilities as a semi-active control device. The present paper focuses on both aspects. First, the paper reports the testing of an MR damper with a universal testing machine over a set of frequencies, amplitudes, and currents. A modified Bouc-Wen model that accounts for the amplitude and input-current dependence of the damper parameters is proposed, and it is shown that the damper response can be satisfactorily predicted with this model. Second, a backstepping-based nonlinear current-monitoring scheme for MR dampers is developed for semi-active control of structures under earthquakes. It provides stable nonlinear monitoring of the MR damper current directly based on system feedback, such that the current changes gradually. Unlike other MR damper control techniques in the literature, the main advantage of the proposed technique lies in its prediction of the current input directly from system feedback and its smooth update of the input current. Furthermore, the dynamics of the supplied and commanded current to the damper have been considered in developing the proposed semi-active algorithm. The efficiency of the proposed technique is demonstrated on a base-isolated three-story building under a set of seismic excitations, and a comparison with the widely used clipped-optimal strategy is also presented.
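
A minimal sketch of a Bouc-Wen-type force model; the paper's modified version additionally makes the parameters functions of amplitude and input current, and the parameter values below are illustrative, not the identified ones.

```python
import numpy as np

def bouc_wen_force(x, dt, c0=50.0, k0=25.0, alpha=900.0,
                   beta=3.0, gamma=3.0, A=120.0, n=2):
    """Integrate the evolutionary variable z (explicit Euler) and return the
    damper force for a displacement history x sampled at interval dt."""
    v = np.gradient(x, dt)                     # velocity from displacement
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        dz = (A * v[i-1]
              - beta * abs(v[i-1]) * abs(z[i-1])**(n - 1) * z[i-1]
              - gamma * v[i-1] * abs(z[i-1])**n)
        z[i] = z[i-1] + dz * dt
    return c0 * v + k0 * x + alpha * z         # viscous + stiffness + hysteretic terms

# Example: sinusoidal displacement cycle.
t = np.linspace(0, 2, 400)
force = bouc_wen_force(0.01 * np.sin(2 * np.pi * t), dt=t[1] - t[0])
```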

Relevance:

30.00%

Abstract:

Considering a general linear model of signal degradation, and modeling the probability density function (PDF) of the clean signal using a Gaussian mixture model (GMM) and the additive noise by a Gaussian PDF, we derive the minimum mean square error (MMSE) estimator. The derived MMSE estimator is non-linear, and the linear MMSE estimator is shown to be a special case. For a speech signal corrupted by independent additive noise, by modeling the joint PDF of the time-domain speech samples of a speech frame using a GMM, we propose a speech enhancement method based on the derived MMSE estimator. We also show that the same estimator can be used for transform-domain speech enhancement.
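
A minimal sketch of the resulting estimator in the scalar case, assuming y = x + n with a GMM prior on x and zero-mean Gaussian noise (the paper models the joint PDF of a whole frame; this 1-D version only shows the form of the estimator): the estimate is a posterior-weighted sum of per-component Wiener estimates.

```python
import numpy as np
from scipy.stats import norm

def gmm_mmse(y, weights, means, variances, noise_var):
    """E[x | y] for y = x + n, x ~ GMM(weights, means, variances), n ~ N(0, noise_var)."""
    evid_var = variances + noise_var                      # variance of y under component k
    resp = weights * norm.pdf(y, loc=means, scale=np.sqrt(evid_var))
    resp = resp / resp.sum()                              # posterior component weights
    wiener = means + variances / evid_var * (y - means)   # per-component posterior mean
    return float(np.sum(resp * wiener))

# Example: two-component prior, one noisy observation.
x_hat = gmm_mmse(y=0.7, weights=np.array([0.6, 0.4]),
                 means=np.array([0.0, 2.0]),
                 variances=np.array([0.5, 0.5]), noise_var=0.3)
```

With a single mixture component the posterior weight is 1 and the expression reduces to the linear (Wiener) MMSE estimator, consistent with the special case noted above.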

Relevance:

30.00%

Abstract:

In this article, a new flame extinction model based on the k-epsilon turbulence time scale concept is proposed to predict flame liftoff heights over a wide range of coflow temperatures and O2 mass fractions. The flame is assumed to be quenched when the fluid time scale is less than the chemical time scale (Da < 1). The chemical time scale is derived as a function of temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type. The present extinction model has been tested for a variety of conditions: (a) ambient coflow conditions (1 atm and 300 K) for propane, methane and hydrogen jet flames, (b) highly preheated coflow, and (c) high-temperature, low-oxidizer-concentration coflow. Predicted flame liftoff heights of jet diffusion and partially premixed flames are in excellent agreement with the experimental data for all the simulated conditions and fuels. It is observed that flame stabilization occurs at a point near the stoichiometric mixture fraction surface, where the local flow velocity equals the local flame propagation speed. The present method is used to determine the chemical time scale for the conditions existing in the mild/flameless combustion burners investigated by the authors earlier. The model successfully predicts the initial premixing of the fuel with combustion products before the combustion reaction initiates. It is inferred from these numerical simulations that fuel injection is followed by intense premixing with hot combustion products in the primary zone, with the combustion reaction following further downstream. Reaction rate contours suggest that the reaction takes place over a large volume and that its magnitude is lower than in the conventional combustion mode. The appearance of attached flames in the mild combustion burners at low thermal inputs is also predicted, attributable to the lower average jet velocity and larger residence times in the near-injection zone.
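
The quenching criterion reduces to a pointwise Damkohler-number test; a minimal sketch, assuming the k-epsilon time scale k/epsilon for the flow and an externally supplied chemical time scale:

```python
def is_quenched(k, epsilon, tau_chem):
    """Local extinction test. k and epsilon come from the turbulence model;
    tau_chem comes from the correlation in temperature, oxidizer mass fraction,
    dilution, jet velocity and fuel type (not reproduced here)."""
    tau_flow = k / epsilon        # k-epsilon turbulence time scale
    da = tau_flow / tau_chem      # Damkohler number
    return da < 1.0               # Da < 1: fluid time scale shorter than chemical
```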

Relevance:

30.00%

Abstract:

A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for 65 nm gate-length technology. The proposed methodology is demonstrated on a multiplier circuit built from the NAND gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical techniques of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), are employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs, with an error of less than 1% and a reduction in uncertainty of at least 6X on a normalized basis, compared with worst-case design.
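
A minimal sketch of the response-surface step, assuming a full quadratic model in two process parameters fitted by least squares; the paper's hybrid model, DOE plan and parameter set are not reproduced here, and all values are illustrative.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """X: (n, 2) process-parameter settings; y: (n,) simulated gate power."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2,
                         x1 * x2, x1 ** 2, x2 ** 2])   # full quadratic model
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict_power(coeffs, x1, x2):
    return coeffs @ np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Hypothetical 3x3 factorial DOE over two normalized parameter deviations.
levels = np.array([-1.0, 0.0, 1.0])
X = np.array([[a, b] for a in levels for b in levels])
y = 1.0 + 0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * X[:, 0] * X[:, 1]  # stand-in data
coeffs = fit_quadratic_rsm(X, y)
```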

Relevance:

30.00%

Abstract:

This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time and hybrid systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner, based on the notion of conflict-tolerant features that are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features, which guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.
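
A minimal sketch of the priority-based composition, with a hypothetical feature interface (state -> advice or None): every feature is consulted at every step, so an overridden feature keeps tracking the actual behavior, which is the essence of conflict tolerance.

```python
def compose(features):
    """features: callables state -> advice (or None), highest priority first.
    Returns a composed controller that follows the highest-priority feature
    whose advice is defined."""
    def controller(state):
        advices = [f(state) for f in features]   # all features stay live
        for advice in advices:
            if advice is not None:
                return advice                    # highest-priority defined advice
        return None                              # fall through to the base system
    return controller

# Example with two trivial features: a safety limiter above a default policy.
limiter = lambda s: "brake" if s.get("speed", 0) > 100 else None
default = lambda s: "cruise"
controller = compose([limiter, default])
```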

Relevance:

30.00%

Abstract:

This paper presents an algorithm for solid model reconstruction from 2D sectional views based on a volume-based approach. None of the existing work on automatic reconstruction from 2D orthographic views has addressed sectional views in detail, and the volume-based approach is believed to be better suited to handling the different types of sectional views. The volume-based approach constructs the 3D solid as a boolean combination of elementary solids, which are formed by sweep operations on loops identified in the input views. The only adjustment to be made for the presence of sectional views is in the identification of the loops that form the elementary solids; the algorithm uses the conventions of engineering drawing for sectional views to identify the loops correctly. The algorithm is simple and intuitive in nature. Results have been obtained for full sections, offset sections and half sections. Future work will address other types of sectional views, such as removed, revolved and broken-out sections.
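
A structural sketch of the pipeline only, with the loop identification and the solid-modeling kernel stubbed out as supplied callables; how sectioned loops map to added versus removed volumes is a simplification, not the paper's rules.

```python
from dataclasses import dataclass

@dataclass
class Loop:
    vertices: list        # 2D loop identified in an input view
    removed: bool         # flagged via sectional-view drawing conventions

def reconstruct(views, extrude, union, subtract):
    """views: lists of Loops per (sectional) view; extrude/union/subtract are
    primitives supplied by the host solid modeler (stubs in this sketch)."""
    added, cuts = [], []
    for view in views:
        for loop in view:
            solid = extrude(loop)                    # elementary swept solid
            (cuts if loop.removed else added).append(solid)
    model = union(added)                             # boolean combination
    for cut in cuts:
        model = subtract(model, cut)                 # volume removed by the cut
    return model
```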

Relevance:

30.00%

Abstract:

Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values have been corrected (Nc) for different parameters such as overburden stress, size of borehole, type of sampler and length of connecting rod. In the three-dimensional site characterization model, the function Nc = Nc(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an Nc value, is approximated, so that the Nc value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish relationships between a set of input vectors and a desired output. The paper also presents a comparative study of the developed LSSVM and RVM models for site characterization.
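
A minimal sketch of an LSSVM-style spatial model, using scikit-learn's KernelRidge (closely related to the LSSVM regression formulation) on synthetic borehole data standing in for the Bangalore dataset; coordinates, values and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 1.0, (500, 3))               # normalized (X, Y, Z) of boreholes
nc = 20 + 10 * coords[:, 2] + rng.normal(0, 2, 500)    # synthetic corrected N values

# RBF kernel ridge regression as a stand-in for the LSSVM formulation.
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=10.0)
model.fit(coords, nc)
nc_at_point = model.predict([[0.5, 0.5, 0.3]])         # Nc at any half-space point
```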