31 results for Agent-Based Modeling
Abstract:
In a networked society, governing advocacy groups and networks through decentralized systems of policy implementation has been a central interest of the governance network literature. This paper addresses the governance of networks in the context of Indian agrarian societies, taking as a case example a welfare scheme for the Indian rural poor. We explore context-specific regulatory dynamics through a situated agent-based architectural framework. The effects of various regulatory strategies that the governing node can adopt are tested under different action arenas through an experimental design. Results show the impact of regulatory strategies on resource dependencies and asymmetries in the network relationships. This indicates that the optimal feasible regulatory strategy in a networked society is institutionally rational and context dependent. Further, we show that a situated multi-agent system (MAS) architecture is a natural fit for an institutional understanding of these dynamics (Ostrom et al. in Rules, Games, and Common-Pool Resources, 1994).
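To make the setup concrete, here is a minimal sketch of the kind of situated agent-based experiment the abstract describes: a governing node applies a regulatory strategy (reduced here to a single hypothetical sanction-rate parameter) to beneficiary agents whose compliance depends on their resource dependency on the scheme. The agent class, decision rule, and numbers are illustrative assumptions, not the paper's architecture.

```python
import random

class Beneficiary:
    """Illustrative agent: compliance rises with resource dependency and sanctions."""
    def __init__(self, dependency):
        self.dependency = dependency      # share of livelihood tied to the scheme (0..1)
        self.compliant = False

    def act(self, sanction_rate):
        # hypothetical decision rule: comply when expected loss outweighs gain from defection
        p_comply = min(1.0, self.dependency + sanction_rate)
        self.compliant = random.random() < p_comply

def run(sanction_rate, n_agents=100, steps=50, seed=1):
    """Run one regulatory strategy and return the final compliance fraction."""
    random.seed(seed)
    agents = [Beneficiary(random.random()) for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            a.act(sanction_rate)
    return sum(a.compliant for a in agents) / n_agents

if __name__ == "__main__":
    for rate in (0.0, 0.2, 0.5):      # compare regulatory strategies of the governing node
        print(f"sanction rate {rate}: compliance {run(rate):.2f}")
```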
Abstract:
The effects of stress and interface defects on the photoluminescence of a silicon nanocrystal (Si-nc) embedded in amorphous silicon dioxide (a-SiO2) are studied in this paper using a self-consistent quantum-continuum modeling framework. Si-ncs, or quantum dots, show photoluminescence at room temperature. Whether this originates from Si-nc/a-SiO2 interface defects or from quantum confinement of carriers in the Si-nc is still an open question. Earlier reports have shown that stresses greater than 12 GPa change the indirect band gap of bulk Si to a direct band gap. Such stresses are frequently observed in nanostructures and significantly influence the carrier confinement energy. Hence, it is important to determine the effect of stress, in addition to the structure of interface defects, on the photoluminescence of Si-ncs. In the present work, a Si-nc embedded in a-SiO2 is first constructed using a molecular dynamics simulation framework under the actual conditions in which such structures are grown, so that the interface and the residual stress evolve naturally during formation. The structure thus created has an interface about 1 nm thick consisting of 41.95% defective states, mostly Si^(n+) (n = 0 to 3) coordination states. Further, both the Si-nc core and the embedding matrix are observed to be under compressive strain. This residual strain field is applied in an effective-mass k.p Hamiltonian formulation to determine the energy states of the carriers. The photoluminescence computed from the carrier confinement energy and the interface energy states associated with defects is analysed in detail in the paper.
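As a rough illustration of how stress and confinement jointly shift the emission energy, the sketch below uses a particle-in-a-sphere effective-mass estimate plus a linear hydrostatic-strain shift via an assumed deformation potential. This is far simpler than the paper's self-consistent k.p treatment; the effective masses, deformation potential, and strain value are placeholder, textbook-style assumptions.

```python
import math

HBAR2_OVER_2M0 = 0.0381   # hbar^2 / (2 m0) in eV * nm^2
E_GAP_BULK_SI = 1.12      # eV, bulk silicon band gap at room temperature
M_E, M_H = 0.26, 0.39     # assumed electron/hole effective masses (units of m0)
A_DEF = -10.0             # assumed hydrostatic deformation potential, eV

def confined_gap(radius_nm, hydro_strain=0.0):
    """Particle-in-a-sphere confinement plus a linear strain shift (illustrative only)."""
    conf = HBAR2_OVER_2M0 * math.pi ** 2 / radius_nm ** 2 * (1 / M_E + 1 / M_H)
    return E_GAP_BULK_SI + conf + A_DEF * hydro_strain

if __name__ == "__main__":
    # a 4 nm diameter crystal under about 1% compressive hydrostatic strain
    print(f"estimated emission energy: {confined_gap(2.0, hydro_strain=-0.01):.2f} eV")
```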
Abstract:
Human Leukocyte Antigen (HLA) plays an important role in presenting foreign pathogens to our immune system, thereby eliciting early immune responses. HLA genes are highly polymorphic, giving rise to diverse antigen presentation capabilities. An important factor contributing to the enormous variation in individual responses to diseases is differences in their HLA profiles. The heterogeneity in allele-specific disease responses determines the overall epidemiological outcome of a disease. Here we propose an agent-based computational framework, capable of incorporating allele-specific information, to analyze disease epidemiology. The framework assumes an SIR model to estimate average disease transmission and recovery rates. Using an epitope prediction tool, it performs sequence-based epitope detection for a given pathogenic genome and derives an allele-specific disease susceptibility index from the epitope detection efficiency. The resulting allele-specific disease transmission rate is then fed to the agent-based epidemiology model to analyze the disease outcome. The methodology presented here has potential use in understanding how a disease spreads and in identifying effective measures to control it.
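A minimal sketch of the kind of allele-aware agent-based SIR simulation the abstract outlines: each agent carries an HLA allele whose susceptibility index scales its per-contact infection probability. The allele names, susceptibility values, and rate constants are hypothetical placeholders, not outputs of an epitope prediction tool.

```python
import random

ALLELE_SUSCEPTIBILITY = {"A*01": 0.8, "A*02": 0.4, "B*07": 1.0}   # hypothetical indices
BASE_BETA, GAMMA = 0.3, 0.1    # assumed per-contact transmission and recovery rates

def simulate(n=500, contacts=5, steps=100, seed=0):
    """Agent-based SIR with allele-specific susceptibility; returns infected counts per step."""
    random.seed(seed)
    alleles = [random.choice(list(ALLELE_SUSCEPTIBILITY)) for _ in range(n)]
    state = ["S"] * n
    state[0] = "I"                                   # seed one infected agent
    history = []
    for _ in range(steps):
        new = state[:]
        for i, s in enumerate(state):
            if s == "S":
                # infection risk scaled by the agent's allele-specific susceptibility
                for _ in range(contacts):
                    j = random.randrange(n)
                    if state[j] == "I" and random.random() < BASE_BETA * ALLELE_SUSCEPTIBILITY[alleles[i]]:
                        new[i] = "I"
                        break
            elif s == "I" and random.random() < GAMMA:
                new[i] = "R"
        state = new
        history.append(state.count("I"))
    return history

if __name__ == "__main__":
    print("peak infected:", max(simulate()))
```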
Abstract:
An open question within the Bienenstock-Cooper-Munro theory for synaptic modification concerns the specific mechanism that is responsible for regulating the sliding modification threshold (SMT). In this conductance-based modeling study on hippocampal pyramidal neurons, we quantitatively assessed the impact of seven ion channels (R- and T-type calcium, fast sodium, delayed rectifier, A-type, and small-conductance calcium-activated (SK) potassium and HCN) and two receptors (AMPAR and NMDAR) on a calcium-dependent Bienenstock-Cooper-Munro-like plasticity rule. Our analysis with R- and T-type calcium channels revealed that differences in their activation-inactivation profiles resulted in differential impacts on how they altered the SMT. Further, we found that the impact of SK channels on the SMT critically depended on the voltage dependence and kinetics of the calcium sources with which they interacted. Next, we considered interactions among all the seven channels and the two receptors through global sensitivity analysis on 11 model parameters. We constructed 20,000 models through uniform randomization of these parameters and found 360 valid models based on experimental constraints on their plasticity profiles. Analyzing these 360 models, we found that similar plasticity profiles could emerge with several nonunique parametric combinations and that parameters exhibited weak pairwise correlations. Finally, we used seven sets of virtual knock-outs on these 360 models and found that the impact of different channels on the SMT was variable and differential. These results suggest that there are several nonunique routes to regulate the SMT, and call for a systematic analysis of the variability and state dependence of the mechanisms underlying metaplasticity during behavior and pathology.
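For readers unfamiliar with the sliding modification threshold, the sketch below simulates the classical rate-based BCM rule, in which the threshold tracks a running average of the squared postsynaptic activity. This is the textbook abstraction of the SMT, not the conductance-based, calcium-dependent rule analyzed in the study; all parameters are illustrative.

```python
import numpy as np

def bcm(pre_rates, eta=1e-3, tau_theta=50.0, dt=1.0):
    """Classical BCM: dw/dt = eta*x*y*(y - theta), d(theta)/dt = (y**2 - theta)/tau."""
    w, theta = 0.5, 1.0
    ws, thetas = [], []
    for x in pre_rates:
        y = w * x                                    # linear postsynaptic response
        w += dt * eta * x * y * (y - theta)          # potentiate above, depress below theta
        theta += dt * (y ** 2 - theta) / tau_theta   # sliding modification threshold
        ws.append(w)
        thetas.append(theta)
    return np.array(ws), np.array(thetas)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rates = rng.uniform(0.0, 2.0, size=5000)         # random presynaptic activity
    w, theta = bcm(rates)
    print(f"final weight {w[-1]:.3f}, final threshold {theta[-1]:.3f}")
```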
Abstract:
Visual tracking has been a challenging problem in computer vision for decades. Its applications are far-reaching, ranging from surveillance and monitoring to smart rooms. The mean-shift (MS) tracker, which has gained considerable attention recently, is known for tracking objects in cluttered environments and for its low computational complexity. The major problem encountered in histogram-based MS is its inability to track rapidly moving objects. In order to track fast-moving objects, we propose a new robust mean-shift tracker that uses both a spatial similarity measure and a color histogram-based similarity measure. The inability of the MS tracker to handle large displacements is circumvented by the spatial similarity-based tracking module, which on its own lacks robustness to changes in the object's appearance. The proposed tracker outperforms either individual tracker, following fast-moving objects with better accuracy.
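A minimal sketch of the color-histogram half of such a tracker: one mean-shift iteration that weights pixels by sqrt(q/p), the ratio of target to candidate histogram bins, and moves the window toward the weighted centroid. The spatial-similarity module described in the abstract is not reproduced here, and the synthetic frame, window size, and bin count are illustrative.

```python
import numpy as np

def norm_hist(patch, bins=16):
    """Normalized grayscale histogram of a patch."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def mean_shift_step(frame, center, half, target_hist, bins=16):
    """One mean-shift iteration: shift the window toward pixels whose bins are
    under-represented relative to the target histogram (weights sqrt(q/p))."""
    y, x = center
    patch = frame[y - half:y + half, x - half:x + half]
    p = norm_hist(patch, bins)
    idx = np.clip((patch // (256 // bins)).astype(int), 0, bins - 1)
    w = np.sqrt(target_hist[idx] / np.maximum(p[idx], 1e-12))
    ys, xs = np.mgrid[-half:half, -half:half]
    w_sum = w.sum() + 1e-12
    return (y + int(round((w * ys).sum() / w_sum)),
            x + int(round((w * xs).sum() / w_sum)))

if __name__ == "__main__":
    frame = np.zeros((120, 120), dtype=np.uint8)
    frame[60:80, 70:90] = 200                        # bright synthetic target object
    target_hist = norm_hist(frame[60:80, 70:90])     # model histogram built on the object
    center = (64, 68)                                # window starts slightly off the object
    for _ in range(20):
        center = mean_shift_step(frame, center, half=10, target_hist=target_hist)
    print("converged center:", center)               # should move toward (70, 80)
```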
Abstract:
In this paper we develop a Linear Programming (LP) based decentralized algorithm for a group of autonomous agents to achieve positional consensus. Each agent can exchange information about its position and orientation with other agents within its sensing region. The method is computationally feasible and easy to implement. Analytical results are presented, and the effectiveness of the approach is illustrated with simulation results.
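The paper's LP formulation is not reproduced here; as a generic illustration of decentralized positional consensus under a limited sensing region, the sketch below simply moves each agent to the centroid of the neighbors it can sense. Convergence of this simple rule assumes the sensing graph stays connected; positions and the radius are arbitrary.

```python
import numpy as np

def consensus_step(positions, sensing_radius):
    """Each agent moves to the centroid of the neighbors (including itself) it can sense."""
    new = positions.copy()
    for i, p in enumerate(positions):
        dist = np.linalg.norm(positions - p, axis=1)
        neighbors = positions[dist <= sensing_radius]
        new[i] = neighbors.mean(axis=0)
    return new

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pos = rng.uniform(0, 10, size=(8, 2))            # eight agents in the plane
    for _ in range(30):
        pos = consensus_step(pos, sensing_radius=6.0)
    print("largest coordinate spread after 30 steps:", float(pos.std(axis=0).max()))
```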
Abstract:
Modeling the performance behavior of parallel applications to predict their execution times for larger problem sizes and processor counts has been an active area of research for several years. Existing curve-fitting strategies for performance modeling utilize data from experiments conducted under uniform loading conditions, so the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational polynomial based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
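A small sketch of the idea of fitting a multi-dimensional rational polynomial for execution time as a function of problem size and background load, using scipy.optimize.curve_fit. The functional form, synthetic data, and coefficients are assumptions for illustration, not the model or the measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rational_model(X, a0, a1, a2, b1, b2):
    """Assumed form: T(n, load) = (a0 + a1*n + a2*load) / (1 + b1*n + b2*load)."""
    n, load = X
    return (a0 + a1 * n + a2 * load) / (1.0 + b1 * n + b2 * load)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = rng.uniform(100, 2000, 80)                          # problem sizes
    load = rng.uniform(0.0, 4.0, 80)                        # background load on the machines
    t_true = rational_model((n, load), 2.0, 0.05, 1.0, 0.0002, 0.2)
    t_meas = t_true * (1 + 0.05 * rng.standard_normal(80))  # noisy "measured" run times
    popt, _ = curve_fit(rational_model, (n, load), t_meas,
                        p0=[1.0, 0.01, 0.5, 0.001, 0.1], bounds=(0.0, np.inf))
    pred = rational_model((n, load), *popt)
    print("mean % error:", round(float(np.mean(np.abs(pred - t_meas) / t_meas)) * 100, 2))
```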
Abstract:
The problem of on-line recognition and retrieval of relatively weak industrial signals, such as partial discharges (PD) buried in excessive noise, is addressed in this paper. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), because the broadband frequency spectra of PI and PD pulses overlap; as a result, on-line, on-site PD measurement is hardly possible with conventional frequency-based DSP techniques. The observed PD signal is modeled as a linear combination of systematic and random components using probabilistic principal component analysis (PPCA), and the pdf of the underlying stochastic process is obtained. The PD/PI pulses are treated as the mean of the process and modeled with non-parametric methods based on smooth FIR filters, with a maximum a posteriori (MAP) procedure employed to estimate the filter coefficients. The pulses are then classified using a simple PCA classifier. The proposed methods were found to be effective in automatically retrieving PD pulses while completely rejecting PI.
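A sketch of the PPCA decomposition underlying this approach: the closed-form maximum-likelihood PPCA of Tipping and Bishop separates observations into a low-rank systematic part plus isotropic noise. The synthetic pulse data below are placeholders; the authors' FIR/MAP pulse modeling and PCA classifier are not reproduced.

```python
import numpy as np

def ppca_fit(X, q):
    """Maximum-likelihood PPCA (Tipping & Bishop): X ~ N(mu, W W^T + sigma2 I)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / X.shape[0]
    evals, evecs = np.linalg.eigh(cov)               # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]       # reorder to descending
    sigma2 = evals[q:].mean()                        # discarded variance -> isotropic noise
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return mu, W, sigma2

def systematic_part(X, mu, W, sigma2):
    """Posterior-mean reconstruction of the low-rank (systematic) component."""
    M = W.T @ W + sigma2 * np.eye(W.shape[1])
    Z = (X - mu) @ W @ np.linalg.inv(M)              # latent posterior means
    return mu + Z @ W.T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 64)
    pulse = np.exp(-((t - 0.5) / 0.05) ** 2)                          # smooth pulse template
    X = rng.uniform(0.5, 1.5, (200, 1)) * pulse + 0.1 * rng.standard_normal((200, 64))
    mu, W, s2 = ppca_fit(X, q=1)
    recon = systematic_part(X, mu, W, s2)
    print("noise variance estimate:", round(float(s2), 4))
```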
Abstract:
We address the problem of recognition and retrieval of relatively weak industrial signals, such as partial discharges (PD), buried in excessive noise. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), which has time-frequency characteristics similar to those of PD pulses, so conventional frequency-based DSP techniques are not useful in retrieving PD pulses. We employ statistical signal modeling based on a combination of a long-memory process model and probabilistic principal component analysis (PPCA). A parametric analysis of the signal is carried out to extract the features of the desired pulses. We incorporate a wavelet-based bootstrap method for obtaining noise training vectors from the observed data. The procedure adopted in this work differs from the approaches reported in the literature, which are generally based on the desired signal frequency and the noise frequency.
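A sketch of one plausible wavelet-based bootstrap for generating noise training vectors: decompose an observed noise record, resample the detail coefficients within each level, and reconstruct. It assumes the PyWavelets package (pywt) and illustrates the general idea only, not the authors' exact procedure.

```python
import numpy as np
import pywt

def wavelet_bootstrap(noise, wavelet="db4", level=4, n_surrogates=10, seed=0):
    """Resample wavelet detail coefficients level by level to synthesize noise vectors."""
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec(noise, wavelet, level=level)
    surrogates = []
    for _ in range(n_surrogates):
        new_coeffs = [coeffs[0]]                              # keep the approximation band
        for d in coeffs[1:]:
            new_coeffs.append(rng.choice(d, size=d.size, replace=True))
        surrogates.append(pywt.waverec(new_coeffs, wavelet)[: noise.size])
    return np.array(surrogates)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    observed = rng.standard_normal(1024)            # stand-in for an observed noise record
    bank = wavelet_bootstrap(observed)
    print("noise training vectors:", bank.shape)    # (10, 1024)
```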
Abstract:
This study presents the development of a computational fluid dynamics (CFD) model to predict unsteady, two-dimensional temperature, moisture, and velocity distributions inside a novel, biomass-fired, natural convection-type agricultural dryer. Results show that in the initial stages of drying, when the material surface is wet and moisture is easily available, the moisture removal rate from the surface depends on the condition of the drying air. Subsequently, the material surface becomes dry and the moisture removal rate is driven by diffusion of moisture from the interior to the surface. An optimum nine-tray configuration is found to be more efficient than alternative configurations for the same mass of material and dryer volume. A new dryer configuration, aimed mainly at increasing drying uniformity across all trays, is also analyzed. In this configuration, a portion of the hot air is diverted before it passes over the first tray and is supplied directly at an intermediate location in the dryer. Drying uniformity across trays increased for the kind of material simulated.
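The two drying regimes described above can be illustrated with a lumped two-period drying model: a constant, air-limited rate while the surface is wet, and a falling, diffusion-limited rate once the surface dries out. The rate coefficients and critical moisture content below are placeholders with no relation to the CFD model.

```python
def dry(moisture0=0.6, critical=0.35, k_air=0.004, k_diff=0.0015, dt=1.0, t_end=600.0):
    """Return the moisture-content history for a simple two-period drying model."""
    m, history = moisture0, []
    for _ in range(int(t_end / dt)):
        if m > critical:
            rate = k_air                      # surface wet: rate set by drying-air conditions
        else:
            rate = k_diff * m / critical      # surface dry: internal diffusion controls rate
        m = max(0.0, m - rate * dt)
        history.append(m)
    return history

if __name__ == "__main__":
    h = dry()
    print("moisture content after 600 s:", round(h[-1], 3))
```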
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) is a very effective tool for trading off energy against performance. In this paper, we use a formal Petri-net-based program performance model that directly captures both application and system properties to find energy-efficient DVFS settings for CMP systems that satisfy a given performance constraint for SPMD multithreaded programs. Experimental evaluation shows that we achieve significant energy savings while meeting the performance constraints.
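As a simplified illustration of choosing an energy-efficient DVFS setting under a performance constraint, the sketch below enumerates operating points with a basic analytical time/energy model and picks the lowest-energy one that meets a deadline. This is not the paper's Petri-net model; the frequencies, voltages, and workload figures are invented for the example.

```python
# Pick the DVFS operating point with minimum energy subject to a deadline.
SETTINGS = [                 # (frequency GHz, voltage V); illustrative CMP operating points
    (1.0, 0.80), (1.4, 0.90), (1.8, 1.00), (2.2, 1.10), (2.6, 1.20),
]
CPU_CYCLES = 3.0e9           # compute-bound work per thread (cycles), placeholder
MEM_TIME = 0.4               # frequency-independent memory stall time (s), placeholder
CAP = 1.0e-9                 # effective switched capacitance (F), placeholder

def time_energy(freq_ghz, volt):
    """Simple execution-time model and dynamic energy of the CPU-bound phase."""
    t = CPU_CYCLES / (freq_ghz * 1e9) + MEM_TIME
    e = CAP * volt ** 2 * freq_ghz * 1e9 * (t - MEM_TIME)
    return t, e

def best_setting(deadline_s):
    """Return (energy, time, freq, volt) of the cheapest setting meeting the deadline."""
    feasible = [(e, t, f, v) for f, v in SETTINGS
                for t, e in [time_energy(f, v)] if t <= deadline_s]
    return min(feasible) if feasible else None

if __name__ == "__main__":
    print(best_setting(deadline_s=2.5))
```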
Abstract:
In contemporary wideband orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE) and WiMAX, different subcarriers over which a codeword is transmitted may experience different signal-to-noise ratios (SNRs). Thus, adaptive modulation and coding (AMC) in these systems is driven by a vector of subcarrier SNRs experienced by the codeword, and is more involved. Exponential effective SNR mapping (EESM) simplifies the problem by mapping this vector into a single equivalent flat-fading SNR. Analysis of AMC using EESM is challenging owing to its non-linear nature and its dependence on the modulation and coding scheme. We first propose a novel statistical model for the EESM based on the Beta distribution, motivated by the central limit approximation for random variables with finite support. It is simpler than, and as accurate as, the more involved ad hoc models proposed earlier. Using it, we develop novel expressions for the throughput of a point-to-point OFDM link with multi-antenna diversity that uses EESM for AMC. We then analyze a general, multi-cell OFDM deployment with co-channel interference for various frequency-domain schedulers. Extensive results based on LTE and WiMAX are presented to verify the model and analysis, and to gain new insights.
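The EESM itself is a standard mapping, and the bounded statistic it averages is what the abstract models with a Beta distribution. The sketch below computes the effective SNR for a codeword and fits a Beta distribution by moment matching; the calibration factor beta, the Rayleigh-fading SNR model, and the subcarrier count are illustrative assumptions.

```python
import numpy as np

def eesm(snr_linear, beta):
    """Exponential effective SNR mapping over one codeword's subcarrier SNRs."""
    y = np.mean(np.exp(-snr_linear / beta))
    return -beta * np.log(y)

def beta_moment_fit(samples):
    """Moment-matched Beta(a, b) fit for a statistic supported on (0, 1)."""
    m, v = samples.mean(), samples.var()
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    beta = 1.5                                       # MCS-dependent calibration factor (assumed)
    # Rayleigh-faded subcarrier SNRs, average SNR of 10 (linear), 600 subcarriers per codeword
    snrs = rng.exponential(scale=10.0, size=(2000, 600))
    y = np.exp(-snrs / beta).mean(axis=1)            # bounded statistic modeled as Beta
    a, b = beta_moment_fit(y)
    print("effective SNR of first codeword (dB):", round(10 * np.log10(eesm(snrs[0], beta)), 2))
    print("fitted Beta parameters:", round(a, 1), round(b, 1))
```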
Abstract:
It is well known that most developing countries have intermittent water supply and that the quantity of water supplied from the source is not distributed equitably among consumers. Aged pipelines, pump failures, and improper management of water resources are some of the main reasons for this. This study presents the application of a nonlinear control technique to overcome this problem in different zones of the city of Bangalore. Water is pumped to the city over a distance of approximately 100 km and an elevation of approximately 400 m. The city has highly undulating terrain across its zones, which leads to unequal distribution of water. The Bangalore inflow water-distribution system (WDS) has been modeled, and a dynamic inversion (DI) nonlinear controller with proportional-integral-derivative (PID) features (DI-PID) is used for valve throttling to achieve the target flows to different zones of the city. This novel approach to equitable water distribution using DI-PID controllers, which can also serve as a decision support system, is discussed in this paper.
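A toy sketch of dynamic inversion with a PID correction for valve throttling, assuming a simple first-order relationship between valve opening and zonal flow. The plant model, gains, and flow target are placeholders, not the calibrated Bangalore WDS model.

```python
def di_pid_track(target_flow, q_max=100.0, tau=5.0, dt=0.1, t_end=60.0,
                 kp=0.8, ki=0.2, kd=0.05):
    """Track a target zonal flow by dynamically inverting an assumed first-order
    valve/flow plant dq/dt = (u*q_max - q)/tau, with a PID term shaping the error."""
    q, integ, prev_err = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = target_flow - q
        integ += err * dt
        deriv = (err - prev_err) / dt
        v = kp * err + ki * integ + kd * deriv        # desired dq/dt from the PID term
        u = (tau * v + q) / q_max                     # dynamic inversion of the plant
        u = min(max(u, 0.0), 1.0)                     # valve opening limits
        q += dt * (u * q_max - q) / tau               # simulate the plant
        prev_err = err
    return q

if __name__ == "__main__":
    print("flow after 60 s:", round(di_pid_track(target_flow=40.0), 2))
```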
Abstract:
The problem of estimating the time-variant reliability of actively controlled structural dynamical systems under stochastic excitations is considered. Monte Carlo simulations, reinforced with Girsanov transformation-based sampling variance reduction, are used to tackle the problem. In this approach, the external excitations are biased by an additional artificial control force. The conflicting objectives of the two control forces, one designed to reduce structural responses and the other to promote limit-state violations (and thereby reduce sampling variance), are noted. The control for variance reduction is fashioned after design-point oscillations based on a first-order reliability method. It is shown that, for structures amenable to laboratory testing, the reliability can be estimated experimentally with reduced testing times by devising a procedure based on the ideas of the Girsanov transformation. Illustrative examples include studies on a building frame with a magnetorheological damper-based isolation system subjected to nonstationary random earthquake excitations. (C) 2014 American Society of Civil Engineers.
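A simplified, static analog of the variance-reduction idea: shift the sampling density toward the design point and re-weight each sample by the density ratio, the discrete counterpart of the Radon-Nikodym derivative that appears in the Girsanov transformation. This is not the dynamic, controlled-excitation scheme of the paper; the limit state and design point are a toy example.

```python
import math
import numpy as np

def failure_probability(g, design_point, n=20000, seed=0):
    """Estimate P(g(U) < 0) for standard-normal U by sampling around the design point
    and re-weighting with the likelihood ratio of the original to the shifted density."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n, design_point.size)) + design_point    # biased samples
    log_w = -u @ design_point + 0.5 * design_point @ design_point     # log likelihood ratio
    fails = np.array([g(x) < 0.0 for x in u])
    return float(np.mean(fails * np.exp(log_w)))

if __name__ == "__main__":
    # toy limit state: failure when u1 + u2 exceeds 4.5; exact P = Phi(-4.5/sqrt(2))
    g = lambda x: 4.5 - (x[0] + x[1])
    u_star = np.array([2.25, 2.25])           # design point of this limit state
    print("IS estimate :", failure_probability(g, u_star))
    print("exact value :", 0.5 * math.erfc(4.5 / 2.0))
```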