977 results for Modeling complexity
Abstract:
A method is presented to model server unreliability in closed queuing networks. Breakdowns and repairs of servers, assumed to be time-dependent, are modeled using virtual customers and virtual servers in the system. The problem is thus converted into a closed queue with all-reliable servers and preemptive resume priority centers. Several recent preemptive priority approximations, and an approximation of the one proposed here, are used in the analysis. This method has approximately the same computational requirements as mean-value analysis for a network of identical dimensions and is therefore very efficient.
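The mean-value analysis the abstract compares against can be made concrete. Below is a minimal sketch of exact MVA for a closed product-form network with all-reliable servers; the function name and the visit ratios/service times in the demo are our own illustrative choices, not values from the paper.

```python
def mva(visits, service, N):
    """Exact Mean-Value Analysis for a closed product-form queueing network.

    visits[k]  -- visit ratio of queueing center k
    service[k] -- mean service time at center k
    N          -- customer population
    Returns (system throughput, per-center mean queue lengths) at population N.
    """
    K = len(visits)
    q = [0.0] * K  # mean queue lengths at population 0
    for n in range(1, N + 1):
        # Arrival theorem: an arriving customer sees the network as if it
        # held one customer fewer, hence the (1 + q[k]) factor.
        r = [visits[k] * service[k] * (1.0 + q[k]) for k in range(K)]
        x = n / sum(r)                        # system throughput
        q = [x * r[k] for k in range(K)]      # Little's law per center
    return x, q

x, q = mva(visits=[1.0, 2.0], service=[0.5, 0.2], N=10)
print(f"throughput = {x:.3f}, queue lengths = {[round(v, 2) for v in q]}")
```

The cost is O(K) work per population step, i.e. O(N·K) overall, which is the order of computational requirement the abstract refers to.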
Abstract:
Summary: Model-based assessment of the risks posed to aquatic organisms by the industrial handling of chemicals.
Abstract:
In this paper, Space-Time Block Codes (STBCs) with reduced Sphere Decoding Complexity (SDC) are constructed for two-user Multiple-Input Multiple-Output (MIMO) fading multiple access channels. In this set-up, both users employ identical STBCs and the destination performs sphere decoding for the symbols of the two users. First, we identify the positions of the zeros in the R matrix arising out of the Q-R decomposition of the lattice generator such that (i) the worst-case SDC (WSDC) and (ii) the average SDC (ASDC) are reduced. Then, a set of necessary and sufficient conditions on the lattice generator is provided such that the R matrix has zeros at the identified positions. Subsequently, explicit constructions of STBCs which result in reduced ASDC are presented. The rate (in complex symbols per channel use) of the proposed designs is at most 2/N_t, where N_t denotes the number of transmit antennas for each user. We also show that the class of STBCs from complex orthogonal designs (other than the Alamouti design) reduces the WSDC but not the ASDC.
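The role of zeros in the R matrix can be illustrated with the simplest orthogonal design the abstract mentions, the Alamouti code, whose equivalent lattice generator has orthogonal columns. A minimal sketch (the channel values are randomly drawn and the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)  # flat-fading channel

# Equivalent lattice generator for the Alamouti code with one receive
# antenna: after conjugating the second received sample, y = H_eq @ [s1, s2].
H_eq = np.array([[h1,           h2],
                 [np.conj(h2), -np.conj(h1)]])

Q, R = np.linalg.qr(H_eq)
# The columns of H_eq are orthogonal for any (h1, h2), so the off-diagonal
# entry of R is (numerically) zero.
print(abs(R[0, 1]))
```

Because R[0, 1] = 0, the joint two-symbol sphere search splits into two independent one-symbol searches; this is precisely how zeros at the identified positions of R cut the sphere decoding complexity.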
A Low ML-Decoding Complexity, High Coding Gain, Full-Rate, Full-Diversity STBC for 4 x 2 MIMO System
Abstract:
This paper proposes a full-rate, full-diversity space-time block code (STBC) with low maximum-likelihood (ML) decoding complexity and high coding gain for the 4 transmit antenna, 2 receive antenna (4 x 2) multiple-input multiple-output (MIMO) system that employs 4/16-QAM. For such a system, the best code known is the DjABBA code, and recently Biglieri, Hong and Viterbo have proposed another STBC (BHV code) for 4-QAM which has lower ML-decoding complexity than the DjABBA code but does not have full diversity like the DjABBA code. The code proposed in this paper has the same ML-decoding complexity as the BHV code for any square M-QAM but has full diversity for 4- and 16-QAM. Compared with the DjABBA code, the proposed code has lower ML-decoding complexity for square M-QAM constellations, higher coding gain for 4- and 16-QAM, and hence a better codeword error rate (CER) performance. Simulation results confirming this are presented.
Abstract:
Plastic-coated paper is shown to possess reflectivity characteristics quite similar to those of the surface of water. This correspondence has been used with a conversion factor to model a sea surface by means of plastic-coated paper. Such a paper model is then suitably illuminated and photographed, yielding physically simulated daylight imagery of the sea surface under controlled conditions. A simple example of sinusoidal surface simulation is described.
Abstract:
Thixocasting requires the manufacturing of billets with a non-dendritic microstructure. Aluminum alloy A356 billets were produced by rheocasting in a mould placed inside a linear electromagnetic stirrer. Subsequent heat treatment was used to produce a transition from rosette to globular microstructure. The current and the duration of stirring were explored as control parameters. Simultaneous induction heating of the billet during stirring was quantified using experimentally determined thermal profiles. The effect of processing parameters on dendrite fragmentation is discussed. Corresponding computational modeling of the process was performed using phase-field modeling of alloy solidification in order to gain insight into the morphological changes of the solid during this process. A non-isothermal alloy solidification model was used for the simulations. The morphological evolution under such imposed thermal cycles was simulated and compared with the experimentally determined one. Suitable scaling using the thermosolutal diffusion distances was used to overcome computational difficulties in quantitative comparison at the system scale. The results are interpreted in the light of existing theories of microstructure refinement and globularisation.
Abstract:
This paper reports the structural behavior and thermodynamics of the complexation of siRNA with poly(amidoamine) (PAMAM) dendrimers of generation 3 (G3) and 4 (G4) through fully atomistic molecular dynamics (MD) simulations accompanied by free energy calculations and inherent structure determination. We have also performed simulations with one siRNA and two dendrimers (2 x G3 or 2 x G4) to obtain a microscopic picture of the various binding modes. Our simulation results reveal the formation of a stable siRNA-dendrimer complex over a nanosecond time scale. With increasing dendrimer generation, the charge ratio increases and hence the binding energy between siRNA and dendrimer also increases, in accordance with available experimental measurements. Calculated radial distribution functions between the amine groups of the various subgenerations of a given dendrimer generation and the phosphates in the siRNA backbone reveal that a single G4 dendrimer binds siRNA better than a lower-generation dendrimer such as G3, with the siRNA almost wrapping the dendrimer. In contrast, two G4 dendrimers bind without the siRNA wrapping the dendrimer, because of the repulsion between the two dendrimers. The counterion distribution around the complex and the water molecules in the hydration shell of siRNA give a microscopic picture of the binding dynamics. We see a clear correlation between water and counterion motions and the complexation: the water molecules and counterions that had condensed around the siRNA move away from the siRNA backbone when the dendrimer starts binding to it. As the siRNA wraps around and binds to the dendrimer, counterions originally condensed onto the siRNA (Na+) and the dendrimer (Cl-) are released. We give a quantitative estimate of the entropy of the counterions and show that there is a gain in entropy due to counterion release during complexation. 
Furthermore, the free energy of complexation of the 1G3 and 1G4 complexes at two different salt concentrations shows that an increase in salt concentration weakens the binding affinity between siRNA and dendrimer.
Abstract:
The design of present-generation uncooled Hg1-xCdxTe infrared photon detectors relies on complex heterostructures with a basic unit cell of type n+/pi/p+. We present an analysis of a double-barrier n+/pi/p+ mid-wave infrared (x = 0.3) HgCdTe detector for near-room-temperature operation using numerical computations. The present work proposes an accurate and generalized methodology in terms of device design, material properties, and operating temperature to study the effects of the position dependence of carrier concentration, electrostatic potential, and generation-recombination (g-r) rates on detector performance. Position-dependent profiles of electrostatic potential, carrier concentration, and g-r rates were simulated numerically. The performance of the detector was studied as a function of the doping concentrations of the absorber and contact layers, the widths of both layers, and the minority carrier lifetime. A responsivity of ~0.38 A W^-1, a noise current of ~6 x 10^-14 A/Hz^(1/2), and D* ~3.1 x 10^10 cm Hz^(1/2) W^-1 at 0.1 V reverse bias have been calculated using optimized values of doping concentration, absorber width, and carrier lifetime. The suitability of the method has been illustrated by demonstrating the feasibility of achieving optimum device performance by carefully selecting the device design and other parameters. (C) 2010 American Institute of Physics. [doi:10.1063/1.3463379]
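As a sanity check, the three quoted figures of merit can be tied together with the standard relation D* = R * sqrt(A * delta_f) / i_n. The detector area below is inferred from that relation using the quoted numbers; it is our back-of-envelope deduction, not a value stated in the paper.

```python
# Consistency check of the quoted figures of merit using the standard
# detectivity relation D* = R * sqrt(A) / i_n (noise current per root Hz),
# solved for the implied optical area A.
R     = 0.38     # responsivity, A/W (quoted)
i_n   = 6e-14    # noise current, A/Hz^(1/2) (quoted)
Dstar = 3.1e10   # specific detectivity, cm Hz^(1/2)/W (quoted)

A = (Dstar * i_n / R) ** 2       # implied detector area, cm^2 (our inference)
side_um = (A ** 0.5) * 1e4       # side of an equivalent square pixel, um
print(f"A = {A:.2e} cm^2, ~{side_um:.0f} um square pixel")
```

The implied pixel size of a few tens of micrometres is in the usual range for mid-wave infrared focal-plane detectors, so the quoted figures are mutually consistent.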
Abstract:
We present a low-complexity algorithm for intrusion detection in the presence of clutter arising from wind-blown vegetation, using Passive Infra-Red (PIR) sensors in a Wireless Sensor Network (WSN). The algorithm is based on a combination of Haar Transform (HT) and Support-Vector-Machine (SVM) based training, and was field tested in a network setting comprising 15-20 sensing nodes. Also contained in this paper is a closed-form expression for the signal generated by an intruder moving at a constant velocity. It is shown how this expression can be exploited to determine the direction of motion and the velocity of the intruder from the signals of three well-positioned sensors.
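A minimal sketch of the Haar-transform front end is below. The toy signals and the detail-energy feature are our own illustrations of why the transform separates slow intruder sweeps from fast clutter; the paper's actual classifier is an SVM trained on field data.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar transform: pairwise scaled averages (approx)
    and pairwise scaled differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def detail_energy(x):
    """Fraction of signal energy in the Haar detail band -- a simple
    scalar feature an SVM could be trained on."""
    _, d = haar_level(x)
    return float(np.sum(d ** 2) / np.sum(np.asarray(x, float) ** 2))

# Toy signals: a slow ramp (intruder-like sweep past the PIR sensor) vs.
# a rapid oscillation (wind-blown-clutter-like).
t = np.linspace(0, 1, 64)
intruder = t
clutter = np.sin(2 * np.pi * 24 * t)

print(detail_energy(intruder) < detail_energy(clutter))  # True
```

The intruder signal concentrates its energy in the approximation band while clutter spills into the detail band, so even this single Haar feature already separates the two classes.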
Abstract:
In this dissertation I study language complexity from a typological perspective. Since the structuralist era, it has been assumed that local complexity differences in languages are balanced out in cross-linguistic comparisons and that complexity is not affected by the geopolitical or sociocultural aspects of the speech community. However, these assumptions have seldom been studied systematically from a typological point of view. My objective is to define complexity so that it is possible to compare it across languages and to approach its variation with the methods of quantitative typology. My main empirical research questions are: i) does language complexity vary in any systematic way in local domains, and ii) can language complexity be affected by the geographical or social environment? These questions are studied in three articles, whose findings are summarized in the introduction to the dissertation. In order to enable cross-language comparison, I measure complexity as the description length of the regularities in an entity; I separate it from difficulty, focus on local instead of global complexity, and break it up into different types. This approach helps avoid the problems that plagued earlier metrics of language complexity. My approach to grammar is functional-typological in nature, and the theoretical framework is basic linguistic theory. I delimit the empirical research functionally to the marking of core arguments (the basic participants in the sentence). I assess the distributions of complexity in this domain with multifactorial statistical methods and use different sampling strategies, implementing, for instance, the Greenbergian view of universals as diachronic laws of type preference. My data come from large and balanced samples (up to approximately 850 languages), drawn mainly from reference grammars. 
The results suggest that various significant trends occur in the marking of core arguments in regard to complexity and that complexity in this domain correlates with population size. These results provide evidence that linguistic patterns interact among themselves in terms of complexity, that language structure adapts to the social environment, and that there may be cognitive mechanisms that limit complexity locally. My approach to complexity and language universals can therefore be successfully applied to empirical data and may serve as a model for further research in these areas.
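The dissertation's complexity metric, the description length of the regularities in an entity, can be approximated crudely with off-the-shelf compression. The sketch below assumes zlib as a stand-in for a proper minimum-description-length coding, and the toy "paradigms" are invented illustrations, not data from the dissertation.

```python
import zlib

def description_length(s: str) -> int:
    """Crude proxy for description-length complexity: the size in bytes
    of the zlib-compressed string (assumption: general-purpose compression
    stands in for an MDL-style coding of the regularities)."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# Toy core-argument "grammars": a single invariant case-marking rule vs.
# an idiosyncratic marker for every noun. The regular system admits a
# much shorter description.
regular = " ".join("noun-ACC" for _ in range(60))
irregular = " ".join(f"noun{i}-SUF{i}" for i in range(60))
print(description_length(regular), description_length(irregular))
```

On this proxy the uniform paradigm compresses to a fraction of the idiosyncratic one, mirroring the intuition that regular core-argument marking is less complex to describe.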
Abstract:
In this work a physically based analytical quantum threshold voltage model for the triple-gate long-channel metal oxide semiconductor field effect transistor is developed. The proposed model is based on the analytical solution of the two-dimensional Poisson and two-dimensional Schrodinger equations. The proposed model is extended to short-channel devices by including a semi-empirical correction. The impact of effective mass variation with film thickness is also discussed using the proposed model. All models are fully validated against a professional numerical device simulator for a wide range of device geometries. (C) 2010 Elsevier Ltd. All rights reserved.
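The size of the quantum correction to the threshold voltage can be estimated with a textbook particle-in-a-box argument: confinement pushes the ground subband an energy E1 above the classical band edge, raising V_T by roughly E1/q. This is a crude 1-D approximation, not the paper's coupled 2-D Poisson-Schrodinger model, and the film thickness and effective mass below are illustrative assumptions.

```python
import math

# Ground-state energy of an infinite 1-D well of width t_film:
#   E1 = (hbar * pi)^2 / (2 * m_eff * t_film^2)
# The resulting threshold shift is approximately E1/q. Values are
# illustrative, NOT taken from the paper.
hbar = 1.054571817e-34    # reduced Planck constant, J s
q = 1.602176634e-19       # elementary charge, C
m0 = 9.1093837015e-31     # electron rest mass, kg

t_film = 5e-9             # film thickness (assumed), m
m_eff = 0.19 * m0         # silicon transverse effective mass (assumed)

E1 = (hbar * math.pi) ** 2 / (2 * m_eff * t_film ** 2)  # J
dVt = E1 / q              # approximate quantum V_T shift, V
print(f"E1 = {E1 / q * 1e3:.0f} meV -> delta V_T ~ {dVt * 1e3:.0f} mV")
```

Even this crude estimate gives a shift of several tens of millivolts for a ~5 nm film, which is why quantum corrections matter for thin-body triple-gate devices and why the effective-mass dependence on film thickness is worth modeling.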
Abstract:
Representing and quantifying uncertainty in climate change impact studies is a difficult task. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty, and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in some hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents uncertainty associated with each of the SSFI-4 classifications. 
These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change. (C) 2010 Elsevier Ltd. All rights reserved.
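Dempster's rule of combination, the core operation used above to merge evidence across GCMs and scenarios, can be sketched as follows. The two basic probability assignments are invented toy inputs over drought states, not the paper's GCM-derived values.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments (bpa's): dicts mapping frozensets of hypotheses to mass.
    Masses on non-empty intersections are accumulated, then renormalized
    by (1 - conflict), where conflict is the mass on empty intersections."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical bpa's from two sources over {D: drought, N: normal, W: wet}.
D, N, W = "D", "N", "W"
m_src1 = {frozenset({D}): 0.6, frozenset({D, N}): 0.3, frozenset({D, N, W}): 0.1}
m_src2 = {frozenset({D}): 0.5, frozenset({N, W}): 0.2, frozenset({D, N, W}): 0.3}

m = dempster_combine(m_src1, m_src2)
print(f"combined mass on drought = {m[frozenset({D})]:.3f}")
```

Note how mass assigned to sets like {D, N} expresses ignorance between hypotheses, which a single probability distribution cannot; this is the advantage over the traditional probabilistic approach that the abstract highlights.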