69 results for Imputation model approach
at Indian Institute of Science - Bangalore - India
Abstract:
We present a simplified yet analytical formulation of the carrier backscattering coefficient for zigzag semiconducting single-walled carbon nanotubes in the diffusive regime. The electron-phonon scattering rates for longitudinal acoustic, optical, and zone-boundary phonon emission, for both inter- and intrasubband transitions, have been derived using Kane's nonparabolic energy subband model. The expressions for the mean free path and the diffusive resistance have been formulated incorporating the aforementioned phonon scattering. An appropriate overlap function has been incorporated in Fermi's golden rule for a more general approach. The effect of the energy subbands on the low- and high-bias zones for the onset of longitudinal acoustic, optical, and zone-boundary phonon emission and absorption has been analytically addressed. A 90% transmission of the carriers from the source to the drain at 400 K is exhibited for a 5 μm long nanotube at 10^5 V m^-1. The analytical results are in good agreement with the available experimental data. (c) 2010 American Institute of Physics.
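For orientation only, a hedged sketch of the kind of relation involved: in the standard Landauer picture of diffusive transport in a quasi-1D conductor, the transmission of a tube of length L with an effective mean free path λ_eff is commonly written as below. This is a generic textbook form, not necessarily the paper's exact expression; the numerical step simply shows what the quoted 90% transmission for a 5 μm tube would imply under that form.

```latex
% Generic Landauer-type relations for a quasi-1D conductor (illustrative form only)
T(L) = \frac{\lambda_{\mathrm{eff}}}{\lambda_{\mathrm{eff}} + L},
\qquad
R(L) = \frac{h}{4e^{2}}\left(1 + \frac{L}{\lambda_{\mathrm{eff}}}\right),
\qquad
T = 0.9,\ L = 5\,\mu\mathrm{m} \;\Rightarrow\;
\lambda_{\mathrm{eff}} = \frac{T\,L}{1 - T} = 45\,\mu\mathrm{m}.
```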
Abstract:
We build dynamic models of community assembly by starting with one species in our model ecosystem and adding colonists. We find that the number of species present first increases, then fluctuates about some level. We ask: how large are these fluctuations and how can we characterize them statistically? As in Robert May's work, communities with weaker interspecific interactions permit a greater number of species to coexist on average. We find that as this average increases, however, the relative variation in the number of species and the return times to mean community levels decrease. In addition, the relative frequency of large extinction events to small extinction events decreases as mean community size increases. While the model reproduces several of May's results, it also provides theoretical support for Charles Elton's idea that diverse communities, such as those found in the tropics, should be less variable than depauperate communities, such as those found in arctic or agricultural settings.
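A loose illustration of the assembly procedure described (start with one species, add colonists, let the dynamics sort out coexistence, track richness and its fluctuations) is sketched below in Python. The Lotka-Volterra form, the purely competitive random interactions, and every parameter value are assumptions made for illustration; this is not the authors' model.

```python
# Minimal community-assembly sketch (illustrative assumptions, not the authors' model):
# colonists with random competitive interactions are added one at a time, the dynamics
# are integrated, extinct species are removed, and species richness is recorded.
import numpy as np

rng = np.random.default_rng(0)

def assemble(n_events=200, sigma=0.3, dt=0.01, steps=1500, ext_thresh=1e-3):
    A = np.array([[-1.0]])        # interaction matrix; diagonal = -1 (self-limitation)
    x = np.array([0.05])          # abundances, starting from a single founder
    richness = []
    for _ in range(n_events):
        n = len(x)
        # new colonist with competitive (negative) interactions of typical strength sigma
        row = -np.abs(sigma * rng.normal(size=n))
        col = -np.abs(sigma * rng.normal(size=n))
        A = np.block([[A, col[:, None]], [row[None, :], np.array([[-1.0]])]])
        x = np.append(x, 0.05)
        for _ in range(steps):                     # Euler integration of dx/dt = x (1 + A x)
            x = np.maximum(x + dt * x * (1.0 + A @ x), 0.0)
        alive = x > ext_thresh                     # drop extinct species
        A, x = A[np.ix_(alive, alive)], x[alive]
        richness.append(len(x))
    return np.array(richness)

rich = assemble()
print("mean richness:", rich[50:].mean(), "sd:", rich[50:].std())
```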
Abstract:
We address the problem of robust formant tracking in continuous speech in the presence of additive noise. We propose a new approach based on mixture modeling of the formant contours. Our approach consists of two main steps: (i) computation of a pyknogram based on a multiband amplitude-modulation/frequency-modulation (AM/FM) decomposition of the input speech; and (ii) statistical modeling of the pyknogram using mixture models. We experiment with both a Gaussian mixture model (GMM) and a Student's-t mixture model (tMM) and show that the latter is robust with respect to handling outliers in the pyknogram data, parameter selection, accuracy, and smoothness of the estimated formant contours. Experimental results on simulated data as well as noisy speech data show that the proposed tMM-based approach is also robust to additive noise. We present performance comparisons with a recently developed adaptive filterbank technique from the literature and the classical Burg spectral estimator, which show that the proposed technique is more robust to noise.
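To make the second step concrete, here is a small illustrative Python sketch of fitting a mixture model to pyknogram-like frequency data for one analysis frame and reading off formant candidates from the component means. A Gaussian mixture is used because scikit-learn provides one (it has no Student's-t mixture); the synthetic data, formant values, and outlier fraction are invented.

```python
# Illustrative only: mixture modeling of pyknogram-like frequency samples in one frame.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Fake pyknogram points: energy concentrated near three formants, plus outliers
# (the situation where a t-mixture would be more robust than a GMM).
formants_true = [500.0, 1500.0, 2500.0]
freqs = np.concatenate([rng.normal(f, 60.0, size=200) for f in formants_true]
                       + [rng.uniform(0.0, 4000.0, size=30)])
X = freqs.reshape(-1, 1)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
formant_estimates = np.sort(gmm.means_.ravel())
print("estimated formants (Hz):", np.round(formant_estimates, 1))
```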
Abstract:
Context-aware computing is useful in providing individualized services, focusing mainly on acquiring the surrounding context of the user. By comparison, very little research has been completed on integrating context from different environments, despite its usefulness in diverse applications such as healthcare, M-commerce and tourist guide applications. In particular, one of the most important requirements for providing personalized service in a highly dynamic, constantly changing user environment is to develop a context model which aggregates context from different domains to infer the context of an entity at a more abstract level. Hence, the purpose of this paper is to propose a context model based on cognitive aspects to relate contextual information that better captures the observation of certain worlds of interest for a more sophisticated context-aware service. We developed a C-IOB (Context-Information, Observation, Belief) conceptual model to analyze context data from the physical, system, application, and social domains and to infer context at a more abstract level. The beliefs developed about an entity (person, place, or thing) are primitive in most theories of decision making, so applications can use these beliefs, in addition to transaction history, for providing intelligent service. We enhance the proposed context model by further classifying context information into three categories: well-defined, qualitative and credible context information, to make the system more realistic towards real-world implementation. The proposed model is deployed to assist an M-commerce application. The simulation results show that the service selection and service delivery of the system are high compared to a traditional system.
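As a purely hypothetical sketch of the C-IOB idea (context information from several domains is aggregated into an observation, from which a higher-level belief about an entity is inferred), the following Python fragment shows one way such a pipeline could be structured. The class names, fields, and the toy inference rule are illustrative assumptions, not the paper's specification.

```python
# Hypothetical C-IOB-style pipeline: Context-Information -> Observation -> Belief.
from dataclasses import dataclass, field

@dataclass
class ContextInformation:
    domain: str                # "physical", "system", "application", or "social"
    name: str
    value: object
    credibility: float = 1.0   # weight for "credible" context information

@dataclass
class Observation:
    entity: str
    items: list = field(default_factory=list)

    def add(self, info: ContextInformation) -> None:
        self.items.append(info)

def infer_belief(obs: Observation) -> dict:
    """Toy belief formation: combine credible domain-level facts into one abstract statement."""
    facts = {i.name: i.value for i in obs.items if i.credibility >= 0.5}
    busy = facts.get("calendar_status") == "in_meeting" and facts.get("location") == "office"
    return {"entity": obs.entity, "belief": "busy_at_work" if busy else "available"}

obs = Observation("alice")
obs.add(ContextInformation("physical", "location", "office", 0.9))
obs.add(ContextInformation("application", "calendar_status", "in_meeting", 0.8))
print(infer_belief(obs))
```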
Abstract:
Commercialization efforts to diffuse sustainable energy technologies (SETs) have so far remained the biggest challenge in the field of renewable energy and energy efficiency. The limited success of diffusion through government-driven pathways urges the need for market-based approaches. This paper reviews the existing state of commercialization of SETs against the backdrop of the basic theory of technology diffusion. The different SETs in India are positioned in the technology diffusion map to reflect their slow state of commercialization. The dynamics of the SET market are analysed to identify the issues, barriers and stakeholders in the process of SET commercialization. By upgrading ‘potential adopters’ to ‘techno-entrepreneurs’, the study presents mechanisms for adopting a private-sector-driven ‘business model’ approach for the successful diffusion of SETs. This is expected to integrate the processes of market transformation and entrepreneurship development with innovative regulatory, marketing, financing, incentive and delivery mechanisms leading to SET commercialization.
Abstract:
The paper presents the importance of the Nocturnal Boundary Layer (NBL) in driving the diurnal variability of the atmospheric CO2 mixing ratio and the carbon isotope ratio at ground level at an urban station in India. Our observations are the first of their kind from this region. The atmospheric CO2 mixing ratio and the carbon isotope ratio were measured for both morning (05:30-07:30 IST) and afternoon (16:00-18:00 IST) air samples at 5 m above ground level in Bangalore city, Karnataka State (12°58' N, 77°38' E, 920 m above sea level), over a 10 day period during the winter of 2008. We observed a change of about 7% in the CO2 mixing ratio between the morning and afternoon air samples. A stable isotope analysis of CO2 from the morning samples showed a depletion in the carbon isotope ratio of about 2 parts per thousand compared to the afternoon samples. Along with the ground-based measurement of air samples, radiosonde data were also obtained from the Indian Meteorological Department to identify the vertical atmospheric structure at different times of day. We propose the presence or absence of the NBL as the controlling factor for the observed variability in the mixing ratio as well as its isotopic composition. Here we used the Keeling model approach to determine the carbon isotope ratio of the local sources. The local sources have further been characterized as anthropogenic and biological respiration (in %) using a two-component mixing model. We also used a vertical mixing model based on the concept of the mixing of isotopically depleted (in the carbon isotope) "polluted air" (PA) with isotopically enriched "free atmospheric air" (FA) above. Using this modeling approach, the contribution of FA at ground level is estimated for both the morning and afternoon air samples.
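The Keeling-plot and two-component mixing steps mentioned above are standard calculations; the Python sketch below shows their mechanics. All numbers (CO2 mixing ratios, delta-13C values, and the two end-member signatures) are invented for illustration and are not the paper's measurements.

```python
# Keeling plot and two-component mixing, with invented numbers for illustration.
import numpy as np

co2_ppm  = np.array([420.0, 450.0, 480.0, 510.0, 540.0])       # hypothetical morning samples
delta13c = np.array([-8.9, -10.0, -11.0, -11.9, -12.7])         # per mil vs VPDB (hypothetical)

# Keeling plot: delta_obs = m * (1/CO2) + delta_source; the intercept is the source signature.
slope, intercept = np.polyfit(1.0 / co2_ppm, delta13c, 1)
delta_source = intercept
print(f"Keeling intercept (source delta-13C): {delta_source:.1f} per mil")

# Two-component mixing of the local source between anthropogenic and biological respiration.
delta_anthro = -28.0   # assumed fossil-fuel end member (illustrative)
delta_bio    = -25.0   # assumed respiration end member (illustrative)
f_anthro = (delta_source - delta_bio) / (delta_anthro - delta_bio)
print(f"anthropogenic fraction: {100 * f_anthro:.0f} %")
```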
Abstract:
The solvent plays a decisive role in the photochemistry and photophysics of aromatic ketones. Xanthone (XT) is one such aromatic ketone, and its triplet-triplet (T-T) absorption spectra show intriguing solvatochromic behavior. Also, the reactivity of XT towards H-atom abstraction shows an unprecedented decrease in protic solvents relative to aprotic solvents. Therefore, a comprehensive solvatochromic analysis of the triplet-triplet absorption spectra of XT was carried out in conjunction with time-dependent density functional theory using an ad hoc explicit solvent model approach. A detailed solvatochromic analysis of the T-T absorption bands of XT suggests that the hydrogen bonding interactions are different in the corresponding triplet excited states. Furthermore, the contributions of non-specific and hydrogen bonding interactions towards differential solvation of the triplet states in protic solvents were found to be of equal magnitude. The frontier molecular orbital and electron density difference analyses of the T1 and T2 states of XT indicate that the charge redistribution in these states leads to intermolecular hydrogen bond strengthening and weakening, respectively, relative to the S0 state. This is further supported by vertical excitation energy calculations of the XT-methanol supramolecular complex. The intermolecular hydrogen bonding potential energy curves obtained for this complex in the S0, T1, and T2 states support the model. In summary, we propose that the different hydrogen bonding mechanisms exhibited by the two lowest triplet excited states of XT result in a decreasing role of the nπ* triplet state, and are thus responsible for its reduced reactivity towards H-atom abstraction in protic solvents. (C) 2016 AIP Publishing LLC.
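For readers unfamiliar with how "non-specific" and "hydrogen bonding" contributions are typically separated in a solvatochromic analysis, a common Kamlet-Taft-type correlation is shown below. This is a generic form, not necessarily the exact correlation used in the paper.

```latex
% Kamlet-Taft-type solvatochromic correlation (generic form only)
\tilde{\nu} = \tilde{\nu}_{0} + s\,\pi^{*} + a\,\alpha + b\,\beta
```

Here α and β describe the hydrogen-bond-donating and hydrogen-bond-accepting abilities of the solvent (specific interactions), π* its polarity/polarizability (non-specific interactions), and the fitted coefficients s, a, b quantify the relative weight of each contribution to the band position.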
Abstract:
The structure of real glasses has been considered to be microheterogeneous, composed of clusters and connective tissue. Particles in the clusters are assumed to be highly correlated in their positions. The tissue is considered to have a truly amorphous structure, with its particles vibrating in highly anharmonic potentials. The glass transition is recognized as corresponding to the melting of the clusters. A simple mathematical model has been developed which accounts for various known features associated with the glass transition, such as the range of the glass transition temperature, Tg, the variation of Tg with pressure, etc. Expressions for the configurational thermodynamic properties and transport properties of glass-forming systems are derived from the model. The relevance and limitations of the model are also discussed.
Abstract:
The Fuzzy Waste Load Allocation Model (FWLAM), developed in an earlier study, derives the optimal fractional levels for base flow conditions, considering the goals of the Pollution Control Agency (PCA) and the dischargers. The Modified Fuzzy Waste Load Allocation Model (MFWLAM), developed subsequently, is a stochastic model and considers the moments (mean, variance and skewness) of the water quality indicators, incorporating uncertainty due to randomness of the input variables along with uncertainty due to imprecision. The risk of low water quality is reduced significantly by using this modified model, but the inclusion of new constraints leads to a low value of the acceptability level, A, interpreted as the maximized minimum satisfaction in the system. To improve this value, a new model, which is a combination of FWLAM and MFWLAM, is presented, allowing for some violations of the constraints of MFWLAM. This combined model is a multiobjective optimization model with two objectives: maximization of the acceptability level and minimization of the violation of constraints. Fuzzy multiobjective programming, goal programming and fuzzy goal programming are used to find the solutions. For the optimization model, Probabilistic Global Search Lausanne (PGSL) is used as the nonlinear optimization tool. The methodology is applied to a case study of the Tunga-Bhadra river system in south India. The model results in a compromise solution with a higher value of the acceptability level as compared to MFWLAM, with a satisfactory value of risk. Thus the goal of risk minimization is achieved with a comparatively better value of the acceptability level.
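For context on the acceptability level: fuzzy waste-load allocation is usually posed as a max-min problem in which the smallest membership (satisfaction) value over all goals is maximized. The generic form below is illustrative and is not the exact FWLAM/MFWLAM statement; the symbols μ_{G_i} denote the fuzzy membership functions of the PCA and discharger goals.

```latex
% Generic fuzzy max-min (acceptability level) formulation -- illustrative only
\max_{x,\ \lambda}\ \lambda
\quad \text{subject to} \quad
\mu_{G_i}(x) \;\ge\; \lambda \ \ \forall i,
\qquad 0 \le \lambda \le 1,
\qquad x \in X
```

In this notation λ plays the role of the acceptability level A; the combined model described above effectively adds a second objective that penalizes violations of the MFWLAM constraints.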
Abstract:
A nonlinear suboptimal guidance scheme is developed for the reentry phase of reusable launch vehicles. A recently developed methodology, named model predictive static programming (MPSP), is implemented, which combines the philosophies of nonlinear model predictive control theory and approximate dynamic programming. This technique provides a finite-time nonlinear suboptimal guidance law which leads to a rapid solution of the guidance history update. It does not suffer from computational difficulties and can be implemented online. The system dynamics are propagated through the flight corridor to the end of the reentry phase, considering energy as the independent variable and the angle of attack as the active control variable. All the terminal constraints are satisfied. Among the path constraints, the normal load is found to be very restrictive. Hence, an extra effort has been made to keep the normal load within a specified limit and to monitor its sensitivity to perturbations.
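Schematically, MPSP relates the predicted terminal output error to corrections of the control history through sensitivity matrices and then solves a static, weighted least-squares problem for the control update. The relation below is a hedged summary of that idea, not the paper's full derivation; B_k denotes the sensitivity of the terminal output to the control at grid point k, ΔY_N the predicted terminal error, and R_k a weighting matrix.

```latex
% Schematic MPSP relation (illustrative)
\mathrm{d}Y_N \;\approx\; \sum_{k=1}^{N-1} B_k \,\mathrm{d}U_k,
\qquad
\min_{\{\mathrm{d}U_k\}}\ \tfrac{1}{2}\sum_{k=1}^{N-1} \mathrm{d}U_k^{\mathsf{T}} R_k \,\mathrm{d}U_k
\quad \text{s.t.} \quad
\sum_{k=1}^{N-1} B_k \,\mathrm{d}U_k = -\,\Delta Y_N
```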
Abstract:
Feature-track matrix factorization based methods have been attractive solutions to the structure-from-motion (SfM) problem. The group motion of the feature points is analyzed to obtain the 3D information. It is well known that the factorization formulations give rise to rank-deficient systems of equations. Even when enough constraints exist, the extracted models are sparse due to the unavailability of pixel-level tracks. Pixel-level tracking of 3D surfaces is a difficult problem, particularly when the surface has very little texture, as in a human face. Only sparsely located feature points can be tracked, and tracking errors are inevitable along rotating low-texture surfaces. However, the 3D models of an object class lie in a subspace of the set of all possible 3D models. We propose a novel solution to the structure-from-motion problem which utilizes high-resolution 3D data obtained from a range scanner to compute a basis for this desired subspace. Adding subspace constraints during factorization also facilitates the removal of tracking noise, which causes distortions outside the subspace. We demonstrate the effectiveness of our formulation by extracting the dense 3D structure of a human face and comparing it with a well-known structure-from-motion algorithm due to Brand.
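The following toy Python sketch illustrates the general idea of subspace-constrained factorization: the shape is forced to lie in a low-dimensional basis (here random and synthetic; in the paper, computed from range-scanner data), and motion and shape are refined by alternating least squares on a noisy track matrix. The affine camera model, problem sizes, and noise level are simplifications for illustration, not the paper's algorithm.

```python
# Toy subspace-constrained factorization of a (centered) feature-track matrix.
import numpy as np

rng = np.random.default_rng(2)
F, P, K = 8, 100, 5                                       # frames, points, basis size

# Orthonormal shape basis, stacked as [x_1..x_P, y_1..y_P, z_1..z_P]
B = np.linalg.qr(rng.normal(size=(3 * P, K)))[0]
S_true = (B @ rng.normal(size=K)).reshape(3, P)           # ground-truth shape inside the subspace
M_true = rng.normal(size=(2 * F, 3))                      # affine motion matrix
W = M_true @ S_true + 0.05 * rng.normal(size=(2 * F, P))  # noisy track matrix

# Alternating least squares with the shape constrained to S = (B c).reshape(3, P)
c = rng.normal(size=K)
for _ in range(20):
    S = (B @ c).reshape(3, P)
    M = W @ S.T @ np.linalg.inv(S @ S.T)                  # motion update (unconstrained LS)
    A = np.vstack([M @ B[[p, P + p, 2 * P + p], :] for p in range(P)])
    c = np.linalg.lstsq(A, W.T.reshape(-1), rcond=None)[0]  # shape update inside the subspace

S_hat = (B @ c).reshape(3, P)
print("relative track reconstruction error:",
      np.linalg.norm(W - M @ S_hat) / np.linalg.norm(W))
```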
Abstract:
The paper deals with a model-theoretic approach to clustering. The approach can be used to generate cluster descriptions based on knowledge alone. Such a process of generating descriptions would be extremely useful in clustering partially specified objects. A natural byproduct of the proposed approach is that missing values of attributes of an object can be estimated with ease in a meaningful fashion. An important feature of the approach is that noisy objects can be detected effectively, leading to the formation of natural groups. The proposed algorithm is applied to a library database consisting of a collection of books.
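A minimal Python sketch of the imputation byproduct mentioned above: once objects are grouped, a missing attribute of a partially specified object can be filled in from its cluster (here simply the modal value among cluster members). The book records and the grouping rule are invented for illustration and are not the paper's algorithm.

```python
# Toy imputation of a missing attribute from cluster membership (illustrative data).
from collections import Counter

books = [
    {"subject": "physics", "binding": "hardcover", "language": "English"},
    {"subject": "physics", "binding": "hardcover", "language": "English"},
    {"subject": "physics", "binding": "paperback", "language": "English"},
    {"subject": "history", "binding": "paperback", "language": "English"},
]

def impute(cluster, attribute):
    """Estimate a missing attribute from the most common value within the cluster."""
    values = [m[attribute] for m in cluster if attribute in m]
    return Counter(values).most_common(1)[0][0] if values else None

physics_cluster = [b for b in books if b["subject"] == "physics"]
incomplete = {"subject": "physics", "language": "English"}   # 'binding' is missing
incomplete["binding"] = impute(physics_cluster, "binding")
print(incomplete)
```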
Abstract:
We use the extended Hubbard model to investigate the properties of the charge- and spin-density-wave phases in the presence of a nearest-neighbor repulsion term in the framework of the slave-boson technique. We show that, contrary to Hartree-Fock results, an instability may occur for sufficiently high values of the Hubbard repulsion, in both the spin- and charge-density-wave phases, which makes the system jump discontinuously to a phase with a smaller or zero wave amplitude. The limits of applicability of our approach are discussed and our results are compared with previous numerical analyses. The phase diagram of the model at half filling is determined.
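For reference, the extended Hubbard model referred to here is conventionally written as below (standard notation; the paper's slave-boson decomposition is not reproduced), with V the nearest-neighbor repulsion that favors the charge-density-wave tendency and U the on-site Hubbard repulsion.

```latex
% Standard extended Hubbard Hamiltonian (conventional notation)
H = -t \sum_{\langle i,j\rangle,\sigma} \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    + V \sum_{\langle i,j\rangle} n_{i}\, n_{j}
```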
Abstract:
A modified lattice model using the finite element method has been developed to study mode-I fracture of heterogeneous materials like concrete. In this model, the truss members always join at points where aggregates are located; the aggregates are modeled as plane-stress triangular elements. The truss members are assigned the properties of the cement mortar matrix randomly, so as to represent the randomness of strength in concrete. It is widely accepted that the fracture of concrete structures should not be based on a strength criterion alone, but should be coupled with an energy criterion. Here, the energy concept is introduced by incorporating strain softening through a parameter ‘α’. The softening branch of the load-displacement curves was successfully obtained. From the sensitivity study, it was observed that the maximum load of a beam is most sensitive to the tensile strength of the mortar. It is seen that by varying the values of the properties of the mortar according to a normal random distribution, better results can be obtained for the load-displacement diagram.
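As a schematic of how a single parameter can introduce strain softening in a truss member (one common linear post-peak law, not necessarily the paper's exact formulation), the stress-strain relation might take the form below, where f_t is the tensile strength, E the elastic modulus, and α stretches the descending branch and hence the energy dissipated.

```latex
% Linear strain-softening law governed by a single parameter alpha (illustrative)
\sigma(\varepsilon) =
\begin{cases}
E\,\varepsilon, & \varepsilon \le \varepsilon_0, \\[4pt]
f_t \left( 1 - \dfrac{\varepsilon - \varepsilon_0}{\alpha\,\varepsilon_0} \right), & \varepsilon_0 < \varepsilon \le (1+\alpha)\,\varepsilon_0, \\[4pt]
0, & \varepsilon > (1+\alpha)\,\varepsilon_0,
\end{cases}
\qquad \varepsilon_0 = \frac{f_t}{E}
```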
Abstract:
Based on dynamic inversion, a relatively straightforward approach is presented in this paper for the nonlinear flight control design of high-performance aircraft, which does not require the normal and lateral acceleration commands to be first transferred to body rates before computing the required control inputs. This leads to a substantial improvement of the tracking response. Promising results are obtained from six-degree-of-freedom simulation studies of the F-16 aircraft, which are found to be superior compared to an existing approach (which is also based on dynamic inversion). The new approach has two potential benefits, namely a reduced oscillatory response (including elimination of non-minimum phase behavior) and reduced control magnitude. Next, a model-following neuro-adaptive design augments the nominal design in order to assure robust performance in the presence of parameter inaccuracies in the model. Note that in this approach the model update takes place adaptively online, and hence it is philosophically similar to indirect adaptive control. However, unlike a typical indirect adaptive control approach, there is no need to update the individual parameters explicitly. Instead, the inaccuracy in the system output dynamics is captured directly and then used in modifying the control. This leads to faster adaptation, which helps in stabilizing the unstable plant more quickly. The robustness study from a large number of simulations shows that the adaptive design has a good amount of robustness with respect to the expected parameter inaccuracies in the model.
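The dynamic-inversion idea underlying the design can be stated compactly for output dynamics that are affine in the control; the generic control law below is a textbook statement, not the paper's full six-degree-of-freedom development. Here y_c is the commanded output and K a gain that sets the error dynamics; an adaptive element would then compensate for inaccuracies in f(x) (or in the output dynamics directly) when the model is imperfect.

```latex
% Generic dynamic-inversion control law for control-affine output dynamics (illustrative)
\dot{y} = f(x) + g(x)\,u
\;\;\Longrightarrow\;\;
u = g(x)^{-1}\big[\dot{y}_{\mathrm{des}} - f(x)\big],
\qquad
\dot{y}_{\mathrm{des}} = \dot{y}_{c} + K\,(y_{c} - y)
```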