34 results for Two Approaches
at Indian Institute of Science - Bangalore - India
Abstract:
Proximity of molecules is a crucial factor in many solid-state photochemical processes. Bimolecular photodimerization reactions in the solid state depend on the relative geometry of the reactant molecules in the crystal lattice, with a center-to-center distance of nearest-neighbor double bonds of the order of ca. 4 Å. This fact emanates from the incisive studies of Schmidt and Cohen.2 One of the two approaches to achieve this distance requirement is the so-called "crystal engineering" of structures, which essentially involves the introduction of certain functional groups that display in-plane interstacking interactions (Cl···Cl, C-H···O, etc.) in the crystal. The chloro group is by far the most successful in promoting the β-packing mode, though recent studies have shown its limitations. Another approach involves the use of constrained media in which the reactants could hopefully be aligned.
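The ca. 4 Å criterion above lends itself to a quick geometric check. The sketch below, with purely hypothetical bond-centre coordinates, flags which double-bond pairs fall within the topochemical cutoff (4.2 Å is the commonly quoted upper limit of Schmidt's criterion).

```python
# Minimal sketch of a topochemical distance check: given the centres of
# neighbouring C=C double bonds in a crystal lattice (coordinates in
# angstroms; the values below are illustrative, not from a real structure),
# flag pairs close enough for solid-state photodimerization.
import itertools
import math

bond_centres = {
    "molecule_A": (0.0, 0.0, 0.0),
    "molecule_B": (3.9, 0.5, 0.8),   # hypothetical beta-packed neighbour
    "molecule_C": (7.5, 1.0, 1.6),
}

CUTOFF = 4.2  # upper limit of Schmidt's criterion, in angstroms

for (na, pa), (nb, pb) in itertools.combinations(bond_centres.items(), 2):
    d = math.dist(pa, pb)
    verdict = "photoreactive" if d <= CUTOFF else "too far apart"
    print(f"{na} -- {nb}: {d:.2f} A -> {verdict}")
```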
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S}_I$ and $\mathcal{S}_C$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S}_I$ and $\mathcal{S}_C$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
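The second approach above (one-bit quantization at each node plus a fusion rule) can be sketched compactly. The threshold, noise model, toy signal decay, and k-out-of-n fusion rule below are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of one-bit distributed detection: each sensor applies a
# two-level threshold quantizer to its noisy reading; a local fusion centre
# applies a k-out-of-n voting rule.
import numpy as np

rng = np.random.default_rng(0)

def sensor_bits(readings, threshold):
    """One-bit quantization: 1 if the reading exceeds the threshold."""
    return (readings > threshold).astype(int)

def fuse(bits, k):
    """Declare 'intruder' if at least k of the n sensors voted 1."""
    return int(bits.sum() >= k)

n = 10
threshold, k = 1.0, 2                                 # assumed design parameters
intruder_signal = 2.0 * np.exp(-0.5 * np.arange(n))   # toy local signal decay
clutter_signal = np.zeros(n)

for name, signal in [("intruder", intruder_signal), ("clutter", clutter_signal)]:
    reading = signal + rng.normal(0, 0.5, size=n)     # additive sensor noise
    decision = fuse(sensor_bits(reading, threshold), k)
    print(f"{name}: fused decision = {'intruder' if decision else 'clutter'}")
```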
Abstract:
The saturated liquid density ($\rho_{lr}$) data along the liquid-vapour coexistence curve published in the literature for several cryogenic liquids, hydrocarbons and halocarbon refrigerants are fitted to a generalized equation of the form $\rho_{lr} = 1 + A(1 - T_r) + B(1 - T_r)^{\beta}$. The values of $\beta$, the index in the phase-density-difference power law, have been obtained by means of two approaches, namely a statistical treatment of saturated-fluid phase-density-difference data and the existence of a maximum in $T(\rho_l - \rho_v)$ along the saturation curve. Values of the constants A and B are determined utilizing the fact that $T\rho_l$ has a maximum at a characteristic temperature T. Values of A, B and $\beta$ are tabulated for Ne, Ar, Kr, Xe, N2, O2, methane, ethane, propane, iso-butane, n-butane, propylene, ethylene, CO2, water, ammonia, refrigerants 11, 12, 12B1, 13, 13B1, 14, 21, 22, 23, 32, 40, 113, 114, 115, 142b, 152a, 216, 245 and azeotropes R-500, 502, 503, 504. The average error of prediction is less than 2%.
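The fitted correlation can be evaluated directly once A, B and β are known for a fluid. The constants and critical density in the sketch below are placeholders; the paper tabulates the actual values per fluid.

```python
# Evaluate the generalized fit rho_lr = 1 + A(1 - Tr) + B(1 - Tr)**beta,
# where rho_lr is the reduced saturated liquid density and Tr the reduced
# temperature. All numerical constants here are illustrative.
def saturated_liquid_density(Tr, A, B, beta, rho_crit):
    """Saturated liquid density from the generalized correlation."""
    if not 0.0 < Tr <= 1.0:
        raise ValueError("reduced temperature Tr must lie in (0, 1]")
    rho_lr = 1.0 + A * (1.0 - Tr) + B * (1.0 - Tr) ** beta
    return rho_lr * rho_crit

# Example with made-up constants (A, B, beta) and critical density (kg/m^3):
print(saturated_liquid_density(Tr=0.7, A=0.9, B=1.8, beta=0.35, rho_crit=424.0))
```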
Abstract:
Hyper-redundant robots are characterized by the presence of a large number of actuated joints, many more than the number required to perform a given task. These robots have been proposed and used for many applications involving obstacle avoidance or, in general, to provide enhanced dexterity in performing tasks. Making effective use of the extra degrees of freedom, or resolution of redundancy, has been an extensive topic of research, and several methods have been proposed in the literature. In this paper, we compare three known methods and show that an algorithm based on a classical curve called the tractrix leads to a more 'natural' motion of the hyper-redundant robot, with the displacements diminishing from the end-effector to the fixed base. In addition, since the actuators nearer the base 'see' a greater inertia due to the links farther away, the smaller motion of these actuators results in better motion of the end-effector as compared to the other two approaches. We present simulation and experimental results performed on a prototype eight-link planar hyper-redundant manipulator.
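A discrete tractrix-like step can be sketched as follows: when the head of each link moves, its tail is pulled along the line joining the old tail to the new head, keeping the link length constant. This small-step approximation is for illustration only, not the paper's exact formulation, but it reproduces the qualitative property that displacements diminish from end-effector to base.

```python
# Tractrix-like motion propagation along a planar hyper-redundant chain.
import numpy as np

def drag_chain(joints, new_tip, link_len):
    """joints[0] is the end-effector; joints[-1] is the base end."""
    joints = joints.copy()
    joints[0] = new_tip
    for i in range(1, len(joints)):
        head, tail = joints[i - 1], joints[i]
        direction = tail - head
        direction /= np.linalg.norm(direction)
        joints[i] = head + link_len * direction   # tail slides toward new head
    return joints

# Eight-link planar chain stretched along the x-axis, tip at the origin:
chain = np.array([[float(i), 0.0] for i in range(9)])
moved = drag_chain(chain, new_tip=np.array([0.0, 1.0]), link_len=1.0)
print(np.round(moved, 3))   # note how displacements decay toward the base
```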
Abstract:
The bipolar point spread function (PSF) corresponding to the Wiener filter for correcting linear-motion-blurred pictures is implemented in a noncoherent optical processor. Two approaches are taken for this implementation: (1) the PSF is modulated and biased so that the resulting function is non-negative, and (2) the PSF is split into its positive and sign-reversed negative parts, and these two parts are dealt with separately. The phase problem associated with arriving at the pupil function from these modified PSFs is solved using both analytical and combined analytical-iterative techniques available in the literature. The designed pupil functions are experimentally implemented, and deblurring in a noncoherent processor is demonstrated. The postprocessing required (i.e., demodulation in the first approach and intensity subtraction in the second) is carried out either in a coherent processor or with the help of a PC-based vision system. The deblurred outputs are presented.
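The second approach (splitting the bipolar PSF) rests on the linearity of convolution, which a few lines of code can make concrete. The 1-D bipolar PSF below is a toy stand-in for the designed Wiener-filter PSF.

```python
# Split a bipolar PSF into its non-negative positive part and sign-reversed
# negative part, form two intensity PSFs a noncoherent processor can realize,
# and recover the bipolar response by subtracting the two outputs.
import numpy as np

psf = np.array([-0.05, 0.15, -0.30, 1.00, -0.30, 0.15, -0.05])  # toy bipolar PSF

psf_pos = np.clip(psf, 0, None)    # positive part (realizable as intensity)
psf_neg = np.clip(-psf, 0, None)   # sign-reversed negative part

scene = np.ones(32)                # toy input
out_pos = np.convolve(scene, psf_pos, mode="same")
out_neg = np.convolve(scene, psf_neg, mode="same")
deblurred = out_pos - out_neg      # post-processing: intensity subtraction

# Equivalent, by linearity, to convolving with the bipolar PSF directly:
assert np.allclose(deblurred, np.convolve(scene, psf, mode="same"))
```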
Abstract:
Precipitation in small droplets involving emulsions, microemulsions or vesicles is important for producing multicomponent ceramics and nanoparticles. Because of the random nature of nucleation and the small number of particles in a droplet, the use of a deterministic population balance equation for predicting the number density of particles may lead to erroneous results, even for evaluating the mean behavior of such systems. A comparison between the predictions made through stochastic simulation and deterministic population balance involving small droplets has been made for two simple systems, one involving crystallization and the other a single-component precipitation. The two approaches have been found to yield quite different results under a variety of conditions. Contrary to expectation, the smallness of the population alone does not cause these deviations. Thus, if fluctuation in supersaturation is negligible, the population balance and simulation predictions concur. However, for large fluctuations in supersaturation, the predictions differ significantly, indicating the need to take the stochastic nature of the phenomenon into account. This paper describes the stochastic treatment of populations, which involves a sequence of so-called product density equations and forms an appropriate framework for handling small systems.
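The contrast between the two treatments can be seen in a minimal sketch: a stochastic simulation of nucleation events in one droplet versus the deterministic mean-field rate. The constant nucleation rate below is an illustrative assumption; as the abstract notes, the two concur precisely when supersaturation does not fluctuate.

```python
# Gillespie-style stochastic nucleation in a droplet vs. the deterministic
# rate-equation prediction, under a constant (non-fluctuating) rate.
import numpy as np

rng = np.random.default_rng(1)

def stochastic_nucleation(rate, t_end, runs=1000):
    """Average particle count over many stochastic realizations."""
    counts = []
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate)   # waiting time to next nucleation
            if t > t_end:
                break
            n += 1
        counts.append(n)
    return np.mean(counts)

rate, t_end = 0.5, 10.0        # assumed constant nucleation rate
deterministic = rate * t_end   # deterministic population-balance result
print(f"stochastic mean: {stochastic_nucleation(rate, t_end):.2f}")
print(f"deterministic  : {deterministic:.2f}")  # agreement: no rate fluctuation
```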
Abstract:
There are p heterogeneous objects to be assigned to n competing agents (n > p), each with unit demand. It is required to design a Groves mechanism for this assignment problem satisfying weak budget balance and individual rationality while minimizing the budget imbalance. This calls for designing an appropriate rebate function. When the objects are identical, this problem has been solved; we refer to the resulting mechanism as the WCO mechanism. We measure the performance of such mechanisms by the redistribution index. We first prove an impossibility theorem which rules out linear rebate functions with non-zero redistribution index in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with non-zero redistribution index are possible when the valuations for the objects have a certain type of relationship, and we design a mechanism with a linear rebate function that is worst-case optimal. In the second approach, we show that rebate functions with non-zero redistribution index are possible if linearity is relaxed. We extend the rebate functions of the WCO mechanism to heterogeneous object assignment and conjecture them to be worst-case optimal.
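As background to the rebate-design problem, the sketch below implements the baseline Clarke (pivotal) Groves mechanism for unit-demand assignment of heterogeneous objects and reports the budget surplus that a rebate function would redistribute. The valuation matrix is hypothetical, and neither the WCO mechanism nor the paper's rebate functions are reproduced here.

```python
# Clarke (pivotal) Groves mechanism for assigning p heterogeneous objects
# to n unit-demand agents, by brute-force search over assignments.
from itertools import permutations

def efficient_welfare(values, agents, objects):
    """Best total value assigning the given objects to distinct agents."""
    best = 0.0
    for assigned in permutations(agents, len(objects)):
        best = max(best, sum(values[a][o] for a, o in zip(assigned, objects)))
    return best

def clarke_mechanism(values, p):
    n = len(values)
    agents, objects = list(range(n)), list(range(p))
    # Efficient assignment: objects[j] goes to best_assign[j].
    best_assign = max(permutations(agents, p),
                      key=lambda A: sum(values[a][o] for a, o in zip(A, objects)))
    payments = {}
    for i in agents:
        others = [a for a in agents if a != i]
        welfare_without_i = efficient_welfare(values, others, objects)
        welfare_others = sum(values[a][o] for a, o in zip(best_assign, objects)
                             if a != i)
        payments[i] = welfare_without_i - welfare_others  # Clarke pivot payment
    return best_assign, payments

values = [[8, 3], [6, 5], [4, 7], [2, 1]]   # 4 agents, 2 objects (hypothetical)
assign, pay = clarke_mechanism(values, p=2)
print("winners:", assign, "payments:", pay,
      "surplus to redistribute:", sum(pay.values()))
```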
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S}_I$ and $\mathcal{S}_C$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S}_I$ and $\mathcal{S}_C$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extensions to the multi-party case are straightforward and are briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm. (C) 2010 Elsevier B.V. All rights reserved.
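The greater-than (GT) function featured in the communication-complexity analysis admits a simple interactive protocol: the two parties compare their m-bit readings from the most significant bit down and stop at the first disagreement. The sketch below is an illustration of such a bit-exchange scheme, not the paper's optimal protocol.

```python
# Interactive bit-exchange protocol for the two-party greater-than function.
def gt_protocol(x, y, m):
    """Returns (x > y, bits exchanged). Each round both parties send one bit."""
    bits = 0
    for k in reversed(range(m)):             # most significant bit first
        xb, yb = (x >> k) & 1, (y >> k) & 1
        bits += 2                            # one bit from each party
        if xb != yb:
            return xb > yb, bits             # first disagreement decides
    return False, bits                       # equal readings: x > y is False

result, cost = gt_protocol(x=0b101101, y=0b101011, m=6)
print(f"x > y: {result}, bits exchanged: {cost}")
# For uniformly random inputs the loop usually stops after a round or two,
# so the expected cost sits far below the 2m worst case, consistent with the
# small average-case CC discussed above.
```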
Abstract:
The statistical thermodynamics of adsorption in caged zeolites is developed by treating the zeolite as an ensemble of M identical cages or subsystems. Within each cage, adsorption is assumed to occur onto a lattice of n identical sites. Expressions for the average occupancy per cage are obtained by minimizing the Helmholtz free energy in the canonical ensemble subject to the constraints of constant M and constant number of adsorbates N. Adsorbate-adsorbate interactions in the Bragg-Williams or mean field approximation are treated in two ways. The local mean field approximation (LMFA) is based on the local cage occupancy, and the global mean field approximation (GMFA) is based on the average coverage of the ensemble. The GMFA is shown to be equivalent in formulation to treating the zeolite as a collection of interacting single-site subsystems. In contrast, the treatment in the LMFA retains the description of the zeolite as an ensemble of identical cages, whose thermodynamic properties are conveniently derived in the grand canonical ensemble. For a z-coordinated lattice within the zeolite cage, with $\epsilon_{aa}$ as the adsorbate-adsorbate interaction parameter, the comparisons for different values of $\epsilon_{aa}^{*} = \epsilon_{aa}z/2kT$ and number of sites per cage, n, illustrate that for -1
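A GMFA-style coverage can be computed by fixed-point iteration of a Bragg-Williams self-consistency relation. The closure used below, theta = 1/(1 + exp(-(mu/kT + 2*eps_star*theta))), is a standard mean-field form assumed for illustration; the paper's exact expressions and sign conventions may differ.

```python
# Fixed-point iteration for a mean-field (Bragg-Williams) adsorption isotherm,
# with eps_star = eps_aa * z / (2 k T) as defined in the abstract.
import math

def gmfa_coverage(mu_over_kT, eps_star, iters=200):
    """Self-consistent mean-field fractional coverage theta."""
    theta = 0.5
    for _ in range(iters):
        field = mu_over_kT + 2.0 * eps_star * theta   # mean field from neighbours
        theta = 1.0 / (1.0 + math.exp(-field))
    return theta

for eps_star in (-1.0, 0.0, 1.0):   # illustrative interaction strengths
    print(eps_star, round(gmfa_coverage(mu_over_kT=0.0, eps_star=eps_star), 4))
```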
Abstract:
The growth and dissolution dynamics of nonequilibrium crystal size distributions (CSDs) can be determined by solving the governing population balance equations (PBEs) representing reversible addition or dissociation. New PBEs are considered that intrinsically incorporate growth dispersion and yield complete CSDs. We present two approaches to solving the PBEs, a moment method and a numerical scheme. The results of the numerical scheme agree with the moment technique, which can be solved exactly when the powers on the mass-dependent growth and dissolution rate coefficients are either zero or one. The numerical scheme is more general and can be applied when the powers of the rate coefficients are non-integers or greater than unity. The influence of the size-dependent rates on the time variation of the CSDs indicates that, as equilibrium is approached, the CSDs become narrow when the exponent on the growth rate is less than the exponent on the dissolution rate. If the exponent on the growth rate is greater than the exponent on the dissolution rate, then the polydispersity continues to broaden. The computational method applies for crystals large enough that interfacial stability issues, such as ripening, can be neglected. (C) 2002 Elsevier Science B.V. All rights reserved.
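A minimal version of such a numerical scheme can be written on a discrete size grid, with power-law mass-dependent growth and dissolution rate coefficients. The exponents, rates, grid size, and explicit Euler integrator below are illustrative choices, not the paper's scheme.

```python
# Discrete population balance with reversible addition/dissociation:
# crystals of size k grow at rate g*k**a and dissolve at rate d*k**b.
import numpy as np

def step_psd(N, g, d, a, b, dt):
    """One explicit Euler step of the discrete population balance."""
    k = np.arange(1, len(N) + 1, dtype=float)
    grow = g * k**a * N
    grow[-1] = 0.0             # no growth out of the largest tracked size
    diss = d * k**b * N
    diss[0] = 0.0              # the smallest size does not dissolve further
    dN = -grow - diss          # crystals leaving size k
    dN[1:] += grow[:-1]        # growth flux: size k -> k+1
    dN[:-1] += diss[1:]        # dissolution flux: size k -> k-1
    return N + dt * dN

N = np.zeros(50)
N[0] = 1000.0                  # start with smallest-size crystals
for _ in range(5000):
    N = step_psd(N, g=1.0, d=0.5, a=0.5, b=1.0, dt=0.001)
print("mean size:", (np.arange(1, 51) * N).sum() / N.sum())
```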
Abstract:
Instead of using chemical reducing agents to facilitate the reduction and dissolution of manganese and iron oxide in the ocean nodule, electrochemical reduction based on two approaches, namely cathodic polarization and galvanic interaction, can also be considered as an attractive alternative. Galvanic leaching of ocean nodules in the presence of pyrite and pyrolusite for complete recovery of Cu, Ni and Co has been discussed. The key to successful and efficient dissolution of copper, nickel and cobalt from ocean nodules is the prior reduction of the manganese and ferric oxides with which these valuable nonferrous metals are interlocked. Polarization studies using a slurry electrode system indicated that maximum dissolution of iron and manganese due to electrochemical reduction occurred at negative DC potentials of -600 mV (SCE) and -1400 mV (SCE). The present work is also relevant to galvanic bioleaching of ocean nodules using autotrophic microorganisms, such as Thiobacillus ferrooxidans and T. thiooxidans, which resulted in significant dissolution of copper, nickel and cobalt at the expense of microbiologically generated acids. Various electrochemical and biochemical mechanisms are outlined, and the electroleaching and galvanic processes so developed are shown to yield almost complete dissolution of all metal values. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The effect of the inclusion of ceramic particles in polythene material on the response to erosion due to impingement by sand particles at three angles is investigated. It is seen that erosion resistance varies with ceramic inclusions. The work also considers the limitations posed by the system in adopting weight-change measurements as a measure to follow erosive wear, owing to the softer nature of the matrix material. Consequently, the investigation looks at two other experimental parameters that can readily be measured to quantify erosion. Of the two approaches, the advantage of following wear by measuring the linear dimensions of the resulting crater is stressed in this work. The study also highlights the problems associated with assessing the depth of the crater as a parameter to express the extent of erosion, owing to the phenomenon of material flow suggested and schematically illustrated in the work. Corroborative evidence for this flow behaviour through scanning electron microscopic studies is presented. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Benzocyclobutene (BCB) has been proposed as a board-level dielectric for advanced system-on-package (SOP) modules, primarily due to its attractive low-loss (for RF applications) and thin-film (for high-density wiring) properties. The realization of embedded resistors on low-loss benzocyclobutene (dielectric loss ~0.0008 at > 40 GHz) has been explored in this study. Two approaches, viz. foil transfer and electroless plating, have been attempted for the deposition of thin-film resistors on BCB. Ni-P alloys were plated using conventional electroless plating, and NiCr and NiCrAlSi foils were used for the foil transfer process. This paper reports NiP and NiWP electroless-plated embedded resistors on BCB dielectric for the first time in the literature.
Abstract:
Electron-beam-irradiation-induced bending of iron-filled multiwalled carbon nanotubes is reported. Bending of both the carbon nanotube and the iron contained within its core was achieved using two approaches with the aid of a high-resolution transmission electron microscope (HRTEM). In the first approach, bending of the nanotube structure results from irradiation of a pristine kink defect site, while in the second approach, disordered sites induce bending when the electron beam is focused on the graphite walls. The HRTEM-based in situ observations demonstrate the potential for using electron beam irradiation to investigate and manipulate the physical properties of confined nanoscale structures. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [doi:10.1063/1.3688083]
Abstract:
The financial crisis set off by the default of Lehman Brothers in 2008, leading to disastrous consequences for the global economy, has focused attention on regulation and pricing issues related to credit derivatives. Credit risk refers to the potential losses that can arise due to changes in the credit quality of financial instruments. These changes could be due to changes in ratings, market price (spread) or default on contractual obligations. Credit derivatives are financial instruments designed to mitigate the adverse impact that may arise due to credit risks. However, they also allow investors to take up purely speculative positions. In this article we provide a succinct introduction to the notions of credit risk and the credit derivatives market, and describe some of the important credit derivative products. There are two approaches to pricing credit derivatives, namely the structural and the reduced-form (intensity-based) models. A crucial aspect of the modelling that we touch upon briefly in this article is the problem of calibrating these models. We hope to convey through this article the challenges that are inherent in credit risk modelling, the elegant mathematics and concepts that underlie some of the models, and the importance of understanding the limitations of the models.
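As a small illustration of the reduced-form (intensity-based) approach named above, the sketch below prices a par credit default swap spread under a constant default intensity, a flat interest rate, and a fixed recovery value, all simplifying assumptions made for the example.

```python
# Reduced-form CDS pricing sketch: with constant hazard rate lam, survival to
# time t is exp(-lam*t); the par spread equates premium and protection legs.
import math

def par_cds_spread(lam, recovery, r, maturity, freq=4):
    """Par premium per unit notional under flat hazard and flat rates."""
    dt = 1.0 / freq
    times = [dt * i for i in range(1, int(maturity * freq) + 1)]
    survival = lambda t: math.exp(-lam * t)
    discount = lambda t: math.exp(-r * t)
    premium_leg = sum(dt * discount(t) * survival(t) for t in times)
    protection_leg = (1.0 - recovery) * sum(
        discount(t) * (survival(t - dt) - survival(t)) for t in times)
    return protection_leg / premium_leg

# 2% hazard rate, 40% recovery: close to the rule of thumb lam*(1-R) = 120 bp.
print(f"{1e4 * par_cds_spread(lam=0.02, recovery=0.4, r=0.03, maturity=5):.1f} bp")
```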