820 results for Scarcity of available alternatives
Abstract:
An estimate of the groundwater budget at the catchment scale is extremely important for the sustainable management of available water resources. Water resources are generally subjected to over-exploitation for agricultural and domestic purposes in agrarian economies like India. The double water-table fluctuation method is a reliable method for calculating the water budget in semi-arid crystalline rock areas. Extensive measurements of water levels were made from a dense network before and after the monsoon rainfall in a 53 km² watershed in southern India, and the various components of the water balance were then calculated. The water level data then underwent geostatistical analyses to determine the priority and/or redundancy of each measurement point using a cross-validation method. An optimal network evolved from these analyses. The network was then used to re-calculate the water-balance components. It was established that such an optimized network requires far fewer measurement points without considerably changing the conclusions regarding the groundwater budget. This exercise is helpful in reducing the time and expenditure involved in exhaustive piezometric surveys and also in determining the water budget for large watersheds (greater than 50 km²).
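A minimal sketch of the cross-validation idea described above: each well is left out in turn, its water level is re-estimated from the remaining wells, and points with small re-estimation error are flagged as redundant. Inverse-distance weighting stands in here for the geostatistical (kriging) estimator, and the coordinates and levels are placeholder values, not the paper's data.

```python
# Rank piezometric measurement points by redundancy via leave-one-out cross-validation.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 7000, size=(40, 2))                       # well coordinates (m), placeholder
level = 400.0 + 0.002 * xy[:, 0] + rng.normal(0, 0.5, 40)     # water levels (m), placeholder

def idw_estimate(target, pts, vals, power=2.0):
    """Inverse-distance-weighted estimate at `target` from the surrounding wells."""
    d = np.linalg.norm(pts - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return float(np.sum(w * vals) / np.sum(w))

# Leave each well out; a small cross-validation error suggests the point is redundant.
errors = []
for i in range(len(xy)):
    mask = np.arange(len(xy)) != i
    errors.append(abs(idw_estimate(xy[i], xy[mask], level[mask]) - level[i]))

ranked = np.argsort(errors)                                    # most redundant wells first
print("Candidate wells to drop from the network:", ranked[:5])
```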
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address this problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others retain DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit and is then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, with an appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in most of the remaining cases "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
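The IDP strategy summarized above (run DP bottom-up to its feasibility limit, collapse the cheapest partial plan, restart) can be illustrated with a small join-ordering toy. The following is a minimal sketch, assuming left-deep plans represented as nested tuples, a user-supplied cost() function, and a parameter k standing in for the feasibility limit; it is not the exact algorithm evaluated in the paper.

```python
# A minimal sketch of iterative dynamic programming (IDP) for join ordering.
from itertools import combinations

def dp_upto_k(relations, cost, k):
    """Plain bottom-up DP: best left-deep plan for every subset of size <= k."""
    best = {frozenset([r]): r for r in relations}              # size-1 "plans"
    for size in range(2, k + 1):
        for subset in combinations(relations, size):
            s = frozenset(subset)
            for r in subset:                                   # split off one relation
                plan = (best[s - {r}], r)
                if s not in best or cost(plan) < cost(best[s]):
                    best[s] = plan
    return best

def idp(relations, cost, k):
    """Run DP up to size k, collapse the cheapest k-way plan into a single
    'relation', and restart until one plan covers everything."""
    rels = list(relations)
    while len(rels) > 1:
        size = min(k, len(rels))
        best = dp_upto_k(rels, cost, size)
        s, plan = min(((s, p) for s, p in best.items() if len(s) == size),
                      key=lambda sp: cost(sp[1]))
        rels = [r for r in rels if r not in s] + [plan]        # treat the plan as one relation
    return rels[0]

# Toy usage with a fake cost model that simply penalizes deep left subtrees.
def toy_cost(plan):
    return 1 if isinstance(plan, str) else 10 * toy_cost(plan[0]) + toy_cost(plan[1])

print(idp(["A", "B", "C", "D", "E"], toy_cost, k=3))
```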
Abstract:
The swelling pressure of soil depends upon various soil parameters such as mineralogy, clay content, Atterberg's limits, dry density, moisture content, and initial degree of saturation, along with structural and environmental factors. It is very difficult to model and analyze swelling pressure effectively taking all of the above aspects into consideration. Various statistical/empirical methods have been attempted to predict the swelling pressure based on index properties of soil. In this paper, the computational intelligence techniques of artificial neural networks and support vector machines have been used to develop models, based on the set of available experimental results, to predict swelling pressure from the inputs: natural moisture content, dry density, liquid limit, plasticity index, and clay fraction. The generalization of the models to data other than the training set, which is required for successful application of a model, is discussed. A detailed study of the relative performance of the computational intelligence techniques has been carried out based on different statistical performance criteria.
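A minimal sketch of the kind of comparison described: an artificial neural network and a support vector machine regressor trained on the same five index properties and judged on held-out data. The scikit-learn estimators, hyperparameters, and the synthetic placeholder data are illustrative assumptions, not the paper's models or experimental results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score

# Placeholder data: [moisture %, dry density g/cc, liquid limit %, PI %, clay %]
rng = np.random.default_rng(42)
X = rng.uniform([5, 1.2, 30, 10, 10], [35, 2.0, 90, 50, 70], size=(200, 5))
y = 0.05 * X[:, 3] + 2.0 * X[:, 1] + rng.normal(0, 0.2, 200)   # fake swelling pressure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Generalization is judged on data outside the training set, as the abstract emphasises.
    print(name, "R^2 on unseen data:", round(r2_score(y_te, model.predict(X_te)), 3))
```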
Abstract:
Lead acid batteries are used in hybrid vehicles and telecommunications power supplies. For reliable operation of these systems, an indication of the state of charge of the battery is essential. To determine the state of charge of the battery, a current integration method combined with open-circuit voltage is implemented. To reduce the error in the current integration method, the dependence of available capacity on discharge current is determined, and the current integration method is modified to incorporate this factor. The experimental setup built to obtain the discharge characteristics of the battery is presented.
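A minimal sketch of current-integration (Coulomb-counting) state-of-charge estimation with a discharge-rate-dependent capacity correction, the kind of modification the abstract describes. The Peukert-style correction, the parameter values, and the sample currents below are illustrative assumptions, not the paper's fitted model or measurements.

```python
def effective_capacity(i_discharge_a, c_rated_ah=100.0, i_rated_a=5.0, peukert_k=1.2):
    """Available capacity shrinks as discharge current rises (Peukert-like law, assumed)."""
    if i_discharge_a <= 0:
        return c_rated_ah
    return c_rated_ah * (i_rated_a / i_discharge_a) ** (peukert_k - 1.0)

def update_soc(soc, current_a, dt_s, c_rated_ah=100.0):
    """One integration step; positive current means discharge."""
    cap_ah = effective_capacity(current_a, c_rated_ah) if current_a > 0 else c_rated_ah
    return max(0.0, min(1.0, soc - current_a * dt_s / (cap_ah * 3600.0)))

# Example: start from an open-circuit-voltage-based initial SoC and integrate current samples.
soc = 0.95                                     # initial SoC, e.g. from an OCV lookup table
for current_a in [10.0, 10.0, -5.0, 20.0]:     # sampled once per second, placeholder values
    soc = update_soc(soc, current_a, dt_s=1.0)
print(f"Estimated SoC: {soc:.3f}")
```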
Abstract:
The nature of the low-temperature magnetic state of polycrystalline La0.67Ca0.33Mn0.9Fe0.1O3 has been studied by magnetization, neutron diffraction, and neutron depolarization measurements. Neutron depolarization measurements indicate the existence of ferromagnetic domains with low net magnetic moments below 108 K. The substitution of Mn3+ by Fe3+ reduces the number of available hopping sites for the Mn e(g) (up) electron and suppresses the double exchange, resulting in the reduction of ferromagnetic exchange. The competition between the ferromagnetic double-exchange interactions and the coexisting antiferromagnetic superexchange interactions and its randomness due to random substitutions of Mn3+ with Fe3+ drive the system into a randomly canted ferromagnetic state at low temperatures.
Abstract:
Uracil N-glycosylase (Ung) is the most thoroughly studied of the group of uracil DNA-glycosylase (UDG) enzymes that catalyse the first step in the uracil excision-repair pathway. The overall structure of the enzyme from Mycobacterium tuberculosis is essentially the same as that of the enzyme from other sources. However, differences exist in the N- and C-terminal stretches and some catalytic loops. Comparison with appropriate structures indicates that the two-domain enzyme closes slightly when binding to DNA, while it opens slightly when binding to the proteinaceous inhibitor Ugi. The structural changes in the catalytic loops on complexation reflect the special features of their structure in the mycobacterial protein. A comparative analysis of available sequences of the enzyme from different sources indicates high conservation of amino-acid residues in the catalytic loops. The uracil-binding pocket in the structure is occupied by a citrate ion. The interactions of the citrate ion with the protein mimic those of uracil, in addition to providing insights into other possible interactions in which inhibitors could be involved.
Abstract:
Polypyrrole (PPy)-multiwalled carbon nanotube (MWCNT) nanocomposites with various MWCNT loadings were prepared by an in situ inversion emulsion polymerization technique. High loadings of the nanofiller were evaluated because of the inherent high interface area available for charge separation in the nanocomposites. Solution processing of these conducting polymer nanocomposites is difficult because most of them are insoluble in organic solvents. Device-quality films of these composites were prepared using pulsed laser deposition (PLD). A comparative X-ray photoelectron spectroscopy (XPS) study of the bulk and the film shows that there is no chemical modification of the polymer on ablation with the laser. TEM images indicate a PPy layer on the MWCNT surface. SEM micrographs indicate that the MWCNTs are distributed throughout the film. It was observed that the MWCNTs in the composite are held together by the polymer matrix. Furthermore, the MWCNT diameter does not change from bulk to film, indicating that the polymer layer remains intact during ablation. Even for very high loadings (80 wt.% MWCNTs), device-quality films were fabricated, indicating that laser ablation is a suitable technique for fabricating device-quality films. The conductivity of both the bulk and the films was measured using a collinear four-point probe setup. It was found that the overall conductivity increases with increasing MWCNT loading. A comparative study of thickness versus conductivity indicates that the maximum conductivity was observed around 0.2 μm. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Floating in the air that surrounds us are numerous small particles, invisible to the human eye. The mixture of air and particles, liquid or solid, is called an aerosol. Aerosols have significant effects on air quality, visibility and health, and on the Earth's climate. Their effect on the Earth's climate is the least understood of their climatically relevant effects. They can scatter the incoming radiation from the Sun, or they can act as seeds onto which cloud droplets form. Aerosol particles are created directly, by human activity or by natural processes such as breaking ocean waves or sandstorms. They can also be created indirectly, when vapors or very small particles emitted into the atmosphere combine to form small particles that later grow to climatically or health-relevant sizes. The mechanisms through which those particles are formed are still under scientific discussion, even though this knowledge is crucial for making air quality or climate predictions, and for understanding how aerosols will influence and be influenced by the climate's feedback loops. One of the proposed mechanisms responsible for new particle formation is ion-induced nucleation. This mechanism is based on the idea that newly formed particles are ultimately formed around an electric charge. The amount of available charge in the atmosphere varies with radon concentrations in the soil and in the air, as well as with incoming ionizing radiation from outer space. In this thesis, ion-induced nucleation is investigated through long-term measurements in two different environments: the background site of Hyytiälä and the urban site of Helsinki. The main conclusion of this thesis is that ion-induced nucleation generally plays a minor role in new particle formation. The fraction of particles formed varies from day to day and from place to place. The relative importance of ion-induced nucleation, i.e. the fraction of particles formed through ion-induced nucleation, is larger in cleaner areas where the absolute number of particles formed is smaller. Moreover, ion-induced nucleation contributes a larger fraction of particles on warmer days, when the sulfuric acid and water vapor saturation ratios are lower. This analysis will help in understanding the feedbacks associated with climate change.
Abstract:
An attempt is made in this paper to arrive at a methodology for generating building technologies appropriate to rural housing. An evaluation of traditional and 'modern' technologies currently in use reveals the need for alternatives. The lacunae in the presently available technologies also lead to a definition of rural housing needs. It is emphasised that contending technologies must establish a 'goodness of fit' between the house form and the pattern of needs. A systems viewpoint, which looks at the dynamic process of building construction and the static structure of the building, is then suggested as a means to match the technologies to the needs. The process viewpoint emphasises the role of building materials production and transportation in achieving desired building performances. A couple of examples of technological alternatives, such as the compacted soil block and the polythene-stabilised soil roof covering, are then discussed. The static structural system viewpoint is then studied to arrive at methodologies of cost reduction. An illustrative analysis is carried out using the dynamic programming technique to arrive at combinations of alternatives for the building components which lead to cost reduction. Some of the technological options are then evaluated against the need patterns. Finally, a guideline for developments in building technology is suggested.
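A minimal sketch of the kind of dynamic-programming selection mentioned above: choose one alternative per building component so that total cost is minimized while a cumulative performance target is met. The component names, costs, scores, and the performance constraint are illustrative placeholders, not the paper's data or exact formulation.

```python
# (cost per unit, performance score) for each alternative of each component
components = {
    "walling":  [(120, 2), (90, 1), (150, 3)],     # placeholder alternatives
    "roofing":  [(200, 3), (140, 2), (110, 1)],
    "flooring": [(80, 1), (100, 2)],
}
target_score = 6

# dp[s] = minimum cost of the components chosen so far reaching cumulative score s
dp = {0: 0}
for options in components.values():
    nxt = {}
    for s, cost in dp.items():
        for alt_cost, score in options:
            key = s + score
            if key not in nxt or cost + alt_cost < nxt[key]:
                nxt[key] = cost + alt_cost
    dp = nxt

best = min(c for s, c in dp.items() if s >= target_score)
print("Minimum cost meeting the performance target:", best)
```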
Abstract:
Three different complexes of copper(I) with bridging 1,2-bis(diphenylphosphino)ethane (dppe), namely [Cu2(μ-dppe)(CH3CN)6](ClO4)2 (1), [Cu2(μ-dppe)2(CH3CN)2](ClO4)2 (2), and [Cu2(μ-dppe)(dppe)2(CH3CN)2](ClO4)2 (3), have been prepared. The structure of [Cu2(μ-dppe)(dppe)2(CH3CN)2](ClO4)2 has been determined by X-ray crystallography. It crystallizes in the space group P1̄ with a = 12.984(6) Å, b = 13.180(6) Å, c = 14.001(3) Å, α = 105.23(3)°, β = 105.60(2)°, γ = 112.53(4)°, V = 1944(3) Å³, and Z = 1. The structure was refined by the least-squares method to R = 0.0365 and Rw = 0.0451 for 6321 reflections with F0 ≥ 3σ(F0). The CP/MAS P-31 and IR spectra of the complexes have been analysed in the light of the available crystallographic data. IR spectroscopy is particularly helpful in identifying the presence of chelating dppe. The P-31 chemical shifts observed in the solid state are very different from those observed in solution, and change significantly with slight changes in structure. In solution, complex 1 remains undissociated but complexes 2 and 3 undergo extensive dissociation. With a combination of room-temperature H-1, Cu-63, and variable-temperature P-31 NMR spectra, it is possible to understand the various processes occurring in solution.
Abstract:
Channel assignment in multi-channel multi-radio wireless networks poses a significant challenge due to the scarcity of channels available in the wireless spectrum. Further, additional care has to be taken to account for the interference characteristics of the nodes in the network, especially when the nodes are in different collision domains. This work views the problem of channel assignment in multi-channel multi-radio networks with multiple collision domains as a non-cooperative game where the objective of each player is to maximize its individual utility by minimizing its interference. Necessary and sufficient conditions are derived for a channel assignment to be a Nash Equilibrium (NE), and the efficiency of the NE is analyzed by deriving a lower bound on the price of anarchy of this game. A new fairness measure in the multiple-collision-domain context is proposed, and necessary and sufficient conditions for NE outcomes to be fair are derived. The equilibrium conditions are then applied to solve the channel assignment problem by proposing three algorithms, based on perfect/imperfect information, which rely on explicit communication between the players for arriving at an NE. A no-regret learning algorithm, the Freund and Schapire Informed algorithm, which has the additional advantage of low overhead in terms of information exchange, is proposed and its convergence to stabilizing outcomes is studied. New performance metrics are proposed and extensive simulations are done using Matlab to obtain a thorough understanding of the performance of these algorithms on various topologies with respect to these metrics. It was observed that the proposed algorithms achieved good convergence to NE, resulting in efficient channel assignment strategies.
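A minimal sketch of a Hedge-style (Freund-Schapire multiplicative-weights) no-regret channel selection rule for a single radio, of the general family referenced above. The utility model, the interference values, and the learning rate are illustrative assumptions; this is not the specific "informed" variant studied in the paper.

```python
import math
import random

def choose_channel(weights):
    """Sample a channel with probability proportional to its weight."""
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for ch, w in enumerate(weights):
        acc += w
        if r <= acc:
            return ch
    return len(weights) - 1

def hedge_step(weights, utilities, eta=0.1):
    """Multiplicatively reward channels in proportion to their observed utility."""
    return [w * math.exp(eta * u) for w, u in zip(weights, utilities)]

# Example: one radio choosing among 3 channels; utility = 1 - observed interference.
weights = [1.0, 1.0, 1.0]
for _ in range(100):
    ch = choose_channel(weights)
    observed_interference = [0.8, 0.3, 0.5]          # placeholder per-channel measurements
    utilities = [1.0 - x for x in observed_interference]
    weights = hedge_step(weights, utilities)
print("Channel preference weights:", [round(w, 2) for w in weights])
```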
Abstract:
This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays. On the other hand, in all our examples, imposing this constraint results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
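A minimal sketch of the routing subproblem as a linear program: given a fixed logical topology and a traffic matrix, route flow so as to minimize congestion (the maximum offered load on any lightpath). The topology, demands, and variable layout below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]          # directed lightpaths (placeholder)
traffic = {(0, 3): 1.0, (1, 3): 0.5, (0, 2): 0.8}         # placeholder demands

pairs = list(traffic)
nv = len(pairs) * len(edges) + 1                          # per-commodity edge flows + congestion
idx = lambda p, e: p * len(edges) + e                     # index of flow variable
LAM = nv - 1                                              # index of the congestion variable

c = np.zeros(nv); c[LAM] = 1.0                            # objective: minimize congestion

# Inequality constraints: for every edge, total flow - congestion <= 0.
A_ub = np.zeros((len(edges), nv)); b_ub = np.zeros(len(edges))
for e in range(len(edges)):
    for p in range(len(pairs)):
        A_ub[e, idx(p, e)] = 1.0
    A_ub[e, LAM] = -1.0

# Equality constraints: flow conservation for every (commodity, node) pair.
A_eq, b_eq = [], []
for p, (src, dst) in enumerate(pairs):
    for n in nodes:
        row = np.zeros(nv)
        for e, (u, v) in enumerate(edges):
            if u == n: row[idx(p, e)] += 1.0              # outgoing flow
            if v == n: row[idx(p, e)] -= 1.0              # incoming flow
        A_eq.append(row)
        b_eq.append(traffic[(src, dst)] if n == src
                    else (-traffic[(src, dst)] if n == dst else 0.0))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * nv)
print("Minimum congestion:", round(res.fun, 3))
```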
Abstract:
We develop a Gaussian mixture model (GMM) based vector quantization (VQ) method for coding wideband speech line spectrum frequency (LSF) parameters at low complexity. The PDF of the LSF source vector is modeled using a Gaussian mixture (GM) density with a higher number of uncorrelated Gaussian mixture components, and an optimum scalar quantizer (SQ) is designed for each mixture component. The reduction of quantization complexity is achieved by using only the relevant subset of the available optimum SQs. For an input vector, the subset of quantizers is chosen using a nearest-neighbor criterion. The developed method is compared with recent VQ methods and shown to provide high-quality rate-distortion (R/D) performance at lower complexity. In addition, the developed method also provides the advantages of bitrate scalability and rate-independent complexity.
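A minimal sketch of the general idea: fit a diagonal-covariance GMM to LSF vectors, pick the best-matching component for each input, and quantize each dimension with that component's scalar quantizer. The uniform quantizer, the bit allocation, and the training data are illustrative assumptions, not the paper's optimized design.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_lsf = rng.random((2000, 10))                  # placeholder LSF training vectors
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(train_lsf)

def quantize(x, bits_per_dim=4):
    """Quantize one LSF vector with the SQ of its best-matching mixture component."""
    m = int(gmm.predict(x.reshape(1, -1))[0])       # nearest (most likely) component
    mean, std = gmm.means_[m], np.sqrt(gmm.covariances_[m])
    levels = 2 ** bits_per_dim
    step = 8.0 * std / levels                       # cover roughly +/- 4 std per dimension
    codes = np.clip(np.round((x - mean) / step), -levels // 2, levels // 2 - 1)
    return m, codes.astype(int)

def dequantize(m, codes, bits_per_dim=4):
    mean, std = gmm.means_[m], np.sqrt(gmm.covariances_[m])
    step = 8.0 * std / (2 ** bits_per_dim)
    return mean + codes * step

x = train_lsf[0]
m, codes = quantize(x)
print("component:", m, "max abs error:", np.abs(dequantize(m, codes) - x).max())
```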
Abstract:
Automated synthesis of mechanical designs is an important step towards the development of an intelligent CAD system. Research into methods for supporting conceptual design using automated synthesis has attracted much attention in the past decades. The research work presented here is based on the processes of synthesizing multiple state mechanical devices carried out individually by ten engineering designers. The designers are asked to think aloud while carrying out the synthesis. The ten design synthesis processes are video recorded, and the records are transcribed and coded to identify the activities occurring in the synthesis processes, as well as the inputs to and outputs from those activities. A mathematical representation for specifying a multi-state design task is proposed. Further, a descriptive model capturing all ten synthesis processes is developed and presented in this paper. This model will be used to identify the outstanding issues to be resolved before a system for supporting the design synthesis of multiple state mechanical devices, capable of creating a comprehensive variety of solution alternatives, can be developed.