Abstract:
Much of Bangalore's sewage is treated in three stream systems, namely the Bellandur (K&C Valley), Vrishabhavati and Hebbal-Nagavara streams. Of these, an estimated 500 MLD of partially treated sewage is let into the Bellandur tank. We estimate a total non-industrial anthropogenic nitrogen efflux (mainly urine and excreta) of about 77 t N in Bangalore city. This is distributed between the three sewage streams, soak-pits and land deposition. About 17-24.5 t N enters the Bellandur tank daily. This has been happening for a few decades, and our observations suggest that this approximately 380 ha tank is functioning as a C and N removal system with reasonable efficiency. The ammoniacal and nitrate nitrogen content of the water at the discharge points was estimated, and we found that over 80% of the nitrogen influx and over 75% of the C influx is removed by this tank system. We observed three nitrogen sinks, namely bacteria, micro-algae and macrophytes. The micro-algal fraction is dominated by Microcystis and members of the Euglenophyceae, and appears to constitute a significant fraction. Water hyacinth is the single largest representative of the macrophytes. This tank has been functioning in this manner for over three decades. We study this phenomenon using a material balance approach and show that the tank is functioning reasonably well as a natural wetland. As the population served, and the concomitant influx into this wetland, increases, there is a potential for the system to be overloaded and to collapse. Therefore, a better understanding of its function and the need for its maintenance are discussed in the paper.
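The material-balance argument above reduces to simple arithmetic. A minimal sketch, using only the figures quoted in the abstract (17-24.5 t N/day entering the tank, ~77 t N city-wide efflux, over 80% removal); the helper function and the derived outflow bounds are illustrative, not from the paper:

```python
def removal_efficiency(influx_t, outflux_t):
    """Fraction of a daily nutrient influx removed within the wetland."""
    return 1.0 - outflux_t / influx_t

# Daily N influx to the Bellandur tank (t N/day), as quoted in the abstract.
influx_low, influx_high = 17.0, 24.5
city_efflux = 77.0  # total non-industrial anthropogenic N efflux (t N)

# If over 80% of the N influx is removed, the outflow carries under 20% of it.
removal = 0.80
outflow_low = influx_low * (1.0 - removal)
outflow_high = influx_high * (1.0 - removal)

print(f"Tank receives {influx_low / city_efflux:.0%}-"
      f"{influx_high / city_efflux:.0%} of the city's N efflux")
print(f"Outflow carries at most ~{outflow_low:.1f}-{outflow_high:.1f} t N/day")
```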
Abstract:
We consider a framework in which several service providers offer downlink wireless data access service in a certain area. Each provider serves its end-users through opportunistic secondary access to licensed spectrum, and pays the primary license holders usage-based and membership-based charges for such secondary spectrum access. In these circumstances, if providers pool their resources and allow end-users to be served by any of the cooperating providers, the total user satisfaction as well as the aggregate revenue earned by providers may increase. We use coalitional game theory to investigate such cooperation among providers, and show that the optimal cooperation schemes can be obtained as solutions of convex optimization problems. We next show that under the usage-based charging scheme, if all providers cooperate, there always exists an operating point that maximizes the aggregate revenue of providers while presenting each provider a share of the revenue such that no subset of providers has an incentive to leave the coalition. Furthermore, such an operating point can be computed in polynomial time. Finally, we show that when the charging scheme involves membership-based charges, the above result holds in important special cases.
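The stability condition described above (no subset of providers has an incentive to leave the coalition) is the core of a coalitional game. It can be illustrated with a toy two-provider game; all revenue numbers here are hypothetical, and the paper's actual operating points come from convex optimization, not enumeration:

```python
from itertools import combinations

def in_core(alloc, v, players):
    """Check the core conditions: efficiency and no blocking coalition."""
    grand = frozenset(players)
    if abs(sum(alloc[p] for p in players) - v[grand]) > 1e-9:
        return False  # allocation must split exactly the grand-coalition value
    for r in range(1, len(players)):
        for coal in combinations(players, r):
            if sum(alloc[p] for p in coal) < v[frozenset(coal)] - 1e-9:
                return False  # this coalition would earn more by leaving
    return True

# Hypothetical stand-alone revenues and pooled revenue (superadditive game).
players = (1, 2)
v = {frozenset({1}): 4.0, frozenset({2}): 5.0, frozenset({1, 2}): 12.0}

# Split the cooperation surplus (12 - 4 - 5 = 3) equally: each gains 1.5.
alloc = {1: 5.5, 2: 6.5}
print(in_core(alloc, v, players))  # True: no subset gains by leaving
```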
Abstract:
Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity-error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria for deciding when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
Abstract:
Estimates of creep and shrinkage are critical for computing the loss of prestress with time, which in turn is needed to assess leak tightness and the safety margins available in containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted using in-house test facilities, developed specifically for the present research program, on 35 and 45 MPa normal concrete and 25 MPa heavy-density concrete. In the extensive creep experimental program, cylinders are subjected to sustained load levels, typically for several days (until negligible strain increase with time is observed in the creep specimen), to provide total creep strain versus time curves for the two normal-density concrete grades and the one heavy-density concrete grade at different load levels, different ages at loading, and different relative humidities. Shrinkage is also being studied on prism specimens of the same mix grades. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the creep and shrinkage levels in subsequent experimental data with acceptable accuracy. Macro-scale short-term experiments and analytical model development form one part of the study: the aim is to estimate time-dependent deformation under sustained loads over the long term, accounting for the composite rheology through the influence of parameters such as the characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water) and volume-to-surface ratio, together with the associated uncertainties in these variables. At the same time, it is widely believed that strength, early-age rheology, creep and shrinkage are affected by material properties at the nano-scale that are not yet well established.
In order to understand and improve cement and concrete properties, the nanostructure of the composite and how it relates to the local mechanical properties is being investigated. While the results for creep and shrinkage obtained at the macro-scale and their predictions through rheological modeling are satisfactory, nano- and micro-indentation experiments and the associated analytical studies are presently underway. Computational-mechanics-based models for creep and shrinkage in concrete must necessarily account for the numerous parameters that affect their short- and long-term response. A Kelvin-type model, with several elements representing the influence of the various factors that affect the behaviour, is under development. The immediate short-term deformation (elastic response), the effects of relative humidity and temperature, volume-to-surface ratio, water-cement ratio and aggregate-cement ratio, load levels, and age of concrete at loading are the parameters accounted for in this model. Inputs to this model, such as the pore structure and mechanical properties at the micro/nano scale, have been obtained from scanning electron microscopy and micro/nano-indentation of sample specimens.
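The Kelvin-type model mentioned above can be sketched as a chain of Kelvin units (spring and dashpot in parallel) in series with an elastic spring. A minimal illustration under constant stress, assuming hypothetical moduli and retardation times; the paper's actual parameters come from experiments and indentation data:

```python
import math

def kelvin_chain_strain(sigma, t, E0, units):
    """Strain of a Kelvin chain (spring E0 in series with Kelvin units)
    under a constant stress `sigma` applied at t = 0."""
    strain = sigma / E0  # immediate elastic response
    for E_i, tau_i in units:
        # Each Kelvin unit creeps toward sigma/E_i with retardation time tau_i.
        strain += (sigma / E_i) * (1.0 - math.exp(-t / tau_i))
    return strain

# Hypothetical moduli (MPa) and retardation times (days), illustrative only.
E0 = 30_000.0
units = [(90_000.0, 1.0), (60_000.0, 10.0), (45_000.0, 100.0)]
sigma = 10.0  # sustained stress, MPa

for t in (0.0, 1.0, 10.0, 100.0, 1000.0):
    eps = kelvin_chain_strain(sigma, t, E0, units)
    print(f"t = {t:6.0f} d: total strain = {eps:.6f}")
```

The strain rises from the elastic value sigma/E0 toward the asymptote sigma * (1/E0 + sum of 1/E_i), mirroring the "negligible strain increase with time" end condition of the creep tests.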
Abstract:
A modified lattice model using the finite element method has been developed to study mode-I fracture in heterogeneous materials like concrete. In this model, the truss members always join at the points where aggregates are located, and the aggregates are modeled as plane-stress triangular elements. The truss members are randomly assigned the properties of the cement mortar matrix, so as to represent the randomness of strength in concrete. It is widely accepted that the fracture of concrete structures should not be assessed on a strength criterion alone, but should be coupled with an energy criterion. Here, the energy concept is introduced by incorporating strain softening through a parameter 'α'. The softening branch of the load-displacement curves was successfully obtained. From a sensitivity study, it was observed that the maximum load of a beam is most sensitive to the tensile strength of the mortar. It is seen that by varying the properties of the mortar according to a normal distribution, better results can be obtained for the load-displacement diagram.
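The role of a softening parameter 'α' can be illustrated with a simple bilinear stress-strain law, where α sets how far beyond the peak strain the stress decays to zero (and hence the fracture energy, the area under the curve). This constitutive form and the mortar properties below are an assumed sketch, not the paper's exact lattice formulation:

```python
def softening_stress(eps, E, f_t, alpha):
    """Bilinear stress-strain law with linear strain softening.

    Elastic up to the peak strain eps_t = f_t / E; beyond that, the stress
    drops linearly to zero at alpha * eps_t.  Larger alpha gives a gentler
    softening branch and more fracture energy (area under the curve).
    """
    eps_t = f_t / E
    if eps <= eps_t:
        return E * eps
    if eps >= alpha * eps_t:
        return 0.0
    return f_t * (alpha * eps_t - eps) / ((alpha - 1.0) * eps_t)

E, f_t, alpha = 25_000.0, 3.0, 5.0   # hypothetical mortar properties (MPa)
eps_t = f_t / E
for eps in (0.5 * eps_t, eps_t, 2.0 * eps_t, alpha * eps_t):
    print(f"eps = {eps:.6f}: stress = {softening_stress(eps, E, f_t, alpha):.3f} MPa")
```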
Abstract:
This research presents a new approach to, and the development of, a design methodology based on the perspective of meanings. In this study the design process is explored as the development of a structure of meanings. The processes of searching for and evaluating meanings form the foundation of developing this structure. To facilitate the use and manipulation of meanings, the WordNet lexical database and an existing visualization of WordNet, Visuwords, are used for the meaning-search process. The basic tool used for the evaluation process is the WordNet::Similarity software, which measures the relatedness of meanings in the database, that is, the degree of interconnection between different meanings. These search and evaluation techniques are then incorporated into our structure-of-meanings methodology to support the design process. The measures of relatedness of meanings are developed as convergence criteria for application in the evaluation processes. The methodology is then used to construct meanings in a verification of product design. The steps of the design methodology, including the search and evaluation processes involved in developing the structure of meanings, are elucidated. The choices made by the designer in terms of meanings are supported by subsequent searches and evaluations of the meanings to be implemented in the designed product. In conclusion, the paper presents directions for further development and extension of the proposed design methodology.
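The relatedness evaluation can be sketched with a path-based measure in the spirit of WordNet::Similarity's path metric, computed here over a tiny hand-built graph; the real methodology queries the full WordNet database, and the graph and scoring below are purely illustrative:

```python
from collections import deque

# A tiny hand-built concept graph standing in for WordNet.
graph = {
    "chair":     ["furniture"],
    "table":     ["furniture"],
    "furniture": ["artifact", "chair", "table"],
    "cup":       ["container"],
    "container": ["artifact", "cup"],
    "artifact":  ["furniture", "container"],
}

def relatedness(a, b):
    """Path-based relatedness: 1 / (1 + shortest-path length), via BFS."""
    if a == b:
        return 1.0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt == b:
                return 1.0 / (2.0 + dist)  # path length is dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return 0.0  # unrelated: no connecting path

print(relatedness("chair", "table"))  # siblings under "furniture"
print(relatedness("chair", "cup"))    # more distant, so a lower score
```

A convergence criterion in the methodology's spirit could then accept a candidate meaning only when its relatedness to the design intent exceeds a chosen threshold.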
Abstract:
For high-performance aircraft, the flight control system must be effective both in assuring accurate tracking of pilot commands and in assuring overall stability of the aircraft. In addition, the control system must be sufficiently robust to accommodate possible parameter variations. The primary aim of this paper is to enhance the robustness of the controller for a high-performance aircraft using a neuro-adaptive control design. The architecture employs a network of Gaussian radial basis functions to adaptively compensate for the ignored system dynamics. A stable weight-update mechanism is derived using Lyapunov theory. The network construction and the performance of the resulting controller are illustrated through simulations with a low-fidelity six-DOF model of the F-16 that is available in the open literature.
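The adaptive compensation idea can be sketched in scalar form: a Gaussian RBF network learns an unknown dynamics term through an error-driven weight update. The update law below is a simple gradient rule standing in for the paper's Lyapunov-derived mechanism, and the dynamics term, gains and basis layout are all illustrative assumptions:

```python
import math, random

def rbf_features(x, centers, width):
    """Gaussian radial basis function activations for a scalar input x."""
    return [math.exp(-((x - c) ** 2) / (2.0 * width ** 2)) for c in centers]

# Stand-in for the ignored system dynamics the network must compensate.
def unmodeled(x):
    return 0.5 * math.sin(2.0 * x)

centers = [0.5 * i - 2.0 for i in range(9)]  # RBF centres on a grid over [-2, 2]
width, gamma = 0.5, 0.3                      # basis width and adaptation gain
w = [0.0] * len(centers)                     # adaptive network weights

random.seed(0)
for _ in range(20_000):
    x = random.uniform(-2.0, 2.0)
    phi = rbf_features(x, centers, width)
    e = unmodeled(x) - sum(wi * p for wi, p in zip(w, phi))  # approximation error
    for i, p in enumerate(phi):              # error-driven weight adaptation
        w[i] += gamma * e * p

for x in (-1.5, 0.0, 1.5):
    phi = rbf_features(x, centers, width)
    net = sum(wi * p for wi, p in zip(w, phi))
    print(f"x = {x:+.1f}: true = {unmodeled(x):+.4f}, network = {net:+.4f}")
```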
Abstract:
Wetlands are among the most productive and biologically diverse, yet very fragile, ecosystems. They are vulnerable to even small changes in their biotic and abiotic factors. In recent years, there has been concern over the continuous degradation of wetlands due to unplanned developmental activities. This necessitates inventorying, mapping, and monitoring of wetlands to implement sustainable management approaches. The principal objective of this work is to evolve a strategy to identify and monitor wetlands using temporal remote sensing (RS) data. Pattern classifiers were used to extract wetlands automatically from the NIR bands of MODIS, Landsat MSS and Landsat TM remote sensing data. MODIS provided data for 2002 to 2007, while the IR bands of Landsat MSS and TM (79 m and 30 m spatial resolution) were used for 1973 and 1992, respectively. Principal components of the IR bands of MODIS (250 m) were fused with IRS LISS-3 NIR (23.5 m). To extract wetlands, statistical unsupervised learning of the IR bands of the respective temporal data was performed using a Bayesian approach based on prior probability, mean and covariance. Temporal analysis indicates a sharp 58% decline of wetlands in Greater Bangalore, attributable to intense urbanization processes, evident from a 466% increase in built-up area from 1973 to 2007.
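The Bayesian classification step (assign each pixel to the class maximizing prior times likelihood, given per-class mean and covariance) can be sketched for a single NIR band. The class statistics below are hypothetical; the only grounded assumption is that water absorbs strongly in the NIR, so wetland pixels show low reflectance:

```python
import math

def gaussian_log_likelihood(x, mean, var):
    """Log of the univariate normal density."""
    return -0.5 * math.log(2.0 * math.pi * var) - (x - mean) ** 2 / (2.0 * var)

def classify(x, classes):
    """Bayes rule: pick the class maximizing log prior + log likelihood."""
    return max(classes,
               key=lambda c: math.log(classes[c]["prior"])
                             + gaussian_log_likelihood(x, classes[c]["mean"],
                                                       classes[c]["var"]))

# Hypothetical per-class statistics for a single NIR reflectance band.
classes = {
    "wetland":     {"prior": 0.2, "mean": 0.10, "var": 0.002},
    "non-wetland": {"prior": 0.8, "mean": 0.35, "var": 0.010},
}

for pixel in (0.08, 0.12, 0.30, 0.40):
    print(f"NIR = {pixel:.2f} -> {classify(pixel, classes)}")
```

In the multi-band case the scalar mean and variance become the class mean vector and covariance matrix, as in the abstract.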
Abstract:
In this paper, we propose and analyze a novel idea of performing interference cancellation (IC) in a distributed/cooperative manner, with the motivation of providing a multiuser detection (MUD) benefit to nodes that have only single-user detection capability. In the proposed distributed interference cancellation (DIC) scheme, during phase 1 of transmission, an MUD-capable cooperating relay node estimates all the sender nodes' bits through multistage interference cancellation. These estimated bits are then sent by the relay node on orthogonal tones in phase 2 of transmission. The destination nodes receive these bit estimates and use them for interference estimation/cancellation, thus achieving the IC benefit in a distributed manner. For this DIC scheme, we analytically derive an exact expression for the bit error rate (BER) in a basic five-node network (two source-destination node pairs and a cooperating relay node) on AWGN channels. Analytical BER results are shown to match simulation results. For more general system scenarios, including more than two source-destination pairs and fading channels without and with space-time coding, we present simulation results that establish the potential for improved performance of the proposed distributed approach to interference cancellation. We also present a linear version of the proposed DIC scheme.
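The analytical-versus-simulation comparison in the abstract can be illustrated on the simplest baseline: BPSK over AWGN, where the exact BER is Q(sqrt(2 Eb/N0)). This is only the interference-free single-user case, not the paper's five-node DIC expression:

```python
import math, random

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_awgn_ber_sim(eb_n0_db, n_bits, seed=1):
    """Monte Carlo BER of BPSK on an AWGN channel (single-user baseline)."""
    rng = random.Random(seed)
    eb_n0 = 10.0 ** (eb_n0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * eb_n0))  # noise std for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + rng.gauss(0.0, sigma)
        errors += (rx > 0) != (bit > 0)     # hard-decision error
    return errors / n_bits

for snr_db in (0, 2, 4, 6):
    analytical = q_function(math.sqrt(2.0 * 10.0 ** (snr_db / 10.0)))
    simulated = bpsk_awgn_ber_sim(snr_db, 200_000)
    print(f"Eb/N0 = {snr_db} dB: analytical = {analytical:.5f}, "
          f"simulated = {simulated:.5f}")
```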
Abstract:
Engineering education quality embraces the activities through which a technical institution satisfies itself that the quality of education it provides and the standards it has set are appropriate and are being maintained. There is a need to develop a standardised approach to most aspects of quality assurance for engineering programmes that is sufficiently well defined to be accepted for all assessments. We have designed a Technical Educational Quality Assurance and Assessment (TEQ-AA) System, which makes use of information on the web and analyzes the standards of an institution. With standards as anchors for definition, an institution is clearer about its present state, allowing it to plan better for its future and to enhance its level of educational quality. The system has been tested on technical educational institutions in Karnataka State, which usually host web pages advertising their technical education programmes, institutional objectives, policies, etc., for better outreach to students and faculty. This assists students in selecting an institution for study and aids in employment.
Abstract:
Clustering techniques are used in regional flood frequency analysis (RFFA) to partition watersheds into natural groups or regions with similar hydrologic responses. Kohonen's self-organizing feature map (SOFM) has been applied as a clustering technique for RFFA in several recent studies. However, it is seldom possible to interpret clusters from the output of an SOFM, irrespective of its size and dimensionality. In this study, we demonstrate that SOFMs may, however, serve as a useful precursor to clustering algorithms. We present a two-level SOFM-based clustering approach to form regions for flood frequency analysis (FFA). In the first level, the SOFM is used to form a two-dimensional feature map. In the second level, the output nodes of the SOFM are clustered using the fuzzy c-means algorithm to form regions. The optimal number of regions is chosen based on fuzzy cluster validation measures. The effectiveness of the proposed approach in forming homogeneous regions for FFA is illustrated through application to data from watersheds in Indiana, USA. Results show that the proposed approach forms regions better than the classical SOFM.
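The second-level clustering can be sketched with a minimal fuzzy c-means implementation on hypothetical one-dimensional SOFM-node features; real applications cluster multi-dimensional watershed attributes, and the data below is invented for illustration:

```python
def fuzzy_c_means(points, c, m=2.0, iters=50):
    """Minimal fuzzy c-means on scalar data: returns centres and memberships."""
    pts = sorted(points)
    # Deterministic, spread-out initialization (requires c >= 2).
    centres = [pts[round(i * (len(pts) - 1) / (c - 1))] for i in range(c)]
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        # Membership update from distances to the current centres.
        for i, x in enumerate(points):
            d = [abs(x - v) + 1e-12 for v in centres]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
        # Centre update as membership-weighted means.
        for j in range(c):
            den = sum(u[i][j] ** m for i in range(len(points)))
            centres[j] = sum((u[i][j] ** m) * x
                             for i, x in enumerate(points)) / den
    return centres, u

# Hypothetical 1-D features of SOFM output nodes, with two natural groups.
nodes = [0.9, 1.1, 1.0, 1.2, 4.8, 5.1, 5.0, 5.3]
centres, u = fuzzy_c_means(nodes, c=2)
print([round(v, 2) for v in sorted(centres)])  # one centre per natural group
```

The fuzzifier m > 1 controls how soft the memberships are; cluster validation measures would then be evaluated over candidate values of c to pick the number of regions.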
Abstract:
Factors influencing the effectiveness of democratic institutions, and the processes involved at the local-governance level, have been of interest in the literature, given the presence of various context-specific advocacies and networks. This paper is motivated by the need to understand the adaptability issues related to governance, given these complexities, through a comparative analysis of diversified regions. We adopt a two-stage clustering methodology combined with regression analysis for this purpose. The results show that the formation of advocacies and networks depends on the context and the institutional framework. The paper concludes by exploring the different strategies and dynamics involved in network governance, and insists on the importance of governing the networks for structural reformation through regional policy making.
Abstract:
A new technique is presented that uses the principles of multisignal relaying for the synthesis of a universal-type quadrilateral polar characteristic. The modus operandi consists in determining the phase sequence of a set of voltage phasors and providing a trip signal for one sequence while blocking for the other. Two versions, one using ferrite-core logic and another using transistor logic, are described in detail. The former version has the merit of simplicity and the added advantage of not requiring any d.c. supply. The unit is flexible, as it permits independent control of the characteristic along the resistance and reactance axes through suitable adjustment of the replica impedance angles. The maximum operating time is about 20 ms for all switching angles and for faults within 95% of the protected section. The maximum transient overreach is about 8%.
Abstract:
The transmission loss (TL) performance of spherical chambers having a single inlet and multiple outlets is obtained analytically through modal expansion of the acoustic field inside the spherical cavity in terms of spherical Bessel functions and Legendre polynomials. A uniform piston-driven model based on the impedance [Z] matrix is used to characterize the multi-port spherical chamber. It is shown analytically that the [Z] parameters are independent of the azimuthal angle (phi) owing to the axisymmetric shape of the sphere; rather, they depend only on the polar angle (theta) and the chamber radius R_0. Thus, the effects of the relative polar angular locations of the ports and of the number of outlet ports are investigated. The analytical results are shown to be in good agreement with 3D FEA results, thereby validating the procedure suggested in this work.
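The building blocks of the modal expansion, the spherical Bessel functions j_n and Legendre polynomials P_n, can be evaluated by standard recurrences. A sketch with illustrative wavenumber and geometry values (not taken from the paper); note that the upward Bessel recurrence is adequate only for small n relative to the argument:

```python
import math

def spherical_jn(n, x):
    """Spherical Bessel function j_n(x) by upward recurrence from j0, j1."""
    if x == 0.0:
        return 1.0 if n == 0 else 0.0
    j0 = math.sin(x) / x
    if n == 0:
        return j0
    j1 = math.sin(x) / x ** 2 - math.cos(x) / x
    for k in range(1, n):
        j0, j1 = j1, (2 * k + 1) / x * j1 - j0
    return j1

def legendre_pn(n, t):
    """Legendre polynomial P_n(t) by Bonnet's recurrence."""
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * t * p1 - k * p0) / (k + 1)
    return p1

# One term of the axisymmetric interior expansion: p_n ~ j_n(k r) P_n(cos theta).
# The k, r and theta values below are illustrative only.
k, r, theta = 2.0, 0.5, math.pi / 3
for n in range(4):
    term = spherical_jn(n, k * r) * legendre_pn(n, math.cos(theta))
    print(f"n = {n}: j_n(kr) * P_n(cos theta) = {term:+.6f}")
```

The absence of any azimuthal (phi) dependence in these basis terms reflects the axisymmetry argument made in the abstract.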