14 results for ethics in practice
at Indian Institute of Science - Bangalore - India
Abstract:
We study the performance of greedy scheduling in multihop wireless networks where the objective is aggregate utility maximization. Following standard approaches, we consider the dual of the original optimization problem. Optimal scheduling requires selecting independent sets of maximum aggregate price, but this problem is known to be NP-hard. We propose and evaluate a simple greedy heuristic. Analytical bounds on performance are provided, and simulations indicate that the greedy heuristic performs well in practice.
Abstract:
Test results of 24 reinforced concrete wall panels in one-way in-plane action are presented. The panels were loaded at a small eccentricity to reflect possible eccentric loading in practice. The influences of slenderness ratio, aspect ratio, vertical steel, and horizontal steel on the ultimate load are studied. An empirical equation modifying two existing methods is proposed for the prediction of the ultimate load. The modified equation includes the effects of slenderness ratio, amount of vertical steel, and aspect ratio. The results predicted by the proposed modified method and five other available equations are compared with 48 sets of test data. The proposed modified equation is found to be satisfactory and, additionally, includes the effect of aspect ratio, which is not accounted for in the other methods.
Abstract:
One of the key problems in the design of any incompletely connected multiprocessor system is to appropriately assign the set of tasks in a program to the Processing Elements (PEs) in the system. The task assignment problem has proven difficult both in theory and in practice. This paper presents a simple and efficient heuristic algorithm for assigning program tasks with precedence and communication constraints to the PEs in a Message-based Multiple-bus Multiprocessor System, M3, so that the total execution time for the program is minimized. The algorithm uses a “Minimum Distance and Parallel Transfer” cost function to minimize the completion time. The effectiveness of the algorithm has been demonstrated by comparing the results with (i) the lower bound on the execution time of a program (task) graph and (ii) a random assignment.
Abstract:
A reduction in the heat losses from the top of the gas holder of a biogas plant has been achieved by the simple device of a transparent cover. The heat losses thus prevented have been deployed to heat a water pond formed on the roof of the gas holder. This solar-heated water is mixed with the organic input for ‘hot-charging’ of the biogas plant. A thermal analysis of such a solar water-heater ‘piggy-backing’ on the gas holder of a biogas plant has been carried out. To test whether the advantages indicated by the thermal analysis can be realised in practice, a biogas plant of the ASTRA design was modified to incorporate a roof-top solar water-heater. The operation of such a modified plant, even under ‘worst case’ conditions, shows a significant improvement in the gas yield compared to the unmodified plant. Hence, the innovation reported here may lead to drastic reductions in the sizes, and therefore the costs, of biogas plants. By making the transparent cover assume a tent shape, the roof-top solar heater can serve the additional function of a solar still to yield distilled water. The biogas plant-cum-solar water-heater-cum-solar still described here is an example of a spatially integrated hybrid device which is extremely cost-effective.
Abstract:
Test results of 24 reinforced concrete wall panels in two-way action (i.e., supported on all four sides) and subjected to in-plane vertical load are presented. The load is applied at an eccentricity to represent the possible accidental eccentricity that occurs in practice due to constructional imperfections. The influences of aspect ratio, thinness ratio, slenderness ratio, vertical steel, and horizontal steel on the ultimate load are studied. Two equations are proposed to predict the ultimate load carried by the panels. The first equation is empirical and is arrived at by trial-and-error fitting with the test data. The second equation is semi-empirical and is developed from a modification of the buckling strength of thin rectangular plates. Both equations are formulated so as to give a safe prediction of a large portion of the ultimate strength test results. Also, the ultimate load, cracking load, and lateral deflections of identical panels in two-way action (all four sides supported) and one-way action (only the top and bottom sides supported) are compared.
Abstract:
We know, from the classical work of Tarski on real closed fields, that elimination is, in principle, a fundamental engine for mechanized deduction. But, in practice, the high complexity of elimination algorithms has limited their use in mechanical theorem proving. We advocate qualitative theorem proving, where elimination is attractive since most processes of reasoning take place through the elimination of middle terms, and where the computational complexity of the proof is not an issue. Indeed, what we need is the existence of the proof and not its mechanization. In this paper, we treat the linear case and illustrate the power of this paradigm by giving extremely simple proofs of two central theorems in the complexity and geometry of linear programming.
Abstract:
The questions that one should answer about engineering computations - deterministic, probabilistic/randomized, or heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While computations in nature and natural processes, including their real-valued inputs, are exact, the computations that we do on a digital computer or in embedded form are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error-bounds. The fact that the exact error is never known, under any circumstances or in any context, implies that the term error really denotes error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems posed by nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or near-inconsistency in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever it is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
Abstract:
Control of flow in duct networks has a myriad of applications ranging from heating, ventilation, and air-conditioning to blood flow networks. The system considered here provides vent velocity inputs to a novel 3-D wind display device called the TreadPort Active Wind Tunnel. An error-based robust decentralized sliding-mode control method with nominal feedforward terms is developed for individual ducts, while cross coupling between ducts and model uncertainty are treated as external disturbances in the output. This approach is important due to limited measurements, geometric complexities, and turbulent flow conditions. Methods for resolving challenges such as turbulence, electrical noise, valve actuator design, and sensor placement are presented. The efficacy of the controller and the importance of feedforward terms are demonstrated with simulations based on an experimentally validated lumped-parameter model and with experiments on the physical system. Results show significant improvement over traditional control methods and validate prior assertions regarding the importance of decentralized control in practice.
Abstract:
We study the trade-off between delivery delay and energy consumption in a delay-tolerant network in which a message (or a file) has to be delivered to each of several destinations by epidemic relaying. In addition to the destinations, there are several other nodes in the network that can assist in relaying the message. We first assume that, at every instant, all the nodes know the number of relays carrying the message and the number of destinations that have received the message. We formulate the problem as a controlled continuous-time Markov chain and derive the optimal closed-loop control (i.e., forwarding policy). However, in practice, the intermittent connectivity in the network implies that the nodes may not have the required perfect knowledge of the system state. To address this issue, we obtain an ODE (i.e., fluid) approximation for the optimally controlled Markov chain. This fluid approximation also yields an asymptotically optimal open-loop policy. Finally, we evaluate the performance of the deterministic policy over finite networks. Numerical results show that this policy performs close to the optimal closed-loop policy.
Abstract:
An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, i.e., the one with the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose using a distributed, low-feedback timer backoff scheme to select the best node. The scheme uses a mapping from the channel gain or, in general, a real-valued metric to a timer value, constructed so that the timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the inter-relationship between the discrete rate adaptation rule, the optimal mapping, and the selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
Abstract:
We study the tradeoff between delivery delay and energy consumption in a delay-tolerant network in which a message (or a file) has to be delivered to each of several destinations by epidemic relaying. In addition to the destinations, there are several other nodes in the network that can assist in relaying the message. We first assume that, at every instant, all the nodes know the number of relays carrying the message and the number of destinations that have received the message. We formulate the problem as a controlled continuous-time Markov chain and derive the optimal closed-loop control (i.e., forwarding policy). However, in practice, the intermittent connectivity in the network implies that the nodes may not have the required perfect knowledge of the system state. To address this issue, we obtain an ordinary differential equation (ODE) (i.e., a deterministic fluid) approximation for the optimally controlled Markov chain. This fluid approximation also yields an asymptotically optimal open-loop policy. Finally, we evaluate the performance of the deterministic policy over finite networks. Numerical results show that this policy performs close to the optimal closed-loop policy.
Abstract:
The standard Q criterion (with Q > 1) describes the stability against local, axisymmetric perturbations in a disk supported by rotation and random motion. Most astrophysical disks, however, are under the influence of an external gravitational potential, which can significantly affect their stability. A typical example is a galactic disk embedded in a dark matter halo. Here, we do a linear perturbation analysis for a disk in an external potential and obtain a generalized dispersion relation and the effective stability criterion. An external potential, such as that due to the dark matter halo concentric with the disk, contributes to the unperturbed rotational field and significantly increases its stability. We obtain the values for the effective Q parameter for the Milky Way and for a low surface brightness galaxy, UGC 7321. We find that in each case the stellar disk by itself is barely stable and it is the dark matter halo that stabilizes the disk against local, axisymmetric gravitational instabilities. Thus, the dark matter halo is necessary to ensure local disk stability. This result has been largely missed so far because in practice the Q parameter for a galactic disk is obtained using the observed rotational field that already includes the effect of the halo.
Abstract:
The standard Q criterion (with Q > 1) describes the stability against local, axisymmetric perturbations in a disk supported by rotation and random motion. Most astrophysical disks, however, are under the influence of an external gravitational potential, which can significantly affect their stability. A typical example is a galactic disk embedded in a dark matter halo. Here, we do a linear perturbation analysis for a disk in an external potential and obtain a generalized dispersion relation and the effective stability criterion. An external potential, such as that due to the dark matter halo concentric with the disk, contributes to the unperturbed rotational field and significantly increases its stability. We obtain the values for the effective Q parameter for the Milky Way and for a low surface brightness galaxy, UGC 7321. We find that in each case the stellar disk by itself is barely stable and it is the dark matter halo that stabilizes the disk against local, axisymmetric gravitational instabilities. Thus, the dark matter halo is necessary to ensure local disk stability. This result has been largely missed so far because in practice the Q parameter for a galactic disk is obtained using the observed rotational field that already includes the effect of the halo.
Abstract:
The inversely coupled dependence of electrical conductivity and thermopower on carrier concentration presents a major challenge to achieving a high figure of merit. However, a simultaneous enhancement of electrical conductivity and thermopower can be realized in practice by carefully engineering the electronic band structure. Here, taking the example of Bi2S3, we report a simultaneous increase in both electrical conductivity and thermopower under hydrostatic pressure. The application of hydrostatic pressure tunes the electronic structure in such a way that the conductivity effective mass decreases while the density-of-states effective mass increases. This behaviour of the effective masses leads to a simultaneous enhancement of electrical conductivity and thermopower under n-type doping, and hence to a large improvement in the power factor. Moreover, the lattice thermal conductivity exhibits very weak pressure dependence in the low-pressure range. The large power factor, together with the low lattice thermal conductivity, results in a high ZT value of 1.1 under n-type doping, nearly twice the previously reported value. Hence, this pressure-tuned behaviour can enable the development of efficient thermoelectric devices in the moderate- to high-temperature range. We further demonstrate that a similar enhancement can be obtained by generating chemical pressure, i.e., by doping Bi2S3 with smaller isoelectronic elements such as Sb at the Bi sites, which can be achieved experimentally.