15 results for Job opportunities

at Indian Institute of Science - Bangalore - India


Relevance: 20.00%

Publisher:

Abstract:

Cooking efficiency and related fuel-economy issues have been studied in a particular rural area of India. Following a description of the cooking practices and conditions in this locale, cooking efficiency is examined; an efficiency of only 6% was found. Using aluminium rather than clay pots increases efficiency. In addition, cooking efficiency correlates very well with specific fuel consumption, a parameter that is much simpler to measure than cooking efficiency. The second part of this case study examines the energy losses during cooking. The major losses are the heating of excess air, the heat carried away by the combustion products, the heat transmitted to the stove body and floor, and the chemical energy remaining in the charcoal residue. The energy lost to evaporation of cooking water is also significant, representing about one-third of the heat reaching the pots.
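
An efficiency figure of this kind follows from a simple energy balance: useful heat delivered to the pot divided by the chemical energy of the fuel burned. A minimal sketch, assuming illustrative measured values (the water mass, temperature rise, evaporated mass, fuel mass and calorific value below are hypothetical, not data from the study):

```python
# Cooking (thermal) efficiency: useful heat into the pot / energy in the fuel burned.
# All numeric inputs below are illustrative assumptions, not data from the study.

C_WATER = 4.186e3   # J/(kg K), specific heat of water
L_EVAP  = 2.26e6    # J/kg, latent heat of vaporization of water
CV_WOOD = 16e6      # J/kg, assumed calorific value of fuelwood

def cooking_efficiency(m_water, dT, m_evap, m_fuel):
    """Fraction of the fuel energy delivered usefully to the pot contents."""
    useful = m_water * C_WATER * dT + m_evap * L_EVAP
    return useful / (m_fuel * CV_WOOD)

def specific_fuel_consumption(m_fuel, m_food):
    """kg of fuel per kg of food cooked -- simpler to measure than efficiency."""
    return m_fuel / m_food

# 2 kg of water heated by 75 K, 0.1 kg evaporated, burning 1 kg of wood:
eff = cooking_efficiency(2.0, 75.0, 0.1, 1.0)  # ~0.053, i.e. roughly 5%
```

With these assumed numbers the sketch lands close to the 6% reported above, which illustrates why open-fire cooking efficiencies are so low.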

Relevance: 20.00%

Publisher:

Abstract:

A Batch Processing Machine (BPM) processes a number of jobs simultaneously as a batch, with common beginning and ending times. Once started, a batch cannot be interrupted (preemption is not allowed). This research is motivated by a BPM in the steel casting industry. Steel casting has three main stages: the pre-casting stage, the casting stage and the post-casting stage; a quick overview of the entire process is shown in Figure 1. There are two BPMs in the steel casting manufacturing process: (1) the melting furnace in the pre-casting stage and (2) the Heat Treatment Furnace (HTF) in the post-casting stage. This study focuses on scheduling the latter, namely the HTF. Heat treatment is one of the most important stages in steel casting: it determines the final properties that enable components to perform under demanding service conditions such as large mechanical loads, high temperatures and corrosive environments. In general, different types of castings must undergo more than one type of heat-treatment operation, so the total heat-treatment processing time varies. For better control, castings are primarily classified into a number of job families based on alloy type, such as low-alloy and high-alloy castings. For technical reasons such as alloy type, temperature level and the required combination of heat-treatment operations, castings from different families cannot be processed together in the same batch.
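
The family-incompatibility constraint can be captured with a simple batch-formation routine: group jobs by family first, then cut each family into batches no larger than the furnace capacity. A minimal sketch, with hypothetical jobs and capacity `B` (the data and names are illustrative, not the paper's algorithm):

```python
from collections import defaultdict

def form_batches(jobs, B):
    """Group jobs by family, then split each family into batches of at most B jobs.
    jobs: list of (job_id, family) pairs. Jobs from different families never share a batch."""
    by_family = defaultdict(list)
    for job_id, family in jobs:
        by_family[family].append(job_id)
    batches = []
    for family, ids in by_family.items():
        for i in range(0, len(ids), B):
            batches.append((family, ids[i:i + B]))
    return batches

# Five castings from two alloy families, furnace capacity 2:
jobs = [(1, "low-alloy"), (2, "high-alloy"), (3, "low-alloy"),
        (4, "low-alloy"), (5, "high-alloy")]
batches = form_batches(jobs, B=2)
# Three batches: low-alloy [1, 3], low-alloy [4], high-alloy [2, 5]
```

Note that the two high-alloy castings can share a batch, but a low-alloy and a high-alloy casting never can, no matter how much capacity is left.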

Relevance: 20.00%

Publisher:

Abstract:

The present work concerns the static scheduling of jobs on parallel identical batch processors with incompatible job families to minimize total weighted tardiness. This scheduling problem arises in burn-in operations and wafer fabrication in semiconductor manufacturing. Following the literature, we decompose the problem into two stages: batch formation and batch scheduling. An Ant Colony Optimization (ACO) based algorithm, ATC-BACO, is developed, in which ACO solves the batch-scheduling stage. Our computational experiments show that the proposed ATC-BACO algorithm performs better than the best available traditional dispatching rule, the ATC-BATC rule.
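
The ATC (Apparent Tardiness Cost) index that both rule names refer to combines weighted-shortest-processing-time priority with an exponential urgency term. A minimal sketch of the standard ATC priority, assuming illustrative job data and look-ahead parameter `k` (this is the generic dispatching index, not the paper's full ATC-BACO algorithm):

```python
import math

def atc_index(w, p, d, t, k, p_bar):
    """Apparent Tardiness Cost priority of a job at time t.
    w: weight, p: processing time, d: due date,
    k: look-ahead scaling parameter, p_bar: average remaining processing time."""
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k * p_bar))

# Two otherwise identical jobs at t = 0: one with a tight due date, one loose.
# (weights, processing times and due dates are illustrative)
tight = atc_index(w=1.0, p=2.0, d=3.0, t=0.0, k=2.0, p_bar=2.0)
loose = atc_index(w=1.0, p=2.0, d=20.0, t=0.0, k=2.0, p_bar=2.0)
# tight > loose: the rule prioritizes the job in danger of becoming tardy
```

At each scheduling decision the job (or batch) with the highest index is picked, which is what makes ATC a natural seed both for the BATC rule and for the ACO search.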

Relevance: 20.00%

Publisher:

Abstract:

We consider the problem of matching people to jobs, where each person ranks a subset of jobs in order of preference, possibly with ties. There are several notions of how to best match each person to a job; in particular, popularity is a natural and appealing notion of optimality. However, popular matchings do not always provide an answer, since there are simple instances that admit no popular matching. This motivates the following extension of the popular matchings problem. Given a graph G = (A ∪ J, E), where A is the set of people and J is the set of jobs, and a list ⟨c_1, ..., c_|J|⟩ of upper bounds on the capacities of the jobs, does there exist (x_1, ..., x_|J|) such that setting the capacity of the i-th job to x_i, where 1 ≤ x_i ≤ c_i for each i, makes the resulting graph admit a popular matching? In this paper we show that this problem is NP-hard, even when each c_i is 1 or 2.
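
A brute-force check illustrates why popular matchings need not exist. In the classic bad instance, three people share the same strict preference order over three unit-capacity jobs; every matching is beaten by another in a majority vote. The enumeration below is a sketch for this textbook instance, not the paper's algorithm:

```python
from itertools import combinations, permutations

people = ["a1", "a2", "a3"]
jobs = ["p1", "p2", "p3"]
# Every person has the same strict preference list: p1 > p2 > p3.
rank = {"p1": 0, "p2": 1, "p3": 2, None: 3}  # None = unmatched (worst outcome)

def all_matchings():
    """Enumerate every (possibly partial) matching as a dict person -> job or None."""
    result = []
    for k in range(len(people) + 1):
        for ppl in combinations(people, k):
            for js in permutations(jobs, k):
                m = {a: None for a in people}
                m.update(dict(zip(ppl, js)))
                result.append(m)
    return result

def more_popular(m1, m2):
    """True if strictly more people prefer their job in m1 to their job in m2."""
    pro = sum(1 for a in people if rank[m1[a]] < rank[m2[a]])
    con = sum(1 for a in people if rank[m1[a]] > rank[m2[a]])
    return pro > con

matchings = all_matchings()
popular = [m for m in matchings if not any(more_popular(m2, m) for m2 in matchings)]
# popular == []: none of the 34 matchings of this instance is popular
```

For example, the complete matching {a1-p1, a2-p2, a3-p3} loses 2-1 to {a2-p1, a3-p2, a1-p3}, and a similar rotation or promotion defeats every other matching.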

Relevance: 20.00%

Publisher:

Abstract:

A model comprising several servers, each equipped with its own queue and possibly a different service speed, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive at a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for nonexponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms use server idle-time measurements that are sent periodically to the central job scheduler. A model is developed for these measurements, and the above result is used to cast the problem as finding a projection of the root of an affine function when only noisy values of the function can be observed.
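
For two M/M/1 servers the optimal probabilistic split can be found numerically. A minimal sketch, using the standard M/M/1 mean response time 1/(mu - lambda) and illustrative rates (the rates and the grid search are assumptions for the example, not the paper's adaptive algorithm):

```python
# Two M/M/1 servers with service rates MU, dedicated arrival rates LAM_DED,
# and a generic Poisson stream of rate LAM_GEN split p / (1 - p) between them.
# All rates are illustrative assumptions; both servers must remain stable.
MU = (2.0, 2.0)
LAM_DED = (0.5, 0.5)
LAM_GEN = 0.6

def avg_response_time(p):
    """Arrival-rate-weighted mean response time over both servers."""
    lam = (LAM_DED[0] + p * LAM_GEN, LAM_DED[1] + (1 - p) * LAM_GEN)
    total = lam[0] + lam[1]
    return sum(l / total * 1.0 / (mu - l) for l, mu in zip(lam, MU))

# Grid search over the split probability p.
best_p = min((i / 1000 for i in range(1001)), key=avg_response_time)
# Symmetric servers and loads -> the optimal split is p = 0.5,
# which also equalizes the two idle probabilities 1 - lam_i / mu_i.
```

In this symmetric case the response-time-optimal split and the idle-time-balancing split coincide exactly, which is the special case of the weighted least-squares result stated above.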

Relevance: 20.00%

Publisher:

Abstract:

We consider the problem of minimizing the total completion time on a single batch processing machine. The set of jobs to be scheduled can be partitioned into a number of families, where all jobs in the same family have the same processing time. The machine can process at most B jobs simultaneously as a batch, and the processing time of a batch is equal to the processing time of the longest job in the batch. We analyze the properties of an optimal schedule and develop a dynamic programming algorithm of polynomial time complexity when the number of job families is fixed. The research is motivated by the problem of scheduling burn-in ovens in the semiconductor industry.
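
The objective is easy to state concretely: a batch finishes when its longest job does, and every job in the batch shares that completion time. A minimal sketch evaluating a given batch sequence (the job data are illustrative, and this only evaluates a schedule, it is not the paper's dynamic program):

```python
def total_completion_time(batches):
    """batches: ordered list of batches, each a list of job processing times
    (each batch of size at most B). A batch's processing time is the max job
    time in it; all jobs in a batch complete when the batch completes."""
    t = 0.0
    total = 0.0
    for batch in batches:
        t += max(batch)          # machine finishes this batch at time t
        total += t * len(batch)  # every job in the batch completes at t
    return total

# Two families with processing times 2 and 5, capacity B = 2:
tct = total_completion_time([[2, 2], [5]])  # 2 + 2 + 7 = 11
```

Reversing the sequence gives `total_completion_time([[5], [2, 2]]) == 19`, which shows why sequencing short batches first matters for this objective.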

Relevance: 20.00%

Publisher:

Abstract:

Climate change is projected to shift forest types, causing irreversible damage to forests by rendering several species extinct and potentially affecting the livelihoods of local communities and the economy. Approximately 47% and 42% of tropical dry deciduous grids are projected to undergo shifts under the A2 and B2 SRES scenarios respectively, as opposed to less than 16% of grids comprising tropical wet evergreen forests. Similarly, the tropical thorny scrub forest is projected to undergo shifts in the majority of forested grids under the A2 (more than 80%) as well as the B2 scenario (50% of grids). Forest managers and policymakers therefore need to adapt to the ecological as well as the socio-economic impacts of climate change. This requires formulating effective forest management policies and practices and incorporating climate concerns into long-term forest policy and management plans. India has formulated a large number of innovative and progressive forest policies, but a mechanism to ensure their effective implementation is needed, and additional policies and practices may be required to address the impacts of climate change. This paper discusses an approach and the steps involved in developing an adaptation framework, as well as the policies, strategies and practices needed to mainstream adaptation to projected climate change. Further, the existing barriers that may affect proactive adaptation planning, given the scale, accuracy and uncertainty associated with assessing climate change impacts, are presented.

Relevance: 20.00%

Publisher:

Abstract:

We review the current status of various aspects of biopolymer translocation through nanopores and the challenges and opportunities it offers. Much of the interest in nanopores arises from their potential application to cheap and fast third-generation genome sequencing. Although the ultimate goal of single-nucleotide identification has not yet been reached, great advances have been made from both a fundamental and an applied point of view, particularly in controlling the translocation time, in fabricating various kinds of synthetic pores or genetically engineering protein nanopores with tailored properties, and in devising methods (used separately or in combination) for discriminating nucleotides based on ionic or transverse electron currents, optical readout signatures, or the capabilities of the cellular machinery. Recently, exciting new applications have emerged for the detection of specific proteins and toxins (stochastic biosensors) and for the study of protein folding pathways and of the binding constants of protein-protein and protein-DNA complexes. The combined use of nanopores and advanced micromanipulation techniques involving optical or magnetic tweezers with high spatial resolution offers unique opportunities for improving the basic understanding of the physical behavior of biomolecules in confined geometries, with implications for the control of crucial biological processes such as protein import and protein denaturation. We highlight the key works in these areas along with future prospects. Finally, we review theoretical and simulation studies aimed at improving fundamental understanding of the complex microscopic mechanisms involved in the translocation process. Such understanding is a prerequisite to the fruitful application of nanopore technology in high-throughput devices for molecular biomedical diagnostics.

Relevance: 20.00%

Publisher:

Abstract:

In the last decade there has been tremendous interest in graphene transistors. Their greatest advantage for CMOS nanoelectronics applications is that graphene is compatible with planar CMOS technology and potentially offers excellent short-channel properties. Because of graphene's zero bandgap, the MOSFET cannot be turned off efficiently, and hence the typical on-current to off-current ratio (Ion/Ioff) has been less than 10. Several techniques have been proposed to open a bandgap in graphene. It has been demonstrated, both theoretically and experimentally, that graphene nanoribbons (GNRs) exhibit a bandgap that is inversely proportional to their width; GNRs about 20 nm wide have bandgaps in the range of 100 meV. However, it is very difficult to obtain GNRs with well-defined edges. An alternative technique to open the bandgap is to use bilayer graphene (BLG) with an asymmetric bias applied perpendicular to its plane. Another important CMOS metric, the subthreshold slope, is also limited by the inability to turn off the transistor. These devices could nevertheless be attractive for RF CMOS applications, although even for analog and RF applications the non-saturating behavior of the drain current can be an issue. Some studies have reported current saturation, but the mechanisms are still not very clear. In this talk we present some of our recent findings, based on simulations and experiments, and propose possible solutions for obtaining a high on-current to off-current ratio. A detailed study of high-field transport in graphene transistors, relevant for analog and RF applications, will also be presented.
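
The inverse width scaling quoted above is commonly written as E_g ≈ a/W. Fitting the constant to the single figure in the text (about 100 meV at 20 nm) gives a ≈ 2 eV·nm; reported values of this constant vary considerably between experiments, so this is an illustrative assumption, not a literature value:

```python
A_EV_NM = 2.0  # eV*nm -- assumed so that a 20 nm ribbon gives ~100 meV (the text's figure)

def gnr_bandgap_ev(width_nm):
    """Approximate graphene-nanoribbon bandgap (eV) from the empirical E_g ~ a/W scaling."""
    return A_EV_NM / width_nm

# Narrower ribbons open larger gaps:
gap_20nm = gnr_bandgap_ev(20.0)  # 0.1 eV = 100 meV
gap_5nm = gnr_bandgap_ev(5.0)    # 0.4 eV
```

The sketch makes the engineering trade-off concrete: gaps large enough for good Ion/Ioff require ribbons only a few nanometres wide, which is exactly where edge definition becomes hardest.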

Relevance: 20.00%

Publisher:

Abstract:

The twin demands of energy efficiency and higher performance on DRAM are strongly emphasized in multicore architectures. A variety of schemes have been proposed to address either the latency or the energy consumption of DRAM; these schemes typically require non-trivial hardware changes and end up improving latency at the cost of energy, or vice versa. One specific DRAM performance problem in multicores is that interleaved accesses from different cores can degrade row-buffer locality. In this paper, based on the temporal and spatial locality characteristics of memory accesses, we propose reorganizing the existing single large row buffer in a DRAM bank into multiple sub-row buffers (MSRB). This reorganization not only improves row hit rates, and hence average memory latency, but also reduces the energy consumed by the DRAM. The first major contribution of this work is achieving this reorganization without requiring any significant changes to the existing widely accepted DRAM specifications. Our proposed reorganization improves weighted speedup by 35.8%, 14.5% and 21.6% on quad-, eight- and sixteen-core workloads, along with 42%, 28% and 31% reductions in DRAM energy. The MSRB organization also enables the management of multiple row buffers at the memory-controller level. Because the memory controller is aware of the behaviour of individual cores, it can implement coordinated buffer-allocation schemes for different cores that take program behaviour into account. We demonstrate two such schemes, Fairness Oriented Allocation and Performance Oriented Allocation, which show the flexibility that memory controllers can now exploit in our MSRB organization to improve overall performance and/or fairness. Further, the MSRB organization enables additional opportunities for DRAM intra-bank parallelism and for selective early precharging of the LRU row buffer to further improve memory access latencies. These two optimizations together provide an additional 5.9% performance improvement.
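
The row-buffer interference that motivates MSRB shows up even in a toy simulation: with a single row buffer per bank, interleaved streams from two cores keep evicting each other's open rows, while a small set of sub-row buffers restores the hits. A sketch under illustrative assumptions (the trace, buffer counts and LRU policy below are not the paper's model):

```python
from collections import OrderedDict

def row_hits(trace, n_buffers):
    """Count row-buffer hits for a sequence of row IDs in one bank, with
    n_buffers sub-row buffers managed by LRU (n_buffers = 1 models a
    conventional single-row-buffer bank)."""
    buffers = OrderedDict()  # open row id -> None, kept in LRU order
    hits = 0
    for row in trace:
        if row in buffers:
            hits += 1
            buffers.move_to_end(row)     # refresh LRU position
        else:
            if len(buffers) >= n_buffers:
                buffers.popitem(last=False)  # evict the least recently used row
            buffers[row] = None
    return hits

# Two cores, each streaming through its own row, interleaved by the controller:
trace = [1, 2] * 8            # core A touches row 1, core B row 2, alternating
single = row_hits(trace, 1)   # 0 hits: each access evicts the other core's row
multi = row_hits(trace, 2)    # 14 hits: both rows stay open simultaneously
```

The single-buffer bank misses on all 16 accesses, while two sub-row buffers miss only on the first touch of each row, which is the locality effect the MSRB reorganization exploits.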

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we address a scheduling problem of minimizing total weighted tardiness. The background for the paper is derived from the automobile gear manufacturing process; we consider the bottleneck heat-treatment stage of gear manufacturing. Real-life features such as unequal release times, incompatible job families, non-identical job sizes, heterogeneous batch processors, and allowance for job splitting are considered. We develop a mathematical model that takes dynamic starting conditions into account. The problem considered in this study is NP-hard, and hence heuristic algorithms are proposed to address it. For real-life large-size problems, the performance of the proposed heuristics is evaluated using the method of estimated optimal solutions available in the literature. Extensive computational analyses reveal that the proposed heuristics consistently obtain near-optimal statistically estimated solutions in very reasonable computational time.
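
The objective these heuristics target is straightforward to evaluate for any candidate schedule: tardiness is T_j = max(0, C_j - d_j) and the objective is the weighted sum over jobs. A minimal single-processor sketch (the job data are illustrative; the paper's setting adds batches, releases and multiple processors on top of this):

```python
def total_weighted_tardiness(jobs):
    """jobs: list of (processing_time, due_date, weight) tuples, in processing
    order on a single processor that starts at time 0 with no idle time."""
    t = 0.0
    twt = 0.0
    for p, d, w in jobs:
        t += p                       # completion time C_j of this job
        twt += w * max(0.0, t - d)   # weighted tardiness w_j * max(0, C_j - d_j)
    return twt

# Two jobs (p, d, w): the first finishes on time, the second is 1 unit late
# with weight 2, so TWT = 2.
twt = total_weighted_tardiness([(2, 3, 1), (3, 4, 2)])
```

Heuristics for the full problem differ only in how they choose the batches and their order; each candidate is scored with exactly this kind of evaluation.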