53 results for "Evaluation of organizational performance"


Relevance: 100.00%

Publisher:

Abstract:

The first objective of this paper is to show that a single-stage adsorption-based cooling-cum-desalination system cannot be used if air-cooled heat rejection is employed under tropical conditions. This objective is achieved by operating a silica gel + water adsorption chiller first in a single-stage mode and then in a 2-stage mode, with 2 beds per stage in each case. The second objective is to improve upon earlier simulation results by empirically describing the thermal wave phenomena that occur when beds switch between adsorption and desorption and vice versa. Performance indicators, namely cooling capacity, coefficient of performance and desalinated water output, are extracted for various evaporator pressures and half-cycle times. The improved simulation model is found to interpret the experimental results more closely than the earlier one. Reasons for the decline in performance indicators from the theoretical to the actual scenario are appraised. (C) 2015 Elsevier Ltd and IIR. All rights reserved.
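
As a rough illustration of how the three reported performance indicators relate to measured quantities, the sketch below computes the coefficient of performance and the daily desalinated-water output from an energy balance; all numbers (heat rates, distillate mass, cycle time) are hypothetical placeholders, not data from the study.

    # Illustrative adsorption-chiller performance indicators (hypothetical numbers).
    Q_evap_kW = 8.0            # average cooling capacity at the evaporator
    Q_des_kW = 22.0            # average desorption heat input (hot water)
    distillate_kg_per_cycle = 3.2
    full_cycle_s = 2 * 420.0   # one full cycle = two half cycles

    cop = Q_evap_kW / Q_des_kW                                   # coefficient of performance
    water_per_day_kg = distillate_kg_per_cycle * (24 * 3600.0 / full_cycle_s)
    print(f"COP = {cop:.2f}, desalinated water = {water_per_day_kg:.0f} kg/day")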

Relevance: 100.00%

Publisher:

Abstract:

Measurement of device current during switching characterisation of an insulated gate bipolar transistor (IGBT) requires a current sensor with low insertion impedance and high bandwidth. This study presents an experimental procedure for evaluating the performance of a coaxial current transformer (CCT), designed for the above purpose. A prototype CCT, which can be mounted directly on a power terminal of a 1200 V/50 A half-bridge IGBT module, is characterised experimentally. The measured characteristics include insertion impedance, gain and phase of the CCT at different frequencies. The bounds of linearity within which the CCT can operate without saturation are determined theoretically, and are also verified experimentally. The experimental study on linearity of the CCT requires a high-amplitude current source. A proportional-resonant (PR) controller-based current-controlled half-bridge inverter is developed for this purpose. A systematic procedure for selection of PR controller parameters is also reported in this study. This set-up is helpful to determine the limit of linearity and also to measure the frequency response of the CCT at realistic amplitudes of current in the low-frequency range.
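
The abstract does not give the controller transfer function; the sketch below assumes the common non-ideal PR form Gc(s) = Kp + 2*Ki*wc*s / (s^2 + 2*wc*s + w0^2) and evaluates its frequency response near the fundamental, with all parameter values invented for illustration.

    import numpy as np

    # Illustrative non-ideal PR current controller (assumed parameters, not the paper's).
    Kp, Ki = 5.0, 500.0
    w0 = 2 * np.pi * 50.0      # resonant (fundamental) frequency, rad/s
    wc = 2 * np.pi * 1.0       # resonance bandwidth, rad/s

    def gc(f_hz):
        s = 1j * 2 * np.pi * f_hz
        return Kp + 2 * Ki * wc * s / (s**2 + 2 * wc * s + w0**2)

    for f in (45.0, 50.0, 55.0):
        g = gc(f)
        print(f"{f:5.1f} Hz: |Gc| = {abs(g):8.1f}, phase = {np.degrees(np.angle(g)):6.1f} deg")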

Relevance: 100.00%

Publisher:

Abstract:

Node mobility and management without a central administrator are key features of mobile ad hoc networks (MANETs). The increasing resource requirements of nodes running different applications, the scarcity of resources, and node mobility are the important issues to be considered when allocating resources in MANETs. Moreover, management of limited resources for optimal allocation is a crucial task. In this work we discuss the design of a resource allocation protocol and its performance evaluation. The proposed protocol uses both static and mobile agents. It distributes and parallelizes message propagation (a mobile agent carrying information) efficiently to achieve scalability and to speed up message delivery to the nodes in the sectors of the zones of a MANET. The protocol functionality has been simulated using the Java Agent DEvelopment Framework (JADE) for agent generation, migration and communication. A mobile agent migrates from the central, resource-rich node with a message and navigates autonomously through its zone of the network up to the boundary node. The performance evaluation shows that the proposed protocol takes much less time to allocate the required resources to the requesting nodes, utilizes fewer network resources and improves network scalability. (C) 2015 Elsevier B.V. All rights reserved.
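
JADE itself is Java-based and the protocol details are not given in the abstract, so the following is only a toy Python sketch of the underlying idea: one agent clone per sector of a zone carries the allocation message in parallel until it reaches the sector's boundary node. All names and numbers are invented.

    import concurrent.futures

    # Toy illustration (not the JADE implementation) of parallelized message propagation:
    # one mobile-agent clone per sector carries the resource-allocation message hop by hop.
    zones = {
        "zone1": {"sectorA": ["n1", "n2", "n3"], "sectorB": ["n4", "n5"]},
        "zone2": {"sectorA": ["n6", "n7"], "sectorB": ["n8", "n9", "n10"]},
    }

    def migrate_agent(zone, sector, nodes, message):
        visited = []
        for node in nodes:              # agent hops node to node within its sector
            visited.append(node)        # deliver the message / record resource needs
        return f"{zone}/{sector}: delivered '{message}' to {visited}"

    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(migrate_agent, z, s, nodes, "allocate: 2 CPU, 64 MB")
                   for z, sectors in zones.items() for s, nodes in sectors.items()]
        for f in futures:
            print(f.result())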

Relevance: 100.00%

Publisher:

Abstract:

Aerosol loading over the South Asian region has the potential to affect the monsoon rainfall, Himalayan glaciers and regional air quality, with implications for the billions of people living in this region. While field campaigns and network observations provide primary data, they tend to be location- or season-specific. Numerical models are useful for regionalizing such location-specific data. Studies have shown that numerical models underestimate the aerosol loading over the Indian region, mainly due to shortcomings in the meteorology and the emission inventories used. In this context, we have evaluated the performance of two such chemistry-transport models, WRF-Chem and SPRINTARS, over an India-centric domain. The models differ in many aspects, including physical domain, horizontal resolution and meteorological forcing. Despite these differences, both models simulated similar spatial patterns of black carbon (BC) mass concentration (with a spatial correlation of 0.9 with each other) and reasonable estimates of its magnitude, though both under-estimated it vis-a-vis the observations. While the emissions are lower (higher) in SPRINTARS (WRF-Chem), overestimation of wind parameters in WRF-Chem caused the simulated concentrations to be similar in the two models. Additionally, we quantified the under-estimation of anthropogenic BC emissions in the inventories used by these two models and in three other widely used emission inventories. Our analysis indicates that all these inventories underestimate BC emissions over India by a factor ranging from 1.5 to 2.9. We have also studied the model simulations of aerosol optical depth (AOD) over the Indian region. The models differ significantly in their AOD simulations, with WRF-Chem agreeing better with satellite observations as far as the spatial pattern is concerned. It is important to note that, in addition to BC, dust can also contribute significantly to AOD, and the models differ in their simulations of the spatial pattern of mineral dust over the Indian region. We find that both meteorological forcing and emission formulation contribute to these differences. Since AOD is a column-integrated parameter, the description of vertical profiles in the two models, especially since elevated aerosol layers are often observed over the Indian region, could also be a contributing factor. Additionally, differences in the prescription of the optical properties of BC between the models appear to affect the AOD simulations. We also compared the simulated sea-salt concentrations in the two models and found that WRF-Chem underestimates them vis-a-vis SPRINTARS; differences in near-surface oceanic wind speeds appear to be the main source of this discrepancy. In spite of these differences, there are similarities in the simulated spatial patterns of the various aerosol species (with each other and with observations), and hence the models can be valuable tools for aerosol-related studies over the Indian region. Better emission inventories could further improve aerosol-related simulations. (C) 2015 Elsevier Ltd. All rights reserved.
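
As a concrete illustration of the two comparison metrics quoted above, the spatial correlation between the simulated BC fields and the factor by which concentrations are underestimated relative to observations, the sketch below uses small made-up grids rather than actual model output.

    import numpy as np

    # Made-up gridded BC fields, not WRF-Chem/SPRINTARS output (ug/m^3).
    bc_model_a = np.array([[1.0, 2.0, 3.5], [0.8, 2.2, 4.0]])
    bc_model_b = np.array([[0.9, 1.8, 3.0], [0.7, 2.5, 3.6]])
    bc_observed = np.array([[2.0, 4.1, 6.9], [1.5, 4.8, 7.7]])

    # Spatial (pattern) correlation between the two simulated fields.
    r = np.corrcoef(bc_model_a.ravel(), bc_model_b.ravel())[0, 1]

    # Mean factor by which one model under-estimates the observations.
    factor_a = np.mean(bc_observed / bc_model_a)
    print(f"spatial correlation = {r:.2f}, underestimation factor = {factor_a:.1f}")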

Relevance: 100.00%

Publisher:

Abstract:

This paper presents three methodologies for determining optimum locations and magnitudes of reactive power compensation in power distribution systems. Method I and Method II are suitable for complex distribution systems with a combination of radial and ring-main feeders and with different voltage levels. Method III is suitable for low-tension, single-voltage-level radial feeders. Method I is based on an iterative scheme with successive power-flow analyses, with the optimization problem formulated and solved using linear programming. Methods II and III are essentially based on the steady-state performance of distribution systems; they are simple to implement and yield satisfactory results comparable with those of Method I. The proposed methods have been applied to several distribution systems, and results obtained for two typical systems are presented for illustration.
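
Method I is only outlined here; the sketch below shows, under entirely assumed sensitivities, costs and limits, the general shape of the linear program that such an iterative scheme might solve at each step: minimize compensation cost subject to linearized voltage-improvement constraints at candidate buses.

    import numpy as np
    from scipy.optimize import linprog

    # Toy LP for shunt compensation at 3 candidate buses (assumed data, not Method I).
    # Decision variables: qc1..qc3 in kvar.
    cost = np.array([1.0, 1.2, 0.9])           # relative cost per kvar at each bus

    # Linearized requirement: sensitivity-weighted injections must raise each bus
    # voltage by at least dV_required (pu). linprog uses A_ub x <= b_ub, so the
    # >= constraints are negated.
    dV_dq = np.array([[0.004, 0.002, 0.001],   # dV_bus_i / dQ_bus_j (pu per kvar)
                      [0.002, 0.005, 0.002],
                      [0.001, 0.002, 0.004]])
    dV_required = np.array([0.02, 0.03, 0.025])

    res = linprog(c=cost, A_ub=-dV_dq, b_ub=-dV_required,
                  bounds=[(0, 300)] * 3, method="highs")
    print("optimal kvar at candidate buses:", np.round(res.x, 1))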

Relevance: 100.00%

Publisher:

Abstract:

Combustion is a complex phenomenon involving a multiplicity of variables. Some important variables measured in flame tests follow [1]. To characterize ignition, related parameters such as ignition time, ease of ignition, flash-ignition temperature, and self-ignition temperature are measured. For studying flame propagation, parameters such as distance burned or charred, area of flame spread, time of flame spread, burning rate, charred or melted area, and fire endurance are measured. Smoke characteristics are studied by determining parameters such as specific optical density, maximum specific optical density, time of occurrence of these densities, maximum rate of density increase, visual obscuration time, and smoke obscuration index. In addition to the above variables, a number of specific properties of the combustible system can be measured: soot formation, toxicity of combustion gases, heat of combustion, dripping during the burning of thermoplastics, afterglow, flame intensity, fuel contribution, visual characteristics, limiting oxygen index (OI), products of pyrolysis and combustion, and so forth. A multitude of flammability tests measuring one or more of these properties have been developed [2]. Admittedly, no single small-scale test is adequate to mimic or assess the performance of a plastic in a real fire situation; the conditions are much too complicated [3, 4]. Some conceptual problems associated with flammability testing of polymers have been reviewed [5, 6].
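
To make two of the derived quantities listed above concrete, the sketch below reduces made-up raw measurements to a burning rate and to the maximum specific optical density and maximum rate of density increase; the numbers are placeholders, not results of any standard test.

    import numpy as np

    # Flame-spread measurements (placeholder values).
    distance_burned_mm = 75.0
    time_of_flame_spread_s = 150.0
    burning_rate_mm_per_s = distance_burned_mm / time_of_flame_spread_s

    # Specific optical density sampled over time in a smoke-density test.
    t_s = np.array([0, 60, 120, 180, 240, 300])
    Ds = np.array([0, 40, 150, 320, 380, 400])
    max_Ds = Ds.max()                                # maximum specific optical density
    max_rate = np.max(np.diff(Ds) / np.diff(t_s))    # maximum rate of density increase (1/s)

    print(f"burning rate = {burning_rate_mm_per_s:.2f} mm/s, "
          f"Dm = {max_Ds}, max dDs/dt = {max_rate:.2f} 1/s")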

Relevance: 100.00%

Publisher:

Abstract:

The design, implementation, and evaluation of a dual-microcomputer system based on the concept of shared memory are described. Shared memory is useful for passing large blocks of data, and it also provides a means to hold and work with shared data. In addition to the shared memory, a separate bus between the I/O ports of the two microcomputers is provided; this bus is used for interprocessor synchronization. Software routines helpful in applying the dual-microcomputer system to realistic problems are presented. Performance evaluation of the system is carried out using benchmarks.
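
The original system is hardware: two microcomputers, a shared memory block, and a separate I/O-port bus used for synchronization. The following Python sketch only mimics that structure in software, with a shared array standing in for the memory block and an event standing in for the synchronization bus.

    from multiprocessing import Process, Array, Event

    def producer(shared, ready):
        for i in range(len(shared)):
            shared[i] = i * i            # write a large block of data in place
        ready.set()                      # signal the partner over the "sync bus"

    def consumer(shared, ready):
        ready.wait()                     # block until the partner signals
        print("consumer read:", list(shared[:5]), "...")

    if __name__ == "__main__":
        shared = Array("i", 1024)        # stands in for the shared memory block
        ready = Event()                  # stands in for the interprocessor sync line
        p1 = Process(target=producer, args=(shared, ready))
        p2 = Process(target=consumer, args=(shared, ready))
        p1.start(); p2.start()
        p1.join(); p2.join()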

Relevance: 100.00%

Publisher:

Abstract:

A major concern of embedded system architects is design for low power. We address one aspect of the problem in this paper, namely the effect of executable code compression. Code compression has two benefits: first, a reduction in the memory footprint of embedded software, and second, a potential reduction in memory bus traffic and power consumption. Since decompression has to be performed at run time, it is implemented in hardware. We describe a tool called COMPASS which can evaluate a range of compression strategies for any given set of benchmarks and display compression ratios. Given an execution trace, it can also compute the effect of a range of compression strategies on bus toggles and cache misses. The tool is interactive and allows the user to vary a set of parameters and observe their effect on performance. We describe an implementation of the tool and demonstrate its effectiveness. To the best of our knowledge this is the first tool proposed for such a purpose.
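
COMPASS itself is not described beyond this abstract; the sketch below merely illustrates the two kinds of figures it reports, a compression ratio for an executable image and a bus-toggle count for an execution trace, using a generic compressor and made-up data.

    import zlib, random

    # Stand-in "executable image" and a generic compressor (not COMPASS's strategies).
    code_image = bytes(i % 64 for i in range(4096))
    compressed = zlib.compress(code_image, level=9)
    compression_ratio = len(compressed) / len(code_image)   # compressed size / original size

    def bus_toggles(words, width=32):
        # Count bit transitions between successive words driven on the memory bus.
        mask = (1 << width) - 1
        return sum(bin((a ^ b) & mask).count("1") for a, b in zip(words, words[1:]))

    trace = [random.getrandbits(32) for _ in range(1000)]    # stand-in execution trace
    print(f"compression ratio = {compression_ratio:.2f}, bus toggles = {bus_toggles(trace)}")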

Relevance: 100.00%

Publisher:

Abstract:

A performance-based liquefaction potential analysis was carried out in the present study to estimate the liquefaction return period for Bangalore, India, through a probabilistic approach. In this approach, the entire range of peak ground acceleration (PGA) and earthquake magnitudes is used in evaluating the liquefaction return period. The seismic hazard analysis for the study area was performed using a probabilistic approach to evaluate the peak horizontal acceleration at bedrock level. Based on the results of multichannel analysis of surface waves, the study area was found to belong to site class D. The PGA values for the study area were then evaluated for site class D, taking local site effects into account. The soil resistance of the study area was characterized using standard penetration test (SPT) values obtained from 450 boreholes. These SPT data, together with the PGA values obtained from the probabilistic seismic hazard analysis, were used to evaluate the liquefaction return period for the study area. Contour plots showing the spatial variation of the factor of safety against liquefaction and of the corrected SPT values required to prevent liquefaction, for a return period of 475 years at depths of 3 and 6 m, are presented in this paper. The entire process of liquefaction potential evaluation, from collection of earthquake data and identification of seismic sources to evaluation of the seismic hazard and assessment of the liquefaction return period, was carried out within the probabilistic framework.
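
The abstract does not reproduce the expressions used. For orientation, in the standard simplified procedure the factor of safety against liquefaction is FS = CRR / CSR with CSR = 0.65 (a_max/g) (sigma_v / sigma_v') r_d; the sketch below evaluates this with assumed values, which are not the study's PGA, stress or SPT results.

    # Simplified-procedure factor of safety against liquefaction (illustrative numbers).
    a_max_g = 0.15        # peak ground acceleration as a fraction of g
    sigma_v = 110.0       # total vertical stress at the depth of interest, kPa
    sigma_v_eff = 80.0    # effective vertical stress, kPa
    r_d = 0.96            # stress-reduction coefficient at this depth
    CRR = 0.18            # cyclic resistance ratio from the corrected SPT blow count

    CSR = 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d
    FS = CRR / CSR
    print(f"CSR = {CSR:.3f}, factor of safety against liquefaction = {FS:.2f}")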

Relevance: 100.00%

Publisher:

Abstract:

A structured systems methodology was developed to analyse the problems of production interruptions occurring at random intervals in continuous process type manufacturing systems. At a macro level the methodology focuses on identifying suitable investment policies to reduce interruptions of a total manufacturing system that is a combination of several process plants. An interruption-tree-based simulation model was developed for macroanalysis. At a micro level the methodology focuses on finding the effects of alternative configurations of individual process plants on the overall system performance. A Markov simulation model was developed for microlevel analysis. The methodology was tested with an industry-specific application.
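
The micro-level Markov model is not detailed in the abstract; a minimal two-state (running/interrupted) sketch of the kind of availability calculation such a model supports is given below, with assumed failure and repair rates.

    # Two-state Markov availability sketch for a single process plant (assumed rates;
    # the paper's micro-level model is richer than this).
    failure_rate = 0.02    # interruptions per hour (up -> down)
    repair_rate = 0.25     # restorations per hour (down -> up)

    # Steady state from balancing flows: failure_rate * P_up = repair_rate * P_down,
    # with P_up + P_down = 1.
    availability = repair_rate / (failure_rate + repair_rate)
    mean_up_time_h = 1.0 / failure_rate
    mean_down_time_h = 1.0 / repair_rate
    print(f"availability = {availability:.3f}, "
          f"MTBF = {mean_up_time_h:.0f} h, MTTR = {mean_down_time_h:.0f} h")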

Relevance: 100.00%

Publisher:

Abstract:

Recently, composite reinforcements, in which materials and material forms such as strips, grids, and strips with anchors are combined depending on requirements, have proven to be effective in various ground improvement applications. The composite geogrids studied in this paper belong to this category of composite reinforcements and are useful for bearing capacity improvement. The paper presents an evaluation of the results of bearing capacity tests conducted on a composite geogrid made of composite reinforcement consisting of steel and cement mortar. The study shows that the behavior of composite reinforcements follows the general trends observed for conventional geogrids with reference to the depth of the first layer below the footing, the number of reinforcement layers, and the vertical spacing of the reinforcement. The results show that the performance is comparable to that of a conventional polymer geogrid.
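
The improvement in such tests is commonly summarized as a bearing capacity ratio, BCR = q_reinforced / q_unreinforced; the short sketch below computes it from assumed plate-load pressures, not from the paper's test data.

    # Illustrative bearing capacity ratios (assumed ultimate pressures, kPa).
    q_unreinforced_kPa = 120.0
    q_composite_geogrid_kPa = 260.0
    q_polymer_geogrid_kPa = 250.0

    bcr_composite = q_composite_geogrid_kPa / q_unreinforced_kPa
    bcr_polymer = q_polymer_geogrid_kPa / q_unreinforced_kPa
    print(f"BCR composite = {bcr_composite:.2f}, BCR polymer = {bcr_polymer:.2f}")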

Relevance: 100.00%

Publisher:

Abstract:

In each stage of product development, decisions must be taken by evaluating multiple product alternatives against multiple criteria. Classical evaluation methods, such as the weighted objectives method, assume certainty about the information available during product development. However, designers must often evaluate under uncertainty: the likely performance, cost or environmental impact of a product proposal can typically be estimated only with a certain confidence, which may vary from one proposal to another. In such situations, the classical approaches to evaluation can give misleading results. There is a need for a method that can aid decision making by supporting quantitative comparison of alternatives, so as to identify the most promising alternative under uncertain information. A method called the confidence weighted objectives method is developed to compare the whole life cycle of product proposals using multiple evaluation criteria, under various levels of uncertainty and with non-crisp values. It estimates the overall worth of a proposal and the confidence in that estimate, enabling decision making to be deferred when decisions cannot be made using the currently available information.
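
The exact aggregation used in the confidence weighted objectives method is not given in the abstract; the sketch below is one plausible interpretation in which each criterion score carries its own confidence, and the overall worth is reported together with an aggregate confidence that can trigger deferral. Weights, scores and the 0.6 threshold are invented.

    # One plausible (not the authors') aggregation for a single product proposal.
    criteria_weights = {"performance": 0.5, "cost": 0.3, "environmental impact": 0.2}

    # (score in [0, 1], confidence in [0, 1]) per criterion
    proposal = {
        "performance":          (0.8, 0.9),
        "cost":                 (0.6, 0.5),
        "environmental impact": (0.7, 0.4),
    }

    worth = sum(w * proposal[c][0] for c, w in criteria_weights.items())
    confidence = sum(w * proposal[c][1] for c, w in criteria_weights.items())
    print(f"overall worth = {worth:.2f} with confidence {confidence:.2f}")
    if confidence < 0.6:
        print("confidence too low: defer the decision and gather more information")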

Relevance: 100.00%

Publisher:

Abstract:

Vehicular ad hoc network (VANET) applications are principally categorized into safety and commercial applications. Efficient traffic management for routing an emergency vehicle is of paramount importance in safety applications of VANETs. In the first case study, a typical high-density urban scenario is considered to demonstrate the role of the penetration ratio in achieving reduced travel time between source and destination points. The major requirement for testing these VANET applications is a realistic simulation approach that justifies the results prior to actual deployment. A traffic simulator coupled with a network simulator through a feedback loop is apt for realistic simulation of VANETs. Thus, in this paper, we develop the safety application using the traffic control interface (TraCI), which couples SUMO (a traffic simulator) and NS2 (a network simulator). Likewise, mean throughput is one of the essential performance measures for commercial applications of VANETs. In the second case study, commercial applications are considered in which data is transferred among vehicles (V2V) and between roadside infrastructure and vehicles (I2V), and the throughput is assessed.
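
TraCI is exposed to Python through the traci package shipped with SUMO; a minimal control loop of the kind used on the traffic side of such a coupling might look like the sketch below. The configuration file name is a placeholder, and the actual feedback coupling with NS2 is not shown.

    import traci

    # Minimal TraCI control loop: step the SUMO traffic simulation and read vehicle
    # states that a coupled network simulator could consume. "city.sumocfg" is a
    # placeholder configuration file; SUMO must be installed for this to run.
    traci.start(["sumo", "-c", "city.sumocfg"])
    positions = {}
    step = 0
    while step < 1000 and traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()                       # advance the traffic simulation one step
        for veh_id in traci.vehicle.getIDList():
            # per-vehicle state handed to the network side of the coupling
            positions[veh_id] = traci.vehicle.getPosition(veh_id)
        step += 1
    traci.close()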

Relevance: 100.00%

Publisher:

Abstract:

Displacement-amplifying compliant mechanisms (DaCMs) reported in the literature are mostly used for actuator applications. This paper considers them for sensor applications that rely on displacement measurement, and evaluates them objectively. The main goal is to increase sensitivity under the constraints imposed by several secondary requirements and practical considerations. A spring-mass-lever model that effectively captures the addition of a DaCM to a sensor is used to compare eight DaCMs. We observe that they differ significantly in performance criteria such as geometric advantage, stiffness, natural frequency, mode amplification, factor of safety against failure, cross-axis stiffness, etc., but none excels in all of them. Thus, a combined figure of merit is proposed, using which the most suitable DaCM can be selected for a sensor application. A case study of a micromachined capacitive accelerometer and another of a vision-based force sensor are included to illustrate the general evaluation and selection procedure for DaCMs in specific applications. Other insights gained from the analysis include the optimum size-scale for a DaCM, its effect on natural frequency, limits on its stiffness, and the working range of the sensor.
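
The paper's combined figure of merit is not reproduced in the abstract; the sketch below shows a generic weighted sum of normalized criteria that could be used to rank candidate DaCMs, with invented criterion values, weights and preference directions.

    import numpy as np

    # Generic ranking of candidate DaCMs (invented numbers, not the paper's figure of merit).
    criteria = ["geometric advantage", "stiffness", "natural frequency", "factor of safety"]
    higher_is_better = np.array([True, False, True, True])   # assumed preference directions
    weights = np.array([0.4, 0.2, 0.2, 0.2])

    # rows: candidate DaCMs, columns: criteria
    raw = np.array([[12.0,  800.0, 4.5e3, 2.0],
                    [20.0, 1500.0, 3.0e3, 1.6],
                    [ 8.0,  400.0, 6.0e3, 2.5]])

    norm = raw / raw.max(axis=0)                              # scale each criterion to [0, 1]
    norm = np.where(higher_is_better, norm, 1.0 - norm)       # flip criteria where lower is better
    figure_of_merit = norm @ weights
    print("combined figure of merit per DaCM:", np.round(figure_of_merit, 2))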

Relevance: 100.00%

Publisher:

Abstract:

Dynamic Voltage and Frequency Scaling (DVFS) offers huge potential for designing trade-offs involving the energy, power, temperature and performance of computing systems. In this paper, we evaluate three different DVFS schemes on multithreaded stream applications in a full-system Chip Multiprocessor (CMP) simulator: our extension of a Petri-net performance-model-based DVFS method from sequential programs to stream programs, a simple profile-based linear scaling method, and an existing hardware-based DVFS method for multithreaded applications. From our evaluation, we find that the software-based methods achieve significant Energy/Throughput² (ET⁻²) improvements, whereas the hardware-based scheme degrades performance heavily and suffers an ET⁻² loss. Our results indicate that the simple profile-based scheme achieves the benefits of the more complex Petri-net-based scheme for stream programs, and they present a strong case for independent voltage/frequency control for the different cores of a CMP, which is lacking in most state-of-the-art CMPs. This is in contrast to the conclusions of a recent evaluation of per-core DVFS schemes for multithreaded applications on CMPs.
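
The ET⁻² metric weighs energy against the square of throughput, so a scheme improves ET⁻² only when its energy savings outweigh the square of any throughput it gives up; the short sketch below makes that comparison with assumed numbers.

    # ET^-2 = energy / throughput^2 (lower is better). Illustrative numbers only.
    def et2(energy_joules, throughput_items_per_s):
        return energy_joules / throughput_items_per_s ** 2

    baseline = et2(energy_joules=100.0, throughput_items_per_s=50.0)
    dvfs     = et2(energy_joules=70.0,  throughput_items_per_s=46.0)   # saves energy, loses a little throughput
    print(f"ET^-2 improvement = {baseline / dvfs:.2f}x")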