992 results for effective utilization
Abstract:
Mode of access: Internet.
Abstract:
"Issued May 1980."
Abstract:
The aim of this study is to investigate the role of operational flexibility in effective project management in the construction industry. The specific objectives are to: (a) identify the determinants of operational flexibility potential in construction project management; (b) investigate the contribution of each determinant to operational flexibility potential in the construction industry; (c) investigate the moderating factors of operational flexibility potential in a construction project environment; (d) investigate whether moderated operational flexibility potential mediates the path between predictors and effective construction project management; and (e) develop and test a conceptual model of achieving operational flexibility for effective project management. The purpose of this study is to find out how flexibility can be utilized to manage an uncertain project environment and ultimately achieve effective project management, and in what configuration these operational flexibility determinants are demanded by the construction project environment in order to achieve project success. This research was conducted in three phases: (i) an exploratory phase; (ii) a questionnaire development phase; and (iii) a data collection and analysis phase. The study requires firm-level analysis, and therefore real estate developers who are members of CREDAI, Kerala Chapter, were considered. This study provides a framework for the functioning of operational flexibility, offering guidance to researchers and practitioners for discovering means to gain operational flexibility in construction firms. The findings provide an empirical understanding of the kinds of resources and capabilities a construction firm must accumulate to respond flexibly to a changing project environment, offering practitioners insights into practices that build a firm's operational flexibility potential. Firms are dealing with complex, continuously changing, and uncertain environments due to trends of globalization, technical changes and innovations, and changes in customers' needs and expectations. To cope with an increasingly uncertain and quickly changing environment, firms strive for flexibility. To achieve the level of flexibility that adds value for customers, firms should look at flexibility from a day-to-day operational perspective. Each dimension of operational flexibility is derived from competences and capabilities. In this thesis, only the flexibility dimensions that directly add value in the customers' eyes are studied, with respect to their influence on customer satisfaction and learning exploitation, to answer the following research questions: "What is the impact of operational flexibility on customer satisfaction?" and "What are the predictors of operational flexibility in the construction industry?" These questions can only be answered after first answering questions such as "Why do firms need operational flexibility?" and "How can firms achieve operational flexibility?" in the context of the construction industry. The need for construction firms to be flexible, via the effective utilization of organizational resources and capabilities for improved responsiveness, is important because of the increasing rate of change in the business environment within which they operate. Achieving operational flexibility is also important because it has a significant correlation with project effectiveness and hence a firm's turnover.
It is essential for academics and practitioners to recognize that attaining operational flexibility involves different types, namely (i) modification, (ii) new product development, and (iii) demand management, each of which requires a different configuration of predictors (i.e., resources, capabilities, and strategies). Construction firms should consider these relationships and implement appropriate management practices for developing and configuring the right kinds of resources, capabilities, and strategies towards achieving the different operational flexibility types.
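The conceptual model sketched in this abstract (predictors, a moderated operational flexibility potential, and effective project management as the outcome) is a moderated mediation structure. As a minimal illustration of how such a structure might be tested, assuming synthetic data and plain least-squares regressions rather than the thesis's actual survey constructs and estimation procedure (all variable names are hypothetical stand-ins):

```python
# Illustrative moderated-mediation test: predictors -> flexibility potential
# (moderated) -> project effectiveness. Synthetic data; variable names are
# hypothetical stand-ins for the study's survey constructs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
resources = rng.normal(size=n)       # predictor: resources/capabilities
uncertainty = rng.normal(size=n)     # moderator: project environment uncertainty
flexibility = 0.6 * resources + 0.3 * resources * uncertainty + rng.normal(size=n)
effectiveness = 0.5 * flexibility + rng.normal(size=n)

# Stage 1: is the predictor -> flexibility path moderated?
X1 = sm.add_constant(np.column_stack([resources, uncertainty,
                                      resources * uncertainty]))
stage1 = sm.OLS(flexibility, X1).fit()

# Stage 2: does flexibility carry the effect through to effectiveness?
X2 = sm.add_constant(np.column_stack([resources, flexibility]))
stage2 = sm.OLS(effectiveness, X2).fit()

print(stage1.params)  # interaction coefficient indicates moderation
print(stage2.params)  # flexibility coefficient indicates the mediation path
```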
Abstract:
A fundamental principle of the resource-based view (RBV) of the firm is that the basis for competitive advantage lies primarily in the application of bundles of valuable strategic capabilities and resources at a firm's or supply chain's disposal. These capabilities enact research activities and outputs produced by industry-funded R&D bodies. Such industry-led innovations are seen as strategic industry resources, because effective utilization of industry innovation capacity by sectors such as the Australian beef industry is critical if productivity levels are to increase. Academics and practitioners often maintain that dynamic supply chains and innovation capacity are the mechanisms most likely to deliver performance improvements in national industries. Yet many industries are still failing to capitalise on these strategic resources. In this research, we draw on the RBV and embryonic research into strategic supply chain capabilities. We investigate how two strategic supply chain capabilities (supply chain performance differential capability and supply chain dynamic capability) influence industry-led innovation capacity utilization and provide superior performance enhancements to the supply chain. In addition, we examine the influence of the size of the supply chain operative as a control variable. Results indicate that both small and large supply chain operatives in this industry believe these strategic capabilities influence, and function as, second-order latent variables of this strategic supply chain resource. Additionally, respondents acknowledge that size impacts both the amount of influence these strategic capabilities have and the level of performance enhancement supply chain operatives expect from utilizing industry-led innovation capacity. Results, however, also indicate contradictions in this industry and in relation to the existing literature when it comes to utilizing such resources.
Abstract:
Increasing penetration of photovoltaics (PV) as well as increasing peak load demand has resulted in poor voltage profiles for some residential distribution networks. This paper proposes coordinated use of PV and Battery Energy Storage (BES) to address voltage rise and/or dip problems. The reactive capability of the PV inverter combined with a droop-based BES system is evaluated for rural and urban scenarios (having different R/X ratios). Results show that reactive compensation from PV inverters alone is sufficient to maintain an acceptable voltage profile in the urban scenario (low-resistance feeder), whereas coordinated PV and BES support is required for the rural scenario (high-resistance feeder). Constant as well as variable droop-based BES schemes are analyzed. The required BES sizing and associated cost to maintain an acceptable voltage profile under both schemes are presented. Uncertainties in PV generation and load are considered, with probabilistic estimation of PV generation and randomness in load modeled to characterize the effective utilization of BES. Actual PV generation data and distribution system network data are used to verify the efficacy of the proposed method.
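The droop-based BES scheme referenced above maps the local voltage deviation to a battery charge/discharge setpoint. A minimal sketch of a constant-droop rule, assuming illustrative per-unit thresholds, droop gain, and converter rating (not the values used in the paper):

```python
# Illustrative constant-droop BES control: charge on overvoltage (PV peak),
# discharge on undervoltage (load peak). Thresholds, droop gain, and rating
# are assumed per-unit/kW values, not those used in the paper.
def bes_droop_power(v_pu: float,
                    v_upper: float = 1.05,
                    v_lower: float = 0.95,
                    droop_kw_per_pu: float = 500.0,
                    p_rated_kw: float = 50.0) -> float:
    """Return BES power in kW (+ charging, - discharging) for a bus voltage."""
    if v_pu > v_upper:        # voltage rise: absorb surplus PV power
        p = droop_kw_per_pu * (v_pu - v_upper)
    elif v_pu < v_lower:      # voltage dip: inject stored energy
        p = -droop_kw_per_pu * (v_lower - v_pu)
    else:                     # inside the deadband: stay idle
        p = 0.0
    # Clamp to the battery converter rating.
    return max(-p_rated_kw, min(p_rated_kw, p))

# Example: midday voltage rise of 1.07 pu on a rural (high-R/X) feeder.
print(bes_droop_power(1.07))  # -> 10.0 kW charging
```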
Abstract:
Environmental monitoring is becoming critical as human activity and climate change place greater pressures on biodiversity, leading to an increasing need for data to make informed decisions. Acoustic sensors can help collect data across large areas for extended periods, making them attractive in environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a great challenge and is consequently hindering the effective utilization of the large datasets collected. This paper presents an overview of our current techniques for collecting, storing, and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.
Abstract:
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of these machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects are not discussed in detail since no new problems are encountered there. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
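Since DFL's actual syntax appears only in the paper's sample procedures, the sketch below illustrates the underlying execution model instead: in a data flow graph, any operator whose inputs are all available may fire, so data-independent operations can run in parallel. The graph encoding and scheduling loop here are illustrative assumptions in Python, not the DFL compiler's implementation:

```python
# Illustrative data flow evaluation: operators fire as soon as all their
# inputs are ready, so independent operations could execute in parallel.
graph = {
    # node: (function, list of input node names)
    "a":    (lambda: 2, []),
    "b":    (lambda: 3, []),
    "sq_a": (lambda a: a * a, ["a"]),       # independent of sq_b ...
    "sq_b": (lambda b: b * b, ["b"]),       # ... so both could run concurrently
    "sum":  (lambda x, y: x + y, ["sq_a", "sq_b"]),
}

values = {}
while len(values) < len(graph):
    # One "wavefront": every node whose inputs are ready fires this step.
    ready = [n for n, (_, deps) in graph.items()
             if n not in values and all(d in values for d in deps)]
    for n in ready:
        fn, deps = graph[n]
        values[n] = fn(*(values[d] for d in deps))

print(values["sum"])  # 2*2 + 3*3 = 13
```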
Abstract:
In linear elastic fracture mechanics (LEFM), Irwin's crack closure integral (CCI) is one of the significant concepts for the estimation of strain energy release rates (SERR), G, in individual as well as mixed-mode configurations. For effective utilization of this concept in conjunction with the finite element method (FEM), Rybicki and Kanninen [Engng Fracture Mech. 9, 931-938 (1977)] proposed simple and direct estimates of the CCI in terms of the nodal forces and displacements in the elements forming the crack tip, obtained from a single finite element analysis instead of the conventional two-configuration analyses. These modified CCI (MCCI) expressions are inherently element dependent, so a systematic derivation of the expressions from element stress and displacement distributions is required. In the present work, a general procedure is given for the derivation of MCCI expressions in 3D crack problems. Further, a concept of sub-area integration is proposed which facilitates evaluation of the SERR at a large number of points along the crack front without refining the finite element mesh. Numerical data are presented for two standard problems: a thick centre-cracked tension specimen and a semi-elliptical surface crack in a thick slab. Estimates of the stress intensity factor based on MCCI expressions corresponding to eight-noded brick elements are obtained and compared with available results in the literature.
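For reference, the familiar 2D single-analysis form of this nodal-force-and-displacement estimate (the virtual crack closure expression for four-noded elements, from which the MCCI idea generalizes) is shown below; the paper's 3D eight-noded-brick expressions follow the same pattern over crack-front sub-areas. The symbols are the standard ones, not necessarily the paper's notation:

```latex
% Mode-I SERR from one analysis with crack-tip element size \Delta a:
% F_y is the nodal force normal to the crack plane at the crack-tip node,
% \Delta v the relative opening of the node pair just behind the tip.
G_{I} \approx \frac{F_{y}\,\Delta v}{2\,\Delta a},
\qquad
G = G_{I} + G_{II} + G_{III}
```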
Abstract:
Thermal power stations using pulverized coal as fuel generate large quantities of fly ash as a byproduct, which has created environmental and disposal problems. Using fly ash for gainful applications would solve these problems. Among the various possible uses of fly ash, the most massive and effective utilization is in geotechnical engineering applications such as backfill material, embankment construction, and subbase material. A proper understanding of fly ash-soil mixes is likely to provide viable solutions for its large-scale utilization. Earlier laboratory studies have resulted in a good understanding of the California Bearing Ratio (CBR) behavior of fly ash-soil mixes. Subsequently, in order to increase the CBR value, cement has been tried as an additive to fly ash-soil mixes. This paper reports the results.
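For context, the California Bearing Ratio referenced above is the standard penetration-test ratio; a worked form of the definition (standard load value as commonly tabulated in IS 2720 practice, quoted here for orientation rather than from the paper):

```latex
% CBR: penetration-test load relative to the standard crushed-stone load
% at the same plunger penetration.
\mathrm{CBR}\,(\%) = \frac{P_{\text{test}}}{P_{\text{standard}}} \times 100,
\qquad
P_{\text{standard}} \approx 1370\ \mathrm{kgf} \text{ at } 2.5\ \mathrm{mm} \text{ penetration}
```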
Abstract:
Computational grids with multiple batch systems (batch grids) can be powerful infrastructures for executing long-running multi-component parallel applications. In this paper, we evaluate the potential improvements in throughput of long-running multi-component applications when the different components of the applications are executed on multiple batch systems of batch grids. We compare the multiple-batch executions with executions of the components on a single batch system without increasing the number of processors used. We perform our analysis with a prominent long-running multi-component application for climate modeling, the Community Climate System Model (CCSM). We have built a robust simulator that models the characteristics of both the multi-component application and the batch systems. By conducting a large number of simulations with different workload characteristics and queuing policies of the systems, processor allocations to components of the application, distributions of the components across the batch systems, and inter-cluster bandwidths, we show that multiple-batch executions lead to a 55% average increase in throughput over single-batch executions for long-running CCSM. We also conducted real experiments with a practical middleware infrastructure and showed that multi-site executions lead to effective utilization of batch systems for executions of CCSM and give higher simulation throughput than single-site executions. Copyright (c) 2011 John Wiley & Sons, Ltd.
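A toy version of the comparison the simulator makes: two coupled components each need a batch slot per phase; on one system they contend for the same queue, on two systems they queue independently and their waits overlap. All distributions and timings below are invented for illustration and bear no relation to the paper's calibrated simulator:

```python
# Toy throughput comparison for a two-component coupled application on
# one batch queue versus two independent queues. Numbers are illustrative.
import random

random.seed(1)
PHASES, RUN_T = 50, 1.0              # phases per experiment, compute time each

def queue_wait():
    return random.expovariate(1.0)   # assumed queue-wait distribution

def total_time(n_systems):
    t = 0.0
    for _ in range(PHASES):
        waits = [queue_wait() for _ in range(2)]  # two components
        if n_systems == 1:
            t += sum(waits) + RUN_T  # serialized through one queue
        else:
            t += max(waits) + RUN_T  # queue waits overlap across systems
    return t

single, multi = total_time(1), total_time(2)
print(f"multi-system speedup: {single / multi:.2f}x")
```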
Abstract:
In this paper we derive an approach for the effective utilization of thermodynamic data in phase-field simulations. While the most widely used methodology for multi-component alloys follows the work of Eiken et al. (2006), wherein an extrapolative scheme is utilized in conjunction with the TQ interface for deriving the driving force for phase transformation, a correspondingly simple method based on the formulation of a parabolic free-energy model incorporating all the thermodynamics has been laid out for binary alloys in the work of Folch and Plapp (2005). In the following, we extend this latter approach to multi-component alloys in the framework of the grand-potential formalism. The coupling is applied to binary eutectic solidification in the Cr-Ni alloy and two-phase solidification in the ternary eutectic alloy Al-Cr-Ni. A thermodynamic justification forms the basis of the formulation and places it in the context of the bigger picture of Integrated Computational Materials Engineering. (C) 2015 Elsevier Ltd. All rights reserved.
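A sketch of the parabolic construction referred to above, written in standard grand-potential notation for a single composition variable (the coefficients A, B and equilibrium compositions c^eq are generic placeholders, not the paper's fitted values): each phase gets a parabolic free energy fitted to the thermodynamic database, and the driving force follows from the grand-potential difference at a common chemical potential.

```latex
% Parabolic free energy per phase, fitted to thermodynamic data, and the
% phase composition at chemical potential \mu (since \mu = 2A(c - c^eq)):
f_{\alpha}(c) = A_{\alpha}\,(c - c_{\alpha}^{\mathrm{eq}})^{2} + B_{\alpha},
\qquad
c_{\alpha}(\mu) = c_{\alpha}^{\mathrm{eq}} + \frac{\mu}{2 A_{\alpha}}
% Grand potential of phase \alpha and the driving force for \alpha -> \beta:
\omega_{\alpha}(\mu) = f_{\alpha}\big(c_{\alpha}(\mu)\big) - \mu\,c_{\alpha}(\mu),
\qquad
\Delta\omega_{\alpha\beta} = \omega_{\beta}(\mu) - \omega_{\alpha}(\mu)
```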
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs a point-wise, stencil, reduction, or data-dependent operation on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth, preventing effective utilization of the parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
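To make the fusion motivation concrete, here is a plain NumPy sketch (not PolyMage syntax) of a two-stage 1D blur pipeline: the naive version materializes the full intermediate stage in memory, which is exactly the bandwidth cost that fusing stages inside tiles avoids, at the price of some recomputation.

```python
# Two-stage 1D blur written naively: stage 1 is fully materialized before
# stage 2 reads it, costing a full round trip to memory. A fusing compiler
# like PolyMage computes stage-1 values on the fly inside stage-2 tiles.
import numpy as np

img = np.random.rand(1 << 20)

# Unfused: the intermediate 'blur1' is written to and re-read from memory.
blur1 = (img[:-2] + img[1:-1] + img[2:]) / 3.0
blur2 = (blur1[:-2] + blur1[1:-1] + blur1[2:]) / 3.0

# "Fused" equivalent: each output computed from the input in one pass,
# trading recomputation of stage-1 values for memory bandwidth.
fused = (img[:-4] + 2 * img[1:-3] + 3 * img[2:-2]
         + 2 * img[3:-1] + img[4:]) / 9.0

assert np.allclose(blur2, fused)
```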
Abstract:
Aspects of the Nigerian fishing industry are outlined to explain the concept of fishing systems viability, which is often influenced by a combination of factors including biological productivity as well as technical, economic, and social factors. The productivity of aquatic environments can be increased by the construction and installation of artificial reefs and fish aggregating devices. These man-made structures provide shelter, food, and breeding grounds for finfish and shellfish. These habitat enhancement techniques are appropriate, efficient, cheap, and simple strategic options for increasing fish production. Recommendations for effective utilization and long-term management are outlined.
Abstract:
This paper introduces current work on collating data from different projects using soil mix technology and establishing trends using artificial neural networks (ANNs). Variation in unconfined compressive strength as a function of selected soil mix variables (e.g., initial soil water content and binder dosage) is observed through the data compiled from completed and ongoing soil mixing projects around the world. The potential and feasibility of ANNs for developing predictive models that take into account a large number of variables are discussed. The main objective of the work is the management and effective utilization of salient variables and the development of predictive models useful for soil mix technology design. Based on the observed success of the predictions made, this paper suggests that neural network analysis for the prediction of properties of soil mix systems is feasible. © ASCE 2011.
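A minimal sketch of the kind of ANN mapping described, assuming scikit-learn and synthetic stand-in data (the paper's project database, input set, and network design are not reproduced; the strength trend below is an assumption for illustration only):

```python
# Illustrative ANN predicting unconfined compressive strength (UCS) from
# soil mix variables. Data is synthetic; only the workflow is shown.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
water_content = rng.uniform(20, 120, n)    # initial soil water content (%)
binder_dosage = rng.uniform(100, 400, n)   # binder dosage (kg/m^3)
# Assumed trend only: strength rises with binder dosage, falls with water.
ucs = 2.0 * binder_dosage / water_content + rng.normal(0.0, 0.5, n)

X = np.column_stack([water_content, binder_dosage])
X_train, X_test, y_train, y_test = train_test_split(X, ucs, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print(f"R^2 on held-out mixes: {model.score(X_test, y_test):.2f}")
```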
Abstract:
Surimi was prepared from silver carp with the aim of putting this underutilized fish to profitable use. The mince prepared was washed twice with chilled water (5°C) using a mince-to-water ratio (w/v) of 1:2 for 5-6 minutes each time. After final dewatering to a moisture content of about 80%, half the quantity of washed minced meat was mixed with cryoprotectants (4% sorbitol, 4% sucrose, and 0.3% sodium tripolyphosphate) to produce surimi. The prepared surimi and the dewatered minced meat were packed in LDPE bags, frozen using a plate freezer, and stored at -20°C. Surimi and dewatered minced meat from frozen storage were used as base material for the production of fish cakes. These were fried at 160°C for 3 to 4 minutes before being served for organoleptic testing. Changes in salt-soluble nitrogen, total volatile base nitrogen, non-protein nitrogen, peroxide value, and free fatty acid content of the surimi and dewatered mince were estimated at ten-day intervals during the storage period of 3 months. The study indicates that frozen storage of surimi could be a potential method for effective utilization of silver carp. This surimi, when incorporated in fish cakes, yielded products that retained their quality even up to 90 days of storage.