960 results for information flow
Abstract:
The condensation rate has to be high in the safety pressure suppression pool systems of Boiling Water Reactors (BWR) in order to fulfill their safety function. The phenomena due to such a high direct contact condensation (DCC) rate turn out to be very challenging to analyse, either with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled by using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were the ones typically used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was the first to be simulated. The POOLEX experiment STB-31 was operated near the conditions between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very fine calculation grid was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen, because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data of the bubble sizes and the temporal data of the bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations.
In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. The compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which invoked self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. The comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
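The comparative FFT step described above can be sketched as follows. This is a minimal illustration, not the thesis code: the routine would be applied both to the bubble-size series extracted from the video and to the pool-bottom pressure signal, and shared spectral peaks then point to the chugging eigenmode. The signal, sampling rate, and 2 Hz test frequency are invented for the example.

```python
import numpy as np

def dominant_frequencies(signal, sample_rate_hz, n_peaks=3):
    """Return the frequencies of the strongest spectral peaks of a series."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()              # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
    order = np.argsort(spectrum[1:])[::-1] + 1   # rank bins by magnitude, skip bin 0
    return freqs[order[:n_peaks]]

# Synthetic stand-in for a measured series: a 2 Hz oscillation plus noise
np.random.seed(0)
t = np.arange(0.0, 10.0, 0.01)                   # 10 s sampled at 100 Hz
x = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)
peaks = dominant_frequencies(x, 100.0)
```

With a 10 s record the bin width is 0.1 Hz, so the strongest peak lands on the 2 Hz bin despite the added noise.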
Abstract:
A fast-changing environment puts pressure on firms to share large amounts of information with their customers and suppliers. The terms information integration and information sharing are essential for facilitating a smooth flow of information throughout the supply chain, and the terms are used interchangeably in the research literature. By integrating and sharing information, firms want to improve their logistics performance. Firms share information with their suppliers and customers by using traditional communication methods (telephone, fax, e-mail, written and face-to-face contacts) and by using advanced or modern communication methods such as electronic data interchange (EDI), enterprise resource planning (ERP), web-based procurement systems, electronic trading systems and web portals. Adopting new ways of using IT is one important resource for staying competitive in a rapidly changing market (Saeed et al. 2005, 387), and an information system that provides people with the information they need to perform their work will support company performance (Boddy et al. 2005, 26). The purpose of this research has been to test and understand the relationship between information integration with key suppliers and/or customers and a firm's logistics performance, especially when information technology (IT) and information systems (IS) are used for integrating information. Quantitative and qualitative research methods have been used to perform the research. Special attention has been paid to the scope, level and direction of information integration (Van Donk & van der Vaart 2005a). In addition, the four elements of integration (Jahre & Fabbe-Costes 2008) are closely tied to the frame of reference. The elements are integration of flows, integration of processes and activities, integration of information technologies and systems, and integration of actors.
The study found that information integration has a low positive relationship to operational performance and a medium positive relationship to strategic performance. The potential performance improvements found in this study vary from efficiency, delivery and quality improvements (operational) to profit, profitability or customer satisfaction improvements (strategic). The results indicate that although information integration has an impact on a firm's logistics performance, not all performance improvements have been achieved. This study also found that the use of IT and IS has a medium positive relationship to information integration. Almost all case companies agreed that the use of IT and IS could facilitate information integration and improve their logistics performance. The case companies felt that an implementation of a web portal or a data bank would benefit them by enhancing their performance and increasing information integration.
Abstract:
The aim of this study was to develop a theoretical model for information integration to support the decision making of intensive care charge nurses and physicians in charge – that is, ICU shift leaders. The study focused on the ad hoc decision-making and immediate information needs of shift leaders during the management of an intensive care unit's (ICU) daily activities. The term 'ad hoc decision-making' was defined as critical judgements that are needed for a specific purpose at a precise moment, with the goal of ensuring instant and adequate patient care and a fluent flow of ICU activities. Data collection and research analysis methods were tested in the identification of ICU shift leaders' ad hoc decision-making. The decision-making of ICU charge nurses (n = 12) and physicians in charge (n = 8) was observed using a think-aloud technique in two university-affiliated Finnish ICUs for adults. The ad hoc decisions of ICU shift leaders were identified using an application of protocol analysis. In the next phase, a structured online questionnaire was developed to evaluate the immediate information needs of ICU shift leaders. A national survey was conducted in all Finnish university-affiliated hospital ICUs for adults (n = 17). The questionnaire was sent to all charge nurses (n = 515) and physicians in charge (n = 223). Altogether, 257 charge nurses (50%) and 96 physicians in charge (43%) responded to the survey. The survey was also tested internationally in 16 Greek ICUs. From Greece, 50 charge nurses out of 240 (21%) responded to the survey. The think-aloud technique and protocol analysis were found to be applicable for the identification of the ad hoc decision-making of ICU shift leaders. During one day, shift leaders made over 200 ad hoc decisions. Ad hoc decisions were made horizontally, related to the whole intensive care process, and vertically, concerning single intensive care incidents.
Most of the ICU shift leaders' ad hoc decisions were related to human resources and know-how, patient information and vital signs, and special treatments. Commonly, this ad hoc decision-making involved several multiprofessional decisions that constituted a bundle of immediate decisions and various information needs. Some of these immediate information needs were shared between the charge nurses and the physicians in charge; the majority of these concerned patient admission, the organisation and management of work, and staff allocation. In general, the information needs of charge nurses were more varied than those of physicians. It was found that many ad hoc decisions made by the physicians in charge produced several information needs for ICU charge nurses. This meant that before the task at hand was completed, various kinds of information were sought by the charge nurses to support the decision-making process. Most of the immediate information needs of charge nurses were related to the organisation and management of work and human resources, whereas the information needs of the physicians in charge mainly concerned direct patient care. Thus, information needs differ between professionals even if the goal of decision-making is the same. The results of the international survey confirmed these study results for charge nurses. Both in Finland and in Greece, the information needs of charge nurses focused on the organisation and management of work and human resources. Many of the most crucial information needs of Finnish and Greek ICU charge nurses were common. In conclusion, it was found that ICU shift leaders make hundreds of ad hoc decisions during the course of a day, related to the allocation of resources and the organisation of patient care. The ad hoc decision-making of ICU shift leaders is a complex multiprofessional process which requires a lot of immediate information.
Real-time support for information related to patient admission, the organisation and management of work, and the allocation of staff resources is especially needed. The preliminary information integration model can be applied when real-time enterprise resource planning systems are developed for intensive care daily management.
Abstract:
In the present work, liquid-solid flow at industrial scale is modeled using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent 14.5. In the literature, there are few studies on liquid-solid flow at industrial scale, and no information about this particular case with modified geometry can be found. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application is studied, including the boundary-layer characteristics. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime. Thus, careful estimation of the flow regime is recommended before modeling; a computational tool for this purpose is developed in this thesis. The homogeneous multiphase model is valid only for homogeneous suspension, and the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspension where the pipe Froude number is greater than 1.0, while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With increasing material density ratio and decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include simplifications of the Navier-Stokes equations like the other models. In addition, the results indicate that the potential location of erosion in the pipe depends on the material density ratio. Possible sedimentation of particles can cause erosion and increase the pressure drop as well. In the pipe bend, secondary flows perpendicular to the main flow especially affect the location of erosion.
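The flow-regime screening described above can be sketched as a small calculation. Note the hedges: the thesis does not state which pipe Froude number definition it uses, so the common densimetric form Fr = v / sqrt(g·D·(ρs − ρl)/ρl) is assumed here, and the material properties below are illustrative, not from the work.

```python
import math

def pipe_froude_number(velocity, pipe_diameter, rho_solid, rho_liquid, g=9.81):
    """Densimetric pipe Froude number (assumed definition):
    Fr = v / sqrt(g * D * (rho_s - rho_l) / rho_l)."""
    return velocity / math.sqrt(g * pipe_diameter * (rho_solid - rho_liquid) / rho_liquid)

def suggest_multiphase_model(fr):
    """Map the flow-regime estimate to the recommendation given in the abstract."""
    if fr > 1.0:
        return "DPM (homogeneous or heterogeneous suspension)"
    return "mixture or Eulerian model (settling particles possible)"

# Illustrative values: sand-like solids (2650 kg/m3) in water, 0.2 m pipe, 3 m/s
fr = pipe_froude_number(3.0, 0.2, 2650.0, 1000.0)
model = suggest_multiphase_model(fr)
```

For these invented values Fr comes out above 1.0, so the screening points to the DPM branch; lower velocities or denser solids would push the case toward the mixture or Eulerian models.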
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Process management refers to improving the key functions of a company. The main functions of the case company – project management, procurement, finance, and human resources – use their own separate systems. The case company is in the process of changing its software, and the different functions will use the same system in the future. This software change causes changes in some of the company's processes; the project cash flow forecasting process is one of them. Cash flow forecasting ensures the sufficiency of money and prepares for possible changes in the future, which helps to ensure the company's viability. The purpose of the research is to describe a new project cash flow forecasting process. In addition, the aim is to analyze the impacts of the process change on the project control department's workload and resources through process measurement, and how the impacts are taken into account in the department's future operations. The research is based on process management. Processes, their descriptions, and the way process management uses the information are discussed in the theory part of this research, which is based on literature and articles. Project cash flow and forecasting-related benefits are also discussed. After this, the as-is and to-be project cash flow forecasting processes are described by utilizing information obtained from the theoretical part as well as the know-how of the project control department's personnel. Written descriptions and cross-functional flowcharts are used for the descriptions. Process measurement is based on interviews with the personnel – mainly cost controllers and department managers. The process change and the integration of the two processes will free up work time for other things, for example, the analysis of costs. In addition, the quality of the cash flow information will improve compared to the as-is process.
When analyzing the department's other main processes, the department's roles and their responsibilities should be checked and redesigned. This way, there will be an opportunity to achieve the best possible efficiency and cost savings.
Abstract:
Fluid handling systems such as pump and fan systems are found to have a significant potential for energy efficiency improvements. To deliver the energy saving potential, there is a need for easily implementable methods to monitor the system output. This is because information is needed to identify inefficient operation of the fluid handling system and to control the output of the pumping system according to process needs. Model-based pump or fan monitoring methods implemented in variable speed drives have proven to be able to give information on the system output without additional metering; however, the current model-based methods may not be usable or sufficiently accurate in the whole operation range of the fluid handling device. To apply model-based system monitoring in a wider selection of systems and to improve the accuracy of the monitoring, this paper proposes a new method for pump and fan output monitoring with variable-speed drives. The method uses a combination of already known operating point estimation methods. Laboratory measurements are used to verify the benefits and applicability of the improved estimation method, and the new method is compared with five previously introduced model-based estimation methods. According to the laboratory measurements, the new estimation method is the most accurate and reliable of the model-based estimation methods.
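One family of previously known methods that such a combined estimation scheme builds on is QP-curve-based estimation, in which the drive's power and speed estimates are mapped to flow through the pump's characteristic curve and the affinity laws. The sketch below is a hedged illustration of that idea, not the paper's method; the characteristic curve points, speeds, and powers are invented.

```python
def estimate_flow_rate(power_kw, speed_rpm, nominal_speed_rpm, qp_curve):
    """Estimate pump flow rate from shaft power and rotational speed.

    Scales the measured power to the nominal speed with the affinity law
    P ~ n^3, reads the flow from the power-flow (QP) characteristic by
    linear interpolation, and scales the flow back with Q ~ n.
    """
    ratio = nominal_speed_rpm / speed_rpm
    power_nominal = power_kw * ratio ** 3
    for (p0, q0), (p1, q1) in zip(qp_curve, qp_curve[1:]):
        if p0 <= power_nominal <= p1:
            q_nominal = q0 + (q1 - q0) * (power_nominal - p0) / (p1 - p0)
            return q_nominal / ratio
    raise ValueError("power outside the characteristic curve")

# Invented QP characteristic at nominal speed: (power kW, flow m3/h) pairs
curve = [(5.0, 0.0), (10.0, 50.0), (15.0, 100.0)]
flow = estimate_flow_rate(8.0, 1200.0, 1450.0, curve)
```

The paper's point is that no single mapping of this kind is accurate over the whole operating range, which is why it combines several estimation methods.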
Abstract:
Studying the testis is complex, because the tissue has a very heterogeneous cell composition and its structure changes dynamically during development. In the reproductive field, cell composition is traditionally studied by morphometric methods such as immunohistochemistry and immunofluorescence. These techniques provide accurate quantitative information about cell composition, cell-cell associations and the localization of the cells of interest. However, the sample preparation, processing, staining and data analysis are laborious and may take several working days. Flow cytometry protocols coupled with DNA stains have played an important role in providing quantitative information on testicular cell populations in ex vivo and in vitro studies. Nevertheless, the addition of specific cell markers such as intracellular antibodies would allow the more specific identification of cells of crucial interest during spermatogenesis. For this study, adult Sprague-Dawley rats were used for the optimization of the flow cytometry protocol. Specific steps within the protocol were optimized to obtain a single-cell suspension representative of the cell composition of the starting material. The fixation and permeabilization procedures were optimized to be compatible with DNA stains and fluorescent intracellular antibodies. Optimization was achieved by quantitative analysis of specific parameters such as the recovery of meiotic cells, the amount of debris, and comparison of the proportions of the various cell populations with already published data. As a result, a new and fast flow cytometry method coupled with DNA staining and intracellular antigen detection was developed. This new technique is suitable for the analysis of population behavior and specific cells during postnatal testis development and spermatogenesis in rodents. The rapid protocol recapitulated the known vimentin and γH2AX protein expression patterns during rodent testis ontogenesis.
Moreover, the assay was applicable for phenotype characterization of SCRbKO and E2F1KO mouse models.
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is the nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year to perform their functions, and they consume enormous energy in the present scenario. Tremendous growth in demand from the IT industry has made it customary to develop newer technologies for the better operation of data centres. Energy conservation activities in data centres mainly concentrate on the air conditioning system, since it is the major mechanical sub-system and consumes a considerable share of the total power consumption of the data centre. The data centre energy matrix is best represented by the power utilization efficiency (PUE), which is defined as the ratio of the total facility power to the IT equipment power. Its value will be greater than one, and a large value of PUE indicates that the sub-systems draw more power from the facility and that the performance of the data centre will be poor from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimizing the air conditioning system brings an enormous opportunity to bring down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have now been introduced by some companies, but they are highly sophisticated and costly and have not attracted much attention in practice.
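The PUE ratio defined above is a one-line calculation; the sketch below restates it in code with illustrative load figures (the 700 kW and 500 kW values are invented for the example, not from the text).

```python
def power_utilization_efficiency(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power (greater than 1 in practice)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative loads: 700 kW facility draw supporting 500 kW of IT equipment
pue = power_utilization_efficiency(700.0, 500.0)  # 1.4, within the achievable 1.4-1.6 band
```

The 200 kW difference in this example is the overhead drawn by cooling, power distribution, and the other sub-systems, which is exactly what air conditioning optimization tries to shrink.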
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte-Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined.
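The kind of Monte-Carlo identifiability analysis described above can be sketched in a few lines. This is a hedged GLUE-style illustration, not the paper's toolbox: a toy two-parameter model stands in for the water quality model, and the "algal" parameter is deliberately made inert to mimic a non-identifiable parameter.

```python
import random

def rmse(sim, obs):
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def identifiability_scan(model, param_ranges, obs, n_samples=2000, top_frac=0.1):
    """Monte-Carlo identifiability scan (a GLUE-style sketch).

    Samples each parameter uniformly from its prior range, keeps the
    best-scoring ('behavioural') parameter sets, and reports how much of
    each prior range those sets still cover: a value near 1.0 means the
    data do not constrain the parameter, i.e. it is non-identifiable.
    """
    random.seed(1)
    runs = []
    for _ in range(n_samples):
        params = {k: random.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        runs.append((rmse(model(params), obs), params))
    runs.sort(key=lambda r: r[0])
    behavioural = [p for _, p in runs[: int(top_frac * n_samples)]]
    spread = {}
    for k, (lo, hi) in param_ranges.items():
        vals = [p[k] for p in behavioural]
        spread[k] = (max(vals) - min(vals)) / (hi - lo)
    return spread

# Toy model: the output depends on 'decay' but not at all on 'algal'
obs = [10.0 * 0.8 ** t for t in range(10)]
toy = lambda p: [10.0 * (1.0 - p["decay"]) ** t for t in range(10)]
spread = identifiability_scan(toy, {"decay": (0.0, 1.0), "algal": (0.0, 1.0)}, obs)
```

As in the paper's finding for the algal-activity parameters, the inert parameter's behavioural values span almost its whole prior range, while the sensitive parameter is tightly constrained around its true value of 0.2.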
Abstract:
1. There is concern over the possibility of unwanted environmental change following transgene movement from genetically modified (GM) rapeseed Brassica napus to its wild and weedy relatives. 2. The aim of this research was to develop a remote sensing-assisted methodology to help quantify gene flow from crops to their wild relatives over wide areas. Emphasis was placed on locating sites of sympatry, where the frequency of gene flow is likely to be highest, and on measuring the size of rapeseed fields to allow spatially explicit modelling of wind-mediated pollen-dispersal patterns. 3. Remote sensing was used as a tool to locate rapeseed fields, and a variety of image-processing techniques was adopted to facilitate the compilation of a spatially explicit profile of sympatry between the crop and Brassica rapa. 4. Classified satellite images containing rapeseed fields were first used to infer the spatial relationship between donor rapeseed fields and recipient riverside B. rapa populations. Such images also have utility for improving the efficiency of ground surveys by identifying probable sites of sympatry. The same data were then also used for the calculation of mean field size. 5. This paper forms a companion paper to Wilkinson et al. (2003), in which these elements were combined to produce a spatially explicit profile of hybrid formation over the UK. The current paper demonstrates the value of remote sensing and image processing for large-scale studies of gene flow, and describes a generic method that could be applied to a variety of crops in many countries. 6. Synthesis and applications. The decision to approve or prevent the release of a GM cultivar is made at a national rather than regional level. It is highly desirable that data relating to the decision-making process are collected at the same scale, rather than relying on extrapolation from smaller experiments designed at the plot, field or even regional scale. 
It would be extremely difficult and labour intensive to attempt to carry out such large-scale investigations without the use of remote-sensing technology. This study used rapeseed in the UK as a model to demonstrate the value of remote sensing in assembling empirical information at a national level.
Abstract:
Smooth flow of production in construction is hampered by the disparity between individual trade teams' goals and the goal of stable production flow for the project as a whole. This is exacerbated by the difficulty of visualizing the flow of work in a construction project. While building information modeling (BIM) addresses some of these issues, it also provides a powerful platform for visualizing work flow in control systems that enable pull flow and deeper collaboration between teams on and off site. The requirements for implementation of a BIM-enabled pull flow construction management software system based on the Last Planner System™, called 'KanBIM', have been specified, and a set of functional mock-ups of the proposed system has been implemented and evaluated in a series of three focus group workshops. The requirements cover the areas of maintenance of work flow stability, enabling negotiation and commitment between teams, lean production planning with sophisticated pull flow control, and effective communication and visualization of flow. The evaluation results show that the system holds the potential to improve work flow and reduce waste by providing both process and product visualization at the work face.
Abstract:
The release of genetically modified plants is governed by regulations that aim to provide an assessment of the potential impact on the environment. One of the most important components of this risk assessment is an evaluation of the probability of gene flow. In this review, we provide an overview of the current literature on gene flow from transgenic plants, providing a framework of issues for those considering the release of a transgenic plant into the environment. For some plants, gene flow from transgenic crops is well documented, and this information is discussed in detail in this review. Mechanisms of gene flow vary from plant species to plant species and range from asexual propagation, through short- or long-distance pollen dispersal mediated by insects or wind, to seed dispersal. Volunteer populations of transgenic plants may occur where seed is inadvertently spread during harvest or commercial distribution. If there are wild populations related to the transgenic crop, then hybridization and eventually introgression in the wild may occur, as it has for herbicide-resistant transgenic oilseed rape (Brassica napus). Tools to measure the amount of gene flow, experimental data measuring the distance of pollen dispersal, and experiments measuring hybridization and seed survivability are discussed in this review. The various methods that have been proposed to prevent gene flow from genetically modified plants are also described. The current "transgenic traits" in the major crops confer resistance to herbicides and certain insects. Such traits could confer a selective advantage (an increase in fitness) in wild plant populations in some circumstances, were gene flow to occur. However, there is ample evidence that gene flow from crops to related wild species occurred before the development of transgenic crops, and this should be taken into account in the risk assessment process.
Abstract:
The self-assembly into wormlike micelles of a poly(ethylene oxide)-b-poly(propylene oxide)-b-poly(ethylene oxide) triblock copolymer Pluronic P84 in aqueous salt solution (2 M NaCl) has been studied by rheology, small-angle X-ray and neutron scattering (SAXS/SANS), and light scattering. Measurements of the flow curves by controlled stress rheometry indicated phase separation under flow. SAXS on solutions subjected to capillary flow showed alignment of micelles at intermediate shear rates, although loss of alignment was observed for high shear rates. For dilute solutions, SAXS and static light scattering data on unaligned samples could be superposed over three decades in scattering vector, providing unique information on the wormlike micelle structure over several length scales. SANS data provided information on even shorter length scales, in particular, concerning "blob" scattering from the micelle corona. The data could be modeled based on a system of semiflexible self-avoiding cylinders with a circular cross-section, as described by the wormlike chain model with excluded volume interactions. The micelle structure was compared at two temperatures close to the cloud point (47 degrees C). The micellar radius was found not to vary with temperature in this region, although the contour length increased with increasing temperature, whereas the Kuhn length decreased. These variations result in an increase of the low-concentration radius of gyration with increasing temperature. This was consistent with dynamic light scattering results, and, applying theoretical results from the literature, this is in agreement with an increase in endcap energy due to changes in hydration of the poly(ethylene oxide) blocks as the temperature is increased.
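The reported trend (larger radius of gyration as the contour length grows at roughly fixed flexibility) can be illustrated with the classical Benoit-Doty expression for the radius of gyration of an ideal wormlike chain. This is only a sketch: the dimensions below are invented, and the excluded-volume interactions included in the full micelle model are neglected here.

```python
import math

def wormlike_rg(contour_length, kuhn_length):
    """Radius of gyration of an ideal wormlike chain (Benoit-Doty result).

    Uses the persistence length lp = b/2, with b the Kuhn length:
    Rg^2 = L*lp/3 - lp^2 + 2*lp^3/L - (2*lp^4/L^2) * (1 - exp(-L/lp)).
    """
    lp = kuhn_length / 2.0
    L = contour_length
    rg2 = (L * lp / 3.0 - lp ** 2 + 2.0 * lp ** 3 / L
           - 2.0 * lp ** 4 / L ** 2 * (1.0 - math.exp(-L / lp)))
    return math.sqrt(rg2)

# Illustrative (invented) dimensions in nm: doubling the contour length at a
# fixed Kuhn length increases Rg, the trend reported on heating toward the cloud point
rg_cool = wormlike_rg(100.0, 20.0)
rg_warm = wormlike_rg(200.0, 20.0)
```

In the study the Kuhn length also decreases on heating, so the two effects compete, but the growth in contour length dominates and the low-concentration radius of gyration still increases.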