899 results for system implementation


Relevance:

30.00%

Publisher:

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. These can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to the postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
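The dynamic reconfiguration idea described above can be pictured with a small, purely illustrative Python sketch; it is not derived from the thesis's formal refinement models, and the class, task and core names are invented for the example. An agent keeps a task map over cores and migrates work from a core flagged as faulty to a spare, trading spare capacity for resilience.

# Conceptual sketch (not the thesis's formal models): an agent that remaps
# tasks away from cores flagged as faulty, illustrating the kind of dynamic
# reconfiguration mechanism integrated into agent functionality.

class ReconfigurationAgent:
    def __init__(self, cores, spare_cores):
        self.task_map = {core: [] for core in cores}   # core id -> list of tasks
        self.spares = list(spare_cores)                # idle cores kept in reserve

    def assign(self, task, core):
        self.task_map[core].append(task)

    def handle_fault(self, faulty_core):
        """Move all tasks from a faulty core to a spare, if one is available."""
        if faulty_core not in self.task_map:
            return False
        if not self.spares:
            return False                               # no resilience margin left
        replacement = self.spares.pop(0)
        self.task_map[replacement] = self.task_map.pop(faulty_core)
        return True


agent = ReconfigurationAgent(cores=[0, 1, 2], spare_cores=[3])
agent.assign("video_decode", 0)
agent.assign("sensor_fusion", 1)
agent.handle_fault(1)          # tasks on core 1 migrate to the spare core 3
print(agent.task_map)          # {0: ['video_decode'], 2: [], 3: ['sensor_fusion']}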

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and the hearts and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and the pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and a spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for dynamic radiotracer studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as cardiology, neurology and oncology.
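Since the abstract cites a maximum-likelihood reconstruction from planar projections, a minimal sketch of the standard MLEM update may help the reader place the method; the system matrix, projection data and iteration count below are toy placeholders, not the authors' implementation or parameters.

# Minimal sketch of a maximum-likelihood EM (MLEM) update, the family of
# algorithm the abstract refers to. The system matrix A (projector) and the
# measured projections y below are toy placeholders.
import numpy as np

def mlem(A, y, n_iter=20):
    """MLEM: x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k))."""
    x = np.ones(A.shape[1])                  # non-negative initial estimate
    sensitivity = A.T @ np.ones(A.shape[0])  # back-projection of ones
    for _ in range(n_iter):
        forward = A @ x                      # expected projections
        ratio = y / np.maximum(forward, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Toy example: 3 detector bins viewing 2 voxels.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
true_x = np.array([2.0, 1.0])
y = A @ true_x                               # noiseless projections
print(mlem(A, y))                            # converges towards [2.0, 1.0]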

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting was considered as a replacement for the conventional methods used for longitudinal cutting in paper making machines, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s. Since then, laser cutting and processing have been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional cutting methods in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. The literature review describes the advantages, disadvantages and challenges of laser technology applied to the cutting of paper material, with particular attention to cutting of a moving paper web. Based on the studied laser cutting capabilities and the problems of conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser trimming of the paper web edges in the wet end was estimated to be the most promising application. This assessment was based on the rate of web breaks: up to 64% of all web breaks were found to occur in the wet end, particularly at the so-called open draws where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects at the paper web edge were the main cause of tearing initiation and consequent web breaks. It was assumed that laser cutting could improve the tensile strength of the cut edge owing to the high cutting quality and the sealing effect of the edge after cutting; studies of laser ablation of cellulose support this claim. The linear energy needed for cutting was calculated from the paper web properties at the intended cutting location and verified with a series of cutting trials. The laser energy needed in practice deviated from the calculated values, which can be explained by heat transfer via radiation during laser cutting and by the different absorption characteristics of dry and moist paper. Laser cut samples, both dry and moist (dry matter content about 25-40%), were tested for strength properties. The tensile strength and strain at break of laser cut samples were similar to those of non-laser cut samples. The chosen method, however, did not address the tensile strength of the laser cut edge in particular, so the assumption that laser cutting improves strength properties was not fully proven. The effect of laser cutting on possible contamination of mill broke (recycling of the trimmed edge) was also evaluated. Laser cut samples (both dry and moist) were tested for dirt particle content; the tests revealed that dust particles can accumulate on the surface of moist samples, which has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. Material loss due to evaporation during laser cutting and the amount of solid residue after cutting were evaluated as well.
Edge trimming with a laser would result in 0.25 kg/h of solid residue and 2.5 kg/h of material lost to evaporation. Implementation schemes and the required laser equipment were discussed. In general, a laser cutting system would require two laser sources (one for each cutting zone), a set of beam delivery and focusing optics, and cutting heads. To increase system reliability, it was suggested that each laser source have double capacity, which would allow cutting to continue with one laser source working at full capacity for both cutting zones. Laser technology is already at the required level and needs no additional development. Moreover, the availability of high-power laser sources leaves ample headroom for speed increases, which supports the trend towards faster paper making machines. A laser cutting system would require a special roll to support cutting; a scheme for such a roll and its integration into the paper making machine were proposed. Laser cutting can be performed at the central roll of the press section, before the open draw where many web breaks occur, where it has the potential to improve the runnability of the machine. The economic performance of laser cutting was assessed by comparing a laser cutting system with water jet cutting operating under the same conditions. Laser cutting would still be about twice as expensive as water jet cutting, mainly because of the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss due to evaporation, whereas water jet cutting causes almost none. Despite these difficulties, implementing laser cutting in a paper making machine can be beneficial, chiefly through the possibility of improving cut edge strength and consequently reducing the number of web breaks. The ability of laser cutting to sustain cutting speeds exceeding the current speeds of paper making machines is another argument for considering the technology in the design of new high-speed machines.
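The linear cutting energy mentioned above is, in its simplest form, the laser power delivered per metre of web travelled. The sketch below illustrates only that basic relation; the power and web-speed figures are hypothetical placeholders, not the calculated or measured values of the study.

# Illustrative sketch only: the basic relation linking laser power, cutting
# speed and linear cutting energy (E_linear = P / v). The numbers below are
# hypothetical placeholders.

def linear_cutting_energy(laser_power_w: float, web_speed_m_per_min: float) -> float:
    """Return linear energy in J/m delivered to the web at a given speed."""
    web_speed_m_per_s = web_speed_m_per_min / 60.0
    return laser_power_w / web_speed_m_per_s

def required_power(linear_energy_j_per_m: float, web_speed_m_per_min: float) -> float:
    """Invert the relation: power needed to supply a target linear energy."""
    return linear_energy_j_per_m * web_speed_m_per_min / 60.0

# Example: a 400 J/m requirement at a 1500 m/min web speed (placeholder values).
print(required_power(400.0, 1500.0))            # -> 10000.0 W, i.e. a 10 kW source
print(linear_cutting_energy(10000.0, 1500.0))   # -> 400.0 J/m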

Relevance:

30.00%

Publisher:

Abstract:

Over recent years, smart grids have received great public attention. Many of the proposed functionalities rely on power electronics, which play a key role in the smart grid together with the communication network. However, "smartness" is not the only driver motivating research towards distribution networks based on power electronics; the vulnerability of the network to natural hazards has resulted in tightening requirements for supply security, set both by electricity end-users and by the authorities. Because of favorable price development and advancements in the field, direct current (DC) distribution has become an attractive alternative for distribution networks. In this doctoral dissertation, power electronic converters for a low-voltage DC (LVDC) distribution system are investigated. These include the rectifier located at the beginning of the LVDC network and the customer-end inverter (CEI) on the customer premises. Rectifier topologies are introduced, and topologies are chosen for the analysis according to the LVDC system requirements. Similarly, suitable CEI topologies are addressed and selected for study. The application of power electronics to electricity distribution poses some new challenges. Because the electricity end-user is supplied through the CEI, the CEI is responsible for the end-user voltage quality, but it also has to be able to supply adequate current in all operating conditions, including a short circuit, to ensure electrical safety. Supplying short-circuit current with power electronics requires additional measures; therefore, the short-circuit behavior is described and methods to overcome the high-current supply to the fault are proposed. Power electronic converters also produce common-mode (CM) and radio-frequency (RF) electromagnetic interference (EMI), which is not present in AC distribution; hence, its magnitude is investigated. To enable comprehensive research in the LVDC distribution field, a research site was built in a public low-voltage distribution network. The implementation was a joint task of the LVDC research team of Lappeenranta University of Technology and the power company Suur-Savon Sähkö Oy. As a result, measurements could be conducted in an actual environment, which is especially important for the EMI studies. The main results of the work concern the short-circuit operation of the CEI and the EMI issues. The applicability of power electronic converters to electricity distribution is demonstrated, and suggestions for future research are proposed.
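As a rough illustration of the short-circuit problem described above, the sketch below compares the fault current a current-limited inverter can supply with the instantaneous trip threshold of a conventional miniature circuit breaker. The multiples used are typical textbook assumptions for illustration, not values or methods from the dissertation.

# Illustrative sketch of the short-circuit issue: a customer-end inverter can
# typically supply only a small multiple of its rated current, while
# conventional overcurrent protection expects a much larger fault current for
# a fast (magnetic) trip. All numbers are assumptions for illustration.

def can_trip_instantaneously(rated_current_a: float,
                             inverter_limit_multiple: float,
                             breaker_trip_multiple: float) -> bool:
    """True if the current-limited inverter reaches the breaker's
    instantaneous trip threshold."""
    available = inverter_limit_multiple * rated_current_a
    required = breaker_trip_multiple * rated_current_a
    return available >= required

# Assumed figures: inverter limited to 2 x rated current; a type C miniature
# circuit breaker trips magnetically at roughly 5-10 x rated current.
print(can_trip_instantaneously(25.0, 2.0, 10.0))   # False -> extra measures needed
print(can_trip_instantaneously(25.0, 12.0, 10.0))  # True only with heavy oversizing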

Relevance:

30.00%

Publisher:

Abstract:

Over the past decades, large-scale educational reforms have been elaborated and implemented in many countries, often resulting in partial or complete failure. These results have led researchers to study policy processes in order to address this particular challenge. Studies of implementation processes have brought to light a causal relationship between the implementation process and the effectiveness of a reform. This study aims to describe the implementation process of educational change in Finland, which has produced effective educational reforms over the last 50 years. The case used for the purpose of this study is the national reform of undivided basic education (yhtenäinen peruskoulu) implemented at the end of the 1990s. This research therefore aims to describe how the Finnish undivided basic education reform was implemented. The research was carried out using a pluralist and structuralist approach to the policy process and was analyzed according to the hybrid model of the implementation process. The data were collected using a triangulation of methods, i.e. documentary research, interviews and questionnaires. The data were qualitative and were analyzed using content analysis methods. This study concludes that the undivided basic education reform was applied in a very decentralized manner, which reflects the decentralized system present in Finland. Central authorities provided a clear vision of the purpose of the reform but did not control the implementation process; rather, they provided extensive support in the form of the transmission of information and the development of collaborative networks. Local authorities had complete autonomy in terms of decision-making and the implementation process. Discussions, debates and decisions regarding implementation took place at the local level and included the participation of all actors present in the field. Implementation methods differed from one region to another, which is a consequence of variation in the level of commitment of local actors as well as of the diversity of local realities. The reform was implemented according to existing structures and values, which means that it was coherent with the context in which it was implemented. These results cannot be generalized to all implementation processes of educational change in Finland, but they give good insight into what the model used in Finland could be. Future studies could attempt to confirm the model described here by studying other reforms implemented in Finland.


Relevance:

30.00%

Publisher:

Abstract:

A company's successful performance in the market is related to the quality of its human capital management, which aims to improve the company's internal performance and the external implementation of the core business strategy. Companies with a matrix structure, focused on the development and realization of innovation and technologies for an uncertain market, need to select their approach to the HR management system carefully. Human resource management has a significant impact on the organization and uses a variety of instruments, such as corporate information systems, to fulfil its functions and objectives. There are three approaches to strategic control management, depending on the degree of intervention in employee decision-making, the development of skills, and the employee's integration into the business strategy. Mainstream research has focused only on the framework of strategic HR planning and the general productivity of the firm, not on the features of the organizational structure and the capabilities of corporate software for human capital. This study tackles the aforementioned challenges, typical of a matrix organization, by using HR control management tools and the corporate information system. The detailed analysis in this master's thesis of an industry producing and selling electric motor and heating equipment provides the opportunity to improve the system for HR control and demonstrates its application in ERP software. The results emphasize the sustainable role of matrix HR input control in creating independent project teams for the matrix structure that are able to respond to various market uncertainties and use their skills to improve performance. Corporate information systems can be integrated into the input control system by means of output monitoring to regulate and evaluate team processes, using key performance indicators and reporting systems.

Relevance:

30.00%

Publisher:

Abstract:

The Chinese welding industry is growing every year due to the rapid development of the Chinese economy. Increasingly, companies around the world are looking to use Chinese enterprises as their cooperation partners. However, the Chinese welding industry also has its weaknesses, such as relatively low quality and weak management. A modern, advanced welding management system appropriate for local socio-economic conditions is required to enable Chinese enterprises to further enhance their business development. The thesis researches the design and implementation of a new welding quality management system for China, called the 'welding production quality control management model in China' (WQMC). Constructed on the basis of a survey analysis and in-company interviews, the welding management system comprises the following elements and perspectives: a 'Localized congenital existing problem resolution strategies' (LCEPRS) database, a 'human factor designed training system' (HFDT) training strategy, the theory of modular design, ISO 3834 requirements, total welding management (TWM), and lean manufacturing (LEAN) theory. The methods used in the research are a literature review, questionnaires, interviews, and the author's model design experience and observations, i.e. the approach is primarily qualitative and phenomenological. The thesis describes the design and implementation of an HFDT strategy in Chinese welding companies. Such training is an effective way to increase employees' awareness of quality and of issues associated with quality assurance. The study identified widely existing problems in the Chinese welding industry and constructed an LCEPRS database that can be used in efforts to mitigate and avoid common problems. The work uses the theory of modular design, TWM and LEAN as tools for the implementation of the WQMC system.

Relevance:

30.00%

Publisher:

Abstract:

This qualitative study explored secondary teachers' perceptions of scheduling in relation to pedagogy, curriculum, and the observation of student learning. Its objective was to determine the best way to organize scheduling for the delivery of Ontario's new 4-year curriculum. Six participants were chosen: two were teaching in a semestered timetable, one in a traditional timetable, and three had experience with both schedules. Participants related a "pressure cooker" lived experience, with weaker students in the semester system experiencing a particularly harsh environment. The inadequate amount of time for review in content-heavy courses, gap scheduling problems, catch-up difficulties for students missing classes, and the fast pace of semestering were identified as factors negatively impacting these students. Government testing adds to the pressure by shifting teachers' time and attention in the classroom from deeper learning to superficial coverage of material, from curriculum as lived to curriculum as text to be covered. Scheduling choice should be available in public education to accommodate the needs of all students. Curriculum guidelines need to be revamped to reflect the content that teachers believe is necessary for successful course delivery. Applied-level courses need to be developed for students who are not academically inferior but who learn differently.

Relevance:

30.00%

Publisher:

Abstract:

If you want to know whether a property is true or not in a specific algebraic structure, you need to test that property on the given structure. This can be done by hand, which can be cumbersome and error-prone. In addition, the time consumed in testing depends on the size of the structure to which the property is applied. We present an implementation of a system for finding counterexamples and testing properties of models of first-order theories. This system is intended to provide a convenient and paperless environment for researchers and students investigating or studying such models, and algebraic structures in particular. To implement a first-order theory in the system, a suitable first-order language and some axioms are required. The components of a language are given by a collection of variables, a set of predicate symbols, and a set of operation symbols. Variables and operation symbols are used to build terms; terms, predicate symbols, and the usual logical connectives are used to build formulas. A first-order theory then consists of a language together with a set of closed formulas, i.e. formulas without free occurrences of variables. The set of formulas is also called the axioms of the theory. The system uses several different formats to allow the user to specify languages, to define axioms and theories, and to create models. Besides the obvious operations and tests on these structures, we have introduced the notion of a functor between classes of models in order to generate more complex models from given ones automatically. As an example, we use the system to create several lattice structures starting from a model of the theory of pre-orders.
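The kind of exhaustive property testing such a system performs can be sketched for a single axiom. The example below checks transitivity of a binary relation on a small finite model and returns a counterexample when it fails; it mirrors the described functionality only and is not the system's actual code.

# Sketch of the core idea: exhaustively evaluate a closed first-order formula
# on a small finite model and report a counterexample when it fails.
from itertools import product

def check_transitivity(universe, leq):
    """Test  forall x, y, z: x<=y and y<=z  implies  x<=z  on a finite model."""
    for x, y, z in product(universe, repeat=3):
        if leq(x, y) and leq(y, z) and not leq(x, z):
            return (x, y, z)          # counterexample found
    return None                       # property holds on this model

# A pre-order on {1, 2, 3, 4}: divisibility.
universe = [1, 2, 3, 4]
divides = lambda a, b: b % a == 0
print(check_transitivity(universe, divides))   # None: divisibility is transitive

# A relation that is *not* transitive: "differs by exactly 1".
adjacent = lambda a, b: abs(a - b) == 1
print(check_transitivity(universe, adjacent))  # -> a counterexample, here (1, 2, 1)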

Relevance:

30.00%

Publisher:

Abstract:

The use of formal methods is increasingly common in software development, and type systems are the most successful formal method. The advancement of formal methods presents new challenges as well as new opportunities. One of the challenges is to ensure that a compiler preserves the semantics of programs, so that the properties guaranteed about the source code also apply to the executable code. This thesis presents a compiler that translates a higher-order functional language with polymorphism into a typed assembly language, whose main property is that type preservation is verified in an automated manner, by means of type annotations on the compiler's code. Our compiler implements the code transformations essential for a higher-order functional language, namely a CPS conversion, a closure conversion, and code generation. We present the details of the strongly typed representations of the intermediate languages and the constraints they impose on the implementation of the code transformations. Our objective is to guarantee type preservation with a minimum of annotations, and without compromising the overall modularity and readability of the compiler's code. This objective is largely achieved in the treatment of the basic features of the language (the "simple types"), in contrast to the treatment of polymorphism, which still requires substantial work to satisfy type checking.
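As an illustration of one of the passes mentioned above, the sketch below performs a textbook call-by-value CPS conversion on a tiny untyped lambda-calculus AST. The thesis's pass is typed and works on richer intermediate languages, so this is only a simplified stand-in with invented constructor names.

# Toy continuation-passing style (CPS) conversion for a minimal untyped
# lambda calculus. AST: ('var', x) | ('lam', x, body) | ('app', f, a).
counter = 0
def fresh(prefix):
    global counter
    counter += 1
    return f"{prefix}{counter}"

def cps(e):
    """Return the CPS form of e: a term expecting a continuation argument."""
    kind = e[0]
    k = fresh("k")
    if kind == 'var':
        return ('lam', k, ('app', ('var', k), e))
    if kind == 'lam':
        _, x, body = e
        return ('lam', k, ('app', ('var', k), ('lam', x, cps(body))))
    if kind == 'app':
        _, f, a = e
        fv, av = fresh("f"), fresh("v")
        return ('lam', k,
                ('app', cps(f),
                 ('lam', fv,
                  ('app', cps(a),
                   ('lam', av,
                    ('app', ('app', ('var', fv), ('var', av)), ('var', k)))))))
    raise ValueError(f"unknown node: {kind}")

# (\x. x) y  becomes a term that threads an explicit continuation k.
print(cps(('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))))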

Relevance:

30.00%

Publisher:

Abstract:

This master's thesis takes an in-depth look at the cases of young offenders referred to the mediation process at Trajet, an alternative justice organization in Montreal. More specifically, the objectives are to describe the characteristics of the referred cases, to explore their relationship with participation in the mediation process and with its outcome, and to compare these same elements across the two periods covered by the project: the one during which the Young Offenders Act (LJC) applied and the one in which the Youth Criminal Justice Act (LSJPA), together with the Framework Agreement, came into force. Quantitative research methods were used to analyze the cases referred to Trajet over a 10-year period (1999-2009). Descriptive analyses established the characteristics shared by, or distinguishing, the cases referred to Trajet and those referred to other mediation programs. Bivariate analyses revealed a significant relationship between participation in the mediation process and the offenders' age and sex, the number of crimes they had committed, the number of victims involved, the type of victim, the victims' age and sex, and the delay between the commission of the crime and the transfer of the file to Trajet. A logistic regression revealed that three characteristics significantly predict participation in mediation: the offenders' age, the number of victims involved, and the delay between the commission of the crime and the transfer of the file to Trajet. The low proportion of failed mediation processes made bivariate and multivariate analyses of the mediation outcome unnecessary. Significant differences were found between the cases referred to mediation under the LJC and those referred under the LSJPA with the Framework Agreement regarding the type of crime, the number of offences committed, the existence of a previous referral to Trajet, the reasons why mediation did not take place, restitution in all its forms and, especially, financial restitution. Participation in mediation appeared more likely under the LSJPA than under the LJC. Partial correlations showed that different characteristics were associated with participation in mediation in the two periods in question; only one characteristic, the victims' sex, was significantly related to participation in mediation under both the LJC and the LSJPA. The results of this project provide a more in-depth knowledge of the cases referred to Trajet for mediation and an exploration of the impact of the LSJPA and the Framework Agreement on this process. However, because the sample is limited to cases handled at Trajet, these results cannot be generalized to all cases referred to Quebec's alternative justice organizations for mediation.
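For readers unfamiliar with the statistical model used, the sketch below shows the general form of a logistic regression predicting mediation participation from the three significant predictors named above. The data are fabricated placeholders generated for illustration; they are not the Trajet case files, and the coefficients have no substantive meaning.

# Sketch of the kind of logistic regression reported in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
age = rng.integers(12, 18, n)            # offender age (years)
victims = rng.integers(1, 4, n)          # number of victims involved
delay_days = rng.integers(30, 365, n)    # crime-to-referral delay (days)

# Synthetic outcome loosely tied to the three predictors (illustration only).
logit = 0.3 * (age - 15) - 0.5 * (victims - 2) - 0.004 * (delay_days - 180)
participated = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, victims, delay_days])
model = LogisticRegression(max_iter=1000).fit(X, participated)
print(dict(zip(["age", "victims", "delay_days"], model.coef_[0])))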

Relevance:

30.00%

Publisher:

Abstract:

At a time when international immigration is increasingly difficult and selective, refugee status is a precious public good that grants certain non-citizens access to, and membership in, the host country. Resting on the decision-maker's discretionary judgment, refugee status is granted only to claimants who establish a well-founded fear of persecution should they return to their country of origin. In Canada, the largest independent administrative tribunal, the Immigration and Refugee Board of Canada (CISR), is responsible for hearing asylum claimants and rendering refugee status decisions. This dissertation seeks to understand the disparities in refugee status grant rates among CISR decision-makers, who are politically appointed. Given the lack of empirical research on how Canada allocates entry opportunities and legal status to non-citizens, it was necessary to shed light on how the administration operates in this regard. Exploring refugee decision-making from the perspective of Street-Level Bureaucracy Theory (SLBT) and with an ethnographic methodology combining direct observation, semi-structured interviews and document analysis, the study first sought to understand whether variation in grant rates results from differences in decision-makers' discretionary practices and reasoning, and then to trace the organizational factors that fuel these differences. In line with SLBT research documenting how the work situation structures discretion and the importance of individual perceptions in decision-making, this study highlights substantive differences among decision-makers regarding work routines, their conception of asylum claimants, and the best way to carry out their work. The analysis shows how decision-makers apply different approaches during hearings, ranging from rigid interrogation to a more flexible interview. Despite the organizational constraints placed on decision-makers to increase consistency and efficiency, the weight of credibility assessment and the invisibility of the decision space leave ample room for the exercise of discretion. Even in environments such as administrative tribunals, where an overabundance of rules strongly limits discretion, decision-making is far from synonymous with adherence to the principles of neutrality and hierarchy. Rather, discretion is embedded in the context of interaction routines, the work situation, adherence to rules, and the law. Even in organizations that institutionalize and standardize training and communicate their expectations to decision-makers clearly, the discretionary character of the decision is by nature difficult, if not impossible, to control and discipline. When confronted with ambiguous objectives and demands that conflict with their discretionary power, decision-makers reinterpret the definition of their work and routinize their practices. They develop an organizationally acceptable encounter routine for assessing the claimants before them.
This dissertation shows how claimants, their testimony and their evidence are treated unequally, and how this treatment affects refugee decisions.

Relevance:

30.00%

Publisher:

Abstract:

In the present scenario of energy demand overtaking energy supply, top priority is given to energy conservation programs and policies. Most process plants operate on a continuous basis and consume large quantities of energy. Efficient management of a process system can lead to energy savings, improved process efficiency, lower operating and maintenance costs, and greater environmental safety. Reliability and maintainability of the system are usually considered at the design stage and depend on the system configuration. However, with the growing need for energy conservation, most existing process systems are either modified or in a state of modification with a view to improving energy efficiency. Often these modifications result in a change in system configuration, thereby affecting the system reliability. It is important that system modifications for improving energy efficiency not come at the cost of reliability. Any new proposal for improving the energy efficiency of the process or equipment must prove economically feasible to gain acceptance for implementation. To establish the economic feasibility of a new proposal, the general practice is to compare the benefits that can be derived over the lifetime, together with the operating and maintenance costs, against the investment to be made. Quite often, the reliability aspects (or losses due to unavailability) are not taken into consideration. Plant availability is a critical factor in the economic performance evaluation of any process plant. The focus of the present work is to study the effect of system modifications for improving energy efficiency on system reliability. A generalized model for the valuation of a process system incorporating reliability is developed and used as a tool for the analysis. It can provide an awareness of the potential performance improvements of the process system and can be used to arrive at the change in process system value resulting from system modification. The model also arrives at the payback of the modified system by taking reliability aspects into consideration, and it is used to study the effect of various operating parameters on system value. The concept of breakeven availability is introduced, and an algorithm for the allocation of component reliabilities of the modified process system based on the breakeven system availability is also developed. The model was applied to various industrial situations.
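One way to read the breakeven-availability idea is sketched below under a deliberately simplified value model (availability-weighted annual benefit). The function names and figures are assumptions made for illustration and do not reproduce the generalized valuation model developed in the work.

# Hedged sketch: factoring availability into a payback comparison and a
# "breakeven availability" at which a modified system just matches the
# reference system's annual net benefit. Simplified value model, assumed here.

def annual_net_benefit(availability: float,
                       gross_benefit_per_year: float,
                       operating_cost_per_year: float) -> float:
    """Benefit actually realized, scaled by the fraction of time the system
    is available."""
    return availability * gross_benefit_per_year - operating_cost_per_year

def payback_years(investment: float, availability: float,
                  gross_benefit_per_year: float,
                  operating_cost_per_year: float) -> float:
    return investment / annual_net_benefit(
        availability, gross_benefit_per_year, operating_cost_per_year)

def breakeven_availability(reference_net_benefit: float,
                           gross_benefit_per_year: float,
                           operating_cost_per_year: float) -> float:
    """Availability at which the modified system just matches the reference
    (unmodified) system's annual net benefit."""
    return (reference_net_benefit + operating_cost_per_year) / gross_benefit_per_year

# Placeholder figures: an efficiency retrofit raises gross benefit, but the
# new configuration must stay above ~0.89 availability to beat the old system.
old = annual_net_benefit(0.97, 1_000_000, 300_000)       # reference system
print(breakeven_availability(old, 1_150_000, 350_000))   # ~0.887
print(payback_years(500_000, 0.95, 1_150_000, 350_000))  # ~0.67 years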

Relevance:

30.00%

Publisher:

Abstract:

Shrimp aquaculture has provided tremendous opportunity for the economic and social upliftment of rural communities in the coastal areas of our country. Over a hundred thousand farmers, of whom about 90% belong to the small and marginal category, are engaged in shrimp farming. Penaeus monodon is the most predominant cultured species in India and is mainly exported to highly sophisticated, quality- and safety-conscious world markets. Food safety has been of concern to humankind since the dawn of history, and this concern resulted in the evolution of a cost-effective food safety assurance method, the Hazard Analysis Critical Control Point (HACCP) system. Considering the major contribution of cultured Penaeus monodon to total shrimp production and the economic losses encountered due to disease outbreaks, and also because traditional methods of quality control and end-point inspection cannot guarantee the safety of our cultured seafood products, it is essential that science-based preventive approaches such as HACCP and Prerequisite Programmes (PRP) be implemented in our shrimp farming operations. PRP is considered a support system that provides a solid foundation for HACCP. The safety of postlarvae (PL) supplied for brackish water shrimp farming has also become an issue of concern over the past few years. The quality and safety of hatchery-produced seed have been deteriorating, and disease outbreaks have become very common in hatcheries. It is in this context that the necessity of following strict quarantine measures, with standards and codes of practice, becomes significant. Though there has been much debate on the need to extend the focus of seafood safety assurance from processing and export to the pre-harvest and hatchery rearing phases, experimental moves in this direction have been rare or non-existent. Only an integrated management system can assure effective control of quality, hygiene and safety related issues. This study therefore aims at designing a safety and quality management system model for implementation in shrimp farming and hatchery operations by linking the concepts of HACCP and PRP.