962 results for Reliability management


Relevance:

30.00%

Publisher:

Abstract:

The electronics industry is developing rapidly, together with the increasingly complex problem of cooling microelectronic equipment. It has now become necessary for thermal design engineers to consider the problem of equipment cooling at some level. The use of Computational Fluid Dynamics (CFD) for such investigations is fast becoming a powerful and almost essential tool for the design, development and optimisation of engineering applications. However, turbulence models remain a key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed, together with the wall functions implemented. In order to resolve the abrupt fluctuations experienced by the turbulent energy and other parameters in near-wall regions and shear layers, a particularly fine computational mesh is necessary, which inevitably increases the computer storage and run-time requirements. This paper discusses results from an investigation into the accuracy of currently used turbulence models. A newly formulated transitional hybrid turbulence model is also introduced, with comparisons against experimental data.
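To make the near-wall meshing constraint concrete, the sketch below estimates the first-cell height needed to hit a target y+ using standard flat-plate skin-friction correlations; the correlation choice and the flow values are illustrative assumptions, not taken from the paper.

```python
import math

def first_cell_height(u_inf, length, nu, rho=1.2, y_plus_target=1.0):
    """Estimate the wall-normal height of the first mesh cell for a
    target y+, using flat-plate turbulent skin-friction correlations
    (an illustrative assumption, not the paper's turbulence model)."""
    re_x = u_inf * length / nu                 # Reynolds number
    cf = 0.026 / re_x ** (1.0 / 7.0)           # skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2        # wall shear stress
    u_tau = math.sqrt(tau_w / rho)             # friction velocity
    return y_plus_target * nu / u_tau          # y = y+ * nu / u_tau

# Hypothetical airflow over a 0.3 m board at 5 m/s (air at ~20 C)
print(f"first cell height ~ {first_cell_height(5.0, 0.3, 1.5e-5):.2e} m")
```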

Relevance:

30.00%

Publisher:

Abstract:

The article consists of a PowerPoint presentation on an integrated reliability and prognostics prediction methodology for power electronic modules. The areas discussed include: the power electronics flagship; design for reliability; IGBT modules; design for manufacture; power module components; reliability prediction techniques; failure-based reliability; etc.

Relevance:

30.00%

Publisher:

Abstract:

Two important strands of research within the literature on Environmental Operations Management (EOM) relate to environmental approach and performance. Often in this research the links between environmental approach, environmental performance and EOM are considered separately, with little consideration given to the interrelationships between them. This study develops and tests a theoretical framework that combines these two strands to explore how UK food manufacturers approach EOM. The framework considers the relationships between an environmentally pro-active strategic orientation, EOM, and environmental and cost performance. A cross-sectional survey was developed to collect data from a sample of 1,200 food manufacturing firms located within the UK. Responses were sought from production and operations managers who are knowledgeable about the environmental operations practices within their firms; a total of 149 complete and usable responses were obtained. The reliability and validity of the scales used in the survey were tested using exploratory factor analysis, prior to testing the hypotheses underpinning the theoretical framework using hierarchical regression analysis. Our results support a link between environmental proactivity, environmental practices and performance, consistent with the natural resource-based view (NRBV) and a number of studies in the extant literature. In considering environmental proactivity as a standalone concept that influences the implementation of the environmental practices outlined in the NRBV, our study generates some novel insights into these links. Furthermore, our results provide interesting insights for managers within the food industry, who can identify the potential benefits of certain practices for performance within this unique context.
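For readers unfamiliar with the analysis pipeline reported here, the following is a minimal sketch of hierarchical regression: predictor blocks are entered in steps and the change in R² is inspected. The variable names and synthetic data are hypothetical, not the study's instrument.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 149  # matches the study's number of usable responses

# Hypothetical survey scales (synthetic data for illustration only)
proactivity = rng.normal(size=n)                     # strategic orientation
practices = 0.6 * proactivity + rng.normal(size=n)   # EOM practices
performance = 0.5 * practices + rng.normal(size=n)   # outcome measure

# Step 1: baseline block (proactivity only)
X1 = sm.add_constant(np.column_stack([proactivity]))
m1 = sm.OLS(performance, X1).fit()

# Step 2: add the practices block and inspect the change in R^2
X2 = sm.add_constant(np.column_stack([proactivity, practices]))
m2 = sm.OLS(performance, X2).fit()

print(f"R2 step 1: {m1.rsquared:.3f}, step 2: {m2.rsquared:.3f}, "
      f"delta: {m2.rsquared - m1.rsquared:.3f}")
```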

Relevance:

30.00%

Publisher:

Abstract:

Demand Side Management (DSM) plays an important role in the Smart Grid. It involves large-scale access points, massive numbers of users, heterogeneous infrastructure and dispersed participants. Cloud computing, a service model characterized by on-demand resources, high reliability and large-scale integration, and game theory, a useful tool for analysing dynamic economic phenomena, are therefore well suited to the problem. In this study, a "cloud + end" scheme is proposed to solve the technical and economic problems of DSM: a cloud + end architecture is designed to address the technical problems, and a cloud + end construction model based on game theory is presented to address the economic problems. The proposed method is tested on a DSM cloud + end public service system constructed in a city in southern China. The results demonstrate the feasibility of these integrated solutions, which can provide a reference for the popularization and application of DSM in China.
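The abstract does not specify the game formulation, so the sketch below shows one formulation commonly used in DSM games: best-response load scheduling, where each user repeatedly shifts consumption to minimise cost under a price that rises with total load. The cost function and all parameters are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_slots = 5, 24
demand = rng.uniform(8.0, 12.0, size=n_users)        # daily kWh per user

# Random initial schedules, scaled so each user meets daily demand
load = rng.uniform(size=(n_users, n_slots))
load *= (demand / load.sum(axis=1))[:, None]

def total_cost(load):
    """Quadratic generation cost: marginal price rises with total load."""
    total = load.sum(axis=0)
    return float((0.1 * total + 0.05 * total ** 2).sum())

print(f"cost before scheduling: {total_cost(load):.3f}")
for _ in range(50):                                  # best-response dynamics
    for u in range(n_users):
        others = load.sum(axis=0) - load[u]
        level = (others.sum() + demand[u]) / n_slots  # water-fill level
        best = np.maximum(level - others, 0.0)        # fill the valleys
        load[u] = best * (demand[u] / best.sum())     # keep daily demand
print(f"cost after scheduling:  {total_cost(load):.3f}")
```

Flattening the aggregate load in this way lowers the quadratic cost, which is the economic rationale the cloud + end model builds on.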

Relevance:

30.00%

Publisher:

Abstract:

With the aim of providing new insights into operational cetacean-fishery interactions in Atlantic waters, this thesis assesses interactions between cetaceans and Spanish and Portuguese fishing vessels operating in Iberian and South West Atlantic waters. Different opportunistic research methodologies were applied, including an interview survey of fishers (mainly skippers) and onboard observations by fisheries observers and skippers, to describe the different types of interactions, to identify potential hotspots for cetacean-fishery interactions and the cetacean species most involved, and to quantify the extent and consequences of these interactions in terms of benefits and costs for cetaceans and fisheries. In addition, the suitability of different mitigation strategies was evaluated and discussed. The results of this work indicate that cetaceans interact frequently with Spanish and Portuguese fishing vessels, sometimes in a beneficial way (e.g. cetaceans indicating fish schools in purse seine fisheries), but mostly with negative consequences (depredation on catch, gear damage and cetacean bycatch). Significant economic loss and high bycatch rates are, however, only reported for certain fisheries and associated with particular cetacean species. In Galician fisheries, substantial economic loss was reported as a result of bottlenose dolphins damaging artisanal coastal gillnets, while high catch loss may arise from common dolphins scattering fish in purse seine fisheries. High cetacean bycatch mortality arises in trawl fisheries, mainly of common dolphins and particularly during trawling in water depths below 350 m, and in coastal set gillnet fisheries (mainly common and bottlenose dolphins). In large-scale bottom-set longline fisheries in South West Atlantic waters, sperm whales may significantly reduce catch rates through depredation on catch. The high diversity of cetacean-fishery interactions observed in the study area indicates that case-specific management strategies are needed to reduce negative impacts on fisheries and cetaceans. Acoustic deterrent devices (pingers) may be used to prevent small cetaceans from approaching and becoming entangled in purse seines and set gillnets, although possible problems include cetacean habituation to the pinger sounds, as well as negative side effects on non-target cetaceans (habitat exclusion) and on fisheries target species (reduced catch rates). For sardine and horse mackerel, target species of Iberian Atlantic fisheries, no aversive reaction to pinger sounds was detected during tank experiments conducted within the scope of this thesis. Bycatch in trawls may be reduced by implementing time/area restrictions on fishing activity. In addition, avoiding fishing areas with high cetacean abundance, combined with minimizing the fishery-specific sound cues that possibly attract cetaceans, may also help to decrease interactions. In large-scale bottom-set longline fisheries, cetacean depredation on catch may be reduced by covering hooked fish with net sleeves ("umbrellas"), provided that catch rates are not negatively affected by this gear modification. Trap fishing, as an alternative to bottom-set gillnetting and longlining, also has the potential to reduce cetacean bycatch and depredation, given that fish catch rates are similar to those obtained by bottom-set gillnets and longlines, whereas cetacean bycatch is unlikely.
Economic incentives, such as the eco-certification of dolphin-safe fishing methods, should be promoted in order to create an additional source of income for fishers negatively affected by interactions with cetaceans, which, in turn, may also increase fishers' willingness to accept and adopt mitigation measures. Although the opportunistic sampling methods applied in this work have certain restrictions concerning their reliability and precision, the results are consistent with previous studies in the same area. Moreover, they allow for the active participation of fishers, who can provide important complementary ecological and technical knowledge required for cetacean management and conservation.

Relevance:

30.00%

Publisher:

Abstract:

Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems in which several applications may run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme must not disturb the operation of the circuit; otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies, which must be able to dynamically allocate enough resources when requested by each application. As resources are allocated and later released, many small free resource blocks are created and left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, ensuring both the availability of resources and their perfect working condition, without disturbing system operation.
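As a simplified illustration of the fragmentation problem described (not the paper's actual procedure, which relocates live circuits on the device), this sketch models the logic space as a one-dimensional array of slots and compacts the free space left behind by allocation and release.

```python
# Minimal model of FPGA logic-space fragmentation: each slot holds a
# task id or None (free). Illustrative sketch only, not the paper's
# active replication procedure.

def fragmentation(space):
    """Fraction of free slots that lie outside the largest free block."""
    free_runs, run = [], 0
    for slot in space:
        if slot is None:
            run += 1
        elif run:
            free_runs.append(run)
            run = 0
    if run:
        free_runs.append(run)
    total_free = sum(free_runs)
    return 1 - max(free_runs) / total_free if total_free else 0.0

def defragment(space):
    """Compact allocated slots to the left, merging all free blocks."""
    used = [s for s in space if s is not None]
    return used + [None] * (len(space) - len(used))

space = ['A', None, 'B', 'B', None, None, 'C', None, 'A', None]
print(f"before: fragmentation={fragmentation(space):.2f}")
space = defragment(space)
print(f"after:  fragmentation={fragmentation(space):.2f}")
```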

Relevance:

30.00%

Publisher:

Abstract:

Among PET radiotracers, FDG seems to be well accepted as an accurate oncology diagnostic tool, frequently helpful also in the evaluation of treatment response and in radiation therapy treatment planning for several cancer sites. In contrast, the reliability of Choline as a tracer for prostate cancer (PC) remains an object of debate for clinicians, including radiation oncologists. This review focuses on the available data about the potential impact of Choline-PET on the daily clinical practice of radiation oncologists managing PC patients. In summary, routine Choline-PET is not indicated for initial local T staging, but it appears better than conventional imaging for nodal staging and for all patients with suspected metastases; in these settings, Choline-PET has shown the potential to change patient management. Spatial resolution remains a critical limitation, restricting accuracy and reliability for small lesions. After a PSA rise, the choice of the trigger PSA value remains crucial: the overall detection rate of Choline-PET increases significantly when the trigger PSA, or the doubling time, increases, but higher PSA levels are often a sign of metastatic spread, a contraindication for potentially curative local treatments such as radiation therapy. Even if several published data seem promising, the current role of PET in treatment planning for PC patients to be irradiated remains under investigation. Based on the available literature, all these issues are addressed and discussed in this review.

Relevance:

30.00%

Publisher:

Abstract:

Data management consists of collecting, storing and processing data into a format that provides value-adding information for the decision-making process. Developments in data management have enabled the design of increasingly effective database management systems to support business needs. Consequently, not only are advanced systems designed for reporting purposes, but operational systems also support reporting and data analysis. The research method used in the theoretical part is qualitative research, and the research type in the empirical part is a case study. The objective of this thesis is to examine database management system requirements from the reporting management and data management perspectives. In the theoretical part these requirements are identified and the appropriateness of the relational data model is evaluated. In addition, key performance indicators applied to the operational monitoring of production are studied. The study reveals that appropriate operational key performance indicators of production take into account time, quality, flexibility and cost aspects; manufacturing efficiency in particular is highlighted. In this thesis, reporting management is defined as the continuous monitoring of given performance measures. According to the literature review, the data management tool should cover performance, usability, reliability, scalability and data privacy aspects in order to fulfil reporting management's demands. A framework is created for the system development phase based on these requirements and is used in the empirical part of the thesis, where such a system is designed and created for reporting management purposes for a company operating in the manufacturing industry. Relational data modelling and database architectures are utilized when the system is built on a relational database platform.
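The abstract highlights manufacturing efficiency among the production KPIs; a common formulation is Overall Equipment Effectiveness (OEE), sketched below as it might be computed from operational records. The formula choice and the field names are assumptions for illustration, not the thesis's actual indicator set.

```python
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    """One production shift as it might be stored in an operational DB."""
    planned_minutes: float
    downtime_minutes: float
    units_produced: int
    defective_units: int
    ideal_rate_per_minute: float   # units/min at nominal speed

def oee(r: ShiftRecord) -> float:
    """OEE = availability * performance * quality (common formulation)."""
    run_time = r.planned_minutes - r.downtime_minutes
    availability = run_time / r.planned_minutes
    performance = r.units_produced / (run_time * r.ideal_rate_per_minute)
    quality = (r.units_produced - r.defective_units) / r.units_produced
    return availability * performance * quality

shift = ShiftRecord(480, 45, 3900, 60, 10.0)  # hypothetical values
print(f"OEE = {oee(shift):.1%}")
```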

Relevance:

30.00%

Publisher:

Abstract:

In the present scenario of energy demand overtaking energy supply, top priority is given to energy conservation programmes and policies. Most process plants are operated on a continuous basis and consume large quantities of energy. Efficient management of a process system can lead to energy savings, improved process efficiency, lower operating and maintenance costs, and greater environmental safety. Reliability and maintainability of the system are usually considered at the design stage and depend on the system configuration. However, with the growing need for energy conservation, most existing process systems are either modified or in a state of modification with a view to improving energy efficiency. Often these modifications result in a change in system configuration, thereby affecting the system reliability. It is important that system modifications for improving energy efficiency are not made at the cost of reliability. Any new proposal for improving the energy efficiency of a process or its equipment should prove itself economically feasible to gain acceptance for implementation. To establish the economic feasibility of a new proposal, the general approach is to compare the lifetime benefits, together with the operating and maintenance costs, against the investment to be made. Quite often the reliability aspects (or the loss due to unavailability) are not taken into consideration, even though plant availability is a critical factor in the economic performance evaluation of any process plant. The focus of the present work is to study the effect of system modifications for improving energy efficiency on system reliability. A generalized model for the valuation of a process system incorporating reliability is developed and used as a tool for the analysis. It can provide awareness of the potential performance improvements of the process system and can be used to arrive at the change in process system value resulting from system modification. The model also arrives at the payback of the modified system by taking reliability aspects into consideration, and is used to study the effect of various operating parameters on system value. The concept of breakeven availability is introduced, and an algorithm for allocating component reliabilities of the modified process system based on the breakeven system availability is developed. The model was applied to various industrial situations.
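The valuation model itself is not given in the abstract, so the sketch below illustrates the underlying idea under simple assumptions: the annual benefit scales with availability, and breakeven availability is the value at which the modification just recovers its investment over the evaluation horizon.

```python
def payback_years(investment, annual_benefit, annual_om_cost, availability):
    """Simple payback with availability-adjusted benefit (illustrative)."""
    net = annual_benefit * availability - annual_om_cost
    return investment / net if net > 0 else float("inf")

def breakeven_availability(investment, annual_benefit, annual_om_cost,
                           horizon_years):
    """Availability at which the modification just pays back within the
    horizon: solve investment = (B*A - C) * horizon for A."""
    return (investment / horizon_years + annual_om_cost) / annual_benefit

# Hypothetical modification: 2.0 M invested, 0.6 M/yr gross energy
# savings, 0.1 M/yr extra O&M, evaluated over 10 years.
print(f"payback at A=0.95: {payback_years(2.0, 0.6, 0.1, 0.95):.1f} yr")
print(f"breakeven availability: "
      f"{breakeven_availability(2.0, 0.6, 0.1, 10):.2%}")
```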

Relevance:

30.00%

Publisher:

Abstract:

The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization; functional quality refers to the manner in which the service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, by designing 28 constructs under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analysed using statistical techniques to identify the weakest dimension. To assess the technical aspects of availability, six months of live outage data from base transceiver stations were collected, and statistical and exploratory techniques were used to model the network performance. The failure patterns were modelled as competing-risk models, and the probability distributions of service outage and restoration were parameterized. Since the availability of a network is a function of the reliability and maintainability of the network elements, any service provider who wishes to keep up their service level agreements on availability should be aware of the variability of these elements and the effects of their interactions. The availability variations were studied by designing a discrete-time event simulation model with probabilistic input parameters. The probability distribution parameters derived from the live data analysis were used to design experiments that define the availability domain of the network under consideration, which can be used as a reference for planning and implementing maintenance activities. A new metric is proposed which incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers. The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of the mobile portability facility. It also makes possible a relative measure of the effectiveness of different service providers.
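The simulation approach is described only at a high level; below is a minimal alternating failure/restoration simulation of the kind described, assuming Weibull-distributed times to failure and lognormal restoration times. The distribution choices and parameters are hypothetical, not the thesis's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_availability(hours=6 * 30 * 24, n_runs=2000,
                          weib_shape=1.3, weib_scale=700.0,
                          repair_mu=1.0, repair_sigma=0.8):
    """Monte Carlo availability of one network element: alternate
    Weibull times-to-failure with lognormal restoration times."""
    availabilities = np.empty(n_runs)
    for i in range(n_runs):
        t, up = 0.0, 0.0
        while t < hours:
            ttf = weib_scale * rng.weibull(weib_shape)   # time to failure
            up += min(ttf, hours - t)
            t += ttf
            if t < hours:
                t += rng.lognormal(repair_mu, repair_sigma)  # restoration
        availabilities[i] = up / hours
    return availabilities

a = simulate_availability()
print(f"mean availability: {a.mean():.4f} "
      f"(5th-95th pct: {np.percentile(a, 5):.4f}-{np.percentile(a, 95):.4f})")
```

Repeating such runs over a grid of distribution parameters is one way to map out an "availability domain" in the sense used above.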

Relevance:

30.00%

Publisher:

Abstract:

Refiners today operate their equipment for prolonged periods without shutdown, primarily due to increased market pressures resulting in extended shutdown-to-shutdown intervals. This places extreme demands on the reliability of plant equipment. Traditional methods of reliability assurance, such as Preventive Maintenance, Predictive Maintenance and Condition Based Maintenance, become inadequate in the face of such demands. The alternative approaches to reliability improvement being adopted the world over are the implementation of RCFA programmes and Reliability Centered Maintenance (RCM). However, refiners and process plants find it difficult to adopt the standardized RCM methodology, mainly due to its complexity and the large amount of analysis required, resulting in a long drawn-out implementation requiring the services of a number of skilled people. This results in an implementation that is either restricted to only a few pieces of equipment or non-standard. The paper reviews the current models in use, the core requirements of a standard RCM model, the limitations of classical RCM and the available alternatives to it, and then presents an 'Accelerated' approach to RCM implementation that, while ensuring close conformance to the standard, does not place a large burden on the implementers.

Relevance:

30.00%

Publisher:

Abstract:

Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and their potential for use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster management processes. The type and value of the information shared should be assessed, determining the benefits and issues, with credibility and reliability as known concerns. Mapping tweets onto the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classified into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when the analysed tweets are observed against the timeline, illustrating a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted each month with the timeline suggests that users tweet more as an event heightens and persists; furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that the thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, to potentially improve future policies and activities.

Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulties of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS, and the coverage of forty recommendations made by the UK Information Commissioner's Office. The analysis showed that while similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses to convey similar information. The analysis also showed that the low similarity in clauses was largely due to differences in semantics, elaboration and functionality between SNS. This paper therefore proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made to begin facilitating this, based on the findings of the investigation.
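As a reminder of the metric used in Abstract 2, this short sketch computes the Jaccard similarity coefficient between two sets of policy clause labels; the clause names are invented for illustration, not taken from the SNS policies studied.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity coefficient: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical clause labels extracted from two SNS privacy policies
policy_a = {"data-collection", "third-party-sharing", "cookies", "retention"}
policy_b = {"data-collection", "cookies", "advertising", "retention", "minors"}

print(f"Jaccard similarity: {jaccard(policy_a, policy_b):.2f}")  # -> 0.50
```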

Relevance:

30.00%

Publisher:

Abstract:

This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance and security of networks. We propose a distributed multi-agent architecture for network management, in which a logical reasoner acts as an external managing entity capable of directing, coordinating and stimulating actions in an active management architecture. Active networks technology constitutes the lower layer, making possible the deployment of code implementing teleo-reactive agents distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. An active network management architecture is used by the reasoner to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, providing a powerful system capable of performing high-level management tasks to deal with network faults.
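Reactive Golog itself is a Prolog-based language, so as a language-neutral illustration the sketch below mimics the teleo-reactive idea the paper relies on: an ordered list of condition-action rules evaluated on every perception cycle, with the first rule whose condition holds determining the action. The rules and percepts are hypothetical, not the paper's agent programs.

```python
# Minimal teleo-reactive agent loop (illustrative, not Reactive Golog):
# rules are ordered (condition, action) pairs; on each cycle the first
# rule whose condition holds fires.

def link_down(state):      return not state["link_up"]
def high_loss(state):      return state["packet_loss"] > 0.05
def always(state):         return True

def restart_interface(state):  print("action: restart interface")
def reroute_traffic(state):    print("action: reroute traffic")
def monitor(state):            print("action: keep monitoring")

RULES = [
    (link_down, restart_interface),   # highest priority
    (high_loss, reroute_traffic),
    (always, monitor),                # default rule
]

def step(state):
    for condition, action in RULES:
        if condition(state):
            action(state)
            return

step({"link_up": False, "packet_loss": 0.01})  # -> restart interface
step({"link_up": True,  "packet_loss": 0.09})  # -> reroute traffic
step({"link_up": True,  "packet_loss": 0.00})  # -> keep monitoring
```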

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this research is to show that reliability analysis and its implementation lead to improved whole-life performance of building systems, and hence to lower life cycle costs (LCC). Design/methodology/approach – This paper analyses the impact of reliability on the whole life cycle of building systems, and reviews the up-to-date approaches adopted in UK construction, based on questionnaires designed to investigate the use of reliability within the industry. Findings – Approaches to reliability design and maintainability design are introduced at the operating environment level, the system structural level and the component level, and a scheduled maintenance logic tree is modified based on the model developed by Pride. At the different stages of the whole life cycle of building services systems, reliability-associated factors should be considered to ensure the system's whole-life performance. It is suggested that data analysis should be applied in reliability design, maintainability design and maintenance policy development. Originality/value – The paper presents the important factors at different stages of the whole life cycle of these systems, together with reliability and maintainability design approaches, which can be helpful for building services system designers. The questionnaire survey provides designers with an understanding of the key impacting factors.
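The paper's own LCC model is not given in the abstract; the sketch below illustrates the general idea of folding reliability into life cycle cost, where expected failure repairs (assuming a constant failure rate, an illustrative simplification) and scheduled maintenance are discounted over the service life.

```python
def life_cycle_cost(capital, failure_rate, repair_cost,
                    annual_maintenance, years, discount=0.05):
    """LCC = capital + discounted expected repair and maintenance costs.
    Assumes a constant annual failure rate (illustrative only)."""
    lcc = capital
    for year in range(1, years + 1):
        annual = failure_rate * repair_cost + annual_maintenance
        lcc += annual / (1 + discount) ** year
    return lcc

# Hypothetical comparison: the higher-reliability design costs more up
# front but fails less often over a 20-year service life.
baseline = life_cycle_cost(100_000, 0.40, 8_000, 2_000, 20)
reliable = life_cycle_cost(115_000, 0.15, 8_000, 2_000, 20)
print(f"baseline LCC:         {baseline:,.0f}")
print(f"high-reliability LCC: {reliable:,.0f}")
```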

Relevance:

30.00%

Publisher:

Abstract:

Increased penetration of distributed generation and decentralised control are considered feasible and effective solutions for reducing the cost and emissions associated with power generation and distribution, and hence for improving efficiency. Distributed generation in combination with multi-agent technology is a strong candidate for this solution: the pro-active and autonomous nature of multi-agent systems can provide an effective platform for decentralised control while improving the reliability and flexibility of the grid.