840 results for operational reliability


Relevance:

40.00%

Publisher:

Abstract:

Background: The aim of the present study was to evaluate the feasibility of using a telephone survey to gain an understanding of the possible herd and management factors influencing the performance (i.e. safety and efficacy) of a vaccine against porcine circovirus type 2 (PCV2) in a large number of herds, and to estimate customers' satisfaction. Results: Datasets from 227 pig herds that currently applied or had applied a PCV2 vaccine were analysed. Since 1-, 2- and 3-site production systems were surveyed, the herds were allocated to one of two subsets, in which only the applicable variables out of 180 were analysed. Group 1 comprised herds with sows, suckling pigs and nursery pigs, whereas the herds in Group 2 in all cases kept fattening pigs. Overall, 14 variables evaluating subjective satisfaction with one particular PCV2 vaccine were combined into an abstract dependent variable for further models, characterized by a binary outcome from a cluster analysis: good/excellent satisfaction (green cluster) and moderate satisfaction (red cluster). The other 166 variables, comprising information about diagnostics, vaccination, housing and management, were considered as independent variables. In Group 1, herds using the vaccine because of recognised PCV2-related health problems (wasting, mortality or porcine dermatitis and nephropathy syndrome) had a 2.4-fold increased chance (1/OR) of belonging to the green cluster. In the final model for Group 1, the diagnosis of diseases other than PCV2, a reason for vaccine administration other than PCV2-associated diseases, and the use of a single injection of iron had a significant influence on allocation to the green cluster (P < 0.05).
In Group 2, only an unchanged or delayed time of vaccination influenced satisfaction (P < 0.05). Conclusion: The methodology and statistical approach used in this study were feasible for scientifically assessing "satisfaction" and for determining the factors influencing farmers' and vets' opinions about the safety and efficacy of a new vaccine.
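As a hedged illustration of the odds-ratio reasoning behind the "2.4-fold increased chance (1/OR)" result, the sketch below computes an odds ratio from a 2x2 contingency table. All counts are hypothetical, not the study's data:

```python
# Hypothetical 2x2 table: vaccine used due to recognised PCV2-related
# problems (exposed) vs. cluster membership (green = good/excellent
# satisfaction, red = moderate). Counts are illustrative only.
green_exposed, red_exposed = 60, 25
green_unexposed, red_unexposed = 40, 40

# Odds of belonging to the green cluster in each group
odds_exposed = green_exposed / red_exposed
odds_unexposed = green_unexposed / red_unexposed

# Odds ratio; the abstract reports the inverse (1/OR) of the fitted model's OR
odds_ratio = odds_exposed / odds_unexposed
print(round(odds_ratio, 2))  # 2.4
```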

Relevance:

40.00%

Publisher:

Abstract:

Experiences in decentralized rural electrification programmes using solar home systems have run into difficulties during the operation and maintenance phase, in many cases because the maintenance cost is underestimated owing to the decentralized character of the activity, and because the reliability of the solar home system components is frequently unknown. This paper reports on the reliability study and cost characterization carried out in a large photovoltaic rural electrification programme in Morocco. The paper aims to determine the reliability features of the solar systems, focusing on in-field testing of batteries and photovoltaic modules. Degradation rates for batteries and PV modules have been extracted from the in-field experiments. In addition, the main costs related to the operation and maintenance activity have been identified, with the aim of establishing the main factors that lead to the failure of quality sustainability in many rural electrification programmes.
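A minimal sketch of how a degradation rate can be extracted from periodic in-field measurements, as done for the PV modules: an ordinary least-squares slope over time. The measurement values below are hypothetical, not data from the Moroccan programme:

```python
# Hypothetical in-field measurements: PV module peak power over service years.
years = [0.0, 1.0, 2.0, 3.0, 4.0]
power_w = [100.0, 99.2, 98.5, 97.6, 96.8]  # measured peak power (W)

n = len(years)
mean_x = sum(years) / n
mean_y = sum(power_w) / n

# Ordinary least-squares slope: degradation in watts per year
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, power_w))
         / sum((x - mean_x) ** 2 for x in years))
rate_pct_per_year = -slope / power_w[0] * 100  # relative degradation, %/year
print(round(rate_pct_per_year, 2))  # 0.8
```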

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Intervertebral cervical disc herniation (CDH) is a relatively common disorder that can coexist with degenerative changes and worsen cervicogenic myelopathy. Despite the frequent disc abnormalities found in asymptomatic populations, magnetic resonance imaging (MRI) is considered excellent at detecting cervical spine myelopathy (CSM) associated with disc abnormality. The objective of this study was to investigate the intra- and inter-observer reliability of MRI detection of CSM in subjects who also had co-existing intervertebral disc abnormalities. Materials and methods: Seven experienced radiologists reviewed the MRI scans of 10 patients with clinically and/or imaging-determined myelopathy twice. MRI assessment was performed individually, with and without operational guidelines. The Fleiss kappa statistic was used to evaluate intra- and inter-observer agreement. Results: The study found high intra-observer percent agreement but relatively low kappa values on selected variables. Inter-observer reliability was also low, and neither observation improved with operational guidelines. We believe that these low values may be associated with the base-rate problem of kappa. Conclusion: This study demonstrated high intra-observer percent agreement in MR examination for intervertebral disc abnormalities in patients with underlying cervical myelopathy, but differing levels of intra- and inter-observer kappa agreement among seven radiologists. (c) 2007 Elsevier Ireland Ltd. All rights reserved.
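A hedged sketch of the Fleiss kappa statistic used in the study, computed from first principles on a hypothetical ratings matrix (rows are subjects, columns are rating categories, cells count how many of the 7 raters chose that category); the data are illustrative, not the study's:

```python
# Hypothetical ratings: 5 subjects, 2 categories, 7 raters per subject.
ratings = [
    [7, 0],
    [6, 1],
    [5, 2],
    [2, 5],
    [0, 7],
]

n_raters = sum(ratings[0])   # raters per subject (7)
n_subjects = len(ratings)

# Per-subject agreement P_i
p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
       for row in ratings]
# Overall proportion of assignments to each category, p_j
totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
p_j = [t / (n_subjects * n_raters) for t in totals]

p_bar = sum(p_i) / n_subjects   # mean observed agreement
p_e = sum(p * p for p in p_j)   # expected agreement by chance
kappa = (p_bar - p_e) / (1 - p_e)
print(round(kappa, 3))  # 0.494
```

Note the base-rate sensitivity the authors mention: when one category dominates, `p_e` rises and kappa can be low even with high percent agreement.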

Relevance:

30.00%

Publisher:

Abstract:

Many business-oriented software applications are subject to frequent changes in requirements. This paper shows that, ceteris paribus, increases in the volatility of system requirements decrease the reliability of software. Further, systems that exhibit high volatility during the development phase are likely to have lower reliability during their operational phase. In addition to the typically higher volatility of requirements, end-users who specify the requirements of business-oriented systems are usually less technically oriented than people who specify the requirements of compilers, radar tracking systems or medical equipment. Hence, the characteristics of software reliability problems for business-oriented systems are likely to differ significantly from those of more technically oriented systems.

Relevance:

30.00%

Publisher:

Abstract:

The concept of Process Management has been used by managers and consultants who seek to improve both operational and managerial industrial processes. Its strength lies in focusing on the external client and on optimizing the internal process in order to fulfil the client's needs. As the needs of internal clients are addressed, a set of improvements takes place. The Taguchi method, because it calls for knowledge sharing between design engineers and the people engaged in the process, is a candidate for process management implementation. The objective of this paper is to propose such an application, aiming at improvements in the reliability of results revealed by the robust design of the Taguchi method.
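As a hedged illustration of the robust-design metric at the core of the Taguchi method, the sketch below computes standard signal-to-noise (S/N) ratios; the replicate measurements are hypothetical:

```python
import math

def sn_larger_the_better(values):
    """S/N ratio for a 'larger-the-better' characteristic: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in values) / len(values))

def sn_smaller_the_better(values):
    """S/N ratio for a 'smaller-the-better' characteristic: -10*log10(mean(y^2))."""
    return -10 * math.log10(sum(y ** 2 for y in values) / len(values))

# Two hypothetical factor settings with the same mean output; the one with
# less spread yields the higher S/N ratio and is the more robust setting.
setting_a = [48.0, 50.0, 52.0]
setting_b = [30.0, 50.0, 70.0]
print(sn_larger_the_better(setting_a) > sn_larger_the_better(setting_b))  # True
```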

Relevance:

30.00%

Publisher:

Abstract:

In the reliability literature, maintenance time is usually ignored during the optimization of maintenance policies. In some scenarios, costs due to system failures may vary with time, and ignoring maintenance time will lead to unrealistic results. This paper develops maintenance policies for such situations, where the system under study operates iteratively in two successive states: up or down. The costs due to system failure in the up state consist of both business losses and maintenance costs, whereas those in the down state include only maintenance costs. We consider three models: Models A, B, and C. Model A performs only corrective maintenance (CM). Model B performs imperfect preventive maintenance (PM) sequentially, plus CM. Model C executes PM periodically, plus CM; this PM can restore the system to as good as the state just after the latest CM. The CM in this paper is imperfect repair. Finally, the impact of these maintenance policies is illustrated through numerical examples.
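A toy cost comparison in the spirit of Models A and C, in which maintenance time is not ignored: downtime hours carry an explicit business-loss cost. All rates, durations and costs below are hypothetical, not the paper's:

```python
horizon = 1000.0         # operating hours considered
cm_rate_no_pm = 0.01     # failures/hour without preventive maintenance
cm_rate_with_pm = 0.004  # failures/hour when PM is performed periodically
pm_interval = 100.0      # hours between PM actions

cm_cost = 500.0          # cost per corrective repair
pm_cost = 80.0           # cost per preventive action
cm_time = 8.0            # downtime hours per CM (maintenance time counted)
pm_time = 2.0            # downtime hours per PM
downtime_cost = 50.0     # business loss per downtime hour

def total_cost(cm_rate, pm_every=None):
    """Expected cost over the horizon: CM costs plus optional periodic PM costs."""
    cost = cm_rate * horizon * (cm_cost + cm_time * downtime_cost)
    if pm_every:
        cost += (horizon / pm_every) * (pm_cost + pm_time * downtime_cost)
    return cost

model_a = total_cost(cm_rate_no_pm)                  # CM only
model_c = total_cost(cm_rate_with_pm, pm_interval)   # periodic PM + CM
print(round(model_a), round(model_c))  # 9000 5400
```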

Relevance:

30.00%

Publisher:

Abstract:

Considering a series representation of a coherent system using a shift transform of the component lifetimes T_i at their critical levels Y_i, we study two problems: first, under such a shift transform, we analyse the preservation properties of the non-parametric distribution classes, and secondly, the association preservation property of the component lifetimes under such transformations. (c) 2007 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses two problems for Sayid Paper Mill Ltd, Pakistan. The first section deals with a practical problem arising at SPM: cutting a given set of raw paper rolls of known length and width into a set of product paper rolls of known length (equal to the length of the raw paper rolls) and width, subject to practical cutting constraints on a single cutting machine, according to the demand orders of all customers. Solving this problem requires determining an optimal cutting schedule that maximizes the overall profitability of the cutting process while satisfying all demands and cutting constraints. The aim of this part of the thesis is to develop a mathematical model that solves this problem. The second section deals with the problem of delivering the final product from the warehouse to different destinations by finding shortest paths. It is an operational routing problem: deciding the daily routes for sending trucks to different destinations to deliver the final product. This industrial problem is difficult and includes aspects such as delivery to a single destination and to multiple destinations with limited resources. The aim of this part of the thesis is to develop a process that helps find shortest paths.
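A minimal sketch of how the routing part could compute shortest paths: Dijkstra's algorithm on a small hypothetical road network whose nodes are the warehouse and the delivery destinations (the network and weights are illustrative, not the thesis's data):

```python
import heapq

def dijkstra(graph, source):
    """Return shortest distances from source to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already improved
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Hypothetical network: warehouse 'W' and three destinations, weights in km.
roads = {
    "W": [("A", 4.0), ("B", 1.0)],
    "B": [("A", 2.0), ("C", 5.0)],
    "A": [("C", 1.0)],
}
print(dijkstra(roads, "W")["C"])  # 4.0, via W -> B -> A -> C
```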

Relevance:

30.00%

Publisher:

Abstract:

In this work, a mathematical model to analyze the impact of the installation and operation of dispersed generation units in power distribution systems is proposed. The main focus is to determine the trade-off between the reliability and the operational costs of distribution networks when the operation of isolated areas is allowed. In order to increase the system operator's revenue, an optimal power flow makes use of the different energy prices offered by the dispersed generation connected to the grid. Simultaneously, the type and location of the protective devices initially installed in the protection system are reconfigured in order to minimize the interruptions and the expenditure of adjusting the protection system to the conditions imposed by the operation of dispersed units. The interruption cost accounts for the energy not supplied to customers in secure parts of the system that are nevertheless affected by the normal tripping of protective devices. The tripping of fuses, reclosers, and overcurrent relays aims to protect the system against both temporary and permanent faults. Additionally, in order to reduce the average duration of the system interruptions experienced by customers, the isolated operation of dispersed generation is allowed by installing directional overcurrent relays with synchronized reclosing capabilities. A 135-bus real distribution system is used to show the advantages of the proposed mathematical model. © 1969-2012 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing the energy consumption of server systems and of reducing costs for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30% of the total power consumption. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the management of the IDC's costs, helping OSPs to conserve and manage their own electricity costs and to lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively.
With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demand for web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
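A hedged, much-simplified sketch of the VM/PM mapping probability idea: each VM request gets a probability of running on each physical machine, here derived only from a hypothetical per-PM energy-efficiency score (the dissertation also weighs resource limits, operation overheads and reliability, which this toy example omits):

```python
# Hypothetical energy-efficiency scores per PM (e.g. requests per joule).
pm_efficiency = [0.9, 0.6, 0.3]

def mapping_probabilities(scores):
    """Normalize per-PM scores into a probability distribution over PMs."""
    total = sum(scores)
    return [s / total for s in scores]

# One row of the VM/PM mapping probability matrix for a single VM request:
probs = mapping_probabilities(pm_efficiency)
print([round(p, 2) for p in probs])  # [0.5, 0.33, 0.17]
```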

Relevance:

30.00%

Publisher:

Abstract:

Container terminals are complex systems in which a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to reducing to a minimum the time that vessels stay in port. Consequently, operations planning is becoming more complex, and the tolerances for errors are smaller. In this context, operational disturbances must be reduced to a minimum. The main sources of operational disruptions, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, resulting in failure or stoppage modes.
The latter may lead not only to operational delays but also to collateral and reputational damage or loss of time (especially management time), all of which implies an impact for the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals, which use increasing levels of automation. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of tools to guide decision-making, since such tools will be adapted to the real climatic and operational conditions that exist on site. For the short term, a method to obtain forecasts of operational parameters in container terminals is proposed, and a case study is presented in which the model is applied to obtain forecasts of vessel performance; this research has been based entirely on data gathered from a semi-automated container terminal in Spain. For the long term, it is analysed how to manage, evaluate and mitigate the effect of operational disruptions by means of risk assessment, an interesting approach for evaluating the effect that uncertain but likely events can have on the long-term throughput of the terminal. In addition, a definition of operational risk in port facilities is proposed, along with a discussion of the terms that best represent the nature of the activities involved; finally, guidelines to manage the results obtained are provided.

Relevance:

30.00%

Publisher:

Abstract:

Operational capabilities are characterized as an internal resource of the firm and a source of competitive advantage. However, the operations strategy literature provides an inadequate constitutive definition for operational capabilities: it disregards the relativization of different contexts, rests on a limited empirical base, and does not adequately explore the extensive literature on operational practices. When operational practices are operationalized in the firm's internal environment, they can be incorporated into organizational routines and, through the tacit knowledge of production, transformed into operational capabilities, thus creating barriers to imitation. Nevertheless, few researchers explore operational practices as antecedents of operational capabilities. Based on a literature review, we investigate the nature of operational capabilities; the relationship between operational practices and operational capabilities; the types of operational capabilities that are characterized in the firm's internal environment; and the impact of operational capabilities on operational performance. We conducted a mixed-method study. In the qualitative stage, we conducted multiple case studies with four firms: two American multinationals operating in Brazil, and two Brazilian firms. We collected data through semi-structured interviews with semi-open questions, based on the literature on operational practices and operational capabilities. The interviews were conducted in person. In total, 73 interviews were carried out (21 in the first case, 18 in the second, 18 in the third, and 16 in the fourth). All interviews were recorded and transcribed verbatim. We used the NVivo software. In the quantitative stage, our sample comprised 206 firms. The questionnaire was created from an extensive literature review and from the results of the qualitative stage.
The Q-sort method was applied, and a pre-test was conducted with production managers. Measures were taken to reduce Common Method Variance. In total, ten scales were used: 1) Continuous Improvement; 2) Information Management; 3) Learning; 4) Customer Support; 5) Innovation; 6) Operational Efficiency; 7) Flexibility; 8) Customization; 9) Supplier Management; and 10) Operational Performance. We used confirmatory factor analysis to confirm reliability and content, convergent, and discriminant validity. The data were analysed using multiple regressions. Our main results were as follows. First, operational practices act as antecedents of operational capabilities. Second, we created a typology divided into two constructs. The first construct was called Standalone Capabilities; this group consists of zero-order capabilities such as Customer Support, Innovation, Operational Efficiency, Flexibility, and Supplier Management. These operational capabilities aim to improve the firm's processes and have a direct relationship with operational performance. The second construct was called Across-the-Board Capabilities; it is composed of first-order capabilities such as Continuous Learning and Information Management. These operational capabilities are considered dynamic and play the role of reconfiguring the Standalone Capabilities.

Relevance:

30.00%

Publisher:

Abstract:

Considerable attention has been given in the literature to identifying and describing the elements which positively affect the improvement of product reliability; these have been perceived by many as the 'state of the art' in the manufacturing industry. Evidence for the applicability, diffusion and effectiveness of such methods and philosophies, as a means of systematically improving the reliability of a product, comes mainly from case studies and from single- and intra-industry empirical studies. These studies have been carried out both within the wider context of quality assurance and management and with reliability taken as a discipline in its own right. It is therefore somewhat surprising that there are no recently published findings or research studies on the adoption of these methods by the machine tool industry. This may lead one to construct several hypotheses: (a) machine tool manufacturers, compared to other industries, are slow to respond to propositions given in the literature by theorists; or (b) a large proportion of manufacturers make little use of the reliability improvement techniques described in the literature, with the overall perception that they will not lead to any significant improvements. On the other hand, it is evident that verification of the operational and engineering methods of reliability achievement and improvement adopted in the machine tool industry is less widely researched. Therefore, research in this area is needed in order to explore 'state of the art' practice in the machine tool industry, in terms of the status, structure and activities of the reliability function.
This paper outlines a research programme being conducted with the co-operation of a leading machine tool manufacturer whose UK manufacturing plant produces mainly Vertical Machining Centres (VMCs) and is continuously undergoing incremental transitions in product reliability improvement.

Relevance:

30.00%

Publisher:

Abstract:

Road pricing has emerged as an effective means of managing road traffic demand while simultaneously raising additional revenue for transportation agencies. Research on the factors that govern travel decisions has shown that user preferences may be a function of the demographic characteristics of individuals and of the perceived trip attributes. However, it is not clear which trip attributes are actually considered in the travel decision-making process, how these attributes are perceived by travelers, and how the set of trip attributes changes as a function of the time of day or from day to day. In this study, operational Intelligent Transportation Systems (ITS) archives are mined, and the aggregated preferences for a priced system are extracted at a fine time-aggregation level over an extended number of days. The resulting information is related to corresponding time-varying trip attributes such as travel time, travel time reliability, charged toll, and other parameters. The time-varying user preferences and trip attributes are linked by means of a binary choice model (logit) with a linear utility function on trip attributes. The weights of the trip attributes in the utility function are then dynamically estimated for each time of day by means of an adaptive, limited-memory discrete Kalman filter (ALMF). The relationship between traveler choices and travel time is assessed using different rules to capture the logic that best represents traveler perception and the effect of real-time information on the observed preferences. The impact of travel time reliability on traveler choices is investigated considering its multiple definitions. Based on the results, it can be concluded that the ALMF algorithm allows a robust estimation of time-varying weights in the utility function at fine time-aggregation levels. The high correlations among the trip attributes severely constrain the simultaneous estimation of their weights in the utility function.
Despite the data limitations, it is found that the ALMF algorithm can provide stable estimates of the choice parameters for some periods of the day. Finally, it is found that the daily variation of the user sensitivities for different periods of the day resembles a well-defined normal distribution.
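A much-simplified sketch of the dynamic estimation idea: a scalar discrete Kalman filter tracking one time-varying utility weight (e.g. a toll coefficient) under a random-walk model. The real study uses an adaptive, limited-memory filter over several correlated attributes; all values below are hypothetical:

```python
def kalman_step(x, p, z, h, q, r):
    """One predict/update cycle for state x with variance p,
    observation z = h*x + noise, process noise q, measurement noise r."""
    # Predict: random-walk model for the time-varying weight
    p = p + q
    # Update
    k = p * h / (h * h * p + r)   # Kalman gain
    x = x + k * (z - h * x)
    p = (1 - k * h) * p
    return x, p

true_weight = -0.12        # hypothetical "true" toll sensitivity
x, p = 0.0, 1.0            # initial guess and variance
for _ in range(50):
    z = 1.0 * true_weight  # noiseless observation, for illustration only
    x, p = kalman_step(x, p, z, h=1.0, q=1e-4, r=0.01)
print(round(x, 3))  # -0.12
```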

Relevance:

30.00%

Publisher:

Abstract:

The effects of vehicle speed on Structural Health Monitoring (SHM) of bridges under operational conditions are studied in this paper. The moving vehicle is modelled as a single-degree-of-freedom oscillator traversing a damaged beam at a constant speed. The bridge is modelled as a simply supported Euler-Bernoulli beam with a breathing crack. The breathing crack is treated as a nonlinear system with bilinear stiffness characteristics related to the opening and closing of the crack. The unevenness of the bridge deck is modelled using the road classification of ISO 8608:1995(E). The stochastic description of the unevenness of the road surface is used as an aid to monitor the health of the structure in its operational condition. Numerical simulations are conducted considering the effects of changing vehicle speed with regard to cumulant-based statistical damage-detection parameters. The detection and calibration of damage at different levels is based on an algorithm dependent on the responses of the damaged beam due to passages of the load. The possibilities of damage detection and calibration under benchmarked and non-benchmarked cases are considered, and the sensitivity of the calibration values is studied. The findings of this paper are important for establishing what to expect from different vehicle speeds on a bridge for damage-detection purposes using bridge-vehicle interaction, where the bridge does not need to be closed for monitoring. The identification of bunching of these speed ranges provides guidelines for using the methodology developed in the paper.
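A minimal sketch of the bilinear stiffness used to model the breathing crack: the beam is stiffer when the crack is closed than when it is open. The stiffness values and the sign convention for opening are hypothetical, not the paper's:

```python
k_closed = 1.0e6   # N/m, crack closed (full cross-section engaged)
k_open = 0.8e6     # N/m, crack open (reduced cross-section)

def restoring_force(displacement):
    """Bilinear restoring force: positive displacement assumed to open the crack."""
    k = k_open if displacement > 0 else k_closed
    return k * displacement

# The stiffness switch at zero displacement is what makes the system nonlinear.
print(restoring_force(0.001), restoring_force(-0.001))  # 800.0 -1000.0
```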