936 results for Reliability in automation
Abstract:
A manufactured aeration and nanofiltration MBR greywater system was tested during continuous operation at the University of Reading to demonstrate reliability in the delivery of high-quality treated greywater. Its treatment performance was evaluated against British Standard criteria [BSI (Greywater Systems—Part 1 Code of Practice: BS 8525-1:2010. BS Press, 2010); (Greywater Systems—Part 2 Domestic Greywater Treatment, Requirements and Methods: BS 8525-2:2011. BS Press, 2011)]. The low-carbon greywater recycling technology produced excellent analytical results as well as consistent performance. User acceptance of such reliably treated greywater was then evaluated through user perception studies. The results inform the potential supply of treated greywater to student accommodation. Of 135 questionnaire replies, 95% demonstrated, in one or more attributes, a lack of aversion to using treated, recycled greywater.
Abstract:
The motivation for this thesis work is the need to improve equipment reliability and quality of service for railway passengers, together with the requirement for cost-effective and efficient condition maintenance management in rail transportation. This thesis develops a fusion of several machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection. Condition monitoring in rail transport is traditionally done manually by a human operator, who relies on inference and assumptions to reach conclusions. Condition monitoring allows maintenance to be scheduled, or other actions to be taken, to avoid the consequences of failure before it occurs. Manual or automated condition monitoring of materials in public transportation fields such as railways, aerial navigation and traffic safety, where safety is of prime importance, requires non-destructive testing (NDT). In general, wooden railway sleeper inspection is performed manually: an operator moves along the track, examining each sleeper visually and acoustically for the presence of cracks. Human inspectors working on the lines visually inspect the wooden sleepers to judge their quality. In this project a machine vision system is developed based on this manual visual analysis, using digital cameras and image processing software to perform similar inspections. Manual inspection requires considerable effort, is prone to error, and the frequent changes in the inspected material can make discrimination difficult even for a human operator. The machine vision system developed here classifies the condition of the material by examining individual pixels of images, processing them and drawing conclusions with the assistance of knowledge bases and extracted features. A pattern recognition approach is developed based on the methodological knowledge of the manual procedure.
The pattern recognition approach for this thesis work was realised through a non-destructive testing method designed to identify the flaws of manually performed condition monitoring of sleepers. A test vehicle was designed to capture sleeper images in a manner similar to visual inspection by a human operator, and the captured images of the wooden sleepers provide the raw data for the pattern recognition approach. The data from the NDT method were further processed and appropriate features were extracted. The aim of this data collection is to achieve high accuracy and reliable classification results. A key idea is to use an unsupervised classifier, based on the extracted features, to discriminate the condition of wooden sleepers into either good or bad; a self-organising map is used as the classifier for the wooden sleepers. To achieve greater integration, the data collected by the machine vision system were combined through a strategy called fusion, examined at two levels: sensor-level fusion and feature-level fusion. Since the goal was to reduce human error in classifying rail sleepers as good or bad, the results obtained by feature-level fusion, compared with the actual classification, were satisfactory.
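The unsupervised good/bad discrimination described above can be sketched with a minimal 1-D self-organising map. The feature vectors (crack density, surface roughness), class centroids and sample values below are invented for illustration and are not taken from the thesis:

```python
import random

def train_som(data, n_nodes=4, epochs=200, lr0=0.5, seed=1):
    """Train a 1-D self-organising map (unsupervised) on feature vectors."""
    random.seed(seed)
    dim = len(data[0])
    nodes = [[random.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                # decaying learning rate
        radius = max(1.0, (n_nodes / 2.0) * (1.0 - epoch / epochs))
        for x in data:
            # best-matching unit: the node closest to this sample
            bmu = min(range(n_nodes),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))
            for i in range(n_nodes):
                d = abs(i - bmu)                         # distance on the 1-D map
                if d <= radius:
                    infl = lr * (1.0 - d / (radius + 1.0))
                    nodes[i] = [w + infl * (v - w) for w, v in zip(nodes[i], x)]
    return nodes

def classify(nodes, labels, x):
    """Assign a sample the label of its best-matching map node."""
    bmu = min(range(len(nodes)),
              key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))
    return labels[bmu]

# Hypothetical 2-D features per sleeper, scaled to [0, 1]:
# low values suggest a good sleeper, high values a bad one.
good = [[0.10, 0.20], [0.15, 0.10], [0.20, 0.25]]
bad  = [[0.80, 0.90], [0.85, 0.80], [0.90, 0.95]]
nodes = train_som(good + bad)

# Post-hoc labelling: tag each map node with the closer class centroid.
def centroid(samples):
    return [sum(col) / len(samples) for col in zip(*samples)]

cg, cb = centroid(good), centroid(bad)
dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
labels = ['good' if dist(n, cg) < dist(n, cb) else 'bad' for n in nodes]
```

In the real system the features would come from the image-processing pipeline, and the post-hoc labelling step stands in for the knowledge bases mentioned in the abstract.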
Abstract:
Background. There is emerging evidence that context is important for the successful transfer of research knowledge into health care practice. The Alberta Context Tool (ACT) is a Canadian-developed, research-based instrument that assesses 10 modifiable concepts of organizational context considered important for health care professionals' use of evidence. Swedish and Canadian health care are similar in organizational and professional terms, suggesting that the ACT could be used to measure context in Sweden. This paper reports on the translation of the ACT into Swedish and a test of preliminary aspects of its validity, acceptability and reliability in Swedish elder care. Methods. The ACT was translated into Swedish and back-translated into English before being pilot tested in ten elder care facilities for response process validity, acceptability and reliability (Cronbach's alpha). Subsequently, further modification was performed. Results. In the pilot test, the nurses found the questions easy to respond to (52%) and relevant (65%), yet the clarity of the questions was mainly rated 'neither clear nor unclear' (52%). Missing data varied between 0 (0%) and 19 (12%) per item, the most common being 1 missing case per item (15 items). Internal consistency (Cronbach's alpha > .70) was reached for 5 out of 8 contextual concepts. Translation and back-translation identified 21 linguistic and semantic issues and 3 context-related deviations, which were resolved by the developers and translators. Conclusion. Modifying an instrument is a detailed process that requires time, consideration of the linguistic and semantic aspects of the instrument, and an understanding of both the context where the instrument was developed and the context where it is to be applied. A team including the instrument's developers, translators and researchers is necessary to ensure a valid translation.
This study suggests preliminary validity, reliability and acceptability evidence for the ACT when used with nurses in Swedish elder care.
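The internal-consistency criterion used in the pilot test (Cronbach's alpha > .70) can be computed directly from item-level scores. The sketch below uses invented ratings, not the ACT pilot data:

```python
def cronbach_alpha(items):
    """Internal consistency of a scale.
    items: one list per questionnaire item, each holding one score
    per respondent (same respondent order in every list)."""
    k = len(items)                       # number of items in the scale
    n = len(items[0])                    # number of respondents
    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # total score per respondent across all items
    totals = [sum(item[r] for item in items) for r in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(it) for it in items) / var(totals))

# Illustrative data: two items rated 1-5 by five respondents.
alpha = cronbach_alpha([[1, 2, 3, 4, 5],
                        [2, 2, 3, 4, 4]])
```

Highly correlated item responses drive alpha toward 1; unrelated items drive it toward 0, which is why the .70 threshold is used as a conventional acceptability cut-off.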
Abstract:
Low flexibility and reliability in the operation of radial distribution networks require those systems to be built with extra equipment, such as sectionalising switches, so that the network can be reconfigured and its operation quality improved. Sectionalising switches are thus used for fault isolation and for configuration management (reconfiguration). Moreover, distribution systems are increasingly affected by the insertion of distributed generators, so distributed generation has become one of the relevant parameters in the evaluation of system reconfiguration. Distributed generation may affect distribution network operation in various ways, causing noticeable impacts depending on its location. The loss allocation problem also becomes more important given the possibility of open access to distribution networks. In this work, a graphic simulator for distribution networks with reconfiguration and loss allocation functions is presented. The reconfiguration problem is solved through a heuristic methodology, using a robust power flow algorithm based on the current summation backward-forward technique and considering distributed generation. Four loss allocation methods (Zbus, Direct Loss Coefficient, Substitution and Marginal Loss Coefficient) are implemented and compared. Results for a 32-bus medium voltage distribution network are presented and discussed.
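A minimal sketch of the current summation backward-forward sweep mentioned above, for a toy 4-bus radial feeder. The topology, impedances and loads are illustrative per-unit values, not the 32-bus test network:

```python
def bf_sweep(parent, z, s_load, v_root=1.0 + 0j, tol=1e-8, max_iter=100):
    """Backward/forward current-summation sweep for a radial feeder.
    Buses are numbered so that parent[b] < b; bus 0 is the substation."""
    n = len(parent)
    v = [complex(v_root)] * n
    for _ in range(max_iter):
        i_branch = [0j] * n
        # backward sweep: branch current = load current + children's currents
        for b in range(n - 1, 0, -1):
            i_branch[b] += (s_load[b] / v[b]).conjugate()
            i_branch[parent[b]] += i_branch[b]
        # forward sweep: propagate voltage drops from the substation outward
        v_new = [complex(v_root)] * n
        for b in range(1, n):
            v_new[b] = v_new[parent[b]] - z[b] * i_branch[b]
        if max(abs(a - b) for a, b in zip(v_new, v)) < tol:
            return v_new, i_branch
        v = v_new
    return v, i_branch

# Illustrative 4-bus feeder (per unit): bus 0 feeds bus 1, which feeds 2 and 3.
parent = [-1, 0, 1, 1]
z      = [0j, 0.01 + 0.02j, 0.015 + 0.03j, 0.02 + 0.04j]   # branch impedances
s_load = [0j, 0.5 + 0.2j, 0.3 + 0.1j, 0.2 + 0.05j]         # bus loads
v, i_branch = bf_sweep(parent, z, s_load)
losses = sum(abs(i_branch[b]) ** 2 * z[b].real for b in range(1, len(parent)))
```

Reconfiguration studies rerun such a sweep for each candidate switch configuration and compare the resulting losses; distributed generators would enter as negative loads at their buses.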
Abstract:
Regulatory authorities in many countries are introducing performance-based regulation in order to maintain an acceptable balance between appropriate customer service quality and costs. These regulations impose penalties, and in some cases rewards, which introduce a component of financial risk for an electric power utility due to the uncertainty associated with preserving a specific level of system reliability. In Brazil, for instance, one reliability index receiving special attention from the utilities is the Maximum Continuous Interruption Duration per customer (MCID). This paper describes a chronological Monte Carlo simulation approach to evaluate probability distributions of reliability indices, including the MCID, and the corresponding penalties. To obtain the desired efficiency, modern computational techniques are used for modeling (Unified Modeling Language, UML) and for programming (object-oriented programming). Case studies on a simple distribution network and on real Brazilian distribution systems are presented and discussed. © Copyright KTH 2006.
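A chronological Monte Carlo evaluation of the MCID can be sketched as follows. The failure rate, mean repair time, regulatory limit and penalty rule are invented placeholders, not the Brazilian case-study parameters:

```python
import random

def simulate_year(failure_rate, mean_repair_h, rng):
    """One chronological year (8760 h) of a supply point: alternate
    exponential times-to-failure and times-to-repair."""
    t, outages = 0.0, []
    while True:
        t += rng.expovariate(failure_rate)          # up time until next fault
        if t >= 8760.0:
            return outages
        d = rng.expovariate(1.0 / mean_repair_h)    # repair (down) time
        outages.append(d)
        t += d

def mcid_study(years=2000, failure_rate=4 / 8760.0, mean_repair_h=3.0,
               limit_h=8.0, seed=42):
    """Distribution of the Maximum Continuous Interruption Duration (MCID)
    and the fraction of simulated years that would incur a penalty."""
    rng = random.Random(seed)
    mcids = []
    for _ in range(years):
        outages = simulate_year(failure_rate, mean_repair_h, rng)
        mcids.append(max(outages, default=0.0))     # longest single interruption
    p_penalty = sum(m > limit_h for m in mcids) / years
    return sum(mcids) / years, p_penalty

mean_mcid, p_penalty = mcid_study()
```

Repeating the year simulation many times yields the probability distribution of the index, from which the expected penalty exposure can be estimated.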
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Implementation of two fault-tolerant microcontroller architectures for temperature control
Abstract:
Graduate Program in Physics - IGCE
Abstract:
With the growth of electric energy demand in recent decades, urban distribution and transmission systems have faced a greater need to improve substations and the automation procedures and techniques for operating manoeuvres, so as to better meet quality, availability, continuity and operational reliability requirements. Accordingly, the objective of the present paper is to perform a study of protection and control of an industrial electrical system, covering digitisation procedures and manoeuvre automation using operational techniques and other pertinent information from a typical high-voltage industrial electrical system. Short-circuit analyses were performed to specify the main components of the 138 kV substation, and digital MiCOM relays were used to protect its elements. A program was then developed that allows the user to monitor the status of circuit-breakers through a supervision screen, simulate several kinds of faults, and observe the characteristics of each device. This highlights the importance of a fast and reliable system that ensures equipment protection and continuity of the industrial process in the event of faults on the electrical system. All this digitisation was made possible mainly by the development of digital technology in recent years, especially in microelectronics, together with the emergence of supervisory devices that allow the development of complex systems for supervision and electric energy control.
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power and cost of integrated circuits while increasing their computational performance. This trend, known as technology scaling, is approaching nanometer dimensions. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes shrink, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design for Manufacturability (DFM) and Design for Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) The implementation of new analysis algorithms able to predict the system's thermal behavior and its impact on power and speed performance. ii) High-level wear-out models able to predict the mean time to failure (MTTF) of the system.
iii) Statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase device reliability and lifetime by acting on some tunable parameter, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first stages of embedded NoC design. Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. As a result, we confirmed the capability of self-timed logic to increase manufacturability and reliability.
Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, together with their good response to compensation techniques such as ASV and ABB. Hence low-swing signalling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
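The link between temperature and lifetime invoked above can be illustrated with an Arrhenius wear-out model, a standard approximation for temperature-accelerated failure mechanisms; the activation energy below is an assumed illustrative value, not one taken from the thesis:

```python
import math

K_B = 8.617e-5                       # Boltzmann constant in eV/K

def relative_mttf(temp_c, ea=0.7):
    """Relative mean time to failure under an Arrhenius wear-out model.
    ea is an illustrative activation energy in eV; only MTTF ratios
    between temperatures are meaningful here, not absolute times."""
    return math.exp(ea / (K_B * (temp_c + 273.15)))

# Effect of thermal management lowering a hot spot from 100 °C to 85 °C:
gain = relative_mttf(85.0) / relative_mttf(100.0)
```

With these assumed values the 15 °C reduction multiplies the expected lifetime by roughly 2.5, which is why keeping hot spots down pays off directly in system reliability.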
Abstract:
The Pulmonary Embolism Severity Index (PESI) is a validated clinical prognostic model for patients with acute pulmonary embolism (PE). Our goal was to assess the PESI's inter-rater reliability in patients diagnosed with PE. We prospectively identified consecutive patients diagnosed with PE in the emergency department of a Swiss teaching hospital. For all patients, resident and attending physician raters independently collected the 11 PESI variables. The raters then calculated the PESI total point score and classified patients into one of five PESI risk classes (I-V) and as low (risk classes I/II) versus higher-risk (risk classes III-V). We examined the inter-rater reliability for each of the 11 PESI variables, the PESI total point score, assignment to each of the five PESI risk classes, and classification of patients as low versus higher-risk using kappa (κ) and intra-class correlation coefficients (ICC). Among 48 consecutive patients with an objective diagnosis of PE, reliability coefficients between resident and attending physician raters were > 0.60 for 10 of the 11 variables comprising the PESI. The inter-rater reliability for the PESI total point score (ICC: 0.89, 95% CI: 0.81-0.94), PESI risk class assignment (κ: 0.81, 95% CI: 0.66-0.94), and the classification of patients as low versus higher-risk (κ: 0.92, 95% CI: 0.72-0.98) was near perfect. Our results demonstrate the high reproducibility of the PESI, supporting the use of the PESI for risk stratification of patients with PE.
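The chance-corrected agreement statistic used in this study, Cohen's kappa, can be computed as follows; the two rating lists are invented for illustration and do not reproduce the study's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical labels."""
    n = len(rater_a)
    cats = set(rater_a) | set(rater_b)
    # observed agreement: fraction of cases both raters label identically
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement by chance, from each rater's marginal frequencies
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Illustrative ratings: two physicians classify ten patients as
# low vs higher risk, disagreeing on a single patient.
resident  = ['low'] * 5 + ['high'] * 5
attending = ['low'] * 4 + ['high'] * 6
kappa = cohens_kappa(resident, attending)
```

A kappa above roughly 0.8 is conventionally read as near-perfect agreement, which is the benchmark the abstract's 0.81 and 0.92 values meet.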
Abstract:
PURPOSE: To determine the sensitivity, specificity and inter-observer variability of different whole-body MRI (WB-MRI) sequences in patients with multiple myeloma (MM). METHODS AND MATERIALS: WB-MRI using a 1.5 T MRI scanner was performed in 23 consecutive patients (13 males, 10 females; mean age 63±12 years) with histologically proven MM. All patients were clinically classified according to infiltration (low-grade, n=7; intermediate-grade, n=7; high-grade, n=9) and to the staging system of Durie and Salmon PLUS (stage I, n=12; stage II, n=4; stage III, n=7). The control group consisted of 36 individuals without malignancy (25 males, 11 females; mean age 57±13 years). Two observers independently evaluated the following WB-MRI sequences: T1w-TSE (T1), T2w-TIRM (T2), and the combination of both sequences including a contrast-enhanced T1w-TSE with fat saturation (T1±CE/T2). They had to determine the growth patterns (focal and/or diffuse) and the MRI sequence that provided the highest confidence level in depicting the MM lesions. Results were calculated on a per-patient basis. RESULTS: Visual detection of MM was as follows: T1, 65% (sensitivity)/85% (specificity); T2, 76%/81%; T1±CE/T2, 67%/88%. Inter-observer variability was as follows: T1, 0.3; T2, 0.55; T1±CE/T2, 0.55. Sensitivity improved with infiltration grade (T1: 1=60%; 2=36%; 3=83%; T2: 1=70%; 2=71%; 3=89%; T1±CE/T2: 1=50%; 2=50%; 3=89%) and clinical stage (T1: 1=58%; 2=63%; 3=79%; T2: 1=58%; 2=88%; 3=100%; T1±CE/T2: 1=50%; 2=63%; 3=100%). T2w-TIRM sequences achieved the best reliability in depicting the MM lesions (65% in the mean of both readers). CONCLUSIONS: T2w-TIRM sequences achieved the highest sensitivity and best reliability, and thus might be valuable for the initial assessment of MM. For exact staging and grading, the examination protocol should encompass unenhanced and enhanced T1w-MRI sequences in addition to T2w-TIRM.
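The per-patient sensitivity and specificity figures above follow from simple confusion-matrix counts; the reads below are invented to illustrate the computation and do not reproduce the study's data:

```python
def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred: 1 = disease present / detected, 0 = absent / not seen."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative reads: 10 hypothetical patients with myeloma and 10 controls;
# a sequence that detects 7 of the 10 cases and clears 9 of the 10 controls.
truth = [1] * 10 + [0] * 10
reads = [1] * 7 + [0] * 3 + [0] * 9 + [1]
sens, spec = sensitivity_specificity(truth, reads)
```

Sensitivity is computed over the patient group only and specificity over the control group only, which is why the abstract reports the two percentages separately for each sequence.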