761 results for reliability algorithms


Relevance: 20.00%
Publisher:
Abstract:

The present work is part of a larger project whose purpose is to qualify a Flash memory for automotive applications using a standardized test and measurement flow. High memory reliability and data retention are the most critical parameters in this application. The current work covers the functional tests and the data retention test. The purpose of the data retention test is to obtain the data retention parameters of the designed memory, i.e. the maximum time for which information can be stored under specified conditions without critical charge leakage. For this purpose, the charge leakage from the cells, which results in a decrease of the cells' threshold voltage, was measured after long-time high-temperature treatment at several temperatures. The amount of charge lost at each temperature was used to calculate the Arrhenius constant and the activation energy of the discharge process. With these data, the discharge of the cells at different temperatures over long periods can be predicted and the probability of data loss after years of storage can be calculated. The memory chips investigated in this work were 0.035 μm CMOS Flash memory test chips designed for further use in systems-on-chip for automotive electronics.
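As an illustration of the extrapolation described in this abstract, the following sketch fits a straight line to ln(charge-loss rate) versus 1/T and extrapolates the retention time at an operating temperature; the bake temperatures, loss rates and the 125 °C operating point are hypothetical values, not figures from the study.

# Hedged sketch: Arrhenius fit of charge-loss rate vs. temperature and
# extrapolation of data retention time. All numbers below are hypothetical.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical bake temperatures (K) and measured charge-loss rates
# (fraction of the critical charge lost per hour at each temperature).
temps_k = np.array([423.0, 448.0, 473.0])        # 150, 175, 200 degC
loss_rate = np.array([1.2e-5, 6.5e-5, 2.8e-4])   # assumed values

# Arrhenius model: rate = A * exp(-Ea / (k_B * T))
# => ln(rate) = ln(A) - (Ea / k_B) * (1/T); fit a straight line.
slope, intercept = np.polyfit(1.0 / temps_k, np.log(loss_rate), 1)
ea = -slope * K_B            # activation energy, eV
a_const = np.exp(intercept)  # Arrhenius pre-exponential constant, 1/h

# Extrapolate: time to reach the critical charge loss at a 125 degC operating point.
t_op = 398.0  # K
rate_op = a_const * np.exp(-ea / (K_B * t_op))
retention_h = 1.0 / rate_op  # hours until the critical loss is reached
print(f"Ea = {ea:.2f} eV, retention at 125 degC ~ {retention_h / 8760:.1f} years")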

Relevance: 20.00%
Publisher:
Abstract:

We report on a field-effect light-emitting device based on silicon nanocrystals in silicon oxide deposited by plasma-enhanced chemical vapor deposition. The device shows high power efficiency and a long lifetime. The power efficiency is enhanced up to 0.1% by the presence of a silicon nitride control layer. The leakage current reduction induced by this nitride buffer effectively increases the power efficiency by two orders of magnitude with respect to similarly processed devices with oxide only. In addition, the nitride cools down the electrons that reach the polycrystalline silicon gate, lowering the formation of defects, which significantly reduces device degradation.

Relevance: 20.00%
Publisher:
Abstract:

Objective: The present study evaluated the reliability of digital panoramic radiography in the diagnosis of carotid artery calcifications. Materials and Methods: Thirty-five patients at high risk of developing carotid artery calcifications who had undergone digital panoramic radiography were referred for ultrasonography; thus, 70 arteries were assessed by both methods. The main parameters used to evaluate the reliability of panoramic radiography in the diagnosis of carotid artery calcifications were the accuracy, sensitivity, specificity and positive predictive value of the method compared with ultrasonography. Additionally, McNemar's test was used to verify whether there was a statistically significant difference between digital panoramic radiography and ultrasonography. Results: Ultrasonography demonstrated carotid artery calcifications in 17 (48.57%) patients; these individuals presented a total of 29 (41.43%) carotid arteries affected by calcification. Radiography was accurate in 71.43% (n = 50) of the cases evaluated. The sensitivity of the method was 37.93%, its specificity 95.12% and its positive predictive value 84.61%. A statistically significant difference (p < 0.001) was observed between the two methods in their capacity to diagnose carotid artery calcifications. Conclusion: Digital panoramic radiography should not be indicated as the method of choice in the investigation of carotid artery calcifications.
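The reported metrics can be reproduced from a 2x2 contingency table. The sketch below assumes cell counts back-calculated from the stated percentages (TP = 11, FN = 18, FP = 2, TN = 39 over 70 arteries); these counts are an inference, not figures quoted in the abstract.

# Hedged sketch: diagnostic-test metrics and McNemar's test for paired data.
# The 2x2 cell counts are back-calculated from the reported percentages and
# are assumptions, not figures taken directly from the study.
from scipy.stats import chi2

tp, fn = 11, 18   # calcified on ultrasonography: detected / missed on radiography
fp, tn = 2, 39    # not calcified on ultrasonography: radiography positive / negative
n = tp + fn + fp + tn  # 70 arteries

accuracy    = (tp + tn) / n    # 50/70 ~ 71.43%
sensitivity = tp / (tp + fn)   # 11/29 ~ 37.93%
specificity = tn / (tn + fp)   # 39/41 ~ 95.12%
ppv         = tp / (tp + fp)   # 11/13 ~ 84.6%

# McNemar's test uses only the discordant pairs (fp and fn).
mcnemar_stat = (fn - fp) ** 2 / (fn + fp)   # = 12.8
p_value = chi2.sf(mcnemar_stat, df=1)       # < 0.001

print(f"accuracy={accuracy:.4f} sensitivity={sensitivity:.4f} "
      f"specificity={specificity:.4f} ppv={ppv:.4f}")
print(f"McNemar chi2={mcnemar_stat:.2f}, p={p_value:.4f}")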

Relevance: 20.00%
Publisher:
Abstract:

This work analyzes the possibilities of using old crane-drive AC induction motors with modern pulse-width-modulated variable-frequency drives. Bearing currents and voltage stresses are the two main problems associated with modern IGBT inverters, and they may cause premature failure of an old induction motor. The origins of these two problems are studied, and an analysis of the bearing failure mechanism is proposed. Certain types of bearing currents are considered in detail, and the most effective and economical means of mitigating them are chosen. Transient phenomena in the motor cables and the mechanism of the overvoltages occurring at the motor terminals are also studied. The weakest points of the stator winding insulation system are identified, and recommendations are given for mitigating the voltage stresses. Only the most appropriate and cost-effective preventive methods are chosen for old motor drives. Rewinding of old motors is also considered.
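As a rough illustration of the cable transient issue mentioned above, the sketch below applies the standard reflected-wave rule of thumb for the critical cable length beyond which the motor-terminal voltage can approach twice the DC-link voltage; the rise times and wave speed are typical textbook values, not figures from this work.

# Hedged sketch: reflected-wave estimate of the critical cable length beyond
# which the pulse reflection at the motor terminals approaches full amplitude
# (terminal voltage up to ~2x the DC-link voltage). Values are illustrative.
def critical_cable_length(rise_time_s: float, wave_speed_m_per_s: float = 150e6) -> float:
    """Critical length l_c = v * t_r / 2: the pulse must travel to the motor and
    back within the rise time for the reflection to be partially cancelled."""
    return wave_speed_m_per_s * rise_time_s / 2.0

# An older, slower inverter vs. a modern fast-switching IGBT inverter (assumed rise times).
for name, t_r in [("slow inverter, t_r = 1 us", 1e-6), ("IGBT inverter, t_r = 0.1 us", 0.1e-6)]:
    print(f"{name}: critical cable length ~ {critical_cable_length(t_r):.0f} m")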

Relevance: 20.00%
Publisher:
Abstract:

Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new approach to the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A preliminary analysis of the seed pixels allows the homogeneity criterion to be adjusted to each region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes the classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results demonstrate the reliability of the proposed approach.
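A minimal sketch of edge-guided seeded region growing in the spirit of this abstract is given below; the homogeneity test is a simplified mean/standard-deviation threshold adapted to the seed neighbourhood, standing in for the paper's clustering and convex-hull criterion, which is not reproduced.

# Hedged sketch: seeded region growing on a grayscale image. The homogeneity
# criterion here is a simplified stand-in (mean +/- k*std of the seed patch);
# the paper's clustering/convex-hull criterion is not reproduced.
from collections import deque
import numpy as np

def grow_region(img, seed, edges, k=2.5, patch=2):
    """Grow a region from `seed`, never crossing pixels marked True in `edges`."""
    h, w = img.shape
    r0, c0 = seed
    # Adapt the homogeneity criterion to the seed neighbourhood statistics.
    win = img[max(0, r0 - patch):r0 + patch + 1, max(0, c0 - patch):c0 + patch + 1]
    mean, std = float(win.mean()), float(win.std()) + 1e-6

    mask = np.zeros((h, w), dtype=bool)
    mask[r0, c0] = True
    queue = deque([(r0, c0)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] and not edges[rr, cc]:
                if abs(float(img[rr, cc]) - mean) <= k * std:   # homogeneity test
                    mask[rr, cc] = True
                    queue.append((rr, cc))
    return mask

# Usage sketch (synthetic image: bright square on a dark background, no edge map).
img = np.zeros((64, 64)); img[16:48, 16:48] = 200.0
img += np.random.default_rng(0).normal(0, 5, img.shape)
region = grow_region(img, seed=(32, 32), edges=np.zeros(img.shape, dtype=bool))
print("region size:", int(region.sum()))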

Relevance: 20.00%
Publisher:
Abstract:

Network virtualisation is gaining considerable attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organisation capabilities. We propose a multiagent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time, while ensuring that virtual network quality-of-service requirements such as packet drop rate and virtual link delay are not affected.
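A minimal sketch of a per-node learning agent of the kind described above is given below: the agent adjusts the share of substrate capacity reserved for hosted virtual nodes from evaluative feedback. The state discretisation, action set and reward shape are illustrative assumptions, not the paper's actual algorithm.

# Hedged sketch: a per-substrate-node learning agent that adjusts the share of
# node capacity reserved for hosted virtual nodes from evaluative feedback.
# The state/action discretisation and the reward are illustrative assumptions.
import random

class NodeAgent:
    ACTIONS = (-0.1, 0.0, 0.1)           # decrease / keep / increase reserved share

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}                       # Q[(state, action)] -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.reserved = 0.5               # fraction of node capacity reserved

    def _state(self, utilisation):
        return min(int(utilisation * 10), 9)   # discretise utilisation into 10 bins

    def act(self, utilisation):
        s = self._state(utilisation)
        if random.random() < self.eps:                            # explore
            a = random.choice(self.ACTIONS)
        else:                                                     # exploit
            a = max(self.ACTIONS, key=lambda act: self.q.get((s, act), 0.0))
        self.reserved = min(1.0, max(0.1, self.reserved + a))
        return a

    def learn(self, util, a, reward, util_next):
        s, s2 = self._state(util), self._state(util_next)
        best_next = max(self.q.get((s2, b), 0.0) for b in self.ACTIONS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)

# Reward sketch: penalise packet drops (under-provisioning) and idle reservation
# (over-provisioning), so the agent converges towards "just enough" resources.
def reward(drop_rate, reserved, used):
    return -10.0 * drop_rate - (reserved - used)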

Relevance: 20.00%
Publisher:
Abstract:

Distribution companies are facing numerous challenges in the near future. Regulation defines a correlation between power quality and the revenue cap, so companies have to take measures to increase reliability in order to compete successfully under modern conditions. Most of the failures seen by customers originate in medium-voltage networks. Implementation of network automation is a very effective measure for reducing the duration and number of outages and, consequently, outage costs. The topic of this diploma work is the study of the effect of automation investments on outage costs and other reliability indices. A calculation model was built to perform the required reliability calculations, and a theoretical study of different automation scenarios was carried out. A case feeder from an actual distribution company was studied and various renovation plans were suggested. Network automation proved to be an effective measure for increasing the reliability of medium-voltage networks.
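A minimal sketch of the kind of feeder reliability calculation behind such comparisons is shown below: SAIFI, SAIDI and outage cost for a radial medium-voltage feeder under manual versus remote-controlled sectionalising. Section data, fault rates, switching times and cost parameters are hypothetical, and the model assumes all customers outside the faulted section can be restored once it is isolated.

# Hedged sketch: effect of feeder automation on reliability indices and outage
# cost for a simple radial MV feeder. All parameter values are hypothetical.
sections = [  # feeder sections, each isolable by a sectionalising switch
    {"length_km": 5.0, "customers": 400},
    {"length_km": 8.0, "customers": 300},
    {"length_km": 6.0, "customers": 200},
]
FAULT_RATE = 0.05      # permanent faults per km per year (assumed)
REPAIR_H = 3.0         # repair time of the faulted section, hours (assumed)
SWITCH_MANUAL_H = 1.0  # manual sectionalising time, hours (assumed)
SWITCH_REMOTE_H = 0.1  # remote-controlled sectionalising time, hours (assumed)
COST_EUR_PER_KWH = 10.0
AVG_LOAD_KW_PER_CUSTOMER = 2.0

def feeder_indices(switching_time_h):
    total_customers = sum(s["customers"] for s in sections)
    saifi = saidi = energy_not_supplied = 0.0
    for faulted in sections:
        faults_per_year = FAULT_RATE * faulted["length_km"]
        for affected in sections:
            # Customers in the faulted section wait for the repair; all others
            # only until the fault is isolated and their supply is restored.
            t = REPAIR_H if affected is faulted else switching_time_h
            saifi += faults_per_year * affected["customers"] / total_customers
            saidi += faults_per_year * t * affected["customers"] / total_customers
            energy_not_supplied += (faults_per_year * t * affected["customers"]
                                    * AVG_LOAD_KW_PER_CUSTOMER)
    return saifi, saidi, energy_not_supplied * COST_EUR_PER_KWH

for label, t_sw in [("manual switching", SWITCH_MANUAL_H), ("automation", SWITCH_REMOTE_H)]:
    saifi, saidi, cost = feeder_indices(t_sw)
    print(f"{label}: SAIFI={saifi:.2f} int./a, SAIDI={saidi:.2f} h/a, outage cost ~{cost:.0f} EUR/a")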

Relevance: 20.00%
Publisher:
Abstract:

In the literature on housing market areas, different approaches to defining them can be found, for example using travel-to-work areas and, more recently, migration data. Here we propose a simple exercise to shed light on which approach performs better. Using regional data from Catalonia, Spain, we have computed housing market areas with both commuting data and migration data. In order to decide which procedure shows superior performance, we have looked at the uniformity of prices within areas. The main finding is that commuting-based algorithms produce more homogeneous areas in terms of housing prices.
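A minimal sketch of the homogeneity check described above: given two alternative partitions of municipalities into housing market areas, compare the average within-area dispersion of prices. The municipalities, prices and partitions below are placeholders, not the Catalan data.

# Hedged sketch: comparing two delineations of housing market areas by the
# price homogeneity of the resulting areas. All data below are placeholders.
import statistics

prices = {"A": 2100, "B": 2200, "C": 1400, "D": 1500, "E": 3000, "F": 2900}  # price per m2

# Two alternative partitions of the same municipalities into market areas.
commuting_areas = [["A", "B"], ["C", "D"], ["E", "F"]]
migration_areas = [["A", "C"], ["B", "E"], ["D", "F"]]

def mean_within_area_cv(areas):
    """Average coefficient of variation of prices within each area
    (lower = more homogeneous areas)."""
    cvs = []
    for area in areas:
        p = [prices[m] for m in area]
        cvs.append(statistics.pstdev(p) / statistics.mean(p))
    return sum(cvs) / len(cvs)

print("commuting-based areas, mean CV:", round(mean_within_area_cv(commuting_areas), 3))
print("migration-based areas, mean CV:", round(mean_within_area_cv(migration_areas), 3))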

Relevance: 20.00%
Publisher:
Abstract:

This thesis concentrates on the operational disturbance behavior of machine tools integrated into flexible manufacturing systems (FMS). Operational disturbances are short-term failures of machine tools which are especially disruptive to unattended or unmanned operation of an FMS. The main objective was to examine the effect of operational disturbances on the reliability and operation time distribution of machine tools.

The theoretical part of the thesis covers the fundamentals of FMS relating to the subject of this study. The concept of FMS, its benefits and the operator's role in FMS operation are reviewed, and the importance of reliability is presented. The terms describing the operation time of machine tools are defined by adapting standards and references, and the concept of failure and the indicators describing reliability and operational performance for machine tools in FMSs are presented.

The empirical part of the thesis describes the research methodology, which is a combination of automated data collection (ADC) and manual data collection. This methodology makes it possible to obtain a complete view of the operation time distribution of the studied machine tools. Data collection was carried out in four FMSs comprising a total of 17 machine tools. The basic features of each FMS and the ADC signals are described, and the indicators describing the reliability and operation time distribution of the machine tools were calculated from the collected data.

The results showed that operational disturbances have a significant influence on machine tool reliability and operational performance. On average, an operational disturbance occurs every 8.6 hours of operation time and causes a down time of 0.53 hours. Operational disturbances cause a 9.4% loss in operation time, which is twice the loss caused by technical failures (4.3%); poor operational disturbance behavior thus decreases the utilization rate. It was found that the features of the part family to be machined and the related method technology define the operational disturbance behavior of a machine tool. The main causes of operational disturbances were related to material quality variations, tool maintenance, NC program errors, the ATC and the machine tool control. The operator's role was emphasized: the operators' failure recording activity correlates with the utilization rate, and the more precisely the operators record failures, the higher the utilization rate. FMS organizations that record failures more precisely also have fewer operational disturbances.
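A minimal sketch of how such indicators (mean operation time between disturbances, mean down time, time-loss shares, utilization rate) could be computed from a combined ADC/manual event log; the event records below are hypothetical and the thesis's exact indicator definitions are not reproduced.

# Hedged sketch: computing disturbance indicators from a machine-tool event log.
# Event categories and durations are hypothetical.
events = [  # (category, duration in hours)
    ("operation", 7.9), ("operational_disturbance", 0.4),
    ("operation", 9.1), ("technical_failure", 2.1),
    ("operation", 8.5), ("operational_disturbance", 0.7),
]

operation_h = sum(d for c, d in events if c == "operation")
disturbances = [d for c, d in events if c == "operational_disturbance"]
failures = [d for c, d in events if c == "technical_failure"]
total_h = operation_h + sum(disturbances) + sum(failures)

mtbd = operation_h / len(disturbances)          # mean operation time between disturbances
mdt = sum(disturbances) / len(disturbances)     # mean disturbance down time
disturbance_loss = sum(disturbances) / total_h  # share of time lost to disturbances
failure_loss = sum(failures) / total_h          # share of time lost to technical failures
utilization = operation_h / total_h

print(f"MTBD={mtbd:.1f} h, mean down time={mdt:.2f} h, "
      f"disturbance loss={disturbance_loss:.1%}, failure loss={failure_loss:.1%}, "
      f"utilization={utilization:.1%}")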

Relevance: 20.00%
Publisher:
Abstract:

Background: Effective treatment for breast cancer requires accurate preoperative planning, the development and implementation of a consistent definition of margin clearance, and the use of tools that provide detailed real-time intraoperative information on margin status. Intraoperative ultrasound (IOUS) may fulfil these requirements and may offer advantages that other preoperative localization and intraoperative margin assessment techniques do not. Purpose: The goal of the present work is to determine how accurate intraoperative ultrasound is in achieving complete surgical excision with negative histological margins in patients undergoing breast-conserving surgery. Design: A diagnostic test study with a cross-sectional design carried out within a Breast Pathology Unit in a tertiary referral hospital in Girona. Participants: Women diagnosed with breast cancer undergoing breast-conserving surgery in the Breast Pathology Unit at Hospital Universitari de Girona Dr. Josep Trueta.

Relevance: 20.00%
Publisher:
Abstract:

Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods that introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for networks-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels.

The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, a routing protocol and a flow control method. Fault tolerance methods for NoCs are then studied at different layers of the OSI reference model.

The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms; both approaches are presented in the thesis together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
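Error control coding at the data link layer can be illustrated compactly; the sketch below shows a Hamming(7,4) single-error-correcting encoder and decoder as one representative code. It is an illustration only, not the particular coding scheme adopted in the thesis.

# Hedged sketch: Hamming(7,4) single-error-correcting code, one representative
# data-link-layer error control code (illustrative; not the thesis's own design).

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single bit error (if any) and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3     # 1-based position of the erroneous bit
    if syndrome:
        c[syndrome - 1] ^= 1            # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

# Usage: a transient fault flipping one bit on the link is corrected.
word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                            # single-bit upset on the channel
assert hamming74_decode(word) == [1, 0, 1, 1]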

Relevance: 20.00%
Publisher:
Abstract:

The primary objective of this thesis was to study the delivery reliability of a mill business unit of a forest industry company, especially the timeliness and quantitative accuracy of sales orders. Delivery reliability is an important factor in customer satisfaction, which in turn strongly influences the success of a company. The secondary objective was to find out the reasons for possible delivery reliability problems and to give proposals for improvement. The empirical part of the thesis was based on the reporting database of the forest industry company's ERP software and detailed information from the mill system. The delivery reliability results of the mill business unit were compared to the delivery reliability of a similar mill business unit within the forest industry company. The research results revealed problems in the supply chain. The delivery reliability reporting should also be developed further, which would improve delivery reliability monitoring. The improvement proposals of the thesis were an assessment of the logistic operation mode, targeted benchmarking against the compared mill business unit, and a more detailed survey of production delivery reliability.
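A minimal sketch of the kind of delivery reliability indicator discussed above (share of sales order lines delivered on time and in the ordered quantity), computed from hypothetical order records; the ERP reporting and the thesis's exact indicator definitions are not shown.

# Hedged sketch: on-time and in-full delivery reliability from order records.
# Order data and the quantity tolerance are hypothetical.
from datetime import date

orders = [  # (promised date, delivered date, ordered qty, delivered qty)
    (date(2024, 3, 1), date(2024, 3, 1), 120, 120),
    (date(2024, 3, 3), date(2024, 3, 5), 80, 80),    # late delivery
    (date(2024, 3, 7), date(2024, 3, 7), 200, 185),  # short delivery
]

QTY_TOLERANCE = 0.02   # +/- 2% quantity deviation still counted as "in full"

def on_time(promised, delivered):
    return delivered <= promised

def in_full(ordered, delivered):
    return abs(delivered - ordered) <= QTY_TOLERANCE * ordered

on_time_rate = sum(on_time(p, d) for p, d, _, _ in orders) / len(orders)
in_full_rate = sum(in_full(q, dq) for _, _, q, dq in orders) / len(orders)
otif = sum(on_time(p, d) and in_full(q, dq) for p, d, q, dq in orders) / len(orders)

print(f"on-time: {on_time_rate:.0%}, in-full: {in_full_rate:.0%}, OTIF: {otif:.0%}")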

Relevance: 20.00%
Publisher:
Abstract:

Identification of the order of an autoregressive moving average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique that identifies the order without graphical inspection of the series autocorrelations. To avoid this subjectivity, this thesis focuses on determining the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models on the basis of goodness of fit, the standard deviation of the errors and the frequency of accepted proposals. Alongside a detailed analysis of the classical Box-Jenkins modeling methodology, the integration with MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This makes it possible to verify how well the MCMC algorithms handle ARMA models by comparing the results with those of the graphical method. The MCMC approach was found to produce better results than the classical time series approach.
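A minimal sketch of the classical, non-Bayesian counterpart against which an RJMCMC order-selection scheme is typically compared: fit candidate ARMA(p, q) models over a small grid and select the order by AIC. The RJMCMC sampler itself is not reproduced, and the simulated series and order grid are placeholders.

# Hedged sketch: classical grid-search order selection for an ARMA(p, q) model,
# as a baseline against which an RJMCMC order-selection scheme can be compared.
# The simulated series and the order grid are placeholders.
import warnings
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Simulate an ARMA(2, 1) series: (1 - 0.6L + 0.2L^2) y_t = (1 + 0.4L) e_t
ar = np.array([1.0, -0.6, 0.2])   # AR lag polynomial coefficients
ma = np.array([1.0, 0.4])         # MA lag polynomial coefficients
y = ArmaProcess(ar, ma).generate_sample(nsample=500, scale=1.0)

best = None
warnings.filterwarnings("ignore")  # silence convergence chatter for the sketch
for p in range(4):
    for q in range(4):
        aic = ARIMA(y, order=(p, 0, q)).fit().aic
        if best is None or aic < best[0]:
            best = (aic, p, q)

print(f"selected order by AIC: ARMA({best[1]}, {best[2]}), AIC = {best[0]:.1f}")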