941 results for synsedimentary faults
Abstract:
One of the techniques used to detect faults in dynamic systems is analytical redundancy. An important difficulty in applying this technique to real systems is dealing with the uncertainties associated with the system itself and with the measurements. In this paper, this uncertainty is taken into account by the use of intervals for the parameters of the model and for the measurements. The method proposed in this paper checks the consistency between the system's behavior, obtained from the measurements, and the model's behavior; if they are inconsistent, then there is a fault. The problem of detecting faults is stated as a quantified real constraint satisfaction problem, which can be solved using modal interval analysis (MIA). MIA is used because it provides powerful tools to extend calculations over real functions to intervals. To improve the results of fault detection, the simultaneous use of several sliding time windows is proposed. The result of implementing this method is semiqualitative tracking (SQualTrack), a fault-detection tool that is robust in the sense that it does not generate false alarms, i.e., if there are false alarms, they indicate either that the interval model does not represent the system adequately or that the interval measurements do not represent the true values of the variables adequately. SQualTrack is currently being used to detect faults in real processes. Some of these applications using real data have been developed within the European project Advanced Decision Support System for Chemical/Petrochemical Manufacturing Processes and are also described in this paper.
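The consistency test described above lends itself to a small sketch. The following Python fragment is a hypothetical illustration, not the SQualTrack implementation: it propagates a scalar interval model one step, flags a sample as inconsistent when the predicted and measured intervals do not overlap, and confirms a fault only over a sliding window. The model, parameter ranges, and window length are all assumptions.

```python
# Minimal sketch of interval-based consistency checking, assuming a scalar
# first-order model x[k+1] = a*x[k] + b*u[k] with interval parameters.
# Illustration only, not the SQualTrack algorithm itself.

def interval_mul(a, b):
    """Product of two intervals given as (lo, hi) tuples."""
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def predict_interval(x, u, a=(0.8, 0.9), b=(0.9, 1.1)):
    """Interval envelope of the next state for all parameter values in a, b."""
    return interval_add(interval_mul(a, x), interval_mul(b, u))

def consistent(pred, meas):
    """True if the predicted and measured intervals overlap."""
    return pred[0] <= meas[1] and meas[0] <= pred[1]

def detect_fault(measured, inputs, window=3):
    """Confirm a fault when every sample in a sliding window is inconsistent."""
    flags = []
    for k in range(1, len(measured)):
        pred = predict_interval(measured[k-1], inputs[k-1])
        flags.append(not consistent(pred, measured[k]))
        if len(flags) >= window and all(flags[-window:]):
            return k  # index at which the fault is confirmed
    return None

# Toy usage: widen the measured values into intervals around a drifting signal.
meas = [(v - 0.1, v + 0.1) for v in [1.0, 0.95, 0.92, 1.8, 2.4, 3.1]]
us   = [(0.0, 0.0)] * len(meas)
print(detect_fault(meas, us))  # reports the step at which the drift is confirmed
```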
Abstract:
Process supervision is the activity focused on monitoring process operation in order to deduce the conditions needed to maintain normality, including when faults are present. Depending on the number, distribution and heterogeneity of the variables, behaviour situations, sub-processes, etc. of the processes, human operators and engineers cannot easily handle the information. This leads to the need to automate supervision activities. Nevertheless, the difficulty of dealing with the information complicates the design and development of software applications. We present an approach called "integrated supervision systems", which proposes the coordination of multiple supervisors to supervise multiple sub-processes whose interactions make it possible to supervise the global process.
Abstract:
This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending end and the sequence model of the network. Since calculating the series impedance of underground cables is not as simple as in the case of overhead lines, the paper proposes a methodology to estimate the zero-sequence impedance of underground cables starting from previous single-phase faults that occurred in the system, in which an electric arc appeared at the fault location. For this reason, the signal is first pre-processed to remove its voltage peaks so that the analysis can work with a signal as close to a sine wave as possible.
Abstract:
Fault location has been studied in depth for transmission lines because of its importance in power systems. Nowadays the problem of fault location in distribution systems is receiving special attention, mainly because of power quality regulations. In this context, this paper presents an application, developed in Matlab, that automatically calculates the location of a fault in a distribution power system, starting from the voltages and currents measured at the line terminal and the model of the distribution power system. The application is based on an N-ary tree structure, which is well suited to this problem because of the highly branched and non-homogeneous nature of distribution systems, and has been developed for single-phase, two-phase, two-phase-to-ground, and three-phase faults. The implemented application is tested using fault data from a real electrical distribution power system.
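As a rough illustration of how an N-ary tree can drive the search, the sketch below (hypothetical, not the paper's software) walks a branched feeder, accumulates line impedance along each path, and reports the sections whose reactance range brackets the apparent reactance seen at the terminal. The data structure, the single-criterion test and the numbers are assumptions.

```python
# Hypothetical sketch: locating candidate fault sections in a radial feeder
# represented as an N-ary tree. Each node stores the series impedance of the
# line section that connects it to its parent.

class Section:
    def __init__(self, name, impedance, children=None):
        self.name = name            # section identifier
        self.impedance = impedance  # complex series impedance of the section
        self.children = children or []

def candidate_sections(root, z_apparent, tol=0.05):
    """Return the sections where the accumulated impedance reaches z_apparent."""
    candidates = []

    def walk(node, z_so_far):
        z_end = z_so_far + node.impedance
        # The fault may lie in this section if the apparent reactance measured
        # at the substation falls between the section's two ends.
        if z_so_far.imag - tol <= z_apparent.imag <= z_end.imag + tol:
            candidates.append(node.name)
        for child in node.children:
            walk(child, z_end)

    walk(root, 0j)
    return candidates

# Example feeder: a main section that splits into two laterals.
feeder = Section("S1", 0.2 + 0.4j, [
    Section("S2", 0.3 + 0.6j),
    Section("S3", 0.1 + 0.5j, [Section("S4", 0.2 + 0.3j)]),
])
print(candidate_sections(feeder, z_apparent=0.35 + 0.75j))  # ['S2', 'S3']
```

Branching is why several candidate sections can match the same apparent impedance, which is exactly the ambiguity a tree-based search has to enumerate.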
Abstract:
This research is carried out under a hypothesis that is to be tested by analysing the evidence found in theoretical research documents by expert authors and specialists on the subject, in publications of the national press, and in the laws corresponding to the two health reforms, Law 1122 of 2007 and Law 1438 of 2011. The hypothesis put forward for this research is that "as long as the new proposals for health reform do not include an evident solution to the main problems found in the current health system and do not establish fiscal control and genuine regulation over the management of the health system's funds, the principles of universality, efficiency, comprehensiveness, free choice, healthy competition and quality will not be fulfilled, and therefore the Colombian people will not be able to exercise their right to health as set out in the Political Constitution of Colombia of 1991 and Law 100 of 1993". To test the hypothesis, the documents selected for analysis are three publications from the national press, four by expert authors specialised in the subject, and three additional documents corresponding to the two reforms made to the health system in 2007 and 2011 and to the new proposal filed by the national government at the beginning of this year. It is very important to know the authors' assessments of the group of variables that will be used to develop this study. These variables are among those that influence financial equilibrium and that also directly affect the population, impacting its well-being and quality of life. Among these variables are (i) demographic and labour-force aspects, (ii) economic aspects such as income levels, wages and employment, (iii) coverage of the health system, (iv) quality of and access to the system's services, (v) duplication of expenditure, (vi) flow of the health system's resources (institutional problems), (vii) health expenditure and financial stability, and (viii) financial regulation. The format used for the comparison, analysis and synthesis of the theoretical research documents and the national press publications consists of three parts. The first contains the characteristics related to the time and place of publication. The second emphasises the content and the topics and variables of interest for the development of the research. And the third emphasises the content and the topics and variables of interest for the development of the research. By analysing, synthesising and comparing these articles, several questions will be answered that may lead us to confirm the hypothesis, such as: What have been the main achievements in health under the current system? What have been the main failures or problems in the current health system? Are the funds allocated to the current health system well managed? Does the financial management of the Colombian health system allow it to be a sustainable and lasting health system for all Colombians?
In addition, there are other noteworthy questions, such as: What does the new health reform proposed for 2013 consist of? Do the changes raised in the latest proposal to reform the health system really lead to progress, or do they set aside the main problems of social security in Colombia? Does the new health reform proposal seek a radical improvement in the financial management of the Colombian health system? What is the perception of the main authors, specialists in the subject, regarding the complete eradication of the most critical problems of the current Colombian health system under the new health reform proposal for 2013? This research arose from the filing of a new health reform proposal and the controversy generated around it. Twenty years after the approval of Law 100, important achievements have been identified, mainly in terms of coverage; unfortunately, financial and liquidity problems have now become evident, although it is worth noting that the system was profitable and even generated financial surpluses in its operation during its first ten years of functioning. Now, based on the theoretical research evidence and the national press, it will be determined whether the new health reform proposal is a good option for the country.
Abstract:
The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in the measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms that obtain, in an automatic way, the symbolic expressions of the residual generators enhancing the structural isolability of the faults, in order to design the fault-detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation and identification are stated as constraint satisfaction problems in continuous domains and solved by means of interval-based consistency techniques. Qualitative fault isolation is enhanced by a reasoning procedure in which the signs of the symptoms are derived from analytical redundancy relations or bond-graph models of the system. An initial, empirical analysis of the differences between interval-based and statistical techniques is also presented in this thesis. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
Abstract:
Circuit testing is a phase of the production process that becomes increasingly important when a new product is developed. Test and diagnosis techniques for digital circuits have been successfully developed and automated, whereas this is not yet the case for analogue circuits. Among all the methods proposed for diagnosing analogue circuits, the most widely used are fault dictionaries. This thesis describes several of them, analysing their advantages and drawbacks. In recent years, Artificial Intelligence techniques have become one of the most important research fields for fault diagnosis. This thesis develops two of these techniques in order to cover some of the shortcomings of fault dictionaries. The first proposal is based on building a fuzzy system as an identification tool. The results obtained are quite good, since the fault is located in a high percentage of the cases. On the other hand, the success rate is not good enough when, in addition, the deviation has to be determined. Since fault dictionaries can be seen as a simplified approximation to Case-Based Reasoning (CBR), the second proposal extends fault dictionaries towards a CBR system. The aim is not to give a general solution to the problem but to contribute a new methodology, which improves the diagnosis provided by fault dictionaries through the addition and adaptation of new cases so that they become a Case-Based Reasoning system. The structure of the case base is described, together with the retrieval, reuse, revision and retention tasks, with emphasis on the learning process. Throughout the text, several circuits are used to illustrate the test methods described, but the biquadratic filter in particular is used to evaluate the proposed methodologies, since it is one of the benchmarks proposed in the analogue circuit context. The faults considered are parametric, permanent, independent and single, although the methodology can easily be extrapolated to the diagnosis of multiple and catastrophic faults. The method focuses on testing passive components, although it could also be extended to faults in active ones.
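To make the fault-dictionary idea concrete, here is a small hypothetical Python sketch (not taken from the thesis): each dictionary entry maps a fault label to a signature of measured features, and a new measurement is diagnosed by nearest-neighbour lookup, which is roughly what a dictionary-based diagnosis does before any fuzzy or CBR extension. The component names and signature values are invented.

```python
# Hypothetical fault-dictionary diagnosis by nearest-neighbour lookup.
# Signatures could be, e.g., output amplitudes at several test frequencies.
import math

FAULT_DICTIONARY = {
    "nominal":  [1.00, 0.71, 0.10],
    "R1 +50%":  [0.80, 0.60, 0.12],
    "R1 -50%":  [1.25, 0.85, 0.08],
    "C2 open":  [0.40, 0.30, 0.30],
}

def distance(a, b):
    """Euclidean distance between two signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diagnose(measurement):
    """Return the fault whose stored signature is closest to the measurement."""
    return min(FAULT_DICTIONARY,
               key=lambda fault: distance(FAULT_DICTIONARY[fault], measurement))

print(diagnose([0.82, 0.61, 0.11]))  # -> "R1 +50%"
```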
Abstract:
Temporal misalignment is the mismatch between two signals caused by a distortion of the time axis. Fault Detection and Diagnosis (FDD) allows faults in a process to be detected, diagnosed and corrected. The methodologies used in FDD fall into two categories: model-based and non-model-based techniques. This doctoral thesis studies the effect of temporal misalignment on FDD. Our attention focuses on the analysis and design of FDD systems in the presence of data communication problems, such as delays and losses. Two techniques are proposed to reduce these problems: one based on dynamic programming and the other on optimisation. The proposed methods have been validated on different dynamic systems: position control of a DC motor, a laboratory plant, and an electrical power system problem known as voltage sag.
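The dynamic-programming route can be illustrated with a classic alignment scheme. The sketch below implements plain dynamic time warping in Python as an assumed stand-in, since the thesis's exact formulation is not given in the abstract.

```python
# Dynamic time warping (DTW): a dynamic-programming alignment that compensates
# for temporal misalignment between two sampled signals. Shown here only as a
# generic example of the dynamic-programming idea mentioned in the abstract.

def dtw_distance(x, y):
    """Return the DTW cost between two 1-D sequences."""
    n, m = len(x), len(y)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A delayed copy of a reference signal still aligns with a small cost.
reference = [0, 0, 1, 2, 3, 2, 1, 0]
delayed   = [0, 0, 0, 1, 2, 3, 2, 1, 0]
print(dtw_distance(reference, delayed))  # small cost despite the delay
```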
Abstract:
The van der Heijden Studies Database has been reviewed to identify 'Draw Studies' with sub-7-man positions in the main line which are not draws. The data-mining method is described. Some 1,500 studies were faulted, 700 for the first time: 14 of the more interesting faults are highlighted and discussed.
Abstract:
We present a method to enhance fault localization for software systems based on a frequent pattern mining algorithm. Our method is based on a large set of test cases for a given set of programs in which faults can be detected. The test executions are recorded as function call trees. Based on test oracles the tests can be classified into successful and failing tests. A frequent pattern mining algorithm is used to identify frequent subtrees in successful and failing test executions. This information is used to rank functions according to their likelihood of containing a fault. The ranking suggests an order in which to examine the functions during fault analysis. We validate our approach experimentally using a subset of Siemens benchmark programs.
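A much-simplified sketch of the ranking step is shown below; instead of mining frequent subtrees it only counts, per function, how often the function appears in failing versus passing executions, which conveys the ranking idea without reproducing the paper's mining algorithm. All names and data are hypothetical.

```python
# Hypothetical, simplified suspiciousness ranking. Real frequent-pattern mining
# works on call subtrees; here each execution is reduced to its set of called
# functions to keep the example short.

def rank_functions(passing_runs, failing_runs):
    """Rank functions by how strongly they are associated with failing runs."""
    functions = set().union(*passing_runs, *failing_runs)
    scores = {}
    for f in functions:
        fail = sum(f in run for run in failing_runs)
        succ = sum(f in run for run in passing_runs)
        # Ochiai-style score: high when f appears mostly in failing runs.
        scores[f] = fail / ((fail + succ) * len(failing_runs)) ** 0.5 if fail else 0.0
    return sorted(scores, key=scores.get, reverse=True)

passing = [{"parse", "eval", "print"}, {"parse", "print"}]
failing = [{"parse", "eval", "format"}, {"eval", "format"}]
print(rank_functions(passing, failing))  # 'format' and 'eval' rank highest
```

The ranking is then the order in which a developer would examine functions during fault analysis, as the abstract describes.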
Abstract:
Building services are worth about 2% of GDP and are essential for the effective and efficient operation of a building. It is increasingly recognised that the value of a building is related to the way it supports the client organisation's ongoing business operations. Building services are central to the functional performance of buildings and provide the necessary conditions for the health, well-being, safety and security of the occupants. They frequently comprise several technologically distinct sub-systems, and their design and construction require the involvement of numerous disciplines and trades. Designers and contractors working on the same project are frequently employed by different companies. Materials and equipment are supplied by a diverse range of manufacturers. Facilities managers are responsible for the operation of the building services in use. Coordination between these participants is crucially important to achieve optimum performance, but it is too often neglected, which leaves room for serious faults; the need for effective integration is therefore important. Modern technology offers increasing opportunities for integrated personal-control systems for lighting, ventilation and security, as well as for interoperability between systems. Opportunities for a new mode of systems integration are provided by the emergence of PFI/PPP procurement frameworks. This paper attempts to establish how systems integration can be achieved in the process of designing, constructing and operating building services. The essence of the paper, therefore, is to envisage the emergent organisational responses to the realisation of building services as an interactive systems network.
Abstract:
The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tools available in modern interpretation software packages often employ artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably the data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited, in conjunction with other geological information, in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
Abstract:
This paper addresses the need for accurate predictions of the fault inflow, i.e. the number of faults found in consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions of fault inflow, managers could dynamically re-allocate resources between these different tasks in a more adequate way. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. Although these models were originally developed for traditional processes, their performance in highly iterative processes is investigated here. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models to fault data from three different industrial projects. One of the key findings of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model represents a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
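As an illustration of the "simple linear model" baseline, the snippet below fits a least-squares line to weekly fault counts and extrapolates the next week's inflow. The data and the one-step-ahead use are assumptions, not the projects studied in the paper.

```python
# Hypothetical baseline: predict next week's fault inflow with a least-squares
# line fitted to the fault counts observed so far.
import numpy as np

weekly_faults = np.array([12, 15, 11, 18, 21, 19, 24, 22])  # invented data
weeks = np.arange(len(weekly_faults))

slope, intercept = np.polyfit(weeks, weekly_faults, deg=1)
next_week = len(weekly_faults)
prediction = slope * next_week + intercept
print(f"predicted fault inflow for week {next_week}: {prediction:.1f}")
```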
Abstract:
In this work, a fault-tolerant control scheme is applied to an air-handling unit of a heating, ventilation and air-conditioning system. Using the multiple-model approach, it is possible to identify faults and to control the system effectively under both faulty and normal conditions. Using well-known techniques to model and control the process, this work focuses on the importance of the cost function in fault detection and on its influence on the reconfigurable controller. Experimental results show how the control of the terminal unit is affected in the presence of a fault, and how the recovery and reconfiguration of the control action deal with the effects of faults.
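The multiple-model idea can be sketched as follows: a bank of models, one per fault hypothesis, is evaluated in parallel, a cost function accumulates each model's output error, and the controller would then be reconfigured for the model with the lowest cost. This is a generic, hypothetical illustration; the cost function, model names and numbers are not those of the paper.

```python
# Hypothetical multiple-model fault detection: pick the fault hypothesis whose
# model best explains the measured output, using a squared-error cost.

def accumulated_cost(model, inputs, measured_outputs):
    """Sum of squared output errors for one candidate model."""
    return sum((model(u) - y) ** 2 for u, y in zip(inputs, measured_outputs))

def select_model(model_bank, inputs, measured_outputs):
    """Return the name of the model with the smallest accumulated cost."""
    costs = {name: accumulated_cost(m, inputs, measured_outputs)
             for name, m in model_bank.items()}
    return min(costs, key=costs.get), costs

# Static toy models of a heating coil: temperature rise per valve opening.
model_bank = {
    "normal":      lambda u: 2.0 * u,
    "stuck valve": lambda u: 0.0 * u,
    "fouled coil": lambda u: 1.2 * u,
}
inputs  = [0.2, 0.5, 0.8]
outputs = [0.25, 0.58, 0.93]          # roughly 1.2 * u -> "fouled coil"
best, costs = select_model(model_bank, inputs, outputs)
print(best, costs)
```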
Abstract:
An interconnection network with n nodes is four-pancyclic if it contains a cycle of length l for each integer l with 4 <= l <= n. An interconnection network is fault-tolerant four-pancyclic if the surviving network is four-pancyclic in the presence of faults. Fault-tolerant four-pancyclicity is a desirable property of interconnection networks because many classical parallel algorithms can be mapped onto such networks in a communication-efficient fashion, even in the presence of failing nodes or edges. Owing to several attractive properties compared with the hypercube of the same size, the Möbius cube has been proposed as a promising candidate for an interconnection topology. Hsieh and Chen [S.Y. Hsieh, C.H. Chen, Pancyclicity on Möbius cubes with maximal edge faults, Parallel Computing 30(3) (2004) 407-421] showed that an n-dimensional Möbius cube is four-pancyclic in the presence of up to n-2 faulty edges. In this paper, we show that an n-dimensional Möbius cube is four-pancyclic in the presence of up to n-2 faulty nodes. The obtained result is optimal in that, if n-1 nodes are removed, the surviving network may not be four-pancyclic.
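For intuition, four-pancyclicity can be verified by brute force on small graphs. The following hypothetical helper searches for a cycle of every length from 4 up to the number of surviving nodes after deleting a set of faulty nodes; it does not construct the Möbius cube itself and is only practical for tiny instances.

```python
# Brute-force check of four-pancyclicity on a small undirected graph given as
# an adjacency dict. Exponential search; intended only as an illustration.

def has_cycle_of_length(adj, length):
    """True if the graph contains a simple cycle on exactly `length` nodes."""
    def extend(path, visited):
        if len(path) == length:
            return path[0] in adj[path[-1]]      # can we close the cycle?
        return any(extend(path + [v], visited | {v})
                   for v in adj[path[-1]] if v not in visited)
    return any(extend([start], {start}) for start in adj)

def is_four_pancyclic(adj, faulty=()):
    """Check cycles of every length 4..n in the graph minus the faulty nodes."""
    surviving = {u: {v for v in nbrs if v not in faulty}
                 for u, nbrs in adj.items() if u not in faulty}
    n = len(surviving)
    return all(has_cycle_of_length(surviving, l) for l in range(4, n + 1))

# Toy example: a wheel graph (hub 0 joined to a 5-cycle) is four-pancyclic.
wheel = {0: {1, 2, 3, 4, 5},
         1: {0, 2, 5}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {0, 3, 5}, 5: {0, 4, 1}}
print(is_four_pancyclic(wheel))              # True
print(is_four_pancyclic(wheel, faulty={0}))  # no 4-cycle remains -> False
```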