924 results for Technical Diagnostics
Abstract:
The operation of technical processes requires increasingly advanced supervision and fault diagnostics to improve reliability and safety. This paper gives an introduction to the field of fault detection and diagnostics and provides a short classification of methods. The growing complexity and functional importance of inertial navigation systems (INS) lead to high losses when the equipment fails. The paper is devoted to the development of an INS diagnostics system that allows the cause of a malfunction to be identified. The practical realization of this system is a software package performing a set of multidimensional information analyses. The project consists of three parts: a subsystem for analysis, a subsystem for data collection, and a universal interface for an open-architecture realization. To improve diagnostics on small analysis samples, new approaches will be applied, based on voting among pattern recognition algorithms and on the correlations between target and input parameters. The system is currently under development.
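The voting idea mentioned in this abstract can be illustrated with a minimal sketch (the class names, centroids, and feature values below are hypothetical, not from the paper): several simple pattern-recognition classifiers each label a measurement vector, and the majority vote is taken as the diagnosed fault class.

```python
from collections import Counter

# Hypothetical sketch of a voting scheme: each classifier "bank" holds
# class centroids trained on a different feature subset; the fault class
# is decided by majority vote across banks.
def nearest_centroid(x, centroids):
    # Classify x as the class whose centroid is closest (squared Euclidean).
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: dist(x, centroids[c]))

def majority_vote(x, classifier_banks):
    # Collect one vote per bank; ties resolved by first-seen order.
    votes = [nearest_centroid(x, bank) for bank in classifier_banks]
    return Counter(votes).most_common(1)[0][0]

# Toy example: two invented fault classes in a 2-D feature space.
bank_a = {"gyro_drift": (0.0, 0.0), "accel_bias": (1.0, 1.0)}
bank_b = {"gyro_drift": (0.1, -0.1), "accel_bias": (0.9, 1.1)}
bank_c = {"gyro_drift": (-0.1, 0.1), "accel_bias": (1.1, 0.9)}

print(majority_vote((0.2, 0.1), [bank_a, bank_b, bank_c]))  # gyro_drift
```

Voting across classifiers trained on different feature subsets is one common way to stabilize decisions when the analysis sample is small, which appears to be the motivation described in the abstract.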
Abstract:
In the light of emerging and overlooked infectious diseases and widespread drug resistance, diagnostics have become increasingly important in supporting surveillance, disease control and outbreak management programs. In many low-income countries the diagnostic service has been a neglected part of health care, often lacking in quantity and quality, or even non-existent. High-income countries have exploited few of their advanced technical abilities for the much-needed development of low-cost, rapid diagnostic tests to improve the accuracy of diagnosis and accelerate the start of appropriate treatment. As is now also recognized by the World Health Organization, investment in the development of affordable diagnostic tools is urgently needed to further our ability to control a variety of diseases that form a major threat to humanity. The Royal Tropical Institute's Department of Biomedical Research aims to contribute to the health of people living in the tropics. To this end, its multidisciplinary group of experts focuses on the diagnosis of diseases that are major health problems in low-income countries. In partnership we develop, improve and evaluate simple and cheap diagnostic tests, and perform epidemiological studies. Moreover, we advise and support others - especially those in developing countries - in their efforts to diagnose infectious diseases.
Abstract:
Over the last three decades, cytogenetic analysis of malignancies has become an integral part of disease evaluation and prediction of prognosis or responsiveness to therapy. In most diagnostic laboratories, conventional karyotyping, in conjunction with targeted fluorescence in situ hybridization analysis, is routinely performed to detect recurrent aberrations with prognostic implications. However, the genetic complexity of cancer cells requires a sensitive genome-wide analysis, enabling the detection of small genomic changes in a mixed cell population, as well as of regions of homozygosity. The advent of comprehensive high-resolution genomic tools, such as molecular karyotyping using comparative genomic hybridization or single-nucleotide polymorphism microarrays, has overcome many of the limitations of traditional cytogenetic techniques and has been used to study complex genomic lesions in, for example, leukemia. The clinical impact of the genomic copy-number and copy-neutral alterations identified by microarray technologies is growing rapidly and genome-wide array analysis is evolving into a diagnostic tool, to better identify high-risk patients and predict patients' outcomes from their genomic profiles. Here, we review the added clinical value of an array-based genome-wide screen in leukemia, and discuss the technical challenges and an interpretation workflow in applying arrays in the acquired cytogenetic diagnostic setting.
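The kind of copy-number calling that such genome-wide arrays enable can be roughly illustrated as follows. This is a toy sketch, not any platform's actual pipeline: the sliding-window size and log2-ratio thresholds are invented for illustration only.

```python
# Toy illustration of copy-number calling from per-probe log2 ratios:
# a sliding-window mean is compared against fixed gain/loss thresholds.
# Window size and thresholds below are arbitrary, purely illustrative.
def call_copy_number(log2_ratios, window=3, gain=0.3, loss=-0.3):
    calls = []
    for i in range(len(log2_ratios) - window + 1):
        mean = sum(log2_ratios[i:i + window]) / window
        if mean >= gain:
            calls.append("gain")
        elif mean <= loss:
            calls.append("loss")
        else:
            calls.append("neutral")
    return calls

# Toy profile: a run of elevated ratios suggests a gained segment,
# a run of depressed ratios a lost one.
profile = [0.0, 0.05, 0.4, 0.5, 0.45, -0.4, -0.5, -0.45, 0.0]
print(call_copy_number(profile))
```

Real diagnostic pipelines use statistical segmentation rather than fixed thresholds, but the sketch conveys why array resolution matters: small genomic changes only become callable when enough consecutive probes support them.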
Abstract:
Tuberculosis is unique among the major infectious diseases in that it lacks accurate rapid point-of-care diagnostic tests. Failure to control the spread of tuberculosis is largely due to our inability to detect and treat all infectious cases of pulmonary tuberculosis in a timely fashion, allowing continued Mycobacterium tuberculosis transmission within communities. Currently recommended gold-standard diagnostic tests for tuberculosis are laboratory based, and multiple investigations may be necessary over a period of weeks or months before a diagnosis is made. Several new diagnostic tests have recently become available for detecting active tuberculosis disease, screening for latent M. tuberculosis infection, and identifying drug-resistant strains of M. tuberculosis. However, progress toward a robust point-of-care test has been limited, and novel biomarker discovery remains challenging. In the absence of effective prevention strategies, high rates of early case detection and subsequent cure are required for global tuberculosis control. Early case detection is dependent on test accuracy, accessibility, cost, and complexity, but also depends on the political will and funder investment to deliver optimal, sustainable care to those worst affected by the tuberculosis and human immunodeficiency virus epidemics. This review highlights unanswered questions, challenges, recent advances, unresolved operational and technical issues, needs, and opportunities related to tuberculosis diagnostics.
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ε. Here µ(x) is the mean response at the predictor variable value X = x, and ε = Y - µ(X) is the error. In classical regression analysis, both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo, et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant, but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ε, X = Z + η, where η and ε are random errors with E(ε) = 0, X and η are d-dimensional, and Z is the observable d-dimensional r.v.
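A property of the Berkson model worth noting in the linear case: since E(η | Z) = 0, we have E(Y | Z) = a + bZ when µ(x) = a + bx, so ordinary regression of Y on the observed Z still recovers the slope. A minimal simulation, with all parameters chosen arbitrarily for illustration:

```python
import random

# Minimal Berkson-model simulation (hypothetical parameters): Z is
# observed, the true predictor is X = Z + eta, and Y = mu(X) + eps
# with a linear mean mu(x) = a + b*x.
random.seed(0)

a, b = 2.0, 1.5
n = 20000
Z = [random.uniform(0, 10) for _ in range(n)]
X = [z + random.gauss(0, 1) for z in Z]            # Berkson: X = Z + eta
Y = [a + b * x + random.gauss(0, 0.5) for x in X]  # Y = mu(X) + eps

# Ordinary least squares of Y on the *observed* Z. Because
# E(Y | Z) = a + b*Z in the linear case, the slope estimate
# stays close to the true b.
zbar = sum(Z) / n
ybar = sum(Y) / n
b_hat = sum((z - zbar) * (y - ybar) for z, y in zip(Z, Y)) / \
        sum((z - zbar) ** 2 for z in Z)
a_hat = ybar - b_hat * zbar
print(round(b_hat, 2), round(a_hat, 2))
```

For nonlinear µ this unbiasedness no longer holds, which is precisely why fitting a general parametric model under Berkson error, as the talk proposes, requires more care.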
Abstract:
Fluorescence is a troublesome side effect in laboratory Raman studies on sulfuric acid solutions and aerosol particles. We performed experiments showing that organic matter induces fluorescence in H2SO4/H2O solutions. The intensity of the fluorescence signal appears to be almost independent of the concentration of the organic substances, but depends strongly on the sulfuric acid concentration. The ubiquity of organic substances in the atmosphere, their relatively high abundance, and the insensitivity of the fluorescence with respect to their concentrations will render most acidic natural aerosols subject to absorption and fluorescence, possibly influencing climate forcing. We show that, while fluorescence may in the future become a valuable tool of aerosol diagnostics, the concurrent absorption is too small to significantly affect the atmosphere's radiative balance.
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with great performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy products packaged in boxes, such as food or cigarettes. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact with each other in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for machine maintenance operations. The kinds of facilities that designers can find directly on the market, in terms of software component libraries, do in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been adopting this approach, as testified by some IEC standards (IEC 61131-3, IEC 61499) which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are inherently more vulnerable. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance.
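The Discrete Event Systems diagnoser idea referred to here can be sketched with a toy automaton. The states, events, and fault model below are hypothetical, invented purely for illustration: the plant has an unobservable fault event "f", and the diagnoser tracks every (state, fault-occurred?) pair consistent with the observed event sequence, reporting a fault only when all consistent explanations include it.

```python
# Toy discrete-event diagnoser (hypothetical automaton, not from the
# thesis). "f" is an unobservable fault event; the diagnoser maintains
# the set of (state, faulty?) estimates consistent with the observations.

TRANS = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "f"): "degraded",      # unobservable fault transition
    ("degraded", "stop"): "halted",
    ("degraded", "alarm"): "halted",   # "alarm" is only possible post-fault
}
UNOBSERVABLE = {"f"}

def unobs_closure(estimates):
    # Extend the estimate set with states reachable via unobservable events.
    closed = set(estimates)
    frontier = list(estimates)
    while frontier:
        state, faulty = frontier.pop()
        for (s, e), t in TRANS.items():
            if s == state and e in UNOBSERVABLE and (t, True) not in closed:
                closed.add((t, True))
                frontier.append((t, True))
    return closed

def diagnose(observed_events, initial="idle"):
    estimates = unobs_closure({(initial, False)})
    for ev in observed_events:
        estimates = unobs_closure({
            (TRANS[(s, ev)], faulty)
            for s, faulty in estimates if (s, ev) in TRANS
        })
    flags = {faulty for _, faulty in estimates}
    return "F" if flags == {True} else ("N" if flags == {False} else "uncertain")

print(diagnose(["start", "stop"]))   # uncertain: "stop" fits both branches
print(diagnose(["start", "alarm"]))  # F: only the faulty branch explains it
```

Online diagnosers of this kind are what makes the active fault tolerant architecture possible: as soon as the verdict becomes "F", the supervisor can trigger reconfiguration.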
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function, guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements in the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach is presented, based on Discrete Event Systems, for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
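The rotation step can be sketched on toy numbers as follows. This sketch assumes the factorization V^{-1} = L L^T with lower-triangular L and applies L^T to the residual vector, which gives the rotated residuals identity covariance under a correctly specified model; the paper's exact triangular convention may differ, and the covariance matrix and residuals below are invented for illustration.

```python
import math

# Toy sketch of "rotated" residuals: given marginal covariance V and
# residual vector r, factor V^{-1} = L L^T and form L^T r, whose
# covariance is L^T V L = I when the model is correctly specified.

def cholesky(A):
    # Lower-triangular Cholesky factor L with A = L L^T.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def invert(A):
    # Inverse of a small positive-definite matrix via Gauss-Jordan
    # elimination (adequate for this illustration).
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * p for v, p in zip(M[r], M[col])]
    return [row[n:] for row in M]

# Toy marginal covariance of two correlated residuals, plus a residual vector.
V = [[1.0, 0.5], [0.5, 1.0]]
resid = [0.8, -0.2]

L = cholesky(invert(V))
# Apply L^T to the residual vector to decorrelate it.
rotated = [sum(L[k][i] * resid[k] for k in range(2)) for i in range(2)]
print([round(r, 3) for r in rotated])
```

The decorrelated residuals can then be pooled into a single empirical CDF, which is what makes the graphical diagnostic in the paper possible even when all observations are correlated.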
Abstract:
Within Africa, the burden of heart failure is significant. This arises from the increase in cardiovascular disease and associated risk factors such as hypertension and diabetes, as well as causes of heart failure which are particular to sub-Saharan Africa, such as endomyocardial fibrosis. The lack of access to echocardiography and other imaging modalities, from a cost and technical perspective, combined with the predominantly rural nature of many countries with poor transport links, means that the vast majority of people never obtain an appropriate diagnosis. Similarly, research has been limited on the causes and treatment of heart failure in Africa and in particular endemic causes such as EMF and rheumatic heart disease. This review outlines the burden of heart failure in Africa and highlights the opportunity to expand diagnosis through the use of biomarkers, in particular natriuretic peptides. This builds on the success of point-of-care testing in human immunodeficiency virus and tuberculosis which have been extensively deployed in community settings in Africa.
Abstract:
The main theme covered by this dissertation is safety, set in the context of automatic machinery for secondary woodworking. The thesis describes in detail the project of a software module for CNC machining centers to protect the operator against hazards and to report errors in the machine safety management. Its design has been developed during an internship at SCM Group technical department. The development of the safety module is addressed step by step in a detailed way: first the company and the reference framework are introduced and then all the design choices are explained and justified. The discussion begins with a detailed analysis of the standards concerning woodworking machines and safety-related software. In this way, a clear and linear procedure can be established to develop and implement the internal structure of the module, its interface, and its application to specific safety-critical conditions. Afterwards, particular attention is paid to software testing, with the development of a comprehensive test procedure for the module, and to diagnostics, especially oriented towards signal management in IoT mode. Finally, the safety module is used as an anti-regression tool to initiate a design improvement of the machine control program. The refactoring steps performed in the process are explained in detail and the SCENT approach is introduced to test the result.
Abstract:
Context. Compact groups of galaxies are entities that have high densities of galaxies and serve as laboratories to study galaxy interactions, intergalactic star formation and galaxy evolution. Aims. The main goal of this study is to search for young objects in the intragroup medium of seven compact groups of galaxies: HCG 2, 7, 22, 23, 92, 100 and NGC 92 as well as to evaluate the stage of interaction of each group. Methods. We used Fabry-Perot velocity fields and rotation curves together with GALEX NUV and FUV images and optical R-band and HI maps. Results. (i) HCG 7 and HCG 23 are in early stages of interaction; (ii) HCG 2 and HCG 22 are mildly interacting; and (iii) HCG 92, HCG 100 and NGC 92 are in late stages of evolution. We find that all three evolved groups contain populations of young blue objects in the intragroup medium, consistent with ages < 100 Myr, of which several are younger than < 10 Myr. We also report the discovery of a tidal dwarf galaxy candidate in the tail of NGC 92. These three groups, besides containing galaxies that have peculiar velocity fields, also show extended HI tails. Conclusions. Our results indicate that the advanced stage of evolution of a group, together with the presence of intragroup HI clouds, may lead to star formation in the intragroup medium. A table containing all intergalactic HII regions and tidal dwarf galaxies confirmed to date is appended.
Abstract:
We present the first simultaneous measurements of the Thomson scattering and electron cyclotron emission radiometer diagnostics performed at TCABR tokamak with Alfven wave heating. The Thomson scattering diagnostic is an upgraded version of the one previously installed at the ISTTOK tokamak, while the electron cyclotron emission radiometer employs a heterodyne sweeping radiometer. For purely Ohmic discharges, the electron temperature measurements from both diagnostics are in good agreement. Additional Alfven wave heating does not affect the capability of the Thomson scattering diagnostic to measure the instantaneous electron temperature, whereas measurements from the electron cyclotron emission radiometer become underestimates of the actual temperature values. (C) 2010 American Institute of Physics. [doi:10.1063/1.3494379]
Abstract:
In this study, we evaluated alternative technical markers for the motion analysis of the pelvic segment. Thirteen subjects walked eight times while tri-dimensional kinematics were recorded for one stride of each trial. Five marker sets were evaluated, and we compared the tilt, obliquity, and rotation angles of the pelvis segment: (1) standard: markers at the anterior and posterior superior iliac spines (ASIS and PSIS); (2) markers at the PSIS and at the hip joint centers, HJCs (estimated by a functional method and described with clusters of markers at the thighs); (3) markers at the PSIS and HJCs (estimated by a predictive method and described with clusters of markers at the thighs); (4) markers at the PSIS and HJCs (estimated by a predictive method and described with skin-mounted markers at the thighs based on the Helen-Hayes marker set); (5) markers at the PSIS and at the iliac spines. Concerning the pelvic angles, evaluation of the alternative technical marker sets evinced that all marker sets demonstrated similar precision across trials (about 1 degrees) but different accuracies (ranging from 1 degrees to 3 degrees) in comparison to the standard marker set. We suggest that all the investigated marker sets are reliable alternatives to the standard pelvic marker set. (C) 2009 Elsevier Ltd. All rights reserved.