972 results for loss detection
Abstract:
Loss-of-mains protection is an important component of the protection systems of embedded generation. The role of loss-of-mains is to disconnect the embedded generator from the utility grid in the event that connection to utility dispatched generation is lost. This is necessary for a number of reasons, including the safety of personnel during fault restoration and the protection of plant against out-of-synchronism reclosure to the mains supply. The incumbent methods of loss-of-mains protection were designed when the installed capacity of embedded generation was low, and known problems with nuisance tripping of the devices were considered acceptable because of the insignificant consequence to system operation. With the dramatic increase in the installed capacity of embedded generation over the last decade, the limitations of current islanding detection methods are no longer acceptable. This study describes a new method of loss-of-mains protection based on phasor measurement unit (PMU) technology, specifically using a low cost PMU device of the authors' design which has been developed for distribution network applications. The proposed method addresses the limitations of the incumbent methods, providing a solution that is free of nuisance tripping and has a zero non-detection zone. This system has been tested experimentally and is shown to be practical, feasible and effective. Threshold settings for the new method are recommended based on data acquired from both the Great Britain and Ireland power systems.
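The abstract does not spell out the trip criterion, so the following is only a minimal sketch of one plausible PMU-based check: islanding is declared when the frequency reported by the local PMU diverges from a remote reference PMU for a sustained number of reports. The threshold and hold count are hypothetical, not the settings recommended in the study.

    # Minimal sketch of a PMU-assisted loss-of-mains check (illustrative only).
    # The paper's actual criterion and thresholds are not given in the abstract;
    # here islanding is assumed when the local PMU frequency diverges from a
    # remote reference PMU for a sustained interval.

    def loss_of_mains_detected(local_freq_hz, remote_freq_hz,
                               threshold_hz=0.1, hold_samples=10):
        """Return True if |f_local - f_remote| exceeds threshold_hz for
        hold_samples consecutive reports (hypothetical settings)."""
        consecutive = 0
        for f_loc, f_rem in zip(local_freq_hz, remote_freq_hz):
            if abs(f_loc - f_rem) > threshold_hz:
                consecutive += 1
                if consecutive >= hold_samples:
                    return True
            else:
                consecutive = 0
        return False

    # Example: the local generator drifts away from the mains after sample 50.
    local = [50.0] * 50 + [50.3] * 30
    remote = [50.0] * 80
    print(loss_of_mains_detected(local, remote))  # True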
Abstract:
A simple derivatization methodology is shown to extend the application of surface-enhanced Raman spectroscopy (SERS) to the detection of trace concentration of contaminants in liquid form. Normally in SERS the target analyte species is already present in the molecular form in which it is to be detected and is extracted from solution to occupy sites of enhanced electromagnetic field on the substrate by means of chemisorption or drop-casting and subsequent evaporation of the solvent. However, these methods are very ineffective for the detection of low concentrations of contaminant in liquid form because the target (ionic) species (a) exhibits extremely low occupancy of enhancing surface sites in the bulk liquid environment and (b) coevaporates with the solvent. In this study, the target analyte species (acid) is detected via its solid derivative (salt) offering very significant enhancement of the SERS signal because of preferential deposition of the salt at the enhancing surface but without loss of chemical discrimination. The detection of nitric acid and sulfuric acid is demonstrated down to 100 ppb via reaction with ammonium hydroxide to produce the corresponding ammonium salt. This yields an improvement of ∼4 orders of magnitude in the low-concentration detection limit compared with liquid phase detection.
Abstract:
Background
Diabetic macular oedema (DMO) is a thickening of the central retina, or the macula, and is associated with long-term visual loss in people with diabetic retinopathy (DR). Clinically significant macular oedema (CSMO) is the most severe form of DMO. Almost 30 years ago, the Early Treatment Diabetic Retinopathy Study (ETDRS) found that CSMO, diagnosed by means of stereoscopic fundus photography, leads to moderate visual loss in one of four people within three years. It also showed that grid or focal laser photocoagulation to the macula halves this risk. Recently, intravitreal injection of antiangiogenic drugs has also been used to try to improve vision in people with macular oedema due to DR. Optical coherence tomography (OCT) is based on optical reflectivity and is able to image retinal thickness and structure, producing cross-sectional and three-dimensional images of the central retina. It is widely used because it provides objective and quantitative assessment of macular oedema, unlike the subjectivity of fundus biomicroscopic assessment, which is routinely used by ophthalmologists instead of photography. Optical coherence tomography is also used for quantitative follow-up of the effects of treatment of CSMO.
Objectives
To determine the diagnostic accuracy of OCT for detecting DMO and CSMO, defined according to ETDRS in 1985, in patients referred to ophthalmologists after DR is detected. In the update of this review we also aimed to assess whether OCT might be considered the new reference standard for detecting DMO.
Search methods
We searched the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA) and the NHS Economic Evaluation Database (NHSEED) (The Cochrane Library 2013, Issue 5), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to June 2013), EMBASE (January 1950 to June 2013), Web of Science Conference Proceedings Citation Index - Science (CPCI-S) (January 1990 to June 2013), BIOSIS Previews (January 1969 to June 2013), MEDION and the Aggressive Research Intelligence Facility database (ARIF). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 25 June 2013. We checked bibliographies of relevant studies for additional references.
Selection criteria
We selected studies that assessed the diagnostic accuracy of any OCT model for detecting DMO or CSMO in patients with DR who were referred to eye clinics. Diabetic macular oedema and CSMO were diagnosed by means of fundus biomicroscopy by ophthalmologists or stereophotography by ophthalmologists or other trained personnel.
Data collection and analysis
Three authors independently extracted data on study characteristics and measures of accuracy. We assessed data using random-effects hierarchical sROC meta-analysis models.
Main results
We included 10 studies (830 participants, 1387 eyes), published between 1998 and 2012. Prevalence of CSMO was 19% to 65% (median 50%) in nine studies with CSMO as the target condition. Study quality was often unclear or at high risk of bias for QUADAS 2 items, specifically regarding study population selection and the exclusion of participants with poor quality images. Applicability was unclear in all studies since the professionals referring patients and the results of prior testing were not reported. There was a specific 'unit of analysis' issue because both eyes of the majority of participants were included in the analyses as if they were independent. In nine studies providing data on CSMO (759 participants, 1303 eyes), pooled sensitivity was 0.78 (95% confidence interval (CI) 0.72 to 0.83) and specificity was 0.86 (95% CI 0.76 to 0.93). The median central retinal thickness cut-off we selected for data extraction was 250 µm (range 230 µm to 300 µm). Central CSMO was the target condition in all but two studies and thus our results cannot be applied to non-central CSMO. Data from three studies reporting accuracy for detection of DMO (180 participants, 343 eyes) were not pooled. Sensitivities and specificities were about 0.80 in two studies and were both 1.00 in the third study. Since this review was conceived, the role of OCT has changed and it has become a key ingredient of decision-making at all levels of ophthalmic care in this field. Moreover, disagreements between OCT and fundus examination are informative, especially false positives, which are referred to as subclinical DMO and are at higher risk of developing clinical CSMO.
Authors' conclusions
Using retinal thickness thresholds lower than 300 µm and ophthalmologists' fundus assessment as the reference standard, central retinal thickness measured with OCT was not sufficiently accurate to diagnose the central type of CSMO in patients with DR referred to retina clinics. However, OCT false positives are generally cases of subclinical DMO that cannot be detected clinically but still carry an increased risk of disease progression. Therefore, the increasing availability of OCT devices, together with their precision and their ability to inform on retinal layer structure, now makes OCT widely recognised as the new reference standard for assessment of DMO, even in some screening settings. Thus, this review will not be updated further.
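The accuracy figures above come from per-study two-by-two tables; the short sketch below shows, for illustration only, how sensitivity and specificity follow from an OCT central-retinal-thickness cut-off (such as the 250 µm value mentioned above) compared against a reference-standard diagnosis. The data are invented and the review's hierarchical sROC pooling is not reproduced.

    # Minimal sketch: per-study sensitivity/specificity for OCT-detected CSMO,
    # using a central retinal thickness cut-off (e.g. 250 um) against a
    # reference-standard diagnosis. The values below are made-up illustration
    # data, not data from the included studies.

    def accuracy(thicknesses_um, reference_positive, cutoff_um=250):
        tp = fp = fn = tn = 0
        for t, ref in zip(thicknesses_um, reference_positive):
            test_positive = t >= cutoff_um
            if test_positive and ref:
                tp += 1
            elif test_positive and not ref:
                fp += 1
            elif not test_positive and ref:
                fn += 1
            else:
                tn += 1
        sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
        specificity = tn / (tn + fp) if (tn + fp) else float("nan")
        return sensitivity, specificity

    sens, spec = accuracy([310, 240, 280, 220, 265, 200],
                          [True, False, True, False, True, False])
    print(f"sensitivity={sens:.2f} specificity={spec:.2f}")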
Abstract:
We present the GALEX detection of a UV burst at the time of explosion of an optically normal supernova (SN) IIP (PS1-13arp) from the Pan-STARRS1 survey at z = 0.1665. The temperature and luminosity of the UV burst match the theoretical predictions for shock breakout in a red supergiant (RSG), but with a duration a factor of ∼50 longer than expected. We compare the NUV light curve of PS1-13arp to previous GALEX detections of SNe IIP and find clear distinctions that indicate that the UV emission is powered by shock breakout, and not by the subsequent cooling envelope emission previously detected in these systems. We interpret the ∼1 day duration of the UV signal as a shock breakout in the wind of an RSG with a pre-explosion mass-loss rate of ∼10⁻³ M☉ yr⁻¹. This mass-loss rate is enough to prolong the duration of the shock breakout signal, but not enough to produce an excess in the optical plateau light curve or narrow emission lines powered by circumstellar interaction. This detection of nonstandard, potentially episodic high mass loss in an RSG SN progenitor has favorable consequences for the prospects of future wide-field UV surveys to detect shock breakout directly in these systems and to provide a sensitive probe of the pre-explosion conditions of SN progenitors.
Abstract:
Multicarrier Index Keying (MCIK) is a recently developed technique that modulates not only the subcarriers but also the indices of the subcarriers. In this paper, a novel low-complexity detection scheme for subcarrier indices is proposed for an MCIK system, achieving a substantial reduction in complexity over optimal maximum likelihood (ML) detection. For the performance evaluation, a closed-form expression for the pairwise error probability (PEP) of an active subcarrier index and a tight closed-form approximation of the average PEP over multiple subcarrier indices are derived. The theoretical results are validated using simulations, with a difference of less than 0.1 dB. Compared to the optimal ML detector, the proposed detection achieves a substantial reduction in complexity with a small loss in error performance (<= 0.6 dB).
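The abstract does not describe the proposed detector in detail; a common low-complexity alternative to exhaustive ML detection in index modulation is to rank subcarriers by received energy and declare the largest ones active. The sketch below illustrates that generic idea, not necessarily the authors' scheme.

    # Minimal sketch of low-complexity active-subcarrier-index detection for an
    # MCIK-style system: rank subcarriers by received energy and take the K
    # largest as active. This is a generic energy detector, not necessarily the
    # detector proposed in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    n_subcarriers, n_active, snr_db = 8, 2, 15
    active = rng.choice(n_subcarriers, size=n_active, replace=False)

    # Unit-energy symbols on the active subcarriers, zeros elsewhere.
    x = np.zeros(n_subcarriers, dtype=complex)
    x[active] = np.exp(1j * rng.uniform(0, 2 * np.pi, n_active))

    # Rayleigh fading plus AWGN.
    h = (rng.normal(size=n_subcarriers) + 1j * rng.normal(size=n_subcarriers)) / np.sqrt(2)
    noise_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_subcarriers)
                                      + 1j * rng.normal(size=n_subcarriers))
    y = h * x + noise

    # Energy detection: pick the K subcarriers with the largest |y_k|^2.
    detected = np.sort(np.argsort(np.abs(y) ** 2)[-n_active:])
    print("sent:", np.sort(active), "detected:", detected)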
Abstract:
This paper proposes a method for the detection and classification of multiple events in an electrical power system in real time, namely islanding, high-frequency events (loss of load) and low-frequency events (loss of generation). The method is based on principal component analysis of frequency measurements and employs a moving-window approach to combat the time-varying nature of power systems, thereby increasing overall situational awareness of the power system. Numerical case studies using both real data collected from the UK power system and simulated case studies constructed using DigSilent PowerFactory, covering islanding events as well as loss-of-load and generation-dip events, are used to demonstrate the reliability of the proposed method.
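A minimal sketch of the moving-window PCA idea on multi-site frequency measurements follows: each window is mean-centred, its leading principal component is computed, and a window is flagged when that component's strength departs from the recent baseline. The thresholds and the event-classification rules of the paper are not reproduced.

    # Minimal sketch of moving-window PCA event detection on frequency
    # measurements from several locations. Only the windowed PCA projection
    # step is shown; the paper's thresholds and islanding/loss-of-load/
    # loss-of-generation classification rules are not reproduced.
    import numpy as np

    def window_scores(freq, window=50):
        """freq: (n_samples, n_sites) frequency measurements in Hz.
        Returns the leading principal-component strength per window."""
        scores = []
        for start in range(0, freq.shape[0] - window + 1, window):
            block = freq[start:start + window]
            centred = block - block.mean(axis=0)
            # Leading singular value measures the dominant common variation.
            _, s, _ = np.linalg.svd(centred, full_matrices=False)
            scores.append(s[0])
        return np.array(scores)

    rng = np.random.default_rng(1)
    n, sites = 500, 4
    freq = 50.0 + 0.002 * rng.standard_normal((n, sites))
    freq[320:, :2] -= 0.15   # two areas step down: a loss-of-generation-like event

    s = window_scores(freq)
    baseline = np.median(s[:4])
    print("flagged windows:", np.where(s > 5 * baseline)[0])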
Abstract:
Non-technical loss is neither a problem with a trivial solution nor one of purely regional character, and its minimization underpins investment in product quality and the maintenance of power systems in the competitive environment introduced after the period of privatization on the national scene. In this paper, we show how to improve the training phase of a neural network-based classifier using a recently proposed meta-heuristic technique called Charged System Search, which is based on the interactions between electrically charged particles. The experiments were carried out in the context of non-technical loss in power distribution systems, on a dataset obtained from a Brazilian electrical power company, and demonstrate the robustness of the proposed technique compared with several other nature-inspired optimization techniques for training neural networks. It is thus possible to improve some applications on Smart Grids.
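The sketch below illustrates the general idea of training a classifier with a charge-based population search: candidate weight vectors carry charges derived from their fitness and are attracted toward better solutions. It is a loose, simplified stand-in for Charged System Search, uses a tiny linear classifier in place of the paper's neural network, and runs on synthetic data.

    # Loose sketch of training a tiny linear classifier with a population-based
    # optimizer in the spirit of Charged System Search (charges derived from
    # fitness, attraction toward better solutions). Simplified stand-in, not
    # the paper's exact CSS formulation; the data below are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "regular vs. irregular consumer" data: 2 features, binary label.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    def loss(w):
        """Logistic loss of a linear classifier with weights w[:2], bias w[2]."""
        z = np.clip(X @ w[:2] + w[2], -30, 30)
        p = 1.0 / (1.0 + np.exp(-z))
        eps = 1e-9
        return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    n_particles, dim, iters = 20, 3, 100
    pos = rng.normal(size=(n_particles, dim))

    for _ in range(iters):
        fit = np.array([loss(p) for p in pos])
        best, worst = fit.min(), fit.max()
        # Charge: best particle gets ~1, worst gets 0 (lower loss = higher charge).
        q = (worst - fit) / (worst - best + 1e-12)
        new_pos = pos.copy()
        for j in range(n_particles):
            force = np.zeros(dim)
            for i in range(n_particles):
                if fit[i] < fit[j]:                        # only better particles attract
                    r = np.linalg.norm(pos[i] - pos[j]) + 1e-12
                    force += q[i] * (pos[i] - pos[j]) / r  # unit direction scaled by charge
            new_pos[j] = pos[j] + 0.1 * rng.random(dim) * force
        pos = new_pos

    best_w = pos[np.argmin([loss(p) for p in pos])]
    acc = np.mean(((X @ best_w[:2] + best_w[2]) > 0) == y)
    print(f"training accuracy of the CSS-style trained classifier: {acc:.2f}")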
Abstract:
In recent years, vehicular cloud computing (VCC) has emerged as a new technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation, as vehicles themselves have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. Such security breaches can result in disastrous consequences such as loss of life or financial fraud. To address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of domains, we take a multimedia-based healthcare application to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one of the members of the group as a cluster-head. The cluster-heads then assist in efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, and the automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. A reward or penalty is given by the stochastic environment in which the automaton performs its actions, and the automaton updates its action probability vector after receiving this reinforcement signal. The proposed scheme was evaluated using extensive simulations in ns-2 with SUMO. The results obtained indicate that the proposed scheme yields an improvement of 10% in the detection rate of malicious nodes when compared with existing schemes.
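The reward/penalty update of the action probability vector can be illustrated with a standard linear reward-penalty (L_R-P) learning-automaton scheme; the actions, learning rates and toy environment below are hypothetical and are not taken from the paper.

    # Minimal sketch of a linear reward-penalty (L_R-P) learning-automaton
    # update of an action probability vector, of the kind used for the
    # clustering decisions described above. Actions, learning rates and the
    # environment are hypothetical illustrations.
    import random

    def update(p, chosen, rewarded, a=0.1, b=0.05):
        """Return a new probability vector after taking action `chosen`.
        a: reward step, b: penalty step (L_R-P scheme)."""
        r = len(p)
        new_p = p[:]
        if rewarded:
            for i in range(r):
                if i == chosen:
                    new_p[i] = p[i] + a * (1.0 - p[i])
                else:
                    new_p[i] = (1.0 - a) * p[i]
        else:
            for i in range(r):
                if i == chosen:
                    new_p[i] = (1.0 - b) * p[i]
                else:
                    new_p[i] = b / (r - 1) + (1.0 - b) * p[i]
        return new_p

    # Toy environment: action 0 ("elect this node as cluster-head") is rewarded
    # 80% of the time, action 1 only 30% of the time.
    reward_prob = [0.8, 0.3]
    p = [0.5, 0.5]
    random.seed(0)
    for _ in range(200):
        action = 0 if random.random() < p[0] else 1
        rewarded = random.random() < reward_prob[action]
        p = update(p, action, rewarded)
    print("action probabilities after learning:", [round(x, 3) for x in p])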
Abstract:
Background. In India, prevalence rates of dementia and prodromal amnestic Mild Cognitive Impairment (MCI) are 3.1% and 4.3%, respectively. Most Indians refer to the full spectrum of cognitive disorders simply as 'memory loss.' Barring prevention or cure, these conditions will rise rapidly with population aging. Evidence-based policies and practices can improve the lives of affected individuals and their caregivers, but will require timely and sustained uptake. Objectives. Framed by social cognitive theories of health behavior, this study explores the knowledge, attitudes and practices concerning cognitive impairment and related service use by older adults who screen positive for MCI, their primary caregivers, and health providers. Methods. I used the Montreal Cognitive Assessment to screen for cognitive impairment in memory camps in Mumbai. To achieve sampling diversity, I used maximum variation sampling. Ten adults aged 60+ who had no significant functional impairment but screened positive for MCI, and their caregivers, participated in separate focus groups. Four other such dyads and six doctors/traditional healers completed in-depth interviews. Data were translated from Hindi or Marathi to English and analyzed in Atlas.ti using Framework Analysis. Findings. Knowledge and awareness of cognitive impairment and available resources were very low. Physicians attributed the condition to disease-induced pathology while lay persons blamed brain malfunction due to normal aging. The main attitudes were that this condition is not a disease, is not serious and/or is not treatable, and that it evokes stigma toward and among impaired persons, their families and providers. Low knowledge and poor attitudes impeded help-seeking. Conclusions. Cognitive disorders of aging will take a heavy toll on private lives and public resources in developing countries. Early detection, accurate diagnosis, systematic monitoring and quality care are needed to compress the period of morbidity and promote quality of life. Key stakeholders provide essential insights into how scientific and indigenous knowledge and sociocultural attitudes affect the use and provision of resources.
Abstract:
Purpose: To investigate the accuracy of 4 clinical instruments in the detection of glaucomatous damage. Methods: 102 eyes of 55 test subjects (mean age 66.5 years, range 39 to 89) underwent Heidelberg Retinal Tomography (HRT III) (disc area < 2.43) and standard automated perimetry (SAP) using Octopus (Dynamic), Pulsar (TOP) and the Moorfields Motion Displacement Test (MDT) (ESTA strategy). Eyes were separated into three groups: 1) Healthy (H): IOP < 21 mmHg and healthy discs (clinical examination), 39 subjects, 78 eyes; 2) Glaucoma suspect (GS): suspicious discs (clinical examination), 12 subjects, 15 eyes; 3) Glaucoma (G): progressive structural or functional loss, 14 subjects, 20 eyes. Clinical diagnostic precision was examined using the cut-off associated with the p < 5% normative limit of the MD (Octopus/Pulsar), PTD (MDT) and MRA (HRT) analyses. The sensitivity, specificity and accuracy were calculated for each instrument. Results: See table. Conclusions: Despite the advantage of defining glaucoma suspects using clinical optic disc examination, the HRT did not yield significantly higher accuracy than the functional measures. HRT, MDT and Octopus SAP yielded higher accuracy than Pulsar perimetry, although the results did not reach statistical significance. Further studies are required to investigate the structure-function correlations between these instruments.
Abstract:
This master's thesis presents a new unsupervised approach for detecting and segmenting urban regions in hyperspectral images. The proposed method consists of three steps. First, in order to reduce the computational cost of our algorithm, a colour image of the spectral content is estimated. To this end, a non-linear dimensionality reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is carried out to produce a colour display of each hyperspectral image. Next, to discriminate urban regions from non-urban regions, the second step consists of extracting a few discriminant (and complementary) features from this colour hyperspectral image. To this end, we extracted a series of discriminant parameters describing the characteristics of an urban area, which is mainly composed of manufactured objects with simple, regular geometric shapes. We used textural features based on grey levels, gradient magnitude or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and the local detection of straight line segments. To further reduce the computational complexity of our approach and avoid the 'curse of dimensionality' problem that arises when clustering high-dimensional data, we decided, in the last step, to classify each textural or structural feature individually with a simple K-means procedure and then to combine these coarse segmentations, obtained at low cost, with an efficient segmentation map fusion model. The experiments reported here show that this strategy is visually effective and compares favourably with other methods for detecting and segmenting urban areas from hyperspectral images.
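A minimal sketch of the final step described above: each texture or structure feature map is clustered on its own with K-means (two clusters, urban versus non-urban) and the coarse label maps are combined, here by a simple per-pixel majority vote that merely stands in for the segmentation-map fusion model used in the thesis. The feature maps are synthetic and scikit-learn is assumed to be available.

    # Minimal sketch of the last stage: each feature map is clustered separately
    # with K-means (K = 2) and the coarse label maps are combined by a per-pixel
    # majority vote (after orienting each clustering so that label 1 is the
    # cluster with the higher mean feature response). This vote only stands in
    # for the fusion model used in the thesis.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    h, w, n_features = 64, 64, 3
    features = rng.random((n_features, h, w))
    features[:, 20:44, 20:44] += 1.0      # synthetic "urban" block with stronger response

    votes = np.zeros((h, w), dtype=int)
    for f in features:
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(f.reshape(-1, 1))
        labels = km.labels_.reshape(h, w)
        # Orient labels so that 1 = cluster with the higher mean feature value.
        if f[labels == 1].mean() < f[labels == 0].mean():
            labels = 1 - labels
        votes += labels

    urban_mask = votes > n_features // 2   # majority vote across the feature-wise maps
    print("urban pixels found:", int(urban_mask.sum()), "of", h * w)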
Abstract:
In recent years, photonics has emerged as an essential technology related to such diverse fields as laser technology, fiber optics, communication, optical signal processing, computing, entertainment and consumer electronics. The availability of semiconductor lasers and low-loss fibers has also revolutionized the field of sensor technology, including telemetry. Fiber optic sensors are sensitive, reliable, lightweight and accurate devices which find applications in a wide range of areas such as biomedicine, aviation, surgery and pollution monitoring, apart from areas in basic sciences. The present thesis deals with the design, fabrication and characterization of a variety of cost-effective and sensitive fiber optic sensors for the trace detection of certain environmental pollutants in air and water. The sensor design is carried out using techniques such as evanescent waves, microbending and long-period gratings.
Abstract:
The recent discovery of the contribution of alpha synuclein in the auditory system prompted further investigation of its functional role. Auditory brainstem response (ABR) and gap detection testing were completed on wild-type and transgenic M83 mice to assess the role of alpha synuclein in noise-induced hearing loss and central auditory function.
Abstract:
The General Packet Radio Service (GPRS) has been developed for the mobile radio environment to allow the migration from the traditional circuit-switched connection to a more efficient packet-based communication link, particularly for data transfer. GPRS requires the addition of not only the GPRS software protocol stack, but also more baseband functionality for the mobile, as new coding schemes have been defined, along with uplink status flag detection, multislot operation and dynamic coding scheme detection. This paper concentrates on evaluating the performance of the GPRS coding scheme detection methods in the presence of a multipath fading channel with a single co-channel interferer, as a function of various soft-bit data widths. It has been found that compressing the soft-bit data widths at the output of the equalizer to save memory can influence the likelihood decision of the coding scheme detection function and hence contribute to the overall performance loss of the system. Coding scheme detection errors can therefore force the channel decoder either to select the incorrect decoding scheme or to have no clear decision on which coding scheme to use, resulting in the decoded radio block failing the block check sequence and contributing to the block error rate. For correct performance simulation, the performance of the full coding scheme detection must be taken into account.
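The effect of compressing soft-bit widths can be illustrated with a simple saturating quantizer: narrowing the width coarsens the equalizer's soft outputs that feed the likelihood-based coding scheme decision. The scaling and widths below are illustrative, not those used in the paper.

    # Illustrative sketch of soft-bit width compression: equalizer soft outputs
    # are scaled, rounded and saturated to a signed n-bit integer range. Narrow
    # widths coarsen the values available to the coding-scheme-detection
    # likelihood decision. Scaling and widths are illustrative only.
    import numpy as np

    def quantize_soft_bits(llr, n_bits, scale=8.0):
        """Map real-valued soft bits to signed n-bit integers with saturation."""
        max_level = 2 ** (n_bits - 1) - 1
        q = np.round(llr * scale)
        return np.clip(q, -max_level - 1, max_level).astype(int)

    rng = np.random.default_rng(0)
    llr = rng.normal(scale=2.0, size=8)          # example equalizer soft outputs
    for n_bits in (8, 4, 2):
        print(n_bits, "bits:", quantize_soft_bits(llr, n_bits))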
Abstract:
This paper specifically examines the implantation of a microelectrode array into the median nerve of the left arm of a healthy male volunteer. The objective was to establish a bi-directional link between the human nervous system and a computer, via a unique interface module. This is the first time that such a device has been used with a healthy human. The aim of the study was to assess the efficacy, compatibility, and long term operability of the neural implant in allowing the subject to perceive feedback stimulation and for neural activity to be detected and processed such that the subject could interact with remote technologies. A case study demonstrating real-time control of an instrumented prosthetic hand by means of the bi-directional link is given. The implantation did not result in infection, and scanning electron microscope images of the implant post extraction have not indicated significant rejection of the implant by the body. No perceivable loss of hand sensation or motion control was experienced by the subject while the implant was in place, and further testing of the subject following the removal of the implant has not indicated any measurable long term defects. The implant was extracted after 96 days. Copyright © 2004 John Wiley & Sons, Ltd.