930 results for Line and edge detection
Abstract:
Two proton accelerators have recently been put into operation in Bern: an 18 MeV cyclotron and a 2 MeV RFQ linac. The commercial IBA 18/18 cyclotron, equipped with a purpose-designed 6 m long external beam line ending in a separate bunker, will provide beams for routine 18F and other PET radioisotope production as well as for novel developments in detectors, radiation biophysics, radioprotection, radiochemistry and radiopharmacy. The accelerator is embedded in a complex building hosting two physics laboratories and four Good Manufacturing Practice (GMP) laboratories. This project is the result of a successful collaboration between the Inselspital, the University of Bern and private investors, aimed at establishing a combined medical and research centre able to provide cutting-edge technologies in medical imaging and cancer radiation therapy. The cyclotron is complemented by the RFQ, whose primary goals are elemental analysis via Particle Induced Gamma Emission (PIGE) and the detection of potentially dangerous materials with high nitrogen content using the Gamma-Resonant Nuclear Absorption (GRNA) technique. In this context, beam instrumentation devices have been developed, in particular an innovative beam profile monitor based on doped silica fibres and a setup for emittance measurements using the pepper-pot technique. On this basis, the establishment of a proton therapy centre on the campus of the Inselspital is at an advanced stage of study.
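As a hedged illustration of the pepper-pot technique mentioned above, the sketch below reconstructs beamlet divergences from hole and screen-spot positions and computes a geometric RMS emittance. The geometry, intensities and numbers are hypothetical and do not describe the Bern setup.

```python
import numpy as np

def rms_emittance(x, xp, weights):
    """Geometric RMS emittance from positions x (m) and divergences xp (rad),
    weighted by beamlet intensity: sqrt(<x^2><x'^2> - <x x'>^2)."""
    w = weights / weights.sum()
    mx, mxp = np.sum(w * x), np.sum(w * xp)
    sxx = np.sum(w * (x - mx) ** 2)
    spp = np.sum(w * (xp - mxp) ** 2)
    sxp = np.sum(w * (x - mx) * (xp - mxp))
    return np.sqrt(sxx * spp - sxp ** 2)

# Hypothetical pepper-pot geometry: hole positions (m), beamlet centroids
# measured on the screen (m), beamlet intensities, and a drift distance L (m).
holes = np.array([-2e-3, -1e-3, 0.0, 1e-3, 2e-3])
spots = np.array([-2.4e-3, -1.1e-3, 0.1e-3, 1.2e-3, 2.5e-3])
intensities = np.array([0.5, 0.9, 1.0, 0.8, 0.4])
L = 0.10
xp = (spots - holes) / L          # small-angle divergence of each beamlet
print(f"RMS emittance ~ {rms_emittance(holes, xp, intensities):.2e} m*rad")
```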
Abstract:
Brain injury constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence and survival rate after the initial critical phases make it a prevalent problem that needs to be addressed. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020.
Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technologies into neurorehabilitation aims at a new paradigm focused on designing intensive, personalized, monitored and evidence-based treatments, since these four characteristics are what ensure treatment effectiveness. Contrary to most medical disciplines, there are no associations of symptoms and signs of cognitive impairment that can guide the therapist. Currently, neurorehabilitation treatments are planned considering the results obtained from a neuropsychological assessment battery, which evaluates the impairment of each cognitive function (memory, attention, executive functions, etc.). The research line this PhD thesis falls under aims to design and develop a cognitive profile based not only on the results obtained in the assessment battery, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from medical imaging studies, such as magnetic resonance. The cognitive profile used to design these treatments therefore integrates personalized and evidence-based information. Neuroimaging techniques represent an essential tool to identify lesions and generate this type of cognitive dysfunctional profile. Manual delineation is the classical approach to identifying brain anatomical regions, but it presents several problems related to inconsistencies across different clinicians, time and repeatability. Automated delineation is done by registering brains to one another or to a template. However, when imaging studies contain lesions, intensity abnormalities and location alterations reduce the performance of most registration algorithms based on intensity parameters. Thus, specialists may have to interact manually with the imaging studies to select landmarks (called singular points in this PhD thesis) or identify regions of interest; these two solutions share the drawbacks of the manual approaches mentioned before. Moreover, these registration algorithms do not allow large, distributed deformations, which may also appear when a stroke or a traumatic brain injury (TBI) occurs. This PhD thesis focuses on the design, development and implementation of a new methodology to automatically identify lesions in anatomical structures. This methodology integrates algorithms whose main objective is to generate objective and reproducible results. It is divided into four stages: pre-processing, singular point identification, registration and lesion detection. Pre-processing stage. In this first stage, the aim is to standardize all input data in order to be able to draw valid conclusions from the results; this stage therefore has a direct impact on the final results. It consists of three steps: skull stripping, intensity normalization and spatial normalization. Singular point identification. This stage automates the identification of anatomical points (singular points), replacing their manual identification by the clinician. This automation makes it possible to identify a greater number of points, which results in more information; removes the factor associated with inter-subject variability, so the results are reproducible and objective; and eliminates the time spent on manual marking.
This PhD thesis proposes an algorithm to automatically identify singular points (a descriptor) based on a multi-detector approach. The algorithm contains multi-parametric (spatial and intensity) information and has been compared with similar algorithms from the state of the art. Registration. The goal of this stage is to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this PhD thesis is based on descriptors; its main objective is to compute a vector field that introduces distributed deformations (changes in different image regions) as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects. The results obtained are promising and represent a new context for the automatic identification of anatomical structures. Lesion identification. This final stage identifies those anatomical structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal state. To this end, a statistical study of the atlas to be used is performed to establish the statistical parameters of normality associated with location and area. The anatomical structures that can be identified depend on the structures delineated in the atlas; the proposed methodology is independent of the selected atlas. Overall, this PhD thesis corroborates the postulated research hypotheses regarding the automatic identification of lesions based on structural medical imaging studies, specifically magnetic resonance studies. Based on these foundations, new research fields can be proposed to improve the automatic detection of lesions in brain injury.
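As a minimal sketch of the lesion-identification stage described above, the snippet below flags structures whose centroid location or area deviates from atlas-derived normality parameters. The Gaussian z-score test, the threshold and all example values are assumptions for illustration; the abstract does not detail the exact statistical model used.

```python
import numpy as np

def flag_lesioned(structures, atlas_stats, z_thresh=2.5):
    """Flag structures whose centroid or area deviates from the atlas-derived
    normal range. atlas_stats maps a structure name to
    (mean_centroid, std_centroid, mean_area, std_area); all hypothetical."""
    flagged = []
    for name, (centroid, area) in structures.items():
        mu_c, sd_c, mu_a, sd_a = atlas_stats[name]
        z_loc = np.linalg.norm((np.asarray(centroid) - mu_c) / sd_c)
        z_area = abs(area - mu_a) / sd_a
        if z_loc > z_thresh or z_area > z_thresh:
            flagged.append(name)
    return flagged

# Hypothetical atlas statistics and one subject's delineated structure.
atlas = {"hippocampus_L": (np.array([30.0, -20.0, -15.0]),
                           np.array([2.0, 2.0, 2.0]), 3500.0, 300.0)}
subject = {"hippocampus_L": ([38.0, -20.0, -15.0], 2400.0)}
print(flag_lesioned(subject, atlas))   # -> ['hippocampus_L']
```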
Abstract:
Background: The Melbourne Edge Test (MET) is a portable forced-choice edge-detection contrast sensitivity (CS) test. The original externally illuminated paper test has been superseded by a backlit version. The aim of this study was to establish normative values for age and to assess change with visual impairment. Method: The MET was administered to 168 people with normal vision (18-93 years old) and 93 patients with visual impairment (39-97 years old). Distance visual acuity (VA) was measured with a logMAR chart. Results: In eyes without disease, MET CS was stable until the age of 50 years (23.8 ± 0.7 dB), after which it decreased at a rate of ≈1.5 dB per decade. Compared with normative values, people with low vision were found to have significantly reduced CS, which could not be totally accounted for by reduced VA. Conclusions: The MET provides a quick and easy measure of CS, which highlights a reduction in visual function that may not be detectable using VA measurements. © 2004 The College of Optometrists.
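For illustration only, the reported norms can be encoded as a small helper returning the expected MET score at a given age; treating the decline as exactly linear beyond age 50 is an assumption.

```python
def expected_met_cs(age_years: float) -> float:
    """Approximate normative MET contrast sensitivity (dB) from the reported
    norms: stable at 23.8 dB up to age 50, then ~1.5 dB lost per decade."""
    if age_years <= 50:
        return 23.8
    return 23.8 - 1.5 * (age_years - 50) / 10.0

# e.g. an 80-year-old would be expected to score about 19.3 dB
print(round(expected_met_cs(80), 1))
```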
Abstract:
AIMS: Mutation detection accuracy has been described extensively; however, it is surprising that pre-PCR processing of formalin-fixed paraffin-embedded (FFPE) samples has not been systematically assessed in a clinical context. We designed a RING trial to (i) investigate pre-PCR variability, (ii) correlate pre-PCR variation with EGFR/BRAF mutation testing accuracy and (iii) investigate causes for the observed variation. METHODS: 13 molecular pathology laboratories were recruited. 104 blinded FFPE curls, including engineered FFPE curls, cell-negative FFPE curls and control FFPE tissue samples, were distributed to participants for pre-PCR processing and mutation detection. Follow-up analysis was performed to assess sample purity, DNA integrity and DNA quantitation. RESULTS: The rate of mutation detection failure was 11.9%. Of these failures, 80% were attributed to pre-PCR error. Significant differences in DNA yields across all samples were seen using analysis of variance (p
Abstract:
Enabling natural human-robot interaction using computer-vision-based applications requires fast and accurate hand detection. However, previous works in this field impose various constraints, such as limiting the number of detectable gestures, because hands are highly complex objects that are difficult to locate. This paper presents an approach that integrates temporal coherence cues with wrist-based hand detection using a cascade classifier. With this approach, we introduce three main contributions: (1) a transparent initialization mechanism, requiring no user participation, for segmenting hands independently of their gesture; (2) a larger number of detected gestures as well as a faster training phase than previous cascade-classifier-based methods; and (3) near real-time performance for hand pose detection in video streams.
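A minimal sketch of the cascade-classifier detection loop, with a crude temporal-coherence fallback, might look as follows. It assumes OpenCV and a hypothetical trained wrist cascade file (wrist_cascade.xml); the paper's actual initialization mechanism and coherence cues are richer than this.

```python
import cv2

# Hypothetical trained cascade for wrists; the paper trains its own model.
cascade = cv2.CascadeClassifier("wrist_cascade.xml")

cap = cv2.VideoCapture(0)
prev_boxes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Crude temporal coherence: if nothing is found, reuse last detections.
    boxes = list(boxes) if len(boxes) else prev_boxes
    prev_boxes = boxes
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```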
Abstract:
In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24 and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from largest to smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through a preliminary smoothing of the data considerably improves frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length and location. This method, using 4 km data, is easily applicable to large regions or at the global scale with far fewer constraints on data manipulation and processing time relative to 1 km data.
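The preliminary-smoothing comparison described above can be sketched as follows with scipy, assuming a hypothetical 4 km SST array; a simple gradient-threshold counter stands in for the SIED/CMW detectors, which are not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter, gaussian_filter

sst = np.load("sst_4km.npy")          # hypothetical 4 km SST scene (deg C)

def front_pixels(field, thresh=0.05):
    """Stand-in for SIED/CMW: count pixels whose thermal gradient exceeds
    a threshold (deg C per pixel). The threshold value is illustrative."""
    gy, gx = np.gradient(field)
    return int((np.hypot(gx, gy) > thresh).sum())

for k in (3, 5, 7, 9):                # kernel sizes compared in the study
    print("median ", k, front_pixels(median_filter(sst, size=k)))
    print("average", k, front_pixels(uniform_filter(sst, size=k)))
    # For the Gaussian filter, the kernel-to-sigma mapping is an assumption.
    print("gauss  ", k, front_pixels(gaussian_filter(sst, sigma=k / 4)))
```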
Abstract:
This thesis project aims at developing an algorithm for obstacle detection and for the interaction between the safety areas of an Automated Guided Vehicle (AGV) and a point-cloud-derived map, within the context of a CAD software. The first part of the project focuses on the implementation of an algorithm for the clipping of general polygons, with which it has been possible to construct the safety-area polygon, derive the sweep of these areas along the navigation path by performing a union, and detect intersections with lines or polygons representing obstacles. The second part concerns the construction of a map in terms of geometric entities (lines and polygons) starting from the point cloud given by a 3D scan of the environment. The point cloud is processed using filters, clustering algorithms and concave/convex-hull-derived algorithms in order to extract line and polygon entities representing obstacles. Finally, the last part uses the a priori knowledge of possible obstacle detections on a given segment to predict the behavior of the AGV, and uses this prediction to optimize the choice of the vehicle's assigned velocity in that segment, minimizing the travel time.
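A minimal Shapely sketch of the sweep-and-intersect idea is shown below; the geometry is hypothetical and, unlike the thesis, vehicle heading along the path is ignored (the safety polygon is only translated, not rotated).

```python
from shapely.geometry import Polygon, LineString
from shapely.ops import unary_union
from shapely import affinity

# Hypothetical rectangular safety area around the AGV, centred at the origin.
safety = Polygon([(-0.6, -0.4), (0.6, -0.4), (0.6, 0.4), (-0.6, 0.4)])

# Navigation path and an obstacle polygon extracted from the point-cloud map.
path = LineString([(0, 0), (3, 0), (3, 2)])
obstacle = Polygon([(2.5, 0.2), (3.5, 0.2), (3.5, 0.8), (2.5, 0.8)])

# Sweep: place the safety polygon at sampled points along the path and union.
poses = [path.interpolate(i * 0.1) for i in range(int(path.length * 10) + 1)]
sweep = unary_union([affinity.translate(safety, xoff=p.x, yoff=p.y)
                     for p in poses])

print(sweep.intersects(obstacle))        # True: obstacle lies in the sweep
print(sweep.intersection(obstacle).area)  # overlapping area
```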
Abstract:
This article describes an effective microchip protocol based on electrophoretic separation and electrochemical detection for highly sensitive and rapid measurements of nitrate ester explosives, including ethylene glycol dinitrate (EGDN), pentaerythritol tetranitrate (PETN), propylene glycol dinitrate (PGDN) and glyceryl trinitrate (nitroglycerin, NG). Factors influencing the separation and detection processes were examined and optimized. Under the optimal separation conditions, obtained using a 15 mM borate buffer (pH 9.2) containing 20 mM SDS and applying a separation voltage of 1500 V, the four nitrate ester explosives were separated in less than 3 min. The glassy-carbon amperometric detector (operated at -0.9 V vs. Ag/AgCl) offers convenient cathodic detection down to the picogram level, with detection limits of 0.5 ppm and 0.3 ppm for PGDN and NG, respectively, along with good repeatability (RSD of 1.8-2.3%; n = 6) and linearity (over the 10-60 ppm range). Such effective microchip operation offers great promise for field screening of nitrate ester explosives and for supporting various counter-terrorism surveillance activities.
Abstract:
A new procedure for the spectrofluorimetric determination of free and total glycerol in biodiesel samples is presented. It is based on the oxidation of glycerol by periodate, forming formaldehyde, which reacts with acetylacetone to produce the luminescent 3,5-diacetyl-1,4-dihydrolutidine. A flow system with solenoid micro-pumps is proposed for solution handling. Free glycerol was extracted off-line from biodiesel samples with water, and total glycerol was converted to free glycerol by saponification with sodium ethylate under sonication. For free glycerol, a linear response was observed from 5 to 70 mg L⁻¹ with a detection limit of 0.5 mg L⁻¹, which corresponds to 2 mg kg⁻¹ in biodiesel; the coefficient of variation was 0.9% (20 mg L⁻¹, n = 10). For total glycerol, samples were diluted on-line, and the linear response range was 25 to 300 mg L⁻¹; the detection limit was 1.4 mg L⁻¹ (2.8 mg kg⁻¹ in biodiesel) with a coefficient of variation of 1.4% (200 mg L⁻¹, n = 10). The sampling rate was ca. 35 samples h⁻¹, and the procedure was applied to the determination of free and total glycerol in biodiesel samples from soybean, cottonseed, and castor beans.
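For illustration, a linear calibration over the reported free-glycerol range and a common 3-sigma blank-based detection-limit estimate can be computed as below; the readings are hypothetical and the 3-sigma rule is an assumption, since the abstract does not state how its limits were derived.

```python
import numpy as np

# Hypothetical calibration: fluorescence readings for free-glycerol standards
# within the reported 5-70 mg/L linear range.
conc = np.array([5.0, 10.0, 20.0, 40.0, 70.0])     # mg/L
signal = np.array([0.052, 0.101, 0.198, 0.405, 0.702])
slope, intercept = np.polyfit(conc, signal, 1)

# Common 3-sigma estimate of the detection limit from blank replicates.
blanks = np.array([0.004, 0.006, 0.005, 0.005, 0.006])
lod = 3 * blanks.std(ddof=1) / slope
print(f"LOD ~ {lod:.2f} mg/L")   # same order of magnitude as the reported 0.5
```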
Abstract:
The crystallisation behaviour of alloys in the Al-rich corner of the Al-La-Ni system is reported in this paper. Alloys were selected based on the topological instability criterion (lambda criterion), calculated from the alloy composition and the metallic radii of the alloying elements and aluminium. Amorphous ribbons were produced by melt-spinning and the crystallisation reactions were analysed by X-ray diffraction and calorimetry. The results showed that increasing the value of lambda from 0.072 to 0.16 resulted in the following changes in the crystallisation behaviour, as predicted by the lambda criterion: (a) nanocrystallisation of alpha-Al for the alloy composition corresponding to lambda = 0.072, and (b) detection of the glass transition temperature, Tg, for alloys with compositions close to the lambda ≈ 0.1 line. Along the lambda ≈ 0.1 line, Tg could be detected only in the "intermediary" central region, not at the two ends near the binary lines, and the alloy produced in this region was considered the best glass former in the Al-rich corner. Also, except for the alloys with the highest Ni content, crystallisation proceeded by two distinct exothermic peaks, which are typical of a nanocrystallisation transformation. These behaviours are discussed in terms of compositional (lambda parameter) and topological aspects to account for cluster formation in the amorphous phase. Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved.
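One common form of the topological instability criterion for Al-based alloys is lambda = sum over solutes of c_i * |(r_i/r_Al)^3 - 1|; the exact formulation used by the authors may differ, so the sketch below is only indicative. The metallic radii are standard handbook values and the composition is hypothetical.

```python
# Standard metallic radii in angstroms (handbook values).
R = {"Al": 1.43, "La": 1.87, "Ni": 1.24}

def lam(composition):
    """One common form of the lambda criterion, summed over solute elements.
    The paper's exact formulation may differ; this is an assumption."""
    return sum(c * abs((R[el] / R["Al"]) ** 3 - 1.0)
               for el, c in composition.items() if el != "Al")

# e.g. a hypothetical Al85La5Ni10 alloy (atomic fractions) lands near the
# lambda ~ 0.1 line discussed above.
print(round(lam({"Al": 0.85, "La": 0.05, "Ni": 0.10}), 3))   # ~0.097
```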
Abstract:
In this paper, a framework for the detection of human skin in digital images is proposed. The framework is composed of a training phase and a detection phase. A skin class model is learned during the training phase by processing several training images in a hybrid, incremental fuzzy learning scheme. This scheme combines unsupervised and supervised learning: unsupervised, by fuzzy clustering, to obtain clusters of color groups from the training images; and supervised, to select the groups that represent skin color. At the end of the training phase, aggregation operators are used to combine the selected groups into a skin model. In the detection phase, the learned skin model is used to detect human skin efficiently. Experimental results show robust and accurate human skin detection by the proposed framework.
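A minimal numpy implementation of the unsupervised fuzzy clustering step (fuzzy c-means) is sketched below on hypothetical training pixels; the paper's incremental scheme, supervised group selection and aggregation operators are not reproduced.

```python
import numpy as np

def fuzzy_cmeans(X, c=5, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and memberships U.
    X is (n_pixels, 3) colour data; c, m and iters are illustrative values."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # (n, c) memberships
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]  # (c, 3) centres
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-9
        U = 1.0 / d ** (2 / (m - 1))                    # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Hypothetical training pixels in a chosen colour space; clusters whose
# centres match labelled skin samples would then be aggregated into the model.
pixels = np.random.default_rng(1).uniform(0, 255, size=(1000, 3))
centres, U = fuzzy_cmeans(pixels)
print(centres.shape, U.shape)
```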
Abstract:
This paper presents the recent finding by Muhlhaus et al. [1] that a bifurcation of crack growth patterns exists for arrays of two-dimensional cracks. This bifurcation is a result of the nonlinear effect due to crack interaction, which is approximated in the present analysis by the dipole asymptotic or pseudo-traction method. The nonlinear parameter for the problem is the crack length/spacing ratio lambda = a/h. For parallel and edge crack arrays under far-field tension, uniform crack growth patterns (all cracks having the same size) yield to nonuniform crack growth patterns (i.e. bifurcation) if lambda is larger than a critical value lambda_cr (no such bifurcation is found for collinear crack arrays). For parallel and edge crack arrays respectively, the value of lambda_cr decreases monotonically from (2/9)^(1/2) and (2/15.096)^(1/2) for arrays of 2 cracks, to (2/3)^(1/2)/pi and (2/5.032)^(1/2)/pi for infinite arrays of cracks. The critical parameter lambda_cr is calculated numerically for arrays of up to 100 cracks, whilst the discrete Fourier transform is used to obtain the exact solution of lambda_cr for infinite crack arrays. For geomaterials, bifurcation can also occur when arrays of sliding cracks are under compression.
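For a quick arithmetic check (not from the paper), the quoted limits evaluate numerically as follows:

```python
from math import sqrt, pi

# Critical crack length/spacing ratios quoted in the abstract above.
parallel_2, parallel_inf = sqrt(2 / 9), sqrt(2 / 3) / pi
edge_2, edge_inf = sqrt(2 / 15.096), sqrt(2 / 5.032) / pi

print(f"parallel arrays: {parallel_2:.3f} (2 cracks) -> {parallel_inf:.3f} (infinite)")
print(f"edge arrays:     {edge_2:.3f} (2 cracks) -> {edge_inf:.3f} (infinite)")
# parallel arrays: 0.471 (2 cracks) -> 0.260 (infinite)
# edge arrays:     0.364 (2 cracks) -> 0.201 (infinite)
```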
Abstract:
Toxoplasma gondii causes severe disease in both man and livestock, and its detection in meat after slaughter requires PCR or biological tests. Meat packages contain retained exudate that could be used for serology owing to its blood content, although similar studies have reported false-negative results in such tests. We standardized an anti-T. gondii IgG ELISA in muscle juices from experimentally infected rabbits, with blood content determined by cyanhemoglobin spectrophotometry. IgG titers and immunoblotting profiles were similar in blood, serum and meat juice after correction for blood content. These assays were adequate regardless of storage times of up to 120 days or freeze-thaw cycles, without false-negative results. We also found one positive sample (1/74, 1.35%) in commercial Brazilian rabbit meat cuts by this assay. The blood content determination shows that ELISA of meat juice may be useful as a quality control tool for toxoplasmosis monitoring. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Neonatal calf diarrhea is a multi-etiology syndrome of cattle, and direct detection of the two major agents of the syndrome, group A rotavirus and Bovine coronavirus (BCoV), is hampered by their fastidious growth in cell culture. This study aimed at developing a multiplex semi-nested RT-PCR for the simultaneous detection of BCoV (N gene) and group A rotavirus (VP1 gene), with the addition of an internal control (ND5 mRNA). The assay was tested on 75 bovine feces samples previously tested for rotavirus using PAGE and for BCoV using a nested RT-PCR targeted to the RdRp gene. Agreement with the reference tests was optimal for BCoV (kappa = 0.833) and substantial for rotavirus detection (kappa = 0.648). The internal control, ND5 mRNA, was detected successfully in all reactions. The results demonstrated that this multiplex semi-nested RT-PCR was effective in the detection of BCoV and rotavirus, with high sensitivity and specificity for the simultaneous detection of both viruses at a lower cost, providing an important tool for studies on the etiology of diarrhea in cattle. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
A blocking ELISA targeting an immunodominant epitope on the West Nile virus (WNV) NS1 protein was assessed for the detection of West Nile-specific antibodies in blood samples collected from 584 sentinel chickens and 238 wild birds in New Jersey from May to December 2000. Ten mallard ducks (Anas platyrhynchos) experimentally infected with West Nile virus and six uninfected controls were also tested. The ELISA proved specific in detecting WNV antibodies in 9/10 chickens and 4/4 wild birds previously confirmed as positive by plaque reduction neutralization test (PRNT) at the Centers for Disease Control and Prevention, Division of Vector-Borne Diseases, Fort Collins, CO, USA (CDC). Nine of the ten experimentally infected mallard ducks also tested positive for WN antibodies in the blocking ELISA, while 6/6 uninfected controls did not. Additionally, 1705 wild birds, collected in New Jersey from December 2000 to November 2001 and on Long Island, New York between November 1999 and August 2001, were also tested for WN antibodies by the blocking ELISA. These tests identified 30 positive specimens, 12 of which had formalin-fixed tissues available, allowing detection of WN-specific viral antigen in various tissues by WNV-specific immunohistochemistry. Our results indicate that rapid and specific detection of antibodies to WN virus in sera from a range of avian species by blocking ELISA is an effective strategy for WN virus surveillance in avian hosts. In combination with the detection of WN-specific antigens in tissues by immunohistochemistry (IHC), the blocking ELISA will also be useful for confirming WN infection in diseased birds.