Abstract:
The acoustic emission (AE) technique, a non-intrusive and nondestructive evaluation technique, acquires and analyzes the signals emitted by deformation or fracture of materials/structures under service loading. The AE technique has been successfully applied to damage detection in various materials such as metals, alloys, concrete, polymers and other composite materials. In this study, the AE technique was used to detect crack behavior within concrete specimens under mechanical and environmental frost loadings. The instrumentation of the AE system used in this study includes a low-frequency AE sensor, a computer-based data acquisition device and a preamplifier linking the AE sensor and the data acquisition device. The AE system, purchased from Mistras Group, was used in this study. The AE technique was applied to detect damage in the following laboratory tests: the pencil lead test, the mechanical three-point single-edge notched beam bending (SEB) test, and the freeze-thaw damage test. Firstly, the pencil lead test was conducted to verify the attenuation of AE signals through concrete materials. The value of the attenuation was also quantified. The obtained signals also indicated that the AE system was properly set up to detect damage in concrete. Secondly, the SEB test with a lab-prepared concrete beam was conducted using a Mechanical Testing System (MTS) together with the AE system. The cumulative AE events and the measured loading curves, both using the crack-tip opening displacement (CTOD) as the horizontal coordinate, were plotted. The detected AE events were found to be qualitatively correlated with the global force-displacement behavior of the specimen. The Weibull distribution was proposed to quantitatively describe the rupture probability density function.
Linear regression analysis was conducted to calibrate the Weibull distribution parameters with the detected AE signals and to predict the rupture probability as a function of CTOD for the specimen. Finally, controlled concrete freeze-thaw cyclic tests were designed, and the AE technique was planned for investigating the internal frost damage process of concrete specimens.
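The Weibull calibration step linearizes cleanly: with the Weibull CDF F(x) = 1 − exp(−(x/η)^m), taking logarithms twice gives ln(−ln(1−F)) = m·ln x − m·ln η, so the shape m and scale η follow from an ordinary least-squares fit. A minimal sketch of that fit — the function name and data are illustrative assumptions, not the study's actual pipeline:

```python
import math

def fit_weibull_linearized(x, F):
    """Fit the Weibull CDF F(x) = 1 - exp(-(x/eta)^m) by linear regression
    on the linearized form ln(-ln(1-F)) = m*ln(x) - m*ln(eta)."""
    X = [math.log(v) for v in x]
    Y = [math.log(-math.log(1.0 - f)) for f in F]
    n = len(X)
    mx, my = sum(X) / n, sum(Y) / n
    sxx = sum((v - mx) ** 2 for v in X)
    sxy = sum((X[i] - mx) * (Y[i] - my) for i in range(n))
    m = sxy / sxx                    # slope  -> Weibull shape parameter
    eta = math.exp(mx - my / m)      # intercept -> scale parameter
    return m, eta
```

On data lying exactly on a Weibull CDF the fit recovers the parameters; on real AE measurements, F would be the empirical rupture probability at each CTOD value.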
Abstract:
Lesion detection aids ideally aim at increasing the sensitivity of visual caries detection without trading off too much in terms of specificity. The use of a dental probe (explorer), bitewing radiography and fibre-optic transillumination (FOTI) have long been recommended for this purpose. Today, probing of suspected lesions in the sense of checking the 'stickiness' is regarded as obsolete, since it achieves no gain of sensitivity and might cause irreversible tooth damage. Bitewing radiography helps to detect lesions that are otherwise hidden from visual examination, and it should therefore be applied to a new patient. The diagnostic performance of radiography at approximal and occlusal sites is different, as this relates to the 3-dimensional anatomy of the tooth at these sites. However, treatment decisions have to take more into account than just lesion extension. Bitewing radiography provides additional information for the decision-making process that mainly relies on the visual and clinical findings. FOTI is a quick and inexpensive method which can enhance visual examination of all tooth surfaces. Both radiography and FOTI can improve the sensitivity of caries detection, but require sufficient training and experience to interpret information correctly. Radiography also carries the burden of the risks and legislation associated with using ionizing radiation in a health setting and should be repeated at intervals guided by the individual patient's caries risk. Lesion detection aids can assist in the longitudinal monitoring of the behaviour of initial lesions.
Abstract:
BACKGROUND: Microarray genome analysis is realising its promise for improving detection of genetic abnormalities in individuals with mental retardation and congenital abnormality. Copy number variations (CNVs) are now readily detectable using a variety of platforms and a major challenge is the distinction of pathogenic from ubiquitous, benign polymorphic CNVs. The aim of this study was to investigate replacement of time consuming, locus specific testing for specific microdeletion and microduplication syndromes with microarray analysis, which theoretically should detect all known syndromes with CNV aetiologies as well as new ones. METHODS: Genome wide copy number analysis was performed on 117 patients using Affymetrix 250K microarrays. RESULTS: 434 CNVs (195 losses and 239 gains) were found, including 18 pathogenic CNVs and 9 identified as "potentially pathogenic". Almost all pathogenic CNVs were larger than 500 kb, significantly larger than the median size of all CNVs detected. Segmental regions of loss of heterozygosity larger than 5 Mb were found in 5 patients. CONCLUSIONS: Genome microarray analysis has improved diagnostic success in this group of patients. Several examples of recently discovered "new syndromes" were found suggesting they are more common than previously suspected and collectively are likely to be a major cause of mental retardation. The findings have several implications for clinical practice. The study revealed the potential to make genetic diagnoses that were not evident in the clinical presentation, with implications for pretest counselling and the consent process. The importance of contributing novel CNVs to high quality databases for genotype-phenotype analysis and review of guidelines for selection of individuals for microarray analysis is emphasised.
Abstract:
The assessment of ERα, PgR and HER2 status is routinely performed today to determine the endocrine responsiveness of breast cancer samples. Such determination is usually accomplished by means of immunohistochemistry and in case of HER2 amplification by means of fluorescent in situ hybridization (FISH). The analysis of these markers can be improved by simultaneous measurements using quantitative real-time PCR (Qrt-PCR). In this study we compared Qrt-PCR results for the assessment of mRNA levels of ERα, PgR, and the members of the human epidermal growth factor receptor family, HER1, HER2, HER3 and HER4. The results were obtained in two independent laboratories using two different methods, SYBR Green I and TaqMan probes, and different primers. By linear regression we demonstrated a good concordance for all six markers. The quantitative mRNA expression levels of ERα, PgR and HER2 also strongly correlated with the respective quantitative protein expression levels prospectively detected by EIA in both laboratories. In addition, HER2 mRNA expression levels correlated well with gene amplification detected by FISH in the same biopsies. Our results indicate that both Qrt-PCR methods were robust and sensitive tools for routine diagnostics and consistent with standard methodologies. The developed simultaneous assessment of several biomarkers is fast and labor effective and allows optimization of the clinical decision-making process in breast cancer tissue and/or core biopsies.
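The cross-laboratory concordance analysis described above reduces to correlating paired expression measurements for each marker. A minimal, illustrative sketch of the correlation computation — the function name and sample data are hypothetical, not the study's code:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired measurements of the same marker
    from two laboratories (e.g. log-transformed expression levels)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((x[i] - mx) * (y[i] - my) for i in range(n)) / (sx * sy)
```

A value close to 1 indicates the two methods rank and scale the samples consistently; a full concordance analysis would add the regression slope and intercept.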
Abstract:
The Imager for Low Energetic Neutral Atoms test facility at the University of Bern was developed to investigate, characterize, and quantify physical processes on surfaces that are used to ionize neutral atoms before their analysis in neutral particle-sensing instruments designed for space research. The facility has contributed valuable knowledge of the interaction of ions with surfaces (e.g., the fraction of ions scattered from surfaces and the angular scattering distribution) and employs a novel measurement principle for the determination of secondary electron emission yields as a function of energy, angle of incidence, particle species, and sample surface for low particle energies. Only because of this test facility was it possible to successfully apply surface-science processes to the new detection technique for low-energetic neutral particles with energies below about 1 keV used in space applications. All successfully flown spectrometers for the detection of low-energetic neutrals based on the particle–surface interaction process use surfaces evaluated, tested, and calibrated in this facility. Many instruments placed on different spacecraft (e.g., Imager for Magnetopause-to-Aurora Global Exploration, Chandrayaan-1, Interstellar Boundary Explorer, etc.) have successfully used this technique.
Abstract:
Apoptosis, a form of programmed cell death, is critical to homoeostasis, normal development, and physiology. Dysregulation of apoptosis can lead to the accumulation of unwanted cells, such as occurs in cancer, or to the removal of needed cells, as in heart, neurodegenerative, and autoimmune diseases. Noninvasive detection of apoptosis may play an important role in the evaluation of disease states and of the response to therapeutic intervention for a variety of diseases. It is desirable to have an imaging method to accurately detect and monitor this process in patients. In this study, we developed annexin A5-conjugated polymeric micellar nanoparticles dual-labeled with a near-infrared fluorophore (Cy7) and a radioisotope (111In), named 111In-labeled annexin A5-CCPM. In vitro studies demonstrated that annexin A5-CCPM could strongly and specifically bind to apoptotic cells. In vivo studies showed that apoptotic tissues could be clearly visualized by both single photon emission computed tomography (SPECT) and fluorescence molecular tomography (FMT) after intravenous injection of 111In-labeled annexin A5-CCPM in 6 different apoptosis models. In contrast, there was little signal in the respective healthy tissues. The biodistribution data confirmed the imaging results. Moreover, histological analysis revealed that the radioactivity count correlated with the fluorescence signal from the nanoparticles, and both signals co-localized with the region of apoptosis. In sum, 111In-labeled annexin A5-CCPM allowed visualization of apoptosis by both nuclear and optical imaging techniques. The complementary information acquired with multiple imaging techniques should be advantageous in improving diagnostics and management of patients.
Abstract:
Derivation of probability estimates complementary to geophysical data sets has gained special attention over recent years. Information about the confidence level of provided physical quantities is required to construct an error budget of higher-level products and to correctly interpret the final results of a particular analysis. Regarding the generation of products based on satellite data, a common input is a cloud mask which allows discrimination between surface and cloud signals. Further, the surface information is divided between snow and snow-free components. At any step of this discrimination process, a misclassification in a cloud/snow mask propagates to higher-level products and may alter their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited for the 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed which provides three types of probability estimates: between cloudy/clear-sky, cloudy/snow and clear-sky/snow conditions. As opposed to the majority of available techniques, which are usually based on the decision-tree approach, in the PCM algorithm all spectral, angular and ancillary information is used in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the issue of deriving a single threshold value for a spectral test was overcome by the concept of a multidimensional information space which is divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds were enhanced by means of the invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to northern parts of Africa, which exhibit diverse difficulties for cloud/snow masking algorithms.
The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3, and the MODIS collection 5 snow mask. The conducted analyses demonstrated the good detection skills of the PCM method, with results comparable to or better than the reference PPS algorithm.
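The single-step look-up-table retrieval described above can be sketched generically: training observations are binned into a multidimensional cell, one interval per feature, and each cell stores an empirical cloud probability that is later retrieved for new observations. The function names, bin edges, and labels below are illustrative assumptions, not the PCM implementation:

```python
import bisect
from collections import defaultdict

def build_lut(samples, labels, edges):
    """Bin each feature vector into a multidimensional cell (one set of
    interval edges per feature) and store the empirical probability of
    'cloudy' per cell — an illustrative PCM-style look-up table."""
    counts = defaultdict(lambda: [0, 0])          # cell -> [cloudy, other]
    for feats, lab in zip(samples, labels):
        cell = tuple(bisect.bisect_right(e, f) for f, e in zip(feats, edges))
        counts[cell][0 if lab == "cloudy" else 1] += 1
    return {c: n[0] / (n[0] + n[1]) for c, n in counts.items()}

def lookup(lut, feats, edges, default=0.5):
    """Retrieve the precomputed probability for a new observation;
    unseen cells fall back to an uninformative default."""
    cell = tuple(bisect.bisect_right(e, f) for f, e in zip(feats, edges))
    return lut.get(cell, default)
```

Because all features index the table jointly, no single-feature threshold ever has to be chosen — which is the point the abstract makes about the multidimensional information space.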
Abstract:
The near-real-time retrieval of low stratiform cloud (LSC) coverage is of vital interest for disciplines such as meteorology, transport safety, economy and air quality. Within this scope, a novel methodology is proposed which provides LSC occurrence probability estimates for a satellite scene. The algorithm is suited for the 1 × 1 km Advanced Very High Resolution Radiometer (AVHRR) data and was trained and validated against collocated SYNOP observations. Utilisation of these two combined data sources requires the formulation of constraints in order to discriminate cases where the LSC is overlaid by higher clouds. The LSC classification process is based on six features which are first converted to integer form by step functions and then combined by means of bitwise operations. Consequently, a set of values reflecting a unique combination of those features is derived, which is further employed to extract the LSC occurrence probability estimates from precomputed look-up vectors (LUV). Although the validation analyses confirmed good performance of the algorithm, some inevitable misclassifications of other optically thick clouds were reported. Moreover, the comparison against the Polar Platform System (PPS) cloud-type product revealed the superior classification accuracy of the proposed method. From the temporal perspective, the acquired results revealed the presence of diurnal and annual LSC probability cycles over Europe.
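The feature-coding step described above — a step function converts each feature to an integer bin, and the bins are combined by bitwise operations into a single key indexing the look-up vectors — can be sketched as follows (the bin edges and bit widths are hypothetical):

```python
def feature_code(features, edges, bits):
    """Convert each feature to an integer bin via a step function (count of
    thresholds passed), then pack all bins into one integer key with
    bitwise shifts and ORs, as an index into precomputed look-up vectors."""
    code, shift = 0, 0
    for f, e, b in zip(features, edges, bits):
        idx = sum(1 for t in e if f >= t)   # step function -> integer bin
        code |= idx << shift                # pack bin into its bit field
        shift += b                          # reserve b bits per feature
    return code
```

Each unique combination of feature bins maps to a distinct key, so a probability estimate can be fetched with a single array access rather than a cascade of threshold tests.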
Abstract:
Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various ways in which these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. As a first result, this approach yields a huge number of accepted answers due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability and running speed for this detection method. Since all method parameters are based on statistics, the true-alarm as well as the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
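The matched-filter idea can be sketched with oriented line kernels: convolve the image with normalized line-shaped filters of varying length and orientation, keep the best response per pixel, and accept responses above a threshold derived from the image statistics. A simplified illustration — the wrap-around borders, parameters, and threshold rule are assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

def line_offsets(length, angle_deg):
    """Integer (dy, dx) pixel offsets tracing a centered line segment of the
    given length and orientation."""
    a = np.deg2rad(angle_deg)
    ts = np.arange(length) - (length - 1) / 2.0
    return sorted({(int(np.rint(t * np.sin(a))), int(np.rint(t * np.cos(a))))
                   for t in ts})

def streak_response(img, length, angle_deg):
    """Convolution with a normalized line-shaped filter, implemented by
    summing shifted copies (wrap-around borders, acceptable for a sketch)."""
    resp = np.zeros(img.shape, dtype=float)
    offs = line_offsets(length, angle_deg)
    for dy, dx in offs:
        resp += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return resp / len(offs)

def detect_streaks(img, lengths, angles, k=5.0):
    """Take the best response over all filter lengths/orientations and accept
    pixels exceeding mean + k*std of the image (background-statistics threshold)."""
    best = np.max([streak_response(img, L, a)
                   for L in lengths for a in angles], axis=0)
    return best > img.mean() + k * img.std()
```

A filter aligned with a streak averages many bright pixels and responds strongly, while a point-like star only contributes one bright pixel to the average — which is why additional acceptance criteria are still needed for partial overlaps.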
Abstract:
Introduction: In team sports, the ability to use peripheral vision is essential to track a number of players and the ball. Using eye-tracking devices, it was found that players either use fixations and saccades to process information on the pitch or use smooth pursuit eye movements (SPEM) to keep track of single objects (Schütz, Braun, & Gegenfurtner, 2011). However, it is assumed that peripheral vision can be used best when the gaze is stable, while it is unknown whether motion changes can be detected equally well when SPEM are used, especially because contrast sensitivity is reduced during SPEM (Schütz, Delipetkose, Braun, Kerzel, & Gegenfurtner, 2007). Therefore, peripheral motion change detection was examined by contrasting a fixation condition with a SPEM condition. Methods: 13 participants (7 male, 6 female) were presented with a visual display consisting of 15 white squares and 1 red square. Participants were instructed to follow the red square with their eyes and press a button as soon as a white square began to move. White square movements occurred either while the red square was still (fixation condition) or while it was moving in a circular manner at 6°/s (pursuit condition). The to-be-detected white square movements varied in eccentricity (4°, 8°, 16°) and speed (1°/s, 2°/s, 4°/s), while the movement time of the white squares was constant at 500 ms. In total, 180 events had to be detected. A Vicon-integrated eye-tracking system and a button press (1000 Hz) were used to control for eye movements and to measure detection rates and response times. Response times (ms) and missed detections (%) were measured as dependent variables and analysed with a 2 (manipulation) × 3 (eccentricity) × 3 (speed) ANOVA with repeated measures on all factors. Results: Significant response time effects were found for manipulation, F(1,12) = 224.31, p < .01, ηp2 = .95, eccentricity, F(2,24) = 56.43, p < .01, ηp2 = .83, and the interaction between the two factors, F(2,24) = 64.43, p < .01, ηp2 = .84.
Response times increased as a function of eccentricity for SPEM only and were overall higher than in the fixation condition. Results further showed missed-event effects for manipulation, F(1,12) = 37.14, p < .01, ηp2 = .76, eccentricity, F(2,24) = 44.90, p < .01, ηp2 = .79, the interaction between the two factors, F(2,24) = 39.52, p < .01, ηp2 = .77, and the three-way interaction manipulation × eccentricity × speed, F(2,24) = 3.01, p = .03, ηp2 = .20. While less than 2% of events were missed on average in the fixation condition as well as at 4° and 8° eccentricity in the SPEM condition, missed events increased for SPEM at 16° eccentricity, with significantly more missed events in the 4°/s speed condition (1°/s: M = 34.69, SD = 20.52; 2°/s: M = 33.34, SD = 19.40; 4°/s: M = 39.67, SD = 19.40). Discussion: It could be shown that using SPEM impairs the ability to detect peripheral motion changes in the far periphery and that fixations not only help to detect these motion changes but also to respond faster. Due to the high temporal constraints especially in team sports like soccer or basketball, fast reactions are necessary for successful anticipation and decision making. Thus, it is advisable to anchor gaze at a specific location if peripheral changes (e.g. movements of other players) that require a motor response have to be detected. In contrast, SPEM should only be used if tracking a single object, like the ball in cricket or baseball, is necessary for a successful motor response. References: Schütz, A. C., Braun, D. I., & Gegenfurtner, K. R. (2011). Eye movements and perception: A selective review. Journal of Vision, 11, 1-30. Schütz, A. C., Delipetkose, E., Braun, D. I., Kerzel, D., & Gegenfurtner, K. R. (2007). Temporal contrast sensitivity during smooth pursuit eye movements. Journal of Vision, 7, 1-15.
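The reported effect sizes follow directly from the F ratios and their degrees of freedom, since partial eta squared is ηp² = F·df_effect / (F·df_effect + df_error); for example, F(1,12) = 224.31 gives ηp² ≈ .95, as reported above:

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta squared recovered from an F ratio:
    eta_p^2 = F*df1 / (F*df1 + df2)."""
    return F * df_effect / (F * df_effect + df_error)
```

This is a convenient check when reading ANOVA tables: any reported (F, df) pair implies its ηp², so inconsistencies between the two are easy to spot.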
Abstract:
The coagulation of milk is the fundamental process in cheese-making; it is based on gel formation as a consequence of physicochemical changes taking place in the casein micelles, and monitoring the whole process of milk curd formation is a constant preoccupation for dairy researchers and cheese companies (Lagaude et al., 2004). In addition to advances in composition-based applications of near-infrared spectroscopy (NIRS), innovative uses of this technology are pursuing dynamic applications that show promise, especially in regard to tracking a sample in situ during food processing (Bock and Connelly, 2008). In this way, the literature describes cheese-making applications of NIRS for curd cutting time determination, which conclude that NIRS would be a suitable method for monitoring milk coagulation, as shown, for example, in the works published by Fagan et al. (Fagan et al., 2008; Fagan et al., 2007), based on the use of the commercial CoAguLite probe (with an LED at 880 nm and a photodetector for light reflectance detection).
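As a schematic of profile-based cut-time determination: probes of the CoAguLite type derive the cutting time from parameters of the light-backscatter profile recorded during coagulation. The max-slope criterion below is a simplified, hypothetical stand-in for such a parameter, not the probe's actual algorithm:

```python
def cutting_time(times, reflectance):
    """Illustrative coagulation marker: the time of maximum slope of the
    light-backscatter (reflectance) profile. Real cut-time models combine
    several such profile parameters; this sketch uses only one."""
    slopes = [(reflectance[i + 1] - reflectance[i]) / (times[i + 1] - times[i])
              for i in range(len(times) - 1)]
    i = max(range(len(slopes)), key=slopes.__getitem__)  # steepest interval
    return (times[i] + times[i + 1]) / 2.0               # its midpoint
```

The steep rise in backscatter corresponds to the onset of gel firming, which is why its timing is a useful anchor for scheduling the curd cut.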
Abstract:
Human face detection is currently a difficult problem due to the many parameters involved, and it is of growing interest in various fields of application such as personal identification, human-machine interfaces, etc. Most face images contain a background that must be removed/discriminated in order to detect the human face. This project therefore covers the design and implementation of a human face detection system as the first step in the process, leaving the way open to extend the project in the future to the next step, Facial Recognition, a topic not covered here. In the scientific literature, one of the most important works on real-time face detection is the Viola and Jones algorithm, which, together with the OpenCV libraries, was the algorithm chosen for the development of this project. A brief summary of the application follows. The application can capture video in real time and recognize the face captured by the webcam among the other objects visible through it. To show that a face has been detected, it is fully framed by a box and tracked if it moves. In addition, if the user wishes, the image the camera is showing can be saved and stored in any directory of the PC. An option was also included to detect the human face in a still image stored on the PC, displaying the number of faces detected and allowing them to be viewed successively as many times as desired. For all of this, as mentioned above, the algorithm used for face detection is that of Viola and Jones. This algorithm scans the entire surface of the image in search of the human face; to do so, the image is first converted to grayscale and then analyzed, showing as a result the detected face framed by a box.
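The real-time speed of the Viola and Jones scan rests on the integral image: once computed, the sum over any rectangle — the building block of the Haar features — costs four array lookups regardless of rectangle size. A minimal sketch of that core data structure (OpenCV's `cv2.CascadeClassifier` wraps the full trained cascade; this shows only the underlying idea):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the rectangle [x0..x1] x [y0..y1], inclusive,
    via four lookups in the integral image."""
    s = ii[y1][x1]
    if x0:
        s -= ii[y1][x0 - 1]
    if y0:
        s -= ii[y0 - 1][x1]
    if x0 and y0:
        s += ii[y0 - 1][x0 - 1]
    return s
```

A Haar feature is just the difference of two or three such rectangle sums, which is why thousands of features can be evaluated per detection window at video rate.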
Abstract:
Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects on human and natural activities. Maintaining an updated spatial database with the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and reliable data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a multisource change detection fusion process, which allows a single CD result to be generated from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proven efficient for identifying the change detection index with the highest contribution.
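The index–threshold–fusion pipeline can be sketched generically. The absolute-difference index, the ISODATA-style iterative threshold, and the majority-vote fusion below are illustrative choices for each stage, not necessarily the specific algorithms used in the paper:

```python
def change_index(band_t1, band_t2):
    """Per-pixel absolute difference between two co-registered acquisitions."""
    return [abs(a - b) for a, b in zip(band_t1, band_t2)]

def threshold_isodata(values, tol=1e-6):
    """Iterative change/no_change threshold: the midpoint of the two class
    means, repeated until convergence (ISODATA-style)."""
    t = sum(values) / len(values)
    while True:
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        if not lo or not hi:
            return t
        t_new = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def fuse_majority(masks):
    """Fuse several binary change maps into a single CD result by
    per-pixel majority vote."""
    n = len(masks)
    return [sum(col) * 2 > n for col in zip(*masks)]
```

Thresholding each index separately before fusing lets each index contribute with class statistics estimated on its own distribution, which is the rationale for the multi-stage design.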
Abstract:
In this work, a mathematical unifying framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest detection scheme for the residual is proposed, which is based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximated scheme. This general setting gathers different existing fault detection schemes within a unifying framework, and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
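For the simplest concrete case — a Gaussian residual whose mean shifts from μ0 to μ1 at an unknown time — a CUSUM-type scheme accumulates the per-sample log-likelihood ratio and raises an alarm when the statistic crosses a threshold h. A minimal sketch (all parameters are hypothetical; the paper's schemes target time-varying changes in the Ornstein–Uhlenbeck residual, which this simplified Gaussian case does not capture):

```python
def cusum(residual, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in an i.i.d. Gaussian
    residual: g_k = max(0, g_{k-1} + LLR_k), alarm when g_k > h.
    Returns the first alarm index, or None if no alarm is raised."""
    g = 0.0
    for k, r in enumerate(residual):
        # Log-likelihood ratio increment of sample r under the two means.
        llr = (mu1 - mu0) / sigma ** 2 * (r - (mu0 + mu1) / 2.0)
        g = max(0.0, g + llr)   # reset at zero: discard pre-change evidence
        if g > h:
            return k
    return None
```

The `max(0, ...)` reset is what makes CUSUM a quickest-detection scheme: evidence against a change is forgotten, so the statistic reacts promptly once the shift occurs.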
Abstract:
In order to reduce cost and make up for the rising price of silicon, silicon wafers are sliced thinner and wider, leading to weaker wafers and increased breakage rates during the fabrication process. In this work we have analysed different crack origins and their effect on the wafers' mechanical strength. To enhance wafer strength, some etching methods have been tested. We have also analysed wafers from different points of an entire standard production process. The mechanical strength of the wafers has been obtained via the four-line bending test, and the detection of cracks has been performed with the Resonance Ultrasonic Vibration (RUV) system developed by the University of South Florida.
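As a schematic of resonance-based crack screening: a crack shifts or broadens a wafer's resonance peak, so a measured peak frequency deviating from the healthy baseline can flag the wafer. The decision rule below is an assumption for illustration only, not the actual RUV criterion:

```python
def ruv_flag(freqs, amps, baseline_peak, tol):
    """Illustrative RUV-style screening: find the frequency of maximum
    response in a swept-frequency measurement and flag the wafer if its
    peak deviates from the healthy baseline by more than tol (assumed rule)."""
    peak = freqs[max(range(len(amps)), key=amps.__getitem__)]
    return abs(peak - baseline_peak) > tol
```

In practice such systems also track peak amplitude and bandwidth, since a crack dissipates vibrational energy as well as shifting the mode frequency.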