959 results for time-image


Relevance: 30.00%

Abstract:

Background - Image blurring in Full Field Digital Mammography (FFDM) is reported to be a problem within many UK breast screening units, resulting in a significant proportion of technical repeats/recalls. Our study investigates monitors of differing pixel resolution and whether there is a difference in blurring detection between a 2.3 MP technical review monitor and a 5 MP standard reporting monitor. Methods - Simulation software was created to induce different magnitudes of blur on 20 artifact-free FFDM screening images. 120 blurred and non-blurred images were randomized, displayed on the 2.3 MP and 5 MP monitors, and reviewed by 28 trained observers. Monitors were calibrated to the DICOM Grayscale Standard Display Function. A t-test was used to determine whether significant differences exist in blurring detection between the monitors. Results - The blurring detection rate on the 2.3 MP monitor for 0.2, 0.4, 0.6, 0.8 and 1 mm blur was 46, 59, 66, 77 and 78% respectively, and on the 5 MP monitor 44, 70, 83, 96 and 98%. All the non-motion images were identified correctly. A statistically significant difference (p < 0.01) in the blurring detection rate between the two monitors was demonstrated. Conclusions - Given the results of this study, and knowing that monitors as low as 1 MP are used in clinical practice, we speculate that technical recall/repeat rates due to blurring could be reduced if higher-resolution monitors are used for technical review at the time of imaging. Further work is needed to determine the minimum monitor specification for visual blurring detection.
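As a rough illustration of the kind of blur simulation and statistical comparison described above, the sketch below induces Gaussian-like blur on an image and runs a paired t-test on hypothetical per-observer detection rates for the two monitors; the blur-to-sigma conversion, the rates and the function names are illustrative assumptions, not the study's actual software.

```python
# Minimal sketch (not the study's software): simulate image blurring and compare
# per-observer detection rates between two monitors with a paired t-test.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_rel

def simulate_blur(image: np.ndarray, blur_mm: float, pixels_per_mm: float = 10.0) -> np.ndarray:
    """Approximate motion/focus blur by Gaussian smoothing; sigma scaled from mm (illustrative)."""
    sigma = blur_mm * pixels_per_mm / 2.0
    return gaussian_filter(image, sigma=sigma)

# Hypothetical per-observer detection rates (fraction of blurred images flagged)
rates_2_3mp = np.array([0.62, 0.58, 0.70, 0.66, 0.61])
rates_5mp   = np.array([0.78, 0.74, 0.85, 0.80, 0.77])

t_stat, p_value = ttest_rel(rates_2_3mp, rates_5mp)  # paired: same observers, two monitors
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```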

Relevance: 30.00%

Abstract:

Today's man is socially absorbed by problematic body issues and everything that this means and involves. Literature, publicity, science, technology and medicine address this theme in forms never seen before. In the artistic framework, body image is constantly undergoing modification. Body image in sculpture unfolds itself, assuming different messages and different forms. The body is a synonym of the subject, an infinite metaphorical history of our looks and desires, one that leads us to interrogate our image and our social and sexual relations. These are understood as a manifestation of individual desires freed from moral and social imposition, an attempt to return to profound human nature before we are turned into a cloning industry. In this study it is important for us to understand in what form sculpture reflects body image as a sociocultural and psychological phenomenon within the coordinates of our time, and to understand how and what artists represent in sculpture as a multiple and complex structure of human sexuality. Today the sculptural body, expanding its representation beyond a reproduction of corporal characteristics, presents the body in what it possesses of most intimate, unique, human and real: that which moves, reacts, feels, suffers and pulsates, a mirror of us all.

Relevance: 30.00%

Abstract:

Image (video) retrieval is the problem of retrieving images (videos) similar to a query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbors in that representation space. Numerous input representations, in both real-valued and binary spaces, have been proposed for faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos. Supervised retrieval is the well-known problem of retrieving images of the same class as the query. In the first part, we address the practical aspects of achieving faster retrieval with binary codes as input representations for the supervised setting, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, as similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all the images of the same class, ideally, to a unique binary code. We refer to the binary codes of the images as 'Semantic Binary Codes' and to the unique code for all images of a class as the 'Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, where the Hamming distance is computed only to the class binary codes. We further propose a deep semantic binary code model, obtained by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we address supervised retrieval by taking into account the relationships between classes. For a given query image, we want to retrieve images that preserve the relative order, i.e., all images of the same class first, then images of related classes, and only then images of different classes. We learn such relationship-aware binary codes by minimizing the discrepancy between the inner products of the binary codes and the similarities between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from other supervised binary encoding schemes in that it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take into account related-class retrieval results and show significant gains over the state of the art. High-dimensional descriptors such as Fisher Vectors or the Vector of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes to reduce storage complexity. In this approach, we deviate from traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors. A practical hierarchical model that utilizes divide-and-conquer techniques, using the Random Select and Adjust (RSA) procedure, to compress such high-dimensional vectors is presented. We show that the proposed high-dimensional binary codes outperform binary codes obtained using traditional hyperplane methods at higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting in which no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and each video in the concept space, and videos similar to the query event are classified as belonging to the event. We show that we significantly boost performance using concept features from other modalities.
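A minimal sketch of the class-based Hamming retrieval idea described above, under the assumption that a query's binary code is compared only against one 'class binary code' per class; the code length, data and variable names are invented for illustration and do not reproduce the thesis implementation.

```python
# Illustrative sketch: retrieval with binary codes where the Hamming distance is
# computed only to per-class codes, not to every database item.
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hamming distance between one binary code `a` and each row of `b` (0/1 arrays)."""
    return np.count_nonzero(a != b, axis=1)

n_bits, n_classes = 64, 10
rng = np.random.default_rng(0)
class_codes = rng.integers(0, 2, size=(n_classes, n_bits))   # one 'class binary code' per class
query_code  = class_codes[3] ^ (rng.random(n_bits) < 0.05)   # query code close to class 3

distances = hamming(query_code, class_codes)   # one distance per class, not per image
ranked_classes = np.argsort(distances)         # retrieve images of the closest class first
print(ranked_classes[:3])
```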

Relevance: 30.00%

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberration, the Point Spread Function and the Modulation Transfer Function. The ocular aberration of the computer user was first measured with a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated based on the rescaled aberration, with the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method by showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
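The sketch below illustrates, under strong simplifying assumptions, the modeling chain the dissertation describes: a pupil-plane wavefront aberration is turned into a Point Spread Function via Fourier optics, and a regularized (Wiener-style) inverse filter pre-distorts the image so that blurring by the eye approximately cancels out. The inverse-filter form, parameter values and function names are illustrative, not the dissertation's exact method.

```python
# Simplified sketch of the chain: pupil -> wavefront -> PSF -> precompensated image.
import numpy as np

def psf_from_wavefront(wavefront: np.ndarray, pupil: np.ndarray, wavelength: float) -> np.ndarray:
    """Point Spread Function from a wavefront aberration map defined on the pupil grid."""
    field = pupil * np.exp(1j * 2 * np.pi * wavefront / wavelength)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def precompensate(image: np.ndarray, psf: np.ndarray, k: float = 1e-3) -> np.ndarray:
    """Wiener-style inverse filtering: pre-distort the image so that subsequent
    blurring by the eye's PSF approximately restores the original content."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)          # regularized inverse filter
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(out, 0.0, 1.0)                  # keep within displayable range
```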

Relevance: 30.00%

Abstract:

The main objective of blasting is to produce optimum fragmentation for downstream processing. Fragmentation is usually considered optimum when the average fragment size is at a minimum and the fragmentation distribution is as uniform as possible. One of the parameters believed to affect blasting fragmentation is the time delay between holes of the same row. Although a significant number of studies in the literature examine the relationship between time delay and fragmentation, their results have often been contradictory. The purpose of this work is to increase the level of understanding of how the time delay between holes of the same row affects fragmentation. Two series of experiments were conducted for this purpose. The first series involved tests on small-scale grout and granite blocks to determine the moment of burden detachment. The instrumentation used for these experiments consisted mainly of strain gauges and piezoelectric sensors. Some experiments were also recorded with a high-speed camera. It was concluded that the time of detachment for this specific setup is between 300 and 600 μs. The second series of experiments involved the blasting of a 2 meter high granite bench, and its purpose was to determine the hole-to-hole delay that provides optimum fragmentation. The fragmentation results were assessed with image analysis software. Moreover, vibration was measured close to the blast and the experiments were recorded with high-speed cameras. The results suggest that fragmentation was optimum when delays between 4 and 6 ms were used for this specific setup. It was also found that the moment at which gases first appear to vent from the face was consistently around 6 ms after detonation.

Relevance: 30.00%

Abstract:

Ill-conditioned inverse problems frequently arise in the life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed with iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks has allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show that there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life-science scenarios. In the second part of the thesis, we begin to extend the proposed method into the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability to ill-posed inverse problems.
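For concreteness, the sketch below shows one classical model-based iterative algorithm of the kind this thesis builds on: gradient descent on a Tikhonov-regularized least-squares deblurring objective. The Gaussian blur operator, step size and regularization weight are illustrative assumptions rather than the thesis's configuration.

```python
# Minimal sketch of an iterative variational deblurring algorithm:
# minimize 0.5*||A x - y||^2 + 0.5*lam*||x||^2, with A a Gaussian blur operator.
import numpy as np
from scipy.ndimage import gaussian_filter

def deblur_tikhonov(y: np.ndarray, sigma: float = 2.0, lam: float = 0.01,
                    step: float = 0.5, iters: int = 100) -> np.ndarray:
    A = lambda img: gaussian_filter(img, sigma)   # blur operator (self-adjoint: symmetric kernel)
    x = y.copy()
    for _ in range(iters):
        grad = A(A(x) - y) + lam * x              # gradient of the regularized objective
        x = x - step * grad
    return x
```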

Relevance: 30.00%

Abstract:

This report describes the realization of a system in which an object detection model is implemented to detect the presence of people in images. This system could be used for several applications: for example, it could be carried on board an aircraft or a drone. In this case, the system is designed so that it can be mounted on light/medium-weight helicopters, helping the operator to find people in emergency situations. In the first chapter, the use of helicopters for civil protection is analysed and applications similar to this case study are listed. The second chapter describes the choice of the hardware devices used to implement a prototype of a system to collect, analyse and display images. First, the PC needed to process the images was chosen, based on the requirements of the analysis algorithms. Next, a camera compatible with the PC was selected. Finally, the battery pack was chosen taking into account the electrical consumption of the devices. The third chapter illustrates the algorithms used for image analysis. The fourth briefly analyses some of the regulatory requirements that must be taken into account for carrying all the devices on board. The fifth chapter describes the design and modelling, with the SolidWorks CAD software, of the devices and of a prototype case to house them. The sixth chapter discusses additive manufacturing, since the case was printed using this technology. The seventh chapter analyses some of the tests that must be carried out to certify the equipment, together with some simulations that were performed. The eighth chapter presents the results obtained after loading the object detection model onto the image-analysis hardware. The ninth chapter discusses conclusions and future applications.
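A hedged sketch of the kind of person-detection inference such a system performs, using an off-the-shelf pretrained detector from torchvision rather than the report's actual model; the file name and confidence threshold are placeholders.

```python
# Illustrative person detection on a single camera frame with a pretrained detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("frame.jpg").convert("RGB")      # hypothetical camera frame
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# COCO label 1 is "person"; keep detections above a confidence threshold.
for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if label.item() == 1 and score.item() > 0.5:
        print("person at", [round(v) for v in box.tolist()], "score", round(score.item(), 2))
```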

Relevance: 20.00%

Abstract:

Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not detected early. Automated screening algorithms have the potential to improve the identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. In assessing the accuracy for detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods. These results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses the results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
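The sketch below outlines a simplified BoVW pipeline of the kind evaluated above: local descriptors, a k-means codebook, soft-assignment coding with max pooling, and a linear SVM. ORB stands in for SURF (which requires OpenCV's non-free module), and plain soft assignment replaces the paper's semi-soft coding; all names and parameters are illustrative.

```python
# Simplified BoVW sketch: descriptors -> codebook -> coding -> max pooling -> linear SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def orb_descriptors(gray: np.ndarray) -> np.ndarray:
    """Sparse local descriptors for one grayscale image (ORB used instead of SURF)."""
    orb = cv2.ORB_create(nfeatures=500)
    _, desc = orb.detectAndCompute(gray, None)
    return np.zeros((1, 32), np.float32) if desc is None else desc.astype(np.float32)

def encode(desc: np.ndarray, codebook: KMeans, beta: float = 1e-2) -> np.ndarray:
    """Soft-assign each descriptor to visual words, then max-pool over descriptors."""
    d = codebook.transform(desc)                  # distances to each codeword
    codes = np.exp(-beta * d ** 2)                # soft assignment weights
    return codes.max(axis=0)                      # one fixed-length vector per image

# Hypothetical usage (images and labels would come from the retinal datasets):
# all_desc = np.vstack([orb_descriptors(img) for img in train_images])
# codebook = KMeans(n_clusters=500, n_init="auto").fit(all_desc)
# X = np.array([encode(orb_descriptors(img), codebook) for img in train_images])
# clf = LinearSVC().fit(X, train_labels)
```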

Relevance: 20.00%

Abstract:

We investigated the degree of T2 relaxometry changes over time in groups of patients with familial mesial temporal lobe epilepsy (FMTLE) and in asymptomatic relatives. We conducted both cross-sectional and longitudinal analyses of T2 relaxometry with Aftervoxel, an in-house software tool for medical image visualization. The cross-sectional study included 35 subjects (26 with FMTLE and 9 asymptomatic relatives) and 40 controls; the longitudinal study comprised 30 subjects (21 with FMTLE and 9 asymptomatic relatives; mean interval between MRIs 4.4 ± 1.5 years) and 16 controls. To increase the size of our groups of patients and relatives, we combined data acquired on 2 scanners (2T and 3T) and obtained z-scores using their respective controls. A general linear model in SPSS 21® was used for statistical analysis. In the cross-sectional analysis, elevated T2 relaxometry was identified in subjects with seizures, with intermediate values in asymptomatic relatives, compared to controls. Subjects with MRI signs of hippocampal sclerosis presented elevated T2 relaxometry in the ipsilateral hippocampus, while patients and asymptomatic relatives with normal MRI presented elevated T2 values in the right hippocampus. The longitudinal analysis revealed a significant increase in T2 relaxometry in the ipsilateral hippocampus exclusively in patients with seizures. The longitudinal increase of the T2 signal in patients with seizures suggests an interaction between ongoing seizures and the underlying pathology, causing progressive damage to the hippocampus. The identification of elevated T2 relaxometry in asymptomatic relatives and in patients with normal MRI suggests that genetic factors may be involved in the development of some mild hippocampal abnormalities in FMTLE.
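A small sketch of the scanner-harmonization step described above: each subject's T2 value is converted to a z-score using the controls acquired on the same scanner, so that 2T and 3T data can be pooled. The numerical values are placeholders, not study data.

```python
# Illustrative per-scanner z-scoring of T2 relaxometry values against controls.
import numpy as np

def zscore_against_controls(values: np.ndarray, control_values: np.ndarray) -> np.ndarray:
    """z-score each value relative to the mean and SD of same-scanner controls."""
    return (values - control_values.mean()) / control_values.std(ddof=1)

controls_2t = np.array([98.0, 101.0, 99.5, 100.2, 97.8])   # hypothetical T2 values (ms), 2T controls
patients_2t = np.array([104.0, 99.0, 107.5])               # hypothetical 2T patient values

print(zscore_against_controls(patients_2t, controls_2t))
```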

Relevance: 20.00%

Abstract:

Corynebacterium species (spp.) are among the most frequently isolated pathogens associated with subclinical mastitis in dairy cows. However, simple, fast, and reliable methods for the identification of species of the genus Corynebacterium are not currently available. This study aimed to evaluate the usefulness of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identifying Corynebacterium spp. isolated from the mammary glands of dairy cows. Corynebacterium spp. were isolated from milk samples via microbiological culture (n=180) and were analyzed by MALDI-TOF MS and 16S rRNA gene sequencing. Using the MALDI-TOF MS methodology, 161 Corynebacterium spp. isolates (89.4%) were correctly identified at the species level, whereas 12 isolates (6.7%) were identified at the genus level. Most of the isolates identified as Corynebacterium bovis by 16S rRNA gene sequencing (n=156; 86.7%) were also identified as C. bovis by MALDI-TOF MS. Five Corynebacterium spp. isolates (2.8%) were not correctly identified at the species level with MALDI-TOF MS, and 2 isolates (1.1%) were considered unidentified because, despite having MALDI-TOF MS scores >2, they were correctly identified only at the genus level. Therefore, MALDI-TOF MS could serve as an alternative method for species-level diagnosis of bovine intramammary infections caused by Corynebacterium spp.

Relevance: 20.00%

Abstract:

Multiple sclerosis, the most common cause of chronic neurological disability in young adults, is an inflammatory, demyelinating, and neurodegenerative disease of the CNS that leads to the formation of multiple foci of demyelinated lesions in the white matter. The diagnosis is currently based on magnetic resonance imaging and evidence of dissemination in time and space. However, diagnosis could be facilitated if biomarkers were available to rule out other disorders with similar symptoms, as well as to avoid cerebrospinal fluid analysis, which requires invasive collection. Additionally, the molecular mechanisms of the disease are not completely elucidated, especially those related to its neurodegenerative aspects. The identification of biomarker candidates and of molecular mechanisms of multiple sclerosis may be approached by proteomics. In the last 10 years, proteomic techniques have been applied to different biological samples (CNS tissue, cerebrospinal fluid, and blood) from multiple sclerosis patients and from its experimental model. In this review, we summarize these data, presenting their value for the current knowledge of the disease mechanisms, as well as their importance in identifying biomarkers or treatment targets.

Relevance: 20.00%

Abstract:

Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has been widely used for the identification and classification of microorganisms based on their proteomic fingerprints. However, the use of MALDI-TOF MS in plant research has been very limited. In the present study, a first protocol is proposed for metabolic fingerprinting by MALDI-TOF MS using three different MALDI matrices, with subsequent multivariate data analysis by in-house algorithms implemented in the R environment, for the taxonomic classification of plants from different genera, families and orders. By merging the data acquired with the different matrices and ionization modes, and through careful algorithm and parameter selection, we demonstrate that a close taxonomic classification can be achieved based on plant metabolic fingerprints, with 92% similarity to the taxonomic classifications found in the literature. The present work therefore highlights the great potential of applying MALDI-TOF MS to the taxonomic classification of plants and, furthermore, provides a preliminary foundation for future research.
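As a rough illustration of the multivariate fingerprint analysis described above (the study's in-house algorithms were implemented in R and are not reproduced here), the sketch below bins spectra onto a common m/z grid and applies hierarchical clustering to the resulting fingerprints; the binning scheme, distance metric and data are illustrative assumptions.

```python
# Illustrative fingerprint binning and hierarchical clustering of MALDI-TOF spectra.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def bin_spectrum(mz: np.ndarray, intensity: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Sum intensities within fixed m/z bins to obtain one fingerprint vector."""
    binned, _ = np.histogram(mz, bins=edges, weights=intensity)
    return binned / (binned.max() + 1e-12)        # simple per-spectrum normalization

# In practice, each row would come from bin_spectrum() applied to one sample's spectra
# (merged across the three matrices); placeholder data are used here.
rng = np.random.default_rng(1)
fingerprints = rng.random((12, 200))              # 12 samples x 200 m/z bins

Z = linkage(pdist(fingerprints, metric="cosine"), method="average")
print(fcluster(Z, t=4, criterion="maxclust"))     # assign samples to 4 clusters
```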