948 results for COMBINING CLASSIFIERS
Abstract:
Brain injury due to lack of oxygen or impaired blood flow around the time of birth may cause long-term neurological dysfunction or, in severe cases, death. Treatment needs to be initiated as soon as possible and tailored to the nature of the injury to achieve the best outcomes. The Electroencephalogram (EEG) currently provides the best insight into neurological activity. However, its interpretation presents a formidable challenge for neurophysiologists. Moreover, such expertise is not widely available, particularly around the clock in a typical busy Neonatal Intensive Care Unit (NICU). Therefore, an automated computerized system for detecting and grading the severity of brain injuries could be of great help for medical staff to diagnose and then initiate timely treatment. In this study, automated systems for the detection of neonatal seizures and for grading the severity of Hypoxic-Ischemic Encephalopathy (HIE) using EEG and Heart Rate (HR) signals are presented. It is well known that the EEG and HR signals carry a great deal of contextual and temporal information when examined at longer time scales. Systems developed in the past exploited this information either at a very early stage, before any intelligent block, or at a very late stage, where much of that information has already been lost. This work has focused in particular on the development of a system that incorporates the contextual information at the middle (classifier) level. This is achieved by using dynamic classifiers that are able to process sequences of feature vectors rather than only one feature vector at a time.
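The idea of incorporating temporal context at the classifier level, rather than before or after it, can be illustrated with a minimal sketch: instead of thresholding each per-epoch probability independently, a decision stage processes the whole probability sequence. The function names, window length, and data below are illustrative assumptions, not the authors' actual system, which uses dynamic classifiers rather than simple smoothing.

```python
# Minimal sketch: a contextual decision stage that looks at a sequence of
# per-epoch classifier probabilities instead of one value at a time.

def smooth_sequence(probs, window=3):
    """Centered moving average over a sequence of per-epoch probabilities."""
    half = window // 2
    out = []
    for i in range(len(probs)):
        lo, hi = max(0, i - half), min(len(probs), i + half + 1)
        out.append(sum(probs[lo:hi]) / (hi - lo))
    return out

def detect(probs, threshold=0.5, window=3):
    """Label each epoch after contextual smoothing of the whole sequence."""
    return [p >= threshold for p in smooth_sequence(probs, window)]

# The isolated spike at index 2 is suppressed; the sustained run survives.
print(detect([0.1, 0.1, 0.9, 0.1, 0.1, 0.8, 0.9, 0.8]))
# -> [False, False, False, False, False, True, True, True]
```

A single noisy epoch no longer triggers a detection, while a sustained run of high probabilities still does, which is the benefit of exploiting context at this level.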
Abstract:
Taxonomies have gained broad usage in a variety of fields due to their extensibility, as well as their use for classification and knowledge organization. Of particular interest is the digital document management domain, in which their hierarchical structure can be effectively employed to organize documents into content-specific categories. Common or standard taxonomies (e.g., the ACM Computing Classification System) contain concepts that are too general for conceptualizing specific knowledge domains. In this paper we introduce a novel automated approach that combines sub-trees from general taxonomies with specialized seed taxonomies by using specific Natural Language Processing techniques. We provide an extensible and generalizable model for combining taxonomies in the practical context of two very large European research projects. Because the manual combination of taxonomies by domain experts is a highly time-consuming task, our model measures the semantic relatedness between concept labels in CBOW or skip-gram Word2vec vector spaces. A preliminary quantitative evaluation of the resulting taxonomies is performed after applying a greedy algorithm with incremental thresholds used for matching and combining topic labels.
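The core matching step, measuring semantic relatedness between concept labels and greedily accepting pairs above a threshold, can be sketched as follows. Real Word2vec embeddings are hundreds of dimensions; the tiny hand-made vectors, label names, and threshold value here are assumptions for demonstration only, not the projects' actual taxonomies.

```python
import math

def cosine(u, v):
    """Cosine similarity between two label vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def greedy_match(general, seed, threshold):
    """Greedily pair each seed-taxonomy label with its closest
    general-taxonomy label, keeping the pair only above the threshold."""
    matches = {}
    for s_label, s_vec in seed.items():
        best_label, best_vec = max(general.items(),
                                   key=lambda kv: cosine(s_vec, kv[1]))
        if cosine(s_vec, best_vec) >= threshold:
            matches[s_label] = best_label
    return matches

general = {"machine learning": [0.9, 0.1, 0.0], "databases": [0.0, 0.2, 0.9]}
seed = {"deep learning": [0.8, 0.2, 0.1], "pottery": [0.1, 0.9, 0.1]}
print(greedy_match(general, seed, threshold=0.8))
# -> {'deep learning': 'machine learning'}
```

Running the same matcher repeatedly with incrementally lower thresholds, as the paper's greedy algorithm does, lets confident matches be fixed first before more speculative ones are considered.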
Abstract:
Taphonomic research of bones can provide additional insight into a site's formation and development, the burial environment and ongoing post-mortem processes. A total of 30 tortoise (Cylindraspis) femur bone samples from the Mare aux Songes site (Mauritius) were studied histologically, assessing parameters such as presence and type of microbial alteration, inclusions, staining/infiltrations, the degree of microcracking and birefringence. The absence of microbial attack in the 4200-year-old Mare aux Songes bones suggests the animals rapidly entered the soil whole-bodied and were sealed anoxically, although they suffered from biological and chemical degradation (i.e. pyrite formation/oxidation, mineral dissolution and staining) related to changes in the site's hydrology. Additionally, carbon and nitrogen stable isotopes were analysed to obtain information on the animals' feeding behaviour. The results show narrowly distributed δ13C ratios, indicating a terrestrial C3 plant-based diet, combined with a wide range in δ15N ratios. This is most likely related to the tortoises' drought-adaptive ability to change their metabolic processes, which can affect the δ15N ratios. Furthermore, ZooMS collagen fingerprinting analysis successfully identified two tortoise species (C. triserrata and C. inepta) in the bone assemblage, which, when combined with stable isotope data, revealed significantly different δ15N ratios between the two tortoise species. As climatic changes around this period resulted in increased aridity in the Mascarene Islands, this could explain the extremely elevated δ15N ratios in our dataset. The endemic fauna was able to endure the climatic changes 4200 years ago, although human arrival in the 17th century changed the original habitat to such an extent that it resulted in the extinction of several species.
Fortunately we are still able to study these extinct tortoises due to the beneficial conditions of their burial environment, resulting in excellent bone preservation.
Abstract:
Automatic detection systems do not perform as well as human observers, even on simple detection tasks. A potential solution to this problem is training vision systems on appropriate regions of interest (ROIs), in contrast to training on predefined and arbitrarily selected regions. Here we focus on detecting pedestrians in static scenes. Our aim is to answer the following question: can automatic vision systems for pedestrian detection be improved by training them on perceptually-defined ROIs?
Abstract:
Requirements engineering is a key issue in the development of a software project. Like any other development activity, it is not without risks. This work is an empirical study of requirements risks using machine learning techniques, specifically Bayesian network classifiers. We have defined several models to predict the risk level of a given requirement using three datasets that collect metrics taken from the requirement specifications of different projects. The classification accuracy of the Bayesian models obtained is evaluated and compared using several classification performance measures. The results of the experiments show that Bayesian networks yield valid predictors. Specifically, a tree augmented network structure shows competitive experimental performance on all datasets. Besides, the relations established between the variables collected to determine the risk level of a requirement match those set by requirements engineers. We show that Bayesian networks are valid tools for the automation of risk assessment in requirements engineering.
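The simplest Bayesian-network classifier is naive Bayes, in which every feature depends only on the class; the tree augmented network the abstract mentions adds a tree of feature-to-feature dependencies on top of that structure. A minimal naive Bayes sketch over categorical requirement metrics is shown below; the feature names, data, and risk labels are invented examples, not the paper's datasets.

```python
from collections import Counter, defaultdict

def train(rows, labels):
    """Estimate class counts and per-class feature-value counts."""
    priors = Counter(labels)
    cond = defaultdict(Counter)            # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
    return priors, cond

def predict(priors, cond, row, alpha=1.0):
    """Pick the class maximizing P(class) * prod_i P(feature_i | class),
    with Laplace smoothing (2 = values per feature in this toy data)."""
    total = sum(priors.values())
    best, best_p = None, -1.0
    for y, c in priors.items():
        p = c / total
        for i, v in enumerate(row):
            counts = cond[(i, y)]
            p *= (counts[v] + alpha) / (sum(counts.values()) + alpha * 2)
        if p > best_p:
            best, best_p = y, p
    return best

# Toy requirement metrics: (size, volatility) -> risk level.
rows = [("large", "high"), ("large", "high"), ("small", "low"), ("small", "low")]
labels = ["risky", "risky", "safe", "safe"]
priors, cond = train(rows, labels)
print(predict(priors, cond, ("large", "high")))   # -> risky
```

In the paper's setting, each conditional table would be learned from the collected requirement metrics, and the network structure itself encodes which relations between variables the model has discovered.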
Abstract:
In the last decade, research in Computer Vision has developed several algorithms to help botanists and non-experts classify plants based on images of their leaves. LeafSnap is a mobile application that uses a multiscale curvature model of the leaf margin to classify leaf images into species. It has achieved high levels of accuracy on 184 tree species from the Northeastern US. We extend the research that led to the development of LeafSnap along two lines. First, LeafSnap's underlying algorithms are applied to a set of 66 tree species from Costa Rica. Then, texture is used as an additional criterion to measure the level of improvement achieved in the automatic identification of Costa Rican tree species. A 25.6% improvement was achieved for a Costa Rican clean image dataset and 42.5% for a Costa Rican noisy image dataset. In both cases, our results show this increment to be statistically significant. Further statistical analysis of the impact of visual noise, the best algorithm combinations per species, and the best value of the minimal cardinality of the set of candidate species that the tested algorithms render as best matches is also presented in this research.
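Using texture as an additional criterion amounts to fusing per-species scores from two cues and then reporting a small candidate set of best matches. The sketch below shows one simple way to do this; the species names, scores, and equal weighting are illustrative assumptions, not the actual curvature or texture models.

```python
# Sketch: fuse per-species similarity scores from two criteria (e.g. leaf-margin
# curvature and texture) and return the k best candidate species.

def combine_scores(curvature, texture, w=0.5):
    """Weighted fusion of two per-species score dictionaries (same keys)."""
    return {sp: w * curvature[sp] + (1 - w) * texture[sp] for sp in curvature}

def top_k(scores, k):
    """The k species with the highest combined score (the candidate set)."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

curvature = {"Cedrela odorata": 0.70, "Quercus costaricensis": 0.55,
             "Ficus obtusifolia": 0.40}
texture   = {"Cedrela odorata": 0.60, "Quercus costaricensis": 0.80,
             "Ficus obtusifolia": 0.30}
fused = combine_scores(curvature, texture)
print(top_k(fused, k=2))
# -> ['Quercus costaricensis', 'Cedrela odorata']
```

The cardinality of this candidate set is exactly the quantity whose best minimal value the abstract's statistical analysis investigates: a larger set makes it more likely the true species is among the matches, at the cost of more candidates for the user to review.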
Abstract:
This dissertation is concerned with the control, combining, and propagation of laser beams through a turbulent atmosphere. In the first part we consider adaptive optics: the process of controlling the beam based on information about the current state of the turbulence. If the target is cooperative and provides a coherent return beam, the turbulence-induced phase fluctuations can be measured near the beam transmitter and, in principle, corrected by adaptive optics. However, for many applications, the target is uncooperative. In this case, we show that an incoherent return from the target can be used instead. Using the principle of reciprocity, we derive a novel relation between the field at the target and the scattered field at a detector. We then demonstrate through simulation that an adaptive optics system can utilize this relation to focus a beam through atmospheric turbulence onto a rough surface. In the second part we consider beam combining. To achieve the power levels needed for directed energy (DE) applications it is necessary to combine a large number of lasers into a single beam. The large linewidths inherent in high-power fiber and slab lasers cause random phase and intensity fluctuations occurring on sub-nanosecond time scales. We demonstrate that this presents a challenging problem when attempting to phase-lock high-power lasers. Furthermore, we show that even if instruments are developed that can precisely control the phase of high-power lasers, coherent combining is problematic for DE applications. The dephasing effects of atmospheric turbulence typically encountered in DE applications will degrade the coherent properties of the beam before it reaches the target. Finally, we investigate the propagation of Bessel and Airy beams through atmospheric turbulence. It has been proposed that these quasi-non-diffracting beams could be resistant to the effects of atmospheric turbulence.
However, we find that atmospheric turbulence disrupts the quasi-non-diffracting nature of Bessel and Airy beams when the transverse coherence length nears the initial aperture diameter or diagonal, respectively. The turbulence-induced transverse phase distortion limits the effectiveness of Bessel and Airy beams for applications requiring propagation over long distances in the turbulent atmosphere.
Abstract:
Due to trends in aero-design, aeroelasticity becomes increasingly important in modern turbomachines. Design requirements of turbomachines lead to the development of high aspect ratio blades and blade integral disc designs (blisks), which are especially prone to complex modes of vibration. Therefore, experimental investigations yielding high quality data are required for improving the understanding of aeroelastic effects in turbomachines. One possibility to achieve high quality data is to excite and measure blade vibrations in turbomachines. The major requirement for blade excitation and blade vibration measurements is to minimize interference with the aeroelastic effects to be investigated. Thus in this paper, a non-contact, and thus low-interference, experimental set-up for exciting and measuring blade vibrations is proposed and shown to work. A novel acoustic system excites rotor blade vibrations, which are measured with an optical tip-timing system. By performing measurements in an axial compressor, the potential of the acoustic excitation method for investigating aeroelastic effects is explored. The basic principle of this method is described and proven through the analysis of blade responses at different acoustic excitation frequencies and at different rotational speeds. To verify the accuracy of the tip-timing system, amplitudes measured by tip-timing are compared with strain gage measurements. They are found to agree well. Two approaches to vary the nodal diameter (ND) of the excited vibration mode by controlling the acoustic excitation are presented. By combining the different excitable acoustic modes with a phase-lag control, each ND of the investigated 30-blade rotor can be excited individually. This feature of the present acoustic excitation system is of great benefit to aeroelastic investigations and represents one of the main advantages over other excitation methods proposed in the past.
In future studies, the acoustic excitation method will be used to investigate aeroelastic effects in high-speed turbomachines in detail. The results of these investigations are to be used to improve the aeroelastic design of modern turbomachines.
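The phase-lag idea can be sketched numerically: with several acoustic sources spaced evenly around the annulus, driving source n with a phase proportional to the target ND produces a pressure pattern with that many periods around the circumference. The source count, target ND, and the simple control law below are illustrative assumptions; the paper's actual excitation hardware and control scheme are not described here.

```python
import math

def phase_lags(n_sources, nd):
    """Per-source drive phase (radians, wrapped to [0, 2*pi)) so that
    n_sources evenly spaced sources target nodal diameter nd."""
    return [(2 * math.pi * nd * n / n_sources) % (2 * math.pi)
            for n in range(n_sources)]

# Four sources targeting ND = 1: quarter-turn phase steps around the annulus.
print([round(p, 3) for p in phase_lags(4, 1)])
# -> [0.0, 1.571, 3.142, 4.712]
```

Changing the target ND only changes the programmed phase offsets, which is why a phase-lag control can address each nodal diameter of a 30-blade rotor individually without any mechanical reconfiguration.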
Abstract:
Part 19: Knowledge Management in Networks
Abstract:
Background: Statistical analysis of DNA microarray data provides a valuable diagnostic tool for the investigation of genetic components of diseases. To take advantage of the multitude of available data sets and analysis methods, it is desirable to combine both different algorithms and data from different studies. Applying ensemble learning, consensus clustering and cross-study normalization methods for this purpose in an almost fully automated process, and linking different analysis modules together under a single interface, would simplify many microarray analysis tasks. Results: We present ArrayMining.net, a web application for microarray analysis that provides easy access to a wide choice of feature selection, clustering, prediction, gene set analysis and cross-study normalization methods. In contrast to other microarray-related web tools, multiple algorithms and data sets for an analysis task can be combined using ensemble feature selection, ensemble prediction, consensus clustering and cross-platform data integration. By interlinking different analysis tools in a modular fashion, new exploratory routes become available, e.g. ensemble sample classification using features obtained from a gene set analysis and data from multiple studies. The analysis is further simplified by automatic parameter selection mechanisms and linkage to web tools and databases for functional annotation and literature mining. Conclusion: ArrayMining.net is a free web application for microarray analysis combining a broad choice of algorithms based on ensemble and consensus methods, using automatic parameter selection and integration with annotation databases.
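Ensemble feature selection of the kind described here can be sketched as rank aggregation: several selectors each rank the features, and the ranks are averaged so that genes favored by many methods rise to the top. The selector names, gene names, and rankings below are invented for illustration and are not ArrayMining.net output.

```python
def ensemble_rank(rankings):
    """Average each feature's position (0 = best) across several rankings
    and return the features sorted by that average, best first."""
    avg = {}
    for feat in rankings[0]:
        avg[feat] = sum(r.index(feat) for r in rankings) / len(rankings)
    return sorted(avg, key=avg.get)

# Three hypothetical selectors ranking the same four genes.
fold_change = ["geneA", "geneB", "geneC", "geneD"]
t_test      = ["geneB", "geneA", "geneD", "geneC"]
relief      = ["geneA", "geneD", "geneB", "geneC"]
print(ensemble_rank([fold_change, t_test, relief]))
# -> ['geneA', 'geneB', 'geneD', 'geneC']
```

The appeal of such consensus schemes for microarray data is robustness: a gene ranked highly by a single method due to noise is demoted, while genes that several independent criteria agree on are kept, which mirrors how the tool combines ensemble feature selection with ensemble prediction and consensus clustering.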