980 results for Identification algorithms
Abstract:
The emergence of new horizons in the field of travel assistant management leads to the development of cutting-edge systems focused on improving existing ones. Moreover, new opportunities are also arising as systems tend to become more reliable and autonomous. In this paper, a self-learning embedded system for object identification based on adaptive-cooperative dynamic approaches is presented for intelligent sensor infrastructures. The proposed system is able to detect and identify moving objects using a dynamic decision tree. It combines machine learning algorithms and cooperative strategies to make the system more adaptive to changing environments. The proposed system may therefore be useful for many applications, such as shadow tolls (since several types of vehicles can be distinguished), parking optimization systems, and traffic condition improvement systems.
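As a rough illustration of the decision-tree idea, a static sketch with hypothetical features (length, height, axle count) might look as follows; the paper's tree is dynamic and would be restructured as the environment changes.

```python
# Minimal sketch of decision-tree vehicle identification over hypothetical
# sensor features; not the paper's actual adaptive model.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [length_m, height_m, axle_count] per detected object
X = [[4.2, 1.5, 2], [4.5, 1.6, 2], [12.0, 3.8, 3], [16.5, 4.0, 5], [2.1, 1.2, 2]]
y = ["car", "car", "bus", "truck", "motorbike"]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[11.5, 3.7, 3]]))  # e.g. ['bus']
```

Periodic refitting on newly labelled detections would approximate the adaptive behaviour the abstract describes.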
Abstract:
Several basic olfactory tasks must be solved by highly olfactory animals, including background suppression, multiple object separation, mixture separation, and source identification. The large number N of classes of olfactory receptor cells—hundreds or thousands—permits the use of computational strategies and algorithms that would not be effective in a stimulus space of low dimension. A model of the patterns of olfactory receptor responses, based on the broad distribution of olfactory thresholds, is constructed. Representing one odor from the viewpoint of another then allows a common description of the most important basic problems and shows how to solve them when N is large. One possible biological implementation of these algorithms uses action potential timing and adaptation as the “hardware” features that are responsible for effective neural computation.
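A toy sketch of the threshold-based coding idea, assuming broadly (log-uniformly) distributed thresholds per (odor, receptor) pair and identification by matching binary response patterns over candidate concentrations; all parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_odors = 1000, 20
# Thresholds spread over four decades of concentration per (odor, receptor) pair.
thr = 10 ** rng.uniform(-2, 2, size=(n_odors, N))

def pattern(odor, c):
    """Binary response pattern of the N receptor classes at concentration c."""
    return c > thr[odor]

def identify(observed):
    """Best-matching odor, maximised over a grid of candidate concentrations."""
    grid = np.logspace(-2, 2, 25)
    scores = [max(np.mean(pattern(o, c) == observed) for c in grid)
              for o in range(n_odors)]
    return int(np.argmax(scores))

print(identify(pattern(7, c=0.3)))  # -> 7; large N makes the match unambiguous
```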
Abstract:
We sought to create a comprehensive catalog of yeast genes whose transcript levels vary periodically within the cell cycle. To this end, we used DNA microarrays and samples from yeast cultures synchronized by three independent methods: α factor arrest, elutriation, and arrest of a cdc15 temperature-sensitive mutant. Using periodicity and correlation algorithms, we identified 800 genes that meet an objective minimum criterion for cell cycle regulation. In separate experiments, designed to examine the effects of inducing either the G1 cyclin Cln3p or the B-type cyclin Clb2p, we found that the mRNA levels of more than half of these 800 genes respond to one or both of these cyclins. Furthermore, we analyzed our set of cell cycle–regulated genes for known and new promoter elements and show that several known elements (or variations thereof) contain information predictive of cell cycle regulation. A full description and complete data sets are available at http://cellcycle-www.stanford.edu
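The paper's screening combines periodicity and correlation scores; below is a minimal sketch of a Fourier-style periodicity score, assuming evenly spaced time points and a known cell-cycle period (both simplifications relative to the actual analysis).

```python
import numpy as np

def periodicity_score(expr, times, period):
    """Magnitude of the Fourier component of `expr` at the cell-cycle frequency."""
    expr = expr - expr.mean()                      # remove the constant offset
    omega = 2 * np.pi / period
    a = np.sum(expr * np.cos(omega * times))
    b = np.sum(expr * np.sin(omega * times))
    return np.hypot(a, b)

times = np.arange(0, 120, 7)                       # minutes, hypothetical sampling
cycling = np.sin(2 * np.pi * times / 60) + 0.1     # peaks once per 60-min cycle
flat = np.full_like(times, 0.1, dtype=float)
print(periodicity_score(cycling, times, 60) > periodicity_score(flat, times, 60))  # True
```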
Abstract:
The low complexity of IIR adaptive filters (AFs) is especially appealing for real-time applications, but some drawbacks have so far prevented their widespread use. For gradient-based IIR AFs, adverse operational conditions cause convergence problems in system identification scenarios: underdamped and clustered poles, undermodelling or non-white input signals lead to error surfaces where the adaptation nearly stops on large plateaus or gets stuck at sub-optimal local minima that cannot be identified as such a priori. Furthermore, the non-stationarity in the input regressor brought about by the filter recursivity, and the approximations made by the update rules of the stochastic gradient algorithms, constrain the learning step size to small values, causing slow convergence. In this work, we propose IIR performance enhancement strategies based on hybrid combinations of AFs that achieve higher convergence rates than ordinary IIR AFs while maintaining stability.
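A minimal sketch of the hybrid-combination idea: two independently adapted filters (one fast, one slow) are mixed through a convex combination whose weight is itself adapted. For brevity the component filters here are FIR LMS; the paper combines IIR AFs, and all step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8
w_true = rng.standard_normal(L)                    # unknown system to identify
w_fast, w_slow, a = np.zeros(L), np.zeros(L), 0.0  # a parametrises the mixture

for n in range(5000):
    x = rng.standard_normal(L)
    d = w_true @ x + 0.01 * rng.standard_normal()  # noisy desired signal
    y_fast, y_slow = w_fast @ x, w_slow @ x
    lam = 1 / (1 + np.exp(-a))                     # mixing weight kept in (0, 1)
    e = d - (lam * y_fast + (1 - lam) * y_slow)    # combined output error
    w_fast += 0.05 * (d - y_fast) * x              # independent LMS updates:
    w_slow += 0.005 * (d - y_slow) * x             #   fast and slow step sizes
    a += 10.0 * e * (y_fast - y_slow) * lam * (1 - lam)  # gradient step on the mixture
```

The combination inherits the fast filter's initial convergence and the slow filter's low steady-state error, which is the mechanism such hybrid schemes exploit to raise convergence rates.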
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Texas Department of Transportation, Austin
Abstract:
Magnification factors specify the extent to which the area of a small patch of the latent (or 'feature') space of a topographic mapping is magnified on projection to the data space, and are of considerable interest in both neuro-biological and data analysis contexts. Previous attempts to consider magnification factors for the self-organizing map (SOM) algorithm have been hindered because the mapping is only defined at discrete points (given by the reference vectors). In this paper we consider the batch version of SOM, for which a continuous mapping can be defined, as well as the Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997), which has been introduced as a probabilistic formulation of the SOM. We show how the techniques of differential geometry can be used to determine magnification factors as continuous functions of the latent space coordinates. The results are illustrated here using a problem involving the identification of crab species from morphological data.
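For reference, the central quantity admits a compact expression; this is a sketch of the standard differential-geometry result for a smooth mapping y(x) from the latent space to the data space (the paper derives the corresponding forms for the batch SOM and the GTM):

```latex
% Areal magnification factor of a smooth map y(x); g = J^T J is the metric
% induced on the latent space by the embedding into the data space.
\[
  \frac{dA'}{dA} \;=\; \sqrt{\det\!\left(J^{\mathsf{T}} J\right)},
  \qquad
  J_{kj} \;=\; \frac{\partial y_k}{\partial x_j},
\]
```

where dA is the area of a small latent-space patch and dA' the area of its image in the data space.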
Abstract:
The study developed statistical techniques to evaluate visual field progression for use with the Humphrey Field Analyzer (HFA). The long-term fluctuation (LF) was evaluated in stable glaucoma. The magnitude of both LF components showed little relationship with MD, CPSD and SF. An algorithm was proposed for determining the clinical necessity for a confirmatory follow-up examination. The between-examination variability was determined for the HFA Standard and FASTPAC algorithms in glaucoma. FASTPAC exhibited greater between-examination variability than the Standard algorithm across the range of sensitivities and with increasing eccentricity. The difference in variability between the algorithms had minimal clinical significance. The effect of repositioning the baseline in the Glaucoma Change Probability Analysis (GCPA) was evaluated. The global baseline of the GCPA limited the detection of progressive change at a single stimulus location. A new technique, pointwise univariate linear regression (ULR) of absolute sensitivity, and of pattern deviation, against time of follow-up was developed. In each case, pointwise ULR was more sensitive to localised progressive changes in sensitivity than ULR of MD alone. Small changes in sensitivity were more readily determined by the pointwise ULR than by the GCPA. A comparison between the outcomes of pointwise ULR for all fields and for the last six fields manifested linear and curvilinear declines in absolute sensitivity and pattern deviation. A method for delineating progressive loss in glaucoma, based upon the error in the forecasted sensitivity of a multivariate model, was developed. Multivariate forecasting exhibited little agreement with GCPA in glaucoma but showed promise for monitoring visual field progression in OHT patients. The recovery of sensitivity in optic neuritis over time was modelled with a Cumulative Gaussian function. The rate and level of recovery were greater in the peripheral than the central field. Probability models to forecast the field of recovery were proposed.
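A minimal sketch of the pointwise ULR idea: one linear regression of sensitivity against follow-up time per stimulus location, flagging locations with a significantly negative slope. The progression criterion used here (slope < 0, p < 0.05) is an illustrative choice, not the study's exact rule.

```python
import numpy as np
from scipy.stats import linregress

def progressing_locations(times, field_series, alpha=0.05):
    """field_series: (n_visits, n_locations) array of sensitivities in dB."""
    flagged = []
    for loc in range(field_series.shape[1]):
        fit = linregress(times, field_series[:, loc])
        if fit.slope < 0 and fit.pvalue < alpha:
            flagged.append((loc, fit.slope))
    return flagged

times = np.array([0, 6, 12, 18, 24, 30])        # months of follow-up
series = np.tile(30.0, (6, 4))                  # 4 locations, stable at 30 dB
series[:, 2] -= 0.4 * times                     # one location losing sensitivity
print(progressing_locations(times, series))     # -> [(2, -0.4)]
```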
An agent approach to improving radio frequency identification enabled Returnable Transport Equipment
Abstract:
Returnable transport equipment (RTE) such as pallets form an integral part of the supply chain, and poor management leads to costly losses. Companies often address this matter by outsourcing the management of RTE to logistics service providers (LSPs). LSPs are faced with the task of providing logistical expertise to reduce RTE-related waste, whilst differentiating their own services to remain competitive. In the current challenging economic climate, the role of the LSP in delivering innovative ways to achieve competitive advantage has never been so important. It is reported that applying radio frequency identification (RFID) to RTE enables LSPs such as DHL to gain competitive advantage and offer clients improvements such as loss reduction, process efficiency improvement and effective security. However, the increased visibility and functionality of RFID-enabled RTE requires further investigation with regard to decision-making. The distributed nature of the RTE network favours a decentralised decision-making format. Agents are an effective way to represent objects from the bottom up, capturing their behaviour and enabling localised decision-making. Therefore, an agent-based system is proposed to represent the RTE network and utilise the visibility and data gathered from RFID tags. Two types of agents are developed to represent the trucks and the RTE, with bespoke rules and algorithms to facilitate negotiations. The aim is to create schedules that integrate RTE pick-ups as the trucks go back to the depot. The findings assert that:
- agent-based modelling provides an autonomous tool, which is effective in modelling RFID-enabled RTE in a decentralised format utilising the real-time data facility;
- the RFID-enabled RTE model developed enables autonomous agent interaction, which leads to a feasible schedule integrating both forward and reverse flows for each RTE batch;
- the RTE agent scheduling algorithm developed promotes the utilisation of RTE by including an automatic return flow for each batch of RTE, whilst considering fleet costs and utilisation rates;
- the research conducted contributes an agent-based platform, which LSPs can use to assess the most appropriate strategies to implement for RTE network improvement for each of their clients.
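A toy sketch of the negotiation mechanism: RTE agents announce pick-up requests, and a truck agent accepts a request when the cheapest insertion into its return route stays below a detour budget. The geometry, costs and threshold are illustrative assumptions, not the thesis's actual rules.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

class RTEAgent:
    def __init__(self, name, location):
        self.name, self.location = name, location

class TruckAgent:
    def __init__(self, position, depot, max_detour=5.0):
        self.route = [position, depot]          # current leg back to the depot
        self.max_detour = max_detour

    def cheapest_insertion(self, rte):
        """Best (detour, index) for inserting the pick-up into the route."""
        best = None
        for i in range(len(self.route) - 1):
            a, b = self.route[i], self.route[i + 1]
            detour = dist(a, rte.location) + dist(rte.location, b) - dist(a, b)
            if best is None or detour < best[0]:
                best = (detour, i + 1)
        return best

    def negotiate(self, rte):
        detour, pos = self.cheapest_insertion(rte)
        if detour <= self.max_detour:           # accept only affordable detours
            self.route.insert(pos, rte.location)
            return True
        return False

truck = TruckAgent(position=(0, 0), depot=(10, 0))
for pallet in (RTEAgent("P1", (5, 1)), RTEAgent("P2", (5, 8))):
    print(pallet.name, truck.negotiate(pallet)) # P1 True, P2 False
```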
Abstract:
False friends are pairs of words in two languages that are perceived as similar but have different meanings. We present an improved algorithm for acquiring false friends from a sentence-level aligned parallel corpus, based on statistical observations of word occurrences and co-occurrences in the parallel sentences. The results are compared with an entirely semantic measure of cross-lingual similarity between words, based on using the Web as a corpus: the words' local contexts are analysed from the text snippets returned by searching in Google. The statistical and semantic measures are further combined into an improved algorithm for the identification of false friends that achieves nearly twice the accuracy of previously known algorithms. The evaluation is performed for identifying cognates between Bulgarian and Russian, but the proposed methods could be adopted for other language pairs for which parallel corpora and bilingual glossaries are available.
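A sketch of the statistical core of such a method: candidate false friends are word pairs that are similar in form but rarely co-occur in aligned sentence pairs. The similarity measure and thresholds below are illustrative stand-ins for the paper's statistics.

```python
from difflib import SequenceMatcher

def form_similarity(w1, w2):
    """Orthographic similarity in [0, 1] between two words."""
    return SequenceMatcher(None, w1, w2).ratio()

def association(w1, w2, aligned_pairs):
    """Rate at which w2 (language B) appears in sentences aligned to those with w1 (language A)."""
    n1 = sum(w1 in a for a, b in aligned_pairs)
    n12 = sum(w1 in a and w2 in b for a, b in aligned_pairs)
    return n12 / n1 if n1 else 0.0

def looks_like_false_friend(w1, w2, aligned_pairs):
    # Similar in form, yet not used as mutual translations in context.
    return (form_similarity(w1, w2) > 0.8
            and association(w1, w2, aligned_pairs) < 0.1)
```

Here `aligned_pairs` is assumed to be a list of (word-set, word-set) tuples, one per aligned sentence pair.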
Abstract:
Terrestrial remote sensing imagery involves the acquisition of information from the Earth's surface without physical contact with the area under study. Among the remote sensing modalities, hyperspectral imaging has recently emerged as a powerful passive technology. This technology has been widely used in the fields of urban and regional planning, water resource management, environmental monitoring, food safety, counterfeit drug detection, oil spill and other chemical contamination detection, biological hazard prevention, and target detection for military and security purposes [2-9]. Hyperspectral sensors sample the reflected solar radiation from the Earth's surface in the portion of the spectrum extending from the visible region through the near-infrared and mid-infrared (wavelengths between 0.3 and 2.5 µm) in hundreds of narrow (on the order of 10 nm) contiguous bands [10]. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics [6]. However, this huge spectral resolution yields large amounts of data to be processed. For example, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [11] collects a 512 (along track) × 614 (across track) × 224 (bands) × 12 (bits) data cube in 5 s, corresponding to about 140 MB. Similar data collection rates are achieved by other spectrometers [12]. Such huge data volumes put stringent requirements on communications, storage, and processing. The problem of signal subspace identification of hyperspectral data represents a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction (DR), yielding gains in data storage and retrieval and in computational time and complexity. Additionally, DR may also improve algorithm performance, since it reduces data dimensionality without loss of useful signal components. The computation of statistical estimates is a relevant example of the advantages of DR, since the number of samples required to obtain accurate estimates increases drastically with the dimensionality of the data (the Hughes phenomenon) [13].
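A minimal sketch of signal subspace identification by eigendecomposition of the sample correlation matrix; practical estimators (which also model the noise statistics and select the subspace dimension automatically) are more involved, and the shapes below are hypothetical.

```python
import numpy as np

def signal_subspace(Y, k):
    """Y: bands x pixels data matrix; returns an orthonormal basis of the k-dim subspace."""
    R = (Y @ Y.T) / Y.shape[1]            # sample correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
    return eigvecs[:, -k:]                # leading eigenvectors span the signal subspace

rng = np.random.default_rng(0)
bands, pixels, k = 224, 10_000, 8
A = rng.standard_normal((bands, k))       # k hypothetical spectral signatures
S = rng.random((k, pixels))               # abundances
Y = A @ S + 0.01 * rng.standard_normal((bands, pixels))

E = signal_subspace(Y, k)
Y_reduced = E.T @ Y                       # dimensionality reduction: 224 -> 8 bands
print(Y_reduced.shape)                    # (8, 10000)
```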
Abstract:
In the last decade, research in Computer Vision has developed several algorithms to help botanists and non-experts classify plants based on images of their leaves. LeafSnap is a mobile application that uses a multiscale curvature model of the leaf margin to classify leaf images into species. It has achieved high levels of accuracy on 184 tree species from the Northeast US. We extend the research that led to the development of LeafSnap along two lines. First, LeafSnap's underlying algorithms are applied to a set of 66 tree species from Costa Rica. Then, texture is used as an additional criterion to measure the level of improvement achieved in the automatic identification of Costa Rican tree species. A 25.6% improvement was achieved for a Costa Rican clean image dataset and 42.5% for a Costa Rican noisy image dataset. In both cases, our results show this increment to be statistically significant. Further statistical analysis of the impact of visual noise, of the best algorithm combinations per species, and of the best value of the minimal cardinality of the set of candidate species that the tested algorithms return as best matches is also presented in this research.
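An illustrative sketch of fusing a shape (curvature-based) descriptor with a texture descriptor through a normalised distance sum for nearest-neighbour identification; the fusion rule and descriptor dimensions are assumptions, not LeafSnap's exact method.

```python
import numpy as np

def identify(query_shape, query_texture, db_shape, db_texture, species, k=3):
    """Return the k best-matching species for a query leaf."""
    d_shape = np.linalg.norm(db_shape - query_shape, axis=1)
    d_texture = np.linalg.norm(db_texture - query_texture, axis=1)
    combined = d_shape / d_shape.max() + d_texture / d_texture.max()  # balance modalities
    return [species[i] for i in np.argsort(combined)[:k]]  # candidate species set

rng = np.random.default_rng(0)
db_shape, db_texture = rng.random((100, 32)), rng.random((100, 16))
species = [f"species_{i % 20}" for i in range(100)]
print(identify(rng.random(32), rng.random(16), db_shape, db_texture, species))
```

Returning a small candidate set rather than a single species mirrors the abstract's analysis of the minimal cardinality of the set of best matches.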
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as numerical analysis techniques to properly simulate the geomechanical system. The widely used layered pavement analysis program ILLI-PAVE was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster solutions than direct runs of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop SOFTSYS models. The solution of the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results. The thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability from FWD tests.
The backcalculated asphalt concrete layer thickness results matched better for full-depth asphalt pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
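A conceptual sketch of the surrogate-assisted backcalculation loop: a genetic algorithm searches layer moduli so that surrogate-predicted deflections match the measured FWD basin. The `surrogate` below is a made-up stand-in for the trained ANN (which in SOFTSYS approximates ILLI-PAVE), and the GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.uniform(5e4, 1.5e5, size=(7, 3))          # toy response matrix (7 sensors, 3 layers)

def surrogate(inv_moduli):
    """Stand-in for the trained ANN: (1/moduli) -> 7 deflections (softer layers deflect more)."""
    return inv_moduli @ C.T

true_moduli = np.array([450.0, 200.0, 60.0])      # hypothetical layer moduli
measured = surrogate(1.0 / true_moduli)           # synthetic "field" basin

def fitness(pop):
    preds = surrogate(1.0 / pop)                  # whole population at once
    return -np.sqrt(((preds - measured) ** 2).mean(axis=1))  # negative RMSE

lo, hi = np.array([100.0, 50.0, 20.0]), np.array([1000.0, 500.0, 150.0])
pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(300):
    parents = pop[np.argsort(fitness(pop))[-20:]] # truncation selection
    children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 3, (60, 3))
    pop = np.clip(children, lo, hi)               # Gaussian mutation within bounds

print(pop[np.argmax(fitness(pop))])               # should approach true_moduli
```

With a toy surrogate this recovers the moduli easily; the non-uniqueness and insensitivity issues the abstract discusses arise precisely when different parameter sets map to nearly identical basins.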
Abstract:
Background: Gene expression studies are a prerequisite for understanding the biological function of genes. Because of its high sensitivity and ease of use, quantitative PCR (qPCR) has become the gold standard for gene expression quantification. To normalise qPCR measurements between samples, the most prominent technique is the use of stably expressed endogenous control genes, the so-called reference genes. However, recent studies show there is no universal reference gene for all biological questions. Roses are important ornamental plants for which there has been no evaluation of useful reference genes for gene expression studies. Results: We used three different algorithms (BestKeeper, geNorm and NormFinder) to validate the expression stability of nine candidate reference genes in different rose tissues from three different genotypes of Rosa hybrida and in leaves treated with various stress factors. The candidate genes comprised the classical "housekeeping genes" (Actin, EF-1α, GAPDH, Tubulin and Ubiquitin) and genes shown to be stably expressed in studies in Arabidopsis (PP2A, SAND, TIP and UBC). The programs identified no single gene that showed stable expression under all of the conditions tested, and the individual rankings of the genes differed between the algorithms. Nevertheless, the new candidate genes, specifically PP2A and UBC, were ranked higher than the other, traditional reference genes. In general, Tubulin showed the most variable expression and should be avoided as a reference gene. Conclusions: Reference genes evaluated as suitable in experiments with Arabidopsis thaliana were stably expressed in roses under various experimental conditions. In most cases, these genes outperformed conventional reference genes, such as EF-1α and Tubulin. We identified PP2A, SAND and UBC as suitable reference genes, which in different combinations may be used for normalisation in expression analyses via qPCR for different rose tissues and stress treatments. However, the vast genetic variation found within the genus Rosa, including differences in ploidy levels, might also influence the expression stability of reference genes, so future research should also consider different genotypes and ploidy levels.
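For concreteness, a sketch of the geNorm-style stability measure M (one of the three algorithms named above): for each candidate, the average standard deviation of its log2 expression ratios against every other candidate across samples; lower M means more stable expression. The data below are made up.

```python
import numpy as np

def genorm_m(expr):
    """expr: genes x samples matrix of relative expression; returns M per gene."""
    log_expr = np.log2(expr)
    M = np.empty(expr.shape[0])
    for j in range(expr.shape[0]):
        ratios = log_expr[j] - log_expr           # log-ratios against every gene
        M[j] = np.delete(ratios.std(axis=1), j).mean()
    return M

rng = np.random.default_rng(0)
stable = 2 ** rng.normal(5, 0.05, size=(3, 10))   # three stably expressed candidates
noisy = 2 ** rng.normal(5, 1.0, size=(1, 10))     # one variable candidate
print(genorm_m(np.vstack([stable, noisy])))       # the last M value is the largest
```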
Abstract:
The main purpose of this study is to evaluate the best set of features for automatically identifying argumentative sentences in unstructured text. As corpus, we use case law from the European Court of Human Rights (ECHR). Three kinds of experiments are conducted: Basic Experiments, Multi-Feature Experiments and Tree Kernel Experiments. These experiments are categorized according to the type of features available in the corpus. The features are extracted from the corpus, and Support Vector Machines (SVM) and Random Forests are used as the machine learning algorithms. We achieved an F1 score of 0.705 for identifying argumentative sentences, which is a promising result and can serve as the basis for a general argument-mining framework.
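A minimal sketch of the classification setup with one of the named learners (an SVM over simple bag-of-words features); the sentences, labels and features below are illustrative toys, far simpler than the paper's feature sets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = ["The applicant argues that the detention was unlawful.",
             "The hearing took place on 3 May.",
             "Therefore, the Court finds a violation of Article 5.",
             "The case file was transmitted to the Chamber."]
labels = [1, 0, 1, 0]                             # 1 = argumentative sentence

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(f1_score(labels, clf.predict(sentences)))   # training-set F1 (toy data only)
```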