39 results for Eyewitness identification accuracy
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set is of considerable importance for human health and medicine. Though progress has been made on computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets on which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200 kb BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX, was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments. Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
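As a rough illustration of how such a semiartificial test set can be assembled, here is a minimal Python sketch; the ~41% GC content and the 5-20 kb spacer lengths are assumptions for illustration, not the construction parameters actually used in the study:

```python
import random

def random_intergenic(length, gc=0.41):
    """Random intergenic spacer with an assumed ~41% GC content."""
    weights = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]  # A, C, G, T
    return "".join(random.choices("ACGT", weights=weights, k=length))

def build_test_sequence(gene_seqs, target_len=200_000):
    """Concatenate single-gene sequences separated by random spacers
    until the construct reaches a roughly BAC-sized length."""
    parts, total = [], 0
    for gene in gene_seqs:
        spacer = random_intergenic(random.randint(5_000, 20_000))
        parts += [spacer, gene]
        total += len(spacer) + len(gene)
        if total >= target_len:
            break
    return "".join(parts)
```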
Abstract:
There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e., alternative renditions of a previously recorded musical piece. For this purpose we propose a recurrence quantification analysis measure that allows tracking potentially curved and disrupted traces in cross recurrence plots. We apply this measure to cross recurrence plots constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with higher accuracy than previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Rössler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.
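For context, a minimal sketch of the cross recurrence plot the proposed measure operates on, assuming two univariate descriptor series; the embedding parameters and the distance threshold are illustrative assumptions, and the paper's specific trace-tracking measure is not reproduced here:

```python
import numpy as np

def embed(x, dim=3, tau=1):
    """Time-delay embedding of a 1-D series into a dim-dimensional state space."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_recurrence_plot(x, y, dim=3, tau=1, eps=0.2):
    """Binary cross recurrence matrix: 1 where a state of x and a state
    of y are closer than eps in the embedded space."""
    X, Y = embed(x, dim, tau), embed(y, dim, tau)
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (dists < eps).astype(int)
```

Cover-song pairs would then show long, possibly curved or interrupted, diagonal traces in this matrix, which is what the proposed quantification measure tracks.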
Abstract:
In this paper we propose a new approach for tonic identification in Indian art music and present a proposal for a complete iterative system for the same. Our method splits the task of tonic pitch identification into two stages. In the first stage, which is applicable to both vocal and instrumental music, we perform a multi-pitch analysis of the audio signal to identify the tonic pitch class. Multi-pitch analysis allows us to take advantage of the drone sound, which constantly reinforces the tonic. The second stage estimates the octave in which the tonic of the singer lies and is thus needed only for vocal performances. We analyse the predominant melody sung by the lead performer in order to establish the tonic octave. Both stages are individually evaluated on a sizable music collection and are shown to obtain good accuracy. We also discuss the types of errors made by the method. Further, we present a proposal for a system that aims to incrementally utilize all the available data, both audio and metadata, in order to identify the tonic pitch. It produces a tonic estimate and a confidence value, and is iterative in nature. At each iteration, more data is fed into the system until the confidence value for the identified tonic rises above a defined threshold. Rather than obtaining high overall accuracy for our complete database, our ultimate goal is to develop a system which obtains very high accuracy on a subset of the database with maximum confidence.
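A minimal sketch of the first stage as described, assuming a multi-pitch analysis has already produced pitch candidates (in Hz) with salience weights; the 110 Hz reference and the 12-bin pitch-class resolution are assumptions:

```python
import numpy as np

def tonic_pitch_class(f0_candidates_hz, salience, ref_hz=110.0, n_bins=12):
    """Fold multi-pitch candidates onto a single octave and return the most
    salient pitch class; a steady drone should dominate this histogram."""
    cents = 1200.0 * np.log2(np.asarray(f0_candidates_hz, dtype=float) / ref_hz)
    bins = np.mod(np.round(cents / (1200.0 / n_bins)), n_bins).astype(int)
    hist = np.bincount(bins, weights=np.asarray(salience, dtype=float),
                       minlength=n_bins)
    return int(np.argmax(hist))
```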
Abstract:
In this paper, we describe several techniques for detecting the tonic pitch value in Indian classical music. In Indian music, the raga is the basic melodic framework and it is built on the tonic. Tonic detection is therefore fundamental for any melodic analysis of Indian classical music. This work explores detection of the tonic by processing the pitch histograms of Indian classical music. Processing of pitch histograms using group delay functions, and its ability to amplify certain traits of Indian music in the pitch histogram, is discussed. Three different strategies to detect the tonic are proposed: the concert method, the template matching method and the segmented histogram method. The concert method exploits the fact that the tonic is constant over a piece/concert. The template matching and segmented histogram methods use the properties that (i) the tonic is always present in the background, and (ii) some notes are less inflected and dominant, to detect the tonic of individual pieces. All three methods yield good results for Carnatic music (90-100% accuracy), while for Hindustani music the template method works best, provided the vādi and samvādi notes for a given piece are known (85%).
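As an illustration of the template matching strategy, a minimal sketch assuming a circular (octave-folded) pitch histogram and a template learned beforehand from annotated pieces:

```python
import numpy as np

def tonic_by_template(hist, template):
    """Circularly shift the pitch histogram against the template and return
    the shift (candidate tonic bin) with the highest correlation."""
    scores = [float(np.dot(np.roll(hist, -s), template)) for s in range(len(hist))]
    return int(np.argmax(scores))
```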
Abstract:
The aim of this work was the identification of new metabolites and transformation products (TPs) in chicken muscle of Enrofloxacin (ENR), Ciprofloxacin (CIP), Difloxacin (DIF) and Sarafloxacin (SAR), antibiotics belonging to the fluoroquinolone family. The stability of ENR, CIP, DIF and SAR standard solutions against pH degradation (from pH 1.5 to 8.0, simulating the pH conditions from administration of the drug to its excretion) and freeze-thawing (F/T) cycles was tested. In addition, chicken muscle samples from animals medicated with ENR were analyzed in order to identify new metabolites and TPs. The identification of the different metabolites and TPs was accomplished by comparing mass spectral data from samples and blanks, using liquid chromatography coupled to quadrupole time-of-flight (LC-QqToF) mass spectrometry, with the Multiple Mass Defect Filter (MMDF) technique as a pre-filter to remove most of the background noise and endogenous components. Confirmation and structure elucidation were performed by liquid chromatography coupled to a linear ion trap quadrupole Orbitrap (LC-LTQ-Orbitrap), owing to its mass accuracy and MS/MS capacity for elemental composition determination. As a result, 21 TPs from ENR, 6 TPs from CIP, 14 TPs from DIF and 12 TPs from SAR were identified as arising from the pH shock and F/T cycles. In addition, 14 metabolites were identified in the medicated chicken muscle samples. The formation of CIP and SAR (from ENR and DIF, respectively) and the formation of desethylene-quinolone were the most remarkable identified compounds.
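For readers unfamiliar with mass defect filtering, a deliberately crude sketch of the idea behind such a pre-filter, assuming a single defect window centred on the parent drug (the actual MMDF technique uses multiple mass defect windows and is considerably more elaborate):

```python
def mass_defect_filter(mz_values, parent_mz, window=0.050):
    """Keep m/z values whose fractional (defect) part lies within a window
    of the parent drug's mass defect; everything else is treated as
    background or endogenous material."""
    parent_defect = parent_mz - int(parent_mz)
    return [mz for mz in mz_values
            if abs((mz - int(mz)) - parent_defect) <= window]
```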
Abstract:
The purpose of this paper is twofold. First, we construct a DSGE model which explicitly spells out the instrumentation of monetary policy. The interest rate is determined every period depending on the supply of and demand for reserves, which in turn are affected by fundamental shocks: unforeseeable changes in cash withdrawal, autonomous factors, technology and government spending. Unexpected changes in the monetary conditions of the economy are interpreted as monetary shocks. We show that these monetary shocks have the usual effects on economic activity without the need to impose additional frictions such as limited participation in asset markets or sticky prices. Second, we show that this view of monetary policy may have important consequences for empirical research. In the model, the contemporaneous correlations between interest rates, prices and output are due to the simultaneous effect of all fundamental shocks. We provide an example where these contemporaneous correlations may be misinterpreted as a Taylor rule. In addition, we use the signs of the impact responses of all shocks on output, prices and interest rates derived from the model to identify the sources of shocks in the data.
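As a sketch of the kind of sign-based identification the last sentence describes, here is the standard rejection-sampling scheme for sign restrictions; this is a generic algorithm, not necessarily the authors' exact procedure:

```python
import numpy as np

def sign_restricted_impacts(Sigma, sign_pattern, n_draws=1000, seed=0):
    """Draw random orthonormal rotations of the Cholesky factor of the
    reduced-form covariance Sigma and keep impact matrices whose signs
    match the model-implied pattern (+1/-1 restricted, 0 unrestricted)."""
    Sigma = np.asarray(Sigma, dtype=float)
    rng = np.random.default_rng(seed)
    P = np.linalg.cholesky(Sigma)
    accepted = []
    for _ in range(n_draws):
        Q, _ = np.linalg.qr(rng.standard_normal(Sigma.shape))
        B = P @ Q  # candidate matrix of impact responses
        if np.all((sign_pattern == 0) | (np.sign(B) == sign_pattern)):
            accepted.append(B)
    return accepted
```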
Abstract:
Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for the prediction of the value of a variable based on the values of others, as in the case of linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules that state that a certain phenomenon never happens, or that certain conditions necessitate another. For all types of rules, there is a trade-off between a rule's accuracy and its simplicity, so rule selection can be viewed as a choice problem among pairs of degrees of accuracy and complexity. However, one cannot in general tell what the feasible set in the accuracy-complexity space is. Formally, we show that finding out whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that obtains a certain value of R² is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
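To make the hardness claim concrete: the natural exhaustive algorithm for the regression problem mentioned must visit all n-choose-k variable subsets, which explodes combinatorially. A minimal brute-force illustration (not the paper's formal construction):

```python
import numpy as np
from itertools import combinations

def best_r2_subset(X, y, k):
    """Exhaustively search for the k-column subset of X maximizing R^2
    in a least-squares fit; feasible only for small n and k."""
    best_cols, best_r2 = None, -np.inf
    tss = float((y - y.mean()) @ (y - y.mean()))
    for cols in combinations(range(X.shape[1]), k):
        A = np.column_stack([X[:, cols], np.ones(len(y))])  # with intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - float(resid @ resid) / tss
        if r2 > best_r2:
            best_cols, best_r2 = cols, r2
    return best_cols, best_r2
```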
Abstract:
This study focuses on identification and exploitation processes among Finnish design entrepreneurs (i.e. self-employed industrial designers). More specifically, this study strives to find out what design entrepreneurs do when they create new ventures, how venture ideas are identified, and how entrepreneurial processes are organized to identify and exploit such venture ideas in the given industrial context. Indeed, what do educated and creative individuals do when they decide to create new ventures, where do the venture ideas originally come from, and how are venture ideas identified and developed into viable business concepts that are introduced to the market? From an academic perspective, there is a need to increase our understanding of the interaction between the identification and exploitation of emerging ventures, in this and other empirical contexts. Rather than assuming that venture ideas are constant in time, this study examines how emerging ideas are adjusted to enable exploitation in dynamic market settings. It builds on insights from previous entrepreneurship process research. The interpretations from the theoretical discussion build on the assumption that the subprocesses of identification and exploitation interact and are closely entwined with each other (e.g. McKelvie & Wiklund, 2004; Davidsson, 2005). This explanation challenges the common assumption that entrepreneurs first identify venture ideas and then exploit them (e.g. Shane, 2003). The assumption is that exploitation influences identification, just as identification influences exploitation. Based on interviews with design entrepreneurs and external actors (e.g. potential customers, suppliers and collaborators), it appears that the identification and exploitation of venture ideas are carried out in close interaction between a number of actors, rather than by entrepreneurs alone. Given their available resources, design entrepreneurs prefer to focus on identification-related activities and to find external actors that take care of exploitation-related activities. The involvement of external actors may have a direct impact on decision-making and on various activities along the processes of identification and exploitation, which is something that previous research does not particularly emphasize. For instance, Bhave (1994) suggests both operative and strategic feedback from the market, but does not explain how external parties are actually involved in decision-making and in carrying out various activities along the entrepreneurial process.
Abstract:
DNA-based techniques have proved to be very useful methods for studying trophic relationships between pests and their natural enemies. However, most predators are best defined as omnivores, and the identification of plant-specific DNA should also allow the identification of the plant species the predators have been feeding on. In this study, a PCR approach based on the development of specific primers was developed as a self-marking technique to detect plant DNA within the gut of one heteropteran omnivorous predator (Macrolophus pygmaeus) and two lepidopteran pest species (Helicoverpa armigera and Tuta absoluta). Specific tomato primers were designed from the ITS 1-2 region, which allowed the amplification of a 332 bp tomato DNA fragment within the three insect species tested in all cases (100% detection at t = 0) and did not detect DNA of other plants or of starved insects. Plant DNA half-lives at 25 °C were 5.8 h, 27.7 h and 28.7 h within M. pygmaeus, H. armigera and T. absoluta, respectively. Tomato DNA detection within field-collected M. pygmaeus suggests dietary mixing in this omnivorous predator and showed a higher detection of tomato DNA in females and nymphs than in males. This study provides a useful tool to detect and identify the plant food sources of arthropods and to evaluate crop colonization from surrounding vegetation in conservation biological control programs.
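A worked sketch of how a detectability half-life of this kind can be estimated, assuming the fraction of PCR-positive individuals decays roughly exponentially with digestion time (the study's actual fitting procedure may differ; probit or logistic regression is also common for such data):

```python
import numpy as np

def detection_half_life(times_h, positive_fraction):
    """Fit p(t) = exp(-lam * t) by a log-linear least-squares fit and
    return the time at which detectability halves, t50 = ln(2) / lam."""
    t = np.asarray(times_h, dtype=float)
    p = np.asarray(positive_fraction, dtype=float)
    mask = p > 0                      # log() is undefined at zero detection
    slope, _ = np.polyfit(t[mask], np.log(p[mask]), 1)
    return np.log(2) / -slope
```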
Abstract:
In this paper the two main drawbacks of heat balance integral methods are examined. Firstly, we investigate the choice of approximating function. For a standard polynomial form it is shown that combining the Heat Balance and Refined Integral methods to determine the power of the highest order term will lead either to the same or, more often, to greatly improved accuracy over standard methods. Secondly, we examine thermal problems with a time-dependent boundary condition. In doing so we develop a logarithmic approximating function. This new function allows us to model moving peaks in the temperature profile, a feature that previous heat balance methods cannot capture. If the boundary temperature varies so that at some time t > 0 it equals the far-field temperature, then standard methods predict that the temperature is everywhere at this constant value, whereas the new method predicts the correct behaviour. It is also shown that this function provides even more accurate results, when coupled with the new CIM, than the polynomial profile. The analysis primarily focuses on a specified constant boundary temperature and is then extended to constant-flux, Newton cooling and time-dependent boundary conditions.
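For context, the generic heat balance integral formulation that the paper refines, stated with u the temperature, alpha the thermal diffusivity, delta(t) the heat penetration depth, u_infinity the far-field temperature and n the profile exponent that the combined approach determines:

```latex
% Integrate the heat equation u_t = \alpha u_{xx} over the penetration
% depth, reducing the PDE to an ODE for \delta(t):
\int_0^{\delta(t)} \frac{\partial u}{\partial t}\, dx
  = \alpha \left[ \frac{\partial u}{\partial x} \right]_{x=0}^{x=\delta(t)},
\qquad
u(x,t) \approx u_\infty + \bigl(u(0,t) - u_\infty\bigr)
  \left(1 - \frac{x}{\delta(t)}\right)^{\!n}.
```

The paper's logarithmic approximating function replaces the polynomial profile above when the boundary temperature is time dependent.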
Abstract:
The development over recent decades of information technologies has led to the emergence of new fields, among them so-called content-based music retrieval, which aims to compute the similarity between different sounds. In this project we surveyed the methods that exist today and then compared three of them: one based on sound features, one based on the discrete cosine transform, and one that combines the two. The results showed that the transform-based method is the most reliable.
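As an illustration of the transform-based approach, a minimal sketch of a DCT signature and a similarity score between two signals; the coefficient count and the use of cosine similarity are assumptions for illustration, not the project's exact design:

```python
import numpy as np
from scipy.fft import dct

def dct_signature(signal, n_coeffs=20):
    """Compact transform-domain descriptor: the first DCT coefficients."""
    return dct(np.asarray(signal, dtype=float), norm="ortho")[:n_coeffs]

def similarity(sig_a, sig_b):
    """Cosine similarity between two signatures, in [-1, 1]."""
    return float(sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))
```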
Abstract:
Report for the scientific sojourn at the University of Bern, Switzerland, from March until June 2008. Writer identification consists in determining the writer of a piece of handwriting from a set of writers. Even though a considerable number of compositions contain handwritten text in the music scores, the aim of this work is to use only the music notation to determine the author. Two approaches for writer identification in old handwritten music scores have been developed. The proposed methods extract features from every music line, as well as features from a texture image of music symbols. First, the music sheet is preprocessed to obtain a binarized music score without the staff lines. The classification is performed using a k-NN classifier based on Euclidean distance. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving encouraging identification rates.
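A minimal sketch of the classification step as described, assuming feature vectors have already been extracted from the music lines or texture images (k = 3 is an illustrative choice):

```python
import numpy as np

def knn_writer(query, train_feats, train_labels, k=3):
    """k-NN with Euclidean distance: majority vote among the k training
    samples (music lines / textures) closest to the query features."""
    dists = np.linalg.norm(np.asarray(train_feats) - np.asarray(query), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```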
Abstract:
DNA methylation has an important impact on normal cell physiology, so any defects in this mechanism may be related to the development of various diseases. In this project we are interested in identifying epigenetically modified genes, in general controlled by processes related to DNA methylation, by means of a new strategy combining proteomic and genomic analyses. First, two-dimensional difference gel electrophoresis (2-DIGE) protein analyses of extracts obtained from HCT-116 wt cells and cells with a double knockout of DNMT1 and DNMT3b (DKO) revealed 34 proteins overexpressed under DNMT depletion. Of five genes with higher transcript levels in DKO cells compared with HCT-116 wt, only AKR1B1, UCHL1 and VIM are methylated in HCT-116. As expected, the DNA methylation is lost in DKO cells. The methylation of the VIM and UCHL1 promoters in some cancer samples has already been reported, so further studies focused on AKR1B1. Regulation of AKR1B1 expression by DNA methylation of the promoter region seems to occur specifically in colon cancer cell lines, which was confirmed by the DNA methylation status and expression analyses performed on 32 different cancer cell lines (including colon, breast, lymphoma, leukemia, neuroblastoma, glioma and lung cancer cell lines) as well as normal colon and normal lymphocyte samples. AKR1B1 expression was rescued after treatment with a DNA demethylating agent (AZA) in 5 colon cancer cell lines, confirming the epigenetic regulation of the candidate gene. The methylation status of the rest of the genes identified in the proteomic analysis was checked by methylation-specific PCR (MSP) and all appeared to be unmethylated. Similar research has also been carried out using the Mecp2-null mouse model: for 14 selected candidate genes, analyses of expression levels, methylation status and MeCP2 interaction with promoters are currently being performed.
Abstract:
When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed along the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The estimated motion computed from the encoders of the robot is refined by detecting, to subpixel accuracy, the black dots of the image sequence, and computing the 2D projective transform which relates two consecutive images. The pattern is then substituted by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
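For context, a minimal sketch of estimating the 2D projective transform (homography) relating two consecutive images from the detected dot correspondences, via the standard direct linear transform; the subpixel dot detection and the encoder-based initial estimate of the actual system are assumed given:

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src -> dst from at least
    four point correspondences, as the null vector of the DLT system."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] = 1
```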