21 results for Two stages
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In this paper we propose a new approach for tonic identification in Indian art music and present a proposal for a complete iterative system for the same. Our method splits the task of tonic pitch identification into two stages. In the first stage, which is applicable to both vocal and instrumental music, we perform a multi-pitch analysis of the audio signal to identify the tonic pitch class. Multi-pitch analysis allows us to take advantage of the drone sound, which constantly reinforces the tonic. In the second stage we estimate the octave in which the singer's tonic lies; this stage is thus needed only for vocal performances. We analyse the predominant melody sung by the lead performer in order to establish the tonic octave. Both stages are individually evaluated on a sizable music collection and are shown to obtain good accuracy. We also discuss the types of errors made by the method. Further, we present a proposal for a system that aims to incrementally utilize all the available data, both audio and metadata, in order to identify the tonic pitch. It produces a tonic estimate and a confidence value, and is iterative in nature. At each iteration, more data is fed into the system until the confidence value for the identified tonic is above a defined threshold. Rather than obtaining high overall accuracy on our complete database, our ultimate goal is to develop a system that obtains very high accuracy on a subset of the database with maximum confidence.
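The first stage described above can be sketched as a pitch-class histogram vote. This is a minimal, hypothetical illustration (the function name and the fixed C4 reference of 261.63 Hz are my own choices, not the paper's): fold every detected pitch into one of 12 pitch classes and let the drone's persistence decide the winner.

```python
import math
from collections import Counter

def tonic_pitch_class(pitches_hz, ref_hz=261.63):
    """Fold every detected pitch into one of 12 pitch classes (relative to
    a C4 reference) and return the most frequent class: the drone, which
    sounds throughout the recording, dominates the vote."""
    classes = [round(12 * math.log2(f / ref_hz)) % 12 for f in pitches_hz]
    return Counter(classes).most_common(1)[0][0]

# a drone near 220 Hz (pitch class 9, i.e. A relative to C) dominates
print(tonic_pitch_class([220.0, 220.5, 219.8, 330.1, 220.2]))  # → 9
```

A real multi-pitch analysis would of course weight candidates by salience across frames rather than counting raw detections, but the fold-and-vote structure is the same.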
Abstract:
Bacteria are the dominant form of life on the planet: they can survive in very adverse environments and, in some cases, can generate substances that are toxic to us when ingested. Their presence in food makes predictive microbiology an essential field within food microbiology for guaranteeing food safety. A bacterial culture can pass through four growth phases: lag, exponential, stationary and death. This work advances the understanding of the phenomena intrinsic to the lag phase, which is of great interest in predictive microbiology. The study, carried out over four years, was approached with the Individual-based Modelling (IbM) methodology using the INDISIM simulator (INDividual DIScrete SIMulation), which was improved for this purpose. INDISIM made it possible to study two causes of the lag phase separately, and to address the behaviour of the culture from a mesoscopic perspective. It was found that the lag phase must be studied as a dynamic process, not defined by a single parameter. Studying the evolution of variables such as the distribution of individual properties across the population (for example, the mass distribution) or the growth rate made it possible to distinguish two sub-phases within the lag phase, an initial one and a transition one, and to deepen the understanding of what happens at the cellular level. Several results predicted by the simulations were observed experimentally with flow cytometry. The agreement between simulations and experiments is neither trivial nor accidental: the system under study is a complex system, so the agreement over time of several interrelated parameters supports the methodology used in the simulations. It can therefore be stated that the soundness of the INDISIM methodology has been verified experimentally.
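The individual-based view of the lag phase can be illustrated with a toy simulation in the spirit of INDISIM (a hand-rolled sketch, not INDISIM itself; every parameter value here is invented): each cell carries its own mass, grows each time step, and divides at a threshold mass. The population count stays flat until individual masses catch up, so the lag emerges from the individuals rather than from a fitted parameter.

```python
import random

def simulate_culture(n0=50, steps=40, growth=0.10, division_mass=2.0, seed=1):
    """Toy individual-based culture: every cell grows by a fixed fraction
    per step and splits into two daughters on reaching division_mass.
    Returns the population count per step."""
    rng = random.Random(seed)
    masses = [rng.uniform(0.5, 1.0) for _ in range(n0)]  # initial mass spread
    counts = []
    for _ in range(steps):
        grown = [m * (1 + growth) for m in masses]
        masses = []
        for m in grown:
            if m >= division_mass:
                masses.extend([m / 2, m / 2])  # division into two daughters
            else:
                masses.append(m)
        counts.append(len(masses))
    return counts

counts = simulate_culture()
# flat at first (lag), then growth once individual masses reach the threshold
print(counts[0], counts[-1])
```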
Abstract:
Total lipid content and fatty acid (FA) composition of common dentex eggs spawned at different times, and of larvae reared under different culture conditions until 40 days post hatch (dph), were analysed in order to obtain a general pattern of lipid composition during larval development. Results were grouped according to the developmental stage of the larvae instead of age in dph. Saturated and monounsaturated fatty acids decreased along larval development, while polyunsaturated fatty acid (PUFA) content increased. The docosahexaenoic acid (DHA) to eicosapentaenoic acid (EPA) ratio shifted from 4–5 in the early developmental stages to below 1 after metamorphosis. Results suggest a subdivision of larval development into two stages with opposite FA requirements.
Abstract:
Forest fire prediction is one of the great challenges of the scientific community because of the environmental, human and economic impact fires have on society. The behaviour of this phenomenon is difficult to model because of the large number of variables involved and the difficulty of measuring them correctly. Fire simulators are very useful tools but, at present, the results we obtain carry a high degree of imprecision. Our group has worked on two-stage prediction, where, before any prediction is made, a stage is added that adjusts the input parameters in order to obtain better predictions. Despite the improvement this new prediction paradigm brings, simulations of real fires still have a high degree of error because of the effect of weather conditions, which usually vary considerably over the course of the simulation. One of the most decisive factors in fire behaviour, together with the characteristics of the terrain, is the wind. The prediction models are extremely sensitive to changes in wind direction and speed, so any improvement we can introduce in the quality of these components directly influences the quality of the prediction. Our prediction system uses a single wind direction and speed across the whole terrain; what we propose in this work is to introduce a wind model that lets us generate local winds in every cell into which the terrain is divided. This local wind depends on the general wind and on the terrain characteristics of each cell.
We consider that using a general wind alone is not enough to produce a good prediction of fire behaviour, and we have verified that including a wind-field simulator in our system can improve our predictions considerably. The results obtained in the synthetic experiments we have carried out make us optimistic, since we expect that including local wind components will improve our predictions on real fires.
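As a toy illustration of the idea (entirely hypothetical: a real wind-field simulator solves the flow over the terrain, which this does not), one can modulate a single general wind speed cell by cell using terrain elevation, accelerating flow over ridges and slowing it in valleys:

```python
def local_wind_speeds(general_speed, elevations, k=0.001):
    """Assign each terrain cell a local wind speed by scaling the general
    wind with the cell's elevation relative to the grid mean (toy model:
    higher cells get faster wind, lower cells slower; k is an invented
    sensitivity constant)."""
    cells = [e for row in elevations for e in row]
    mean_e = sum(cells) / len(cells)
    return [[max(0.0, general_speed * (1 + k * (e - mean_e))) for e in row]
            for row in elevations]

# 2x2 grid: a ridge cell at 900 m and a valley cell at 100 m
speeds = local_wind_speeds(10.0, [[900.0, 500.0], [500.0, 100.0]])
print(speeds)
```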
Construction of a Geographic Information System for managing routes on unmapped paths
Abstract:
The problem described throughout this work is tackled in two phases. The first is a theoretical study of the various concepts involved: GIS, cartography, geodesy and GPS, with special attention to the types of GPS receiver available on the market. The second part is eminently practical and consists of presenting the solution adopted to meet the company's needs.
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
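The two-stage construction can be sketched as a generative simulation (a hand-written illustration of the idea, not the authors' estimation procedure; the parameter values are arbitrary): a Bernoulli draw per part fixes the incidence pattern, and an additive-logistic-normal draw then spreads the unit total over the non-zero parts.

```python
import math
import random

def sample_zero_composition(incidence_p, mu=0.0, sigma=1.0, rng=None):
    """Stage 1: independent Bernoulli draws decide which parts are non-zero.
    Stage 2: a logistic-normal draw (additive log-ratio construction, with
    the last non-zero part as reference) distributes the unit total."""
    rng = rng or random.Random()
    present = [rng.random() < p for p in incidence_p]
    k = sum(present)
    if k == 0:
        return [0.0] * len(incidence_p)
    z = [math.exp(rng.gauss(mu, sigma)) for _ in range(k - 1)] + [1.0]
    total = sum(z)
    filled = iter(v / total for v in z)
    return [next(filled) if on else 0.0 for on in present]

comp = sample_zero_composition([0.9, 0.5, 0.8], rng=random.Random(7))
print(comp, sum(comp))  # non-zero parts sum to 1; zeros stay essential zeros
```

Fitting the two sets of parameters jointly, rather than sampling from them, is where the estimability and computational questions mentioned in the abstract arise.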
Abstract:
This paper studies the extent to which social networks influence the employment stability and wages of immigrants in Spain. By doing so, I consider an aspect that has not been previously addressed in the empirical literature, namely the connection between immigrants' social networks and labor market outcomes in Spain. For this purpose, I use micro-data from the National Immigrant Survey carried out in 2007. The analysis is conducted in two stages. First, the impact of social networks on the probability of keeping the first job obtained in Spain is studied through a multinomial logit regression. Second, quantile regressions are used to estimate a wage equation. The empirical results suggest that once the endogeneity problem has been accounted for, immigrants' social networks influence their labor market outcomes. On arrival, immigrants experience a mismatch in the labor market. In addition, different effects of social networks on wages by gender and wage distribution are found. While contacts on arrival and informal job access mechanisms positively influence women's wages, a wage penalty is observed for men.
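The second-stage wage analysis rests on quantile regression, whose core is the asymmetric "pinball" check loss. A minimal stdlib illustration (just the loss and a one-parameter minimiser, not the paper's full wage equation with covariates; the wage figures are invented):

```python
def pinball_loss(q, tau, ys):
    """Check loss: under-predictions weighted tau, over-predictions 1 - tau.
    Minimising it over q yields the tau-th sample quantile."""
    return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y) for y in ys)

def fit_quantile(tau, ys):
    """Brute-force the minimiser over the observed values (the minimum of
    the piecewise-linear check loss is attained at a data point)."""
    return min(ys, key=lambda q: pinball_loss(q, tau, ys))

wages = [900, 1000, 1100, 1200, 5000]  # one high earner skews the mean
print(fit_quantile(0.5, wages))        # → 1100, the median
```

Estimating at several values of tau is what lets the paper report different network effects along the wage distribution.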
Abstract:
Tone mapping is the problem of compressing the range of a high-dynamic-range image so that it can be displayed on a low-dynamic-range screen without losing or introducing novel details: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
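The two-stage structure can be caricatured in a few lines (a deliberately crude stand-in: a power law for the global visual-adaptation stage and a neighbourhood contrast push for the local variational stage; none of the constants come from the paper):

```python
def tone_map(img, exponent=0.4, amount=0.5):
    """Stage 1: global range compression with a power law.
    Stage 2: local contrast enhancement, pushing each pixel away from
    its 3x3 neighbourhood mean, clipped to [0, 1]."""
    h, w = len(img), len(img[0])
    peak = max(max(row) for row in img)
    g = [[(v / peak) ** exponent for v in row] for row in img]   # stage 1
    out = []
    for i in range(h):                                           # stage 2
        row = []
        for j in range(w):
            nb = [g[a][b]
                  for a in range(max(0, i - 1), min(h, i + 2))
                  for b in range(max(0, j - 1), min(w, j + 2))]
            mean = sum(nb) / len(nb)
            row.append(min(1.0, max(0.0, g[i][j] + amount * (g[i][j] - mean))))
        out.append(row)
    return out

hdr = [[0.01, 0.1], [10.0, 1000.0]]  # four decades of dynamic range
ldr = tone_map(hdr)
print(ldr)  # every value now fits the display range [0, 1]
```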
Abstract:
Diffuse flow velocimetry (DFV) is introduced as a new, noninvasive, optical technique for measuring the velocity of diffuse hydrothermal flow. The technique uses images of a motionless, random medium (e.g., rocks) obtained through the lens of a moving refraction index anomaly (e.g., a hot upwelling). The method works in two stages. First, the changes in apparent background deformation are calculated using particle image velocimetry (PIV). The deformation vectors are determined by a cross correlation of pixel intensities across consecutive images. Second, the 2-D velocity field is calculated by cross correlating the deformation vectors between consecutive PIV calculations. The accuracy of the method is tested with laboratory and numerical experiments of a laminar, axisymmetric plume in fluids with both constant and temperature-dependent viscosity. Results show that average RMS errors are ∼5%–7%, and that the method is most accurate in regions of pervasive apparent background deformation, which are commonly encountered in regions of diffuse hydrothermal flow. The method is applied to a 25 s video sequence of diffuse flow from a small fracture captured during the Bathyluck’09 cruise to the Lucky Strike hydrothermal field (September 2009). The velocities of the ∼10°C–15°C effluent reach ∼5.5 cm/s, in strong agreement with previous measurements of diffuse flow. DFV is found to be most accurate for approximately 2-D flows where background objects have a small spatial scale, such as sand or gravel.
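The cross-correlation at the heart of both stages can be reduced to a 1-D sketch (an illustration of the PIV principle only; the method itself operates on 2-D interrogation windows):

```python
def best_shift(a, b, max_shift=3):
    """Return the integer shift of signal b relative to a that maximises
    the mean product over the overlap: the 1-D analogue of finding the
    peak of a PIV cross-correlation map."""
    def score(s):
        pairs = [(a[i], b[i + s]) for i in range(len(a)) if 0 <= i + s < len(b)]
        return sum(x * y for x, y in pairs) / len(pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

frame1 = [0, 0, 1, 2, 1, 0, 0]   # intensity profile across pixels
frame2 = [0, 0, 0, 1, 2, 1, 0]   # same pattern, moved one pixel
print(best_shift(frame1, frame2))  # → 1
```

In DFV, stage one applies this correlation to consecutive images to get apparent background deformation, and stage two applies it again, to the deformation fields themselves, to recover the velocity of the refraction anomaly.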
Abstract:
In applied regional analysis, statistical information is usually published at different territorial levels with the aim of providing information of interest to different potential users. When using this information, there are two different choices: first, to use normative regions (towns, provinces, etc.) or, second, to design analytical regions directly related to the analysed phenomena. In this paper, provincial time series of unemployment rates in Spain are used in order to compare the results obtained by applying two analytical regionalisation models (a two-stage procedure based on cluster analysis and a procedure based on mathematical programming) with the normative regions available at two different scales: NUTS II and NUTS I. The results show that more homogeneous regions were obtained when applying both analytical regionalisation tools. Two other interesting results are that the analytical regions were also more stable over time, and that scale effects matter in the regionalisation process.
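The cluster-analysis stage of the first model can be illustrated with a deliberately simple greedy grouping of unemployment-rate series (an illustration only: the threshold and the data are invented, and the second stage, the mathematical program that enforces spatial contiguity, is omitted):

```python
def group_series(series, threshold):
    """Assign each provincial series to the first existing group whose
    seed series lies within `threshold` (Euclidean distance); otherwise
    start a new group."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    groups = []  # list of (seed_series, member_names)
    for name, s in series.items():
        for seed, members in groups:
            if dist(seed, s) < threshold:
                members.append(name)
                break
        else:
            groups.append((s, [name]))
    return [members for _, members in groups]

rates = {  # invented quarterly unemployment rates for four "provinces"
    "A": [8.0, 8.5, 9.0], "B": [8.2, 8.4, 9.1],
    "C": [20.0, 21.0, 22.5], "D": [19.5, 21.2, 22.0],
}
print(group_series(rates, threshold=2.0))  # → [['A', 'B'], ['C', 'D']]
```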
Abstract:
The aim of this communication is to describe the results of a pilot project for the assessment of the transversal competency "the capacity for learning and responsibility". This competency is centred on the capacity for the analysis, synthesis, overview, and practical application of newly acquired knowledge. It is proposed by the University of Barcelona in its undergraduate degree courses, through multidisciplinary teaching teams. The goal of the pilot project is to evaluate this competency. We worked with a group of students in a first-year Business Degree maths course, during the first semester of the 2012/2013 academic year. The development of the project was in two stages: (i) design of a specific task to share with the same students in the following semester, when the subject would be economic history; and (ii) the elaboration of an evaluation rubric in which we defined the content, the aspects to evaluate, the evaluation criteria, and the marking scale. The attainment of the expectations of quality on the specific task was scored following this rubric, which provided a single basis for the precise and fair assessment by the instructor and for the students' own self-evaluation. We conclude by describing the main findings of the experience. What particularly stood out was the high score in the students' self-evaluation of one aspect of the competency – their capacity for learning – in stark contrast to their instructor's quite negative evaluation. This means that we have to work both to improve teaching practice and to identify the optimal competency evaluation methodology.
Abstract:
In recent years there has been an increasing demand for a variety of logical systems, prompted mostly by applications of logic in AI, logic programming and other related areas. Labeled Deductive Systems (LDS) were developed as a flexible methodology for formalizing such complex logical systems. In the last decade, defeasible argumentation has proven to be a confluence point for many approaches to formalizing commonsense reasoning. Different formalisms have been developed, many of them sharing common features. This paper presents a formalization of an LDS for defeasible argumentation, in which the main issues concerning defeasible argumentation are captured within a unified logical framework. The proposed framework is defined in two stages. First, defeasible inference is formalized by characterizing an argumentative LDS. That system is then extended in order to capture conflict among arguments using a dialectical approach. We also present some logical properties emerging from the proposed framework and discuss its semantic characterization.
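The dialectical stage, deciding which arguments survive conflict, can be illustrated with a naive fixed-point computation in the spirit of grounded argumentation semantics (a generic sketch of defeat among arguments, not the LDS defined in the paper):

```python
def grounded_extension(arguments, attacks):
    """Iterate to a fixed point: an argument is accepted once all of its
    attackers are rejected; it is rejected once some accepted argument
    attacks it.  `attacks` is a set of (attacker, target) pairs."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in rejected:
                continue
            attackers = {x for x, y in attacks if y == a}
            if attackers <= rejected:       # every attacker is out
                accepted.add(a)
                changed = True
            elif attackers & accepted:      # some attacker is in
                rejected.add(a)
                changed = True
    return accepted

# A attacks B, B attacks C: A stands, B falls, so C is reinstated
print(sorted(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")})))
# → ['A', 'C']
```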
Abstract:
Linear predictive coding of speech is based on the assumption that the generation model is autoregressive. In this paper we propose a structure to cope with the nonlinear effects present in the generation of the speech signal. This structure consists of two stages: the first is a classical linear prediction filter, and the second models the residual signal by means of two nonlinearities separated by a linear filter. The coefficients of this filter are computed by a gradient search on the score function, in order to deal with the fact that the probability distribution of the residual signal is still not Gaussian. This fact is taken into account when the coefficients are computed by an ML estimate. The algorithm, based on the minimization of a high-order statistics criterion, uses on-line estimation of the residual statistics and is based on blind deconvolution of Wiener systems [1]. Improvements in the experimental results with speech signals underline the interest of this approach.
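The first, linear stage can be sketched with an order-1 autocorrelation predictor (a minimal stdlib illustration; the paper's nonlinear second stage, the gradient search on the score function, is represented here only by the residual it would receive):

```python
def linear_stage(x):
    """Fit an order-1 linear predictor by the autocorrelation method and
    return (coefficient, residual); the residual is what the nonlinear
    second stage would go on to model."""
    r0 = sum(v * v for v in x)
    r1 = sum(x[i] * x[i - 1] for i in range(1, len(x)))
    a = r1 / r0
    residual = [x[0]] + [x[i] - a * x[i - 1] for i in range(1, len(x))]
    return a, residual

# a synthetic AR(1) "speech-like" signal with true coefficient 0.9
signal = [0.9 ** n for n in range(50)]
a, res = linear_stage(signal)
print(round(a, 3))  # close to 0.9; the residual is nearly zero after sample 0
```

For a genuinely autoregressive signal the residual is already white, so everything a linear filter leaves behind in real speech is exactly the nonlinear structure the second stage targets.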
Abstract:
This article is part of a research project focusing on the process of transition to adulthood of young people with intellectual disabilities. Specifically, this study analyses transition partnership programs as the professionals involved in them see them. The information is obtained in two stages: in the first stage, 45 interviews are conducted with professionals working in this field. In the second stage we develop a study applying the Delphi method, in which two panels of experts, the first with educational professionals and the second with professionals working in after-school services, were asked about several topics. The results show a lack of continuity underlying the actions taken in support of young people with ID during the transition process. Insufficient information and collaboration among services and professionals and a lack of leadership are the main problems perceived by professionals. The study helps to identify problems in the transition partnership programs and establishes actions to enhance the transition process.