154 results for Crowd density estimation
Abstract:
The estimation of mean aphid density in alfalfa based on field counts is compared with presence-absence sampling. Twenty-one random samples, each consisting of 75 stems, were taken in commercial alfalfa fields in Lleida (Ebro valley) with the aim of predicting the estimated mean aphid density (û) from the estimated proportion of infested stems (p). The empirical relationship between û and its sampling variance, modelled with Taylor's power law, is satisfactory (r2 = 0.98). The empirical relationship between p and its sampling variance is practically binomial. Finally, the empirical relationship between û and p, obtained from the linear regression of ln(û) on ln(-ln p), was satisfactory (r2 = 0.94). Presence-absence sampling makes it possible to estimate mean aphid densities of up to about 20 aphids per stem.
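As a rough illustration of the approach, the sketch below (Python, entirely synthetic counts; û is written m) fits Taylor's power law and the ln(û) versus ln(-ln p) regression described in the abstract, then predicts mean density from an observed proportion. All numbers are invented for illustration, not the paper's field data.

```python
import numpy as np

# Minimal sketch with synthetic data: fit Taylor's power law s2 = a * m^b
# and the incidence-density regression of ln(m) on ln(-ln p), then predict
# mean aphid density from an observed proportion of infested stems.
rng = np.random.default_rng(1)
m = np.sort(rng.lognormal(0.5, 0.9, 21))                    # mean aphids per stem, 21 samples
s2 = 2.5 * m**1.5 * rng.lognormal(0.0, 0.15, 21)            # sampling variance of each mean
p = np.clip(np.exp(-0.9 * m**0.8 * rng.lognormal(0, 0.1, 21)), 1e-6, 1 - 1e-6)

b, ln_a = np.polyfit(np.log(m), np.log(s2), 1)              # Taylor's power law fit
c1, c0 = np.polyfit(np.log(-np.log(p)), np.log(m), 1)       # incidence-density fit

def predict_density(p_obs):
    """Predict mean aphids per stem from an observed proportion."""
    return np.exp(c0 + c1 * np.log(-np.log(p_obs)))

print(f"Taylor b = {b:.2f}; predicted density at p = 0.4: {predict_density(0.4):.2f} aphids/stem")
```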
Abstract:
In this paper a colour texture segmentation method that unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of regions is modelled by combining non-parametric kernel density estimation (which captures the colour behaviour) with classical co-occurrence matrix based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image taking both information sources into account. Experimental results are presented that demonstrate the performance of the proposed method.
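The following sketch (Python, not the authors' implementation) illustrates only the kernel density estimation component: a region's colour behaviour is modelled with a non-parametric KDE, and candidate pixels are then scored by their likelihood under that region model.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal sketch: model the colour behaviour of a region with a
# non-parametric kernel density estimate over RGB values, then score
# candidate pixels by their likelihood under the region model.
rng = np.random.default_rng(0)
region_pixels = rng.normal(loc=[120, 80, 60], scale=10, size=(500, 3))  # RGB samples

kde = gaussian_kde(region_pixels.T)          # 3-D KDE over RGB values

candidates = np.array([[118, 82, 58],        # similar to the region colour
                       [30, 200, 220]])      # very different colour
likelihood = kde(candidates.T)
print(likelihood)  # higher value -> pixel better explained by the region
```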
Abstract:
A number of experimental methods have been reported for estimating the number of genes in a genome, or the closely related coding density of a genome, defined as the fraction of base pairs in codons. Recently, DNA sequence data representative of the genome as a whole have become available for several organisms, making the problem of estimating coding density amenable to sequence-analytic methods. Estimates of coding density for a single genome vary widely, so that methods with characterized error bounds have become increasingly desirable. We present a method to estimate the protein coding density in a corpus of DNA sequence data, in which a 'coding statistic' is calculated for a large number of windows of the sequence under study, and the distribution of the statistic is decomposed into two normal distributions, assumed to be the distributions of the coding statistic in the coding and noncoding fractions of the sequence windows. The accuracy of the method is evaluated using known data, and application is made to the yeast chromosome III sequence and to C. elegans cosmid sequences. It can also be applied to fragmentary data, for example a collection of short sequences determined in the course of STS mapping.
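A minimal sketch of the decomposition step, on synthetic data rather than a real coding statistic: fit a two-component normal mixture to the per-window statistic and read the coding density off the weight of the higher-mean component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch (synthetic data): compute a per-window statistic, fit a
# two-component normal mixture, and estimate the coding density as the
# mixture weight of the "coding" (higher-mean) component.
rng = np.random.default_rng(0)
coding = rng.normal(1.2, 0.4, size=300)       # statistic in coding windows
noncoding = rng.normal(0.0, 0.5, size=700)    # statistic in noncoding windows
stat = np.concatenate([coding, noncoding]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(stat)
coding_idx = int(np.argmax(gmm.means_))       # component with the higher mean
print(f"estimated coding density: {gmm.weights_[coding_idx]:.2f}")  # ~0.30
```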
Abstract:
Report on the scientific sojourn at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first project, we employed Energy Decomposition Analysis (EDA) to investigate aromaticity in the Fischer carbenes as it relates to all the reaction mechanisms studied in my PhD thesis. Compared with other well-known aromaticity indices in the literature, such as NICS, this powerful tool is useful not only for quantitative results but also for measuring the degree of conjugation or hyperconjugation in molecules. For the annelated benzenoid systems studied here, our results showed that electron density is more concentrated on the outer rings than on the central one. Strain-induced bond localization plays a major role as the driving force that keeps the more substituted ring the less aromatic. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry to explain the aromaticity and regioselectivity of reactions found in those systems. In the second project, we employed the Turbomole program package and state-of-the-art density functionals to explore reaction mechanisms in noble gas chemistry. In particular, we were interested in compounds of the form H-Ng-Ng-F (where Ng (noble gas) = Ar, Kr, or Xe) and investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble gases present lower activation barriers and are therefore more labile and harder to observe experimentally.
Abstract:
The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
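The sketch below shows the SMC idea in its simplest form (a generic bootstrap particle filter tracking a single continuous parameter, not the paper's random-finite-set receiver): particles are propagated through the dynamics, reweighted by the likelihood, and resampled to follow the posterior over time.

```python
import numpy as np

# Minimal sketch: a bootstrap particle filter approximating the posterior
# of a slowly time-varying continuous parameter from noisy observations.
rng = np.random.default_rng(0)
T, N = 50, 2000                                     # time steps, particles
true_x = np.cumsum(rng.normal(0, 0.05, T)) + 1.0    # hidden parameter trajectory
obs = true_x + rng.normal(0, 0.3, T)                # noisy measurements

particles = rng.normal(1.0, 1.0, N)
for t in range(T):
    particles += rng.normal(0, 0.05, N)                    # propagate (dynamics)
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.3) ** 2)   # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]      # resample
    if t % 10 == 0:
        print(f"t={t:2d}  posterior mean={particles.mean():+.3f}  true={true_x[t]:+.3f}")
```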
A performance lower bound for quadratic timing recovery accounting for the symbol transition density
Abstract:
The symbol transition density in a digitally modulated signal affects the performance of practical synchronization schemes designed for timing recovery. This paper focuses on the derivation of simple performance limits for the estimation of the time delay of a noisy linearly modulated signal in the presence of various degrees of symbol correlation produced by the various transition densities in the symbol streams. The paper develops high- and low-signal-to-noise ratio (SNR) approximations of the so-called (Gaussian) unconditional Cramér–Rao bound (UCRB), as well as general expressions that are applicable in all ranges of SNR. The derived bounds are valid only for the class of quadratic, non-data-aided (NDA) timing recovery schemes. To illustrate the validity of the derived bounds, they are compared with the actual performance achieved by some well-known quadratic NDA timing recovery schemes. The impact of the symbol transition density on the classical threshold effect present in NDA timing recovery schemes is also analyzed. Previous work on performance bounds for timing recovery from various authors is generalized and unified in this contribution.
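For concreteness, the sketch below implements one well-known member of the quadratic NDA class, the Oerder-Meyr square-law estimator, under assumed conditions (BPSK, raised-cosine pulses with rolloff 1, four samples per symbol). It illustrates the type of scheme the bounds apply to, not the bounds themselves.

```python
import numpy as np

# Minimal sketch: Oerder-Meyr square-law timing estimator, a quadratic
# non-data-aided (NDA) scheme. Assumptions: BPSK symbols, raised-cosine
# pulses with rolloff 1, oversampling factor N = 4.
rng = np.random.default_rng(0)
N, K, delay = 4, 2000, 0.3              # samples/symbol, symbols, true offset

def rc_pulse(t, beta=1.0):
    """Raised-cosine impulse response, time in symbol periods."""
    denom = 1.0 - (2.0 * beta * t) ** 2
    denom = np.where(np.abs(denom) < 1e-10, np.nan, denom)
    p = np.sinc(t) * np.cos(np.pi * beta * t) / denom
    return np.nan_to_num(p, nan=np.pi / 4 * np.sinc(1 / (2 * beta)))

tg = np.arange(-8 * N, 8 * N + 1) / N                    # pulse time grid
up = np.zeros(K * N)
up[::N] = rng.choice([-1.0, 1.0], K)                     # BPSK symbols
x = np.convolve(up, rc_pulse(tg - delay), mode="same")   # delayed pulse train
x += rng.normal(0, 0.1, x.size)                          # AWGN

# Square-law nonlinearity, then the Fourier coefficient at the symbol rate
n = np.arange(x.size)
c = np.sum(x**2 * np.exp(-2j * np.pi * n / N))
tau_hat = (-np.angle(c) / (2 * np.pi)) % 1.0
print(f"true offset {delay:.2f}, O&M estimate {tau_hat:.2f}")
```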
Abstract:
The analysis of urban density is used to examine the spatial distribution of population within urban areas and is quite useful for planning public services. In this article, sixteen classical functional forms of the relationship between density and distance are studied for the Barcelona metropolitan region and its eleven subcentres.
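As an illustration of the simplest of these functional forms, the sketch below fits Clark's negative exponential model D(x) = D0 exp(-g x) to synthetic density-distance data by linear regression on ln D(x).

```python
import numpy as np

# Minimal sketch (synthetic data): fit Clark's negative exponential model,
# D(x) = D0 * exp(-g * x), by linear regression on ln D(x).
rng = np.random.default_rng(0)
dist = np.linspace(1, 30, 40)                                  # km from the CBD
density = 12000 * np.exp(-0.12 * dist) * rng.lognormal(0, 0.1, 40)

slope, ln_D0 = np.polyfit(dist, np.log(density), 1)
print(f"central density D0 ≈ {np.exp(ln_D0):.0f} inh/km², gradient g ≈ {-slope:.3f} per km")
```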
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. At the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may lie outside the interval [0,1]. We solve the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
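A minimal sketch of the parametric step (synthetic data): maximize the probit log-likelihood from several starting points and keep the best optimum, the kind of check that guards against reporting a local rather than global maximum.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Minimal sketch: multi-start maximization of a probit log-likelihood.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.3, 1.0, -0.7])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fits = [minimize(neg_loglik, rng.normal(scale=2, size=3), method="BFGS")
        for _ in range(5)]                     # several starting points
best = min(fits, key=lambda r: r.fun)          # keep the best optimum found
print(best.x.round(2))                         # should be near beta_true
```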
Abstract:
The last 20 years have seen a significant evolution in the literature on horizontal inequity (HI) and have generated two major and "rival" methodological strands, namely, classical HI and reranking. We propose in this paper a class of ethically flexible tools that integrate these two strands. This is achieved using a measure of inequality that merges the well-known Gini coefficient and Atkinson indices, and that allows a decomposition of the total redistributive effect of taxes and transfers into a vertical equity effect and a loss of redistribution due to either classical HI or reranking. An inequality-change approach and a money-metric cost-of-inequality approach are developed. The latter approach makes aggregate classical HI decomposable across groups. As in recent work, equals are identified through a nonparametric estimation of the joint density of gross and net incomes. An illustration using Canadian data from 1981 to 1994 shows a substantial, and increasing, robust erosion of redistribution attributable both to classical HI and to reranking, but does not reveal which of reranking or classical HI is more important, since this requires a judgement that is fundamentally normative in nature.
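The sketch below illustrates the reranking strand only, using the standard Gini coefficient and the Atkinson-Plotnick reranking term on synthetic incomes; the paper's merged Gini-Atkinson measure is not reproduced here.

```python
import numpy as np

# Minimal sketch: total redistributive effect as the fall in Gini from
# gross to net income, plus the Atkinson-Plotnick reranking term.
def gini(x, rank_by=None):
    """Gini coefficient; with rank_by, the concentration index instead."""
    order = np.argsort(x if rank_by is None else rank_by)
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x[order] / (n * x.sum())

rng = np.random.default_rng(0)
gross = rng.lognormal(10, 0.6, 1000)
net = gross ** 0.9 * rng.lognormal(0, 0.05, 1000)   # progressive tax + HI noise

redistribution = gini(gross) - gini(net)            # total redistributive effect
reranking = gini(net) - gini(net, rank_by=gross)    # loss due to reranking >= 0
print(f"RE = {redistribution:.4f}, reranking loss = {reranking:.4f}")
```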
Abstract:
The presence of subcentres cannot be captured by an exponential function. Cubic spline functions seem more appropriate for depicting the polycentric pattern of modern urban systems. Using data from the Barcelona Metropolitan Region, two possible population subcentre delimitation procedures are discussed: one sets an estimated derivative equal to zero, the other a density gradient equal to zero. It is argued that, when using a cubic spline function, a delimitation strategy based on derivatives is more appropriate than one based on gradients, because the estimated density can be negative in sections with very low densities and few observations, leading to sudden changes in estimated gradients. It is also argued that taking a second derivative of zero as the delimitation criterion captures a more restricted subcentre area than taking a first derivative of zero. This methodology can also be used for intermediate ring delimitation.
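A minimal sketch of the two criteria on synthetic data: fit a cubic spline to density against distance and locate the zeros of its first and second derivatives.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Minimal sketch (synthetic density-distance data with one subcentre):
# candidate subcentres are located where derivatives of the estimated
# density cross zero.
dist = np.linspace(0, 30, 31)
density = 9000 * np.exp(-0.2 * dist) + 2500 * np.exp(-0.5 * (dist - 18) ** 2)

spline = CubicSpline(dist, density)
d1_roots = spline.derivative(1).roots()      # first-derivative criterion
d2_roots = spline.derivative(2).roots()      # second-derivative criterion
print("first-derivative zeros:", np.round(d1_roots[(d1_roots > 0) & (d1_roots < 30)], 1))
print("second-derivative zeros:", np.round(d2_roots[(d2_roots > 0) & (d2_roots < 30)], 1))
```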
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, the model need not be simulable subject to the conditioning information used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for the estimation of general DLV models.
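The sketch below shows the core device on a toy model: a Nadaraya-Watson kernel smoother applied to a long simulation to estimate a conditional moment E[y | x] without having to simulate conditionally.

```python
import numpy as np

# Minimal sketch: estimate a conditional moment E[y | x] from a long
# simulation via Nadaraya-Watson kernel smoothing.
rng = np.random.default_rng(0)
x_sim = rng.normal(size=100_000)                         # simulated conditioning variable
y_sim = np.sin(x_sim) + rng.normal(0, 0.3, x_sim.size)   # simulated outcome

def cond_moment(x0, h=0.1):
    """Kernel-smoothed estimate of E[y | x = x0]."""
    w = np.exp(-0.5 * ((x_sim - x0) / h) ** 2)           # Gaussian kernel weights
    return (w @ y_sim) / w.sum()

print(cond_moment(1.0), np.sin(1.0))   # kernel estimate vs the true moment
```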
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for Computed Tomography (CT) scans with the Partial Least Squares (PLS) regression technique in order to predict the LMP of the carcass and of the different cuts, and to study and compare two methodologies for selecting the variables (Variable Importance for Projection (VIP) and stepwise) to be included in the prediction equation. The error of prediction with cross-validation (RMSEPCV) of the LMP obtained with PLS and VIP-based selection was 0.82%, and with stepwise selection it was 0.83%. Predicting the LMP by scanning only the ham gave an RMSEPCV of 0.97%; if both the ham and the loin were scanned, the RMSEPCV was 0.90%. The results indicate that, for CT data, both VIP and stepwise selection are good methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
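A minimal sketch of a PLS calibration with a cross-validated prediction error analogous to RMSEPCV, on synthetic data standing in for the CT variables (variable selection is omitted for brevity).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Minimal sketch (synthetic data, not the CT scans): fit a PLS calibration
# and compute a cross-validated prediction error.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))                 # 120 carcasses x 200 CT variables
beta = np.zeros(200)
beta[:10] = rng.normal(size=10)                 # only a few informative variables
lmp = 55 + X @ beta + rng.normal(0, 0.8, 120)   # lean meat percentage

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, lmp, cv=10).ravel()
rmsepcv = np.sqrt(np.mean((lmp - pred) ** 2))
print(f"RMSEPCV ≈ {rmsepcv:.2f} % LMP")
```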
Abstract:
The otter (Lutra lutra) disappeared from the Tordera basin because of human pressure on the species and its habitat. Its populations have recently been recovering in the northern basins of Catalonia. In this context, we present the results of an analysis of the otter's socio-ecological requirements in the Arbúcies stream, a tributary of the Tordera river: quality of the riparian forest, assessment of water pollution, and analysis of the fish populations, together with an estimate of the abundance of the most common species. The evolution of land use and land cover shows an increasing trend in forest cover, urbanized areas, and infrastructure, and a decrease in cultivated fields, orchards, and vineyards. The quality of the riparian forest was assessed with the QBR index, showing that the highest quality is found in the upper reach and decreases approaching the confluence with the Tordera. Water pollution was assessed, on the one hand, by analysing biological quality with the IPS and BMWPC indices: according to the IPS, water quality decreases downstream, while the BMWPC indicates a recovery of water quality in the last reach. On the other hand, the chemical compounds that affect the otter were analysed, showing that their concentrations in the water are not significant. The structure of the fish population was analysed, finding that Barbus meridionalis is the most abundant species in all reaches, with catches increasing as the water approaches the confluence with the Tordera. The biomass of this species was estimated, concluding that it is sufficient to support a rather sparse otter population of fewer than 0.15 individuals per kilometre of river.
Abstract:
This paper aims at providing a Bayesian parametric framework to tackle the accessibility problem across space in urban theory. Adopting continuous variables in a probabilistic setting, we are able to relate the distribution density to Kendall's tau index and to replicate the general issues related to the role of proximity in a more general context. In addition, by referring to the Beta and Gamma distributions, we are able to introduce a differentiation feature in each spatial unit without resorting to any a priori definition of territorial units. We also provide an empirical application of our theoretical setting by studying the density distribution of the population across Massachusetts.
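An illustrative sketch under assumed synthetic data (not the Massachusetts application): fit a Gamma model to the population-density distribution across spatial units and measure the association between distance to the centre and density with Kendall's tau.

```python
import numpy as np
from scipy import stats

# Minimal sketch (synthetic spatial units): parametric Gamma model for the
# density distribution, plus Kendall's tau between accessibility (distance
# to the centre) and density.
rng = np.random.default_rng(0)
dist = rng.uniform(0, 50, 200)                              # km to the centre
density = rng.gamma(shape=2.0, scale=1500 / (1 + dist))     # denser near centre

shape, loc, scale = stats.gamma.fit(density, floc=0)        # parametric fit
tau, p_value = stats.kendalltau(dist, density)
print(f"Gamma shape = {shape:.2f}, Kendall tau = {tau:.2f} (p = {p_value:.1e})")
```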