896 results for density estimation, thresholding, wavelet bases, Besov space
Abstract:
In this paper we prove several results on the existence of analytic functions on an infinite-dimensional real Banach space that are bounded on some given collection of open sets and unbounded on others. In addition, we obtain results on the density of certain subsets of the space of all analytic functions with respect to natural locally convex topologies on this space.
Abstract:
In the “selective” cholesteryl ester (CE) uptake process, surface-associated lipoproteins [high density lipoprotein (HDL) and low density lipoprotein] are trapped in the space formed between closely apposed surface microvilli (microvillar channels) in hormone-stimulated steroidogenic cells. This is the same location where an HDL receptor (SR-BI) is found. In the current study, we sought to understand the relationship between SR-BI and selective CE uptake in a heterologous insect cell system. Sf9 (Spodoptera frugiperda) cells overexpressing recombinant SR-BI were examined for (i) SR-BI protein by Western blot analysis and light or electron immunomicroscopy, and (ii) selective lipoprotein CE uptake by the use of radiolabeled or fluorescent (BODIPY-CE)-labeled HDL. Noninfected or infected control Sf9 cells do not express SR-BI, show microvillar channels, or internalize CEs. An unexpected finding was the induction of a complex channel system in Sf9 cells expressing SR-BI. SR-BI-expressing cells showed many cell surface double-membraned channels, immunogold SR-BI, apolipoprotein (HDL) labeling of the channels, and high levels of selective HDL-CE uptake. Thus, double-membraned channels can be induced by expression of recombinant SR-BI in a heterologous system, and these specialized structures facilitate both the binding of HDL and selective HDL-CE uptake.
Abstract:
Rock mass characterization requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in Light Detection and Ranging (LiDAR) instrumentation allow quick and accurate 3D data acquisition, leading to the development of new methodologies for the automatic characterization of rock mass discontinuities. This paper presents a methodology for the identification and analysis of flat surfaces outcropping in a rocky slope using 3D data obtained with LiDAR. The method identifies and defines the algebraic equations of the different planes of the rock slope surface by applying a neighbouring-points coplanarity test, finding principal orientations by Kernel Density Estimation, and identifying clusters with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. Different sources of information (synthetic and 3D scanned data) were employed, and a complete sensitivity analysis of the parameters was performed in order to identify optimal values for the variables of the proposed method. In addition, the raw source files and obtained results are freely provided to allow more straightforward method comparison and more reproducible research.
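The core of the pipeline described in this abstract (estimate a normal for each point from its neighbourhood, group the normals into discontinuity sets, and take the densest normal as the principal orientation) can be sketched as follows. This is a simplified illustration, not the authors' implementation; the synthetic point cloud, neighbourhood size `k`, and DBSCAN/KDE parameters are all assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors, KernelDensity
from sklearn.cluster import DBSCAN

def local_normals(points, k=10):
    """Normal of each point = least-variance direction of its k-neighbourhood."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points)
    for i, neigh in enumerate(idx):
        centred = points[neigh] - points[neigh].mean(axis=0)
        # eigenvector of the smallest eigenvalue of the scatter matrix
        _, vecs = np.linalg.eigh(centred.T @ centred)
        n = vecs[:, 0]
        j = np.argmax(np.abs(n))            # orient consistently across the cloud
        normals[i] = n if n[j] > 0 else -n
    return normals

# Two synthetic, spatially separated planes (z = 0 and y = 2)
rng = np.random.default_rng(0)
n = 200
pts = np.vstack([
    np.c_[rng.uniform(0, 1, (n, 2)), np.zeros(n)],                        # normal (0, 0, 1)
    np.c_[rng.uniform(0, 1, n), np.full(n, 2.0), rng.uniform(0, 1, n)],   # normal (0, 1, 0)
])
normals = local_normals(pts)

# Cluster the normals: each cluster corresponds to one discontinuity set
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(normals)

# Principal orientation per set = densest normal under a kernel density estimate
orientations = {}
for lab in set(labels) - {-1}:
    cluster = normals[labels == lab]
    kde = KernelDensity(bandwidth=0.05).fit(cluster)
    orientations[lab] = cluster[np.argmax(kde.score_samples(cluster))]
```

On this noise-free example the two recovered orientations are the plane normals (0, 0, 1) and (0, 1, 0); on real scans the coplanarity test and parameter choices matter far more.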
Abstract:
This paper examines the determinants of foreign direct investment (FDI) under free trade agreements (FTAs) from a new institutional perspective. First, the determinants of FDI are discussed theoretically from this perspective. Then, FDI is analyzed statistically at the aggregate level. Kernel density estimation of firm size reveals some evidence of "structural changes" after FTAs, as characterized by the investing firms' paid-up capital stock. Statistical tests of the average and variance of the size distribution confirm this in the case of FTAs with Asian partner countries. For FTAs with South American partner countries, the presence of FTAs seems to promote larger-scale FDI. These results remain correlational rather than causal, and more statistical analysis would be needed to infer causality. The policy implication is that participants should consider the "institutional" aspects of FTAs; that is, size matters as a determinant of FDI. Future work along this line is needed to study "firm heterogeneity."
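The before/after comparison this abstract describes (a kernel density estimate of firm size plus tests on the mean and variance of the distribution) can be illustrated on synthetic data. The lognormal "paid-up capital" samples below are fabricated for the sketch and are not taken from the paper:

```python
import numpy as np
from scipy.stats import gaussian_kde, ttest_ind, levene

rng = np.random.default_rng(42)
# Hypothetical log paid-up capital of investing firms, pre- and post-FTA
pre = rng.normal(loc=1.0, scale=0.6, size=400)   # before the FTA
post = rng.normal(loc=1.5, scale=0.9, size=400)  # larger and more dispersed after

# Kernel density estimates of the two firm-size distributions
grid = np.linspace(-2, 5, 200)
kde_pre, kde_post = gaussian_kde(pre)(grid), gaussian_kde(post)(grid)

# Tests of the average and of the variance of the size distribution
t_stat, p_mean = ttest_ind(pre, post, equal_var=False)  # Welch test on the mean
w_stat, p_var = levene(pre, post)                       # Levene test on the variance
```

With a genuine shift in both location and spread, both p-values come out small, which is the pattern the paper reads as a "structural change" after the FTA.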
Abstract:
Acoustic estimates of herring and blue whiting abundance were obtained during the surveys using the Simrad ER60 scientific echosounder. The allocation of NASC values to herring, blue whiting and other acoustic targets was based on the composition of the trawl catches and the appearance of the echo recordings. To estimate abundance, the allocated NASC values were averaged over ICES squares (0.5° latitude by 1° longitude). For each statistical square, the unit area density of fish (rA) in number per square nautical mile (N·nm⁻²) was calculated using standard equations (Foote et al., 1987; Toresen et al., 1998). To estimate the total abundance of fish, the unit area abundance for each statistical square was multiplied by the number of square nautical miles in that square and then summed over all statistical squares within defined subareas and over the total area. Biomass was estimated by multiplying the abundance in numbers by the average weight of the fish in each statistical square, then summing over all squares within defined subareas and over the total area. The Norwegian BEAM software (Totland and Godø, 2001) was used to estimate total biomass.
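The per-square arithmetic described above can be sketched as follows. The target-strength relation (TS = 20·log10(L) − 71.9 dB, a commonly cited herring relation) and all the example numbers are illustrative assumptions; the survey's actual constants may differ:

```python
import numpy as np

def fish_density(nasc, length_cm):
    """Unit area density rA (fish per nm^2) from mean NASC (m^2/nm^2) for one square.

    Uses sigma = 4*pi*10^(TS/10) with the commonly cited herring relation
    TS = 20*log10(L) - 71.9 dB (an assumption for this sketch).
    """
    ts = 20 * np.log10(length_cm) - 71.9
    sigma = 4 * np.pi * 10 ** (ts / 10)        # spherical scattering cross-section, m^2
    return nasc / sigma

def square_area_nm2(lat_deg, dlat=0.5, dlon=1.0):
    """Area of an ICES square (0.5 deg lat x 1 deg lon) in square nautical miles."""
    return (dlat * 60) * (dlon * 60 * np.cos(np.radians(lat_deg)))

# One statistical square: mean NASC, mean length, mean weight, centre latitude
nasc, length_cm, weight_kg, lat = 500.0, 30.0, 0.25, 60.0
r_a = fish_density(nasc, length_cm)            # fish per nm^2
abundance = r_a * square_area_nm2(lat)         # fish in the square
biomass_kg = abundance * weight_kg             # abundance x mean weight
```

Summing `abundance` and `biomass_kg` over all squares in a subarea reproduces the aggregation step the abstract describes.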
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
This paper investigates the performance of the EASI algorithm and the proposed EKENS algorithm on linear and nonlinear mixtures. The proposed EKENS algorithm is based on the modified equivariant algorithm and kernel density estimation. The theory and characteristics of both algorithms are discussed for the blind source separation model. The separation structure for nonlinear mixtures is based on a nonlinear stage followed by a linear stage. Simulations with artificial and natural data demonstrate the feasibility and good performance of the proposed EKENS algorithm.
Abstract:
An emerging issue in the field of astronomy is the integration, management and utilization of databases from around the world to facilitate scientific discovery. In this paper, we investigate the application of the machine learning techniques of support vector machines and neural networks to the problem of amalgamating catalogues of galaxies from two disparate data sources: radio and optical. Formulating this as a classification problem presents several challenges, including dealing with a highly unbalanced data set. Unlike the conventional approach to the problem (which is based on a likelihood ratio), machine learning does not require density estimation and is shown here to provide a significant improvement in performance. We also report some experiments that explore the importance of the radio and optical data features for the matching problem.
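The classification setup with its class imbalance can be sketched minimally. Synthetic Gaussian features stand in for the radio/optical attributes, and `class_weight='balanced'` is one standard way to handle the imbalance; the paper's exact features and treatment may differ:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)
# Synthetic stand-in for cross-matching features: 2000 non-matches, 100 matches
neg = rng.normal(0.0, 1.0, (2000, 4))
pos = rng.normal(1.5, 1.0, (100, 4))
X = np.vstack([neg, pos])
y = np.r_[np.zeros(2000), np.ones(100)]

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
# class_weight='balanced' reweights errors inversely to class frequency,
# so the rare "match" class is not drowned out by the majority class
clf = SVC(kernel="rbf", class_weight="balanced").fit(Xtr, ytr)
f1 = f1_score(yte, clf.predict(Xte))
```

On data this unbalanced, accuracy is nearly meaningless (always predicting "non-match" scores ~95%), which is why an F1-style metric on the rare class is the sensible yardstick.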
Abstract:
Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data where the number of observations n is very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation replaces the posterior with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations over each input, thus approximating a batch solution.
The resulting sparse learning algorithm is generic: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance on classical regression and classification tasks as well as on data assimilation and a simple density estimation problem.
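The flavour of the final sparse representation can be sketched with a subset-of-regressors approximation, in which a small "BV-like" set of centres carries the whole GP. This is a rough illustration of sparsity through a basis-vector set, not the thesis's KL-based online algorithm; the RBF kernel, noise level, and evenly spaced centres (standing in for the KL-selected BV set) are all assumptions:

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel matrix between row-point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))                 # 200 noisy observations of sin(x)
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

bv = np.linspace(-3, 3, 15)[:, None]             # 15 basis vectors represent the GP
noise = 0.1**2
Kuu = rbf(bv, bv)                                # kernel among basis vectors
Kuf = rbf(bv, X)                                 # basis vectors vs training inputs

# Subset-of-regressors posterior mean at test points:
# m(x*) = k(x*, bv) (noise * Kuu + Kuf Kuf^T)^{-1} Kuf y
Sigma = noise * Kuu + Kuf @ Kuf.T
Xs = np.linspace(-3, 3, 50)[:, None]
mean = rbf(bv, Xs).T @ np.linalg.solve(Sigma, Kuf @ y)

rmse = np.sqrt(np.mean((mean - np.sin(Xs[:, 0])) ** 2))
```

Only the 15 × 15 system grows with the basis set, not with the 200 training points, which is the point of the sparse representation.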
Abstract:
For analysing financial time series, two main opposing viewpoints exist: either capital markets are completely stochastic and prices therefore follow a random walk, or they are deterministic and consequently predictable. For each of these views a great variety of tools exists with which one can try to confirm the corresponding hypothesis. Unfortunately, these methods are not well suited to data characterised in part by both paradigms. This thesis investigates both approaches in order to model the behaviour of financial time series. In the deterministic framework, methods are used to characterise the dimensionality of embedded financial data. The stochastic approach includes an estimation of the unconditional and conditional return distributions using parametric, non-parametric and semi-parametric density estimation techniques. Finally, it is shown how elements from these two approaches could be combined to achieve a more realistic model for financial time series.
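The parametric-versus-non-parametric contrast in the stochastic part can be illustrated on synthetic heavy-tailed "returns": a Gaussian fit understates the sharp peak (and fat tails) of the return distribution, which a kernel density estimate recovers. The Student-t returns below are fabricated for the sketch:

```python
import numpy as np
from scipy.stats import norm, t, gaussian_kde, kurtosis

rng = np.random.default_rng(3)
# Heavy-tailed synthetic daily returns (Student-t, df = 3, ~1% scale)
returns = 0.01 * t.rvs(df=3, size=3000, random_state=rng)

# Parametric: Gaussian fit by maximum likelihood
mu, sd = returns.mean(), returns.std()
peak_gauss = norm.pdf(mu, mu, sd)      # Gaussian density at its own centre

# Non-parametric: kernel density estimate
kde = gaussian_kde(returns)
peak_kde = kde(mu)[0]                  # KDE density near the centre

excess_kurt = kurtosis(returns)        # > 0 signals heavier-than-Gaussian tails
```

The Gaussian, forced to match the outlier-inflated variance, flattens the peak; the KDE tracks the empirical shape, which is why the thesis contrasts parametric with non- and semi-parametric estimators.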
Abstract:
Cardiotocographic data provide physicians with information about foetal development and permit the assessment of conditions such as foetal distress. An incorrect evaluation of foetal status can of course be very dangerous. To improve the interpretation of cardiotocographic recordings, great interest has been devoted to spectral analysis of foetal heart rate (FHR) variability. It is worth remembering, however, that the foetal heart rate is intrinsically an unevenly sampled series, so a zero-order, linear or cubic spline interpolation is usually employed to produce an evenly sampled series. This is problematic for frequency analyses because interpolation alters the FHR power spectrum. In particular, the interpolation process can produce alterations of the power spectral density that affect, for example, the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency ratio), which represents an important clinical parameter. In order to estimate the alterations of the frequency spectrum of the FHR variability signal due to interpolation and cardiotocographic storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different orders of interpolation and storage rates), and computed the SVB values from the power spectral density. For power spectral density estimation we chose the Lomb method, as suggested by other authors for studying uneven heart rate series in adults. In summary, the results show that evaluating SVB on the evenly spaced FHR series leads to its overestimation, due to both the interpolation process and the storage rate; however, cubic spline interpolation produces more robust and accurate results. © 2010 Elsevier Ltd. All rights reserved.
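The appeal of the Lomb method is that it estimates the spectrum directly from the uneven samples, with no interpolation step. A minimal sketch on a simulated uneven series follows; the band edges (0.04–0.15 Hz for LF, 0.15–0.4 Hz for HF, the usual adult HRV bands) and the synthetic signal are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)
# Unevenly sampled variability signal: strong LF (0.1 Hz) + weaker HF (0.3 Hz)
t = np.sort(rng.uniform(0, 300, 600))            # uneven sample times, seconds
y = 2.0 * np.sin(2 * np.pi * 0.1 * t) + 1.0 * np.sin(2 * np.pi * 0.3 * t)

freqs_hz = np.linspace(0.01, 0.5, 500)
# lombscargle works on the raw uneven samples and expects angular frequencies
pgram = lombscargle(t, y, 2 * np.pi * freqs_hz)

lf = pgram[(freqs_hz >= 0.04) & (freqs_hz < 0.15)].sum()
hf = pgram[(freqs_hz >= 0.15) & (freqs_hz < 0.4)].sum()
svb = lf / hf                                    # sympatho-vagal balance estimate
```

Repeating the same band-power computation on an interpolated, evenly resampled copy of `y` is the comparison the paper uses to quantify the SVB overestimation.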
Abstract:
2000 Mathematics Subject Classification: 65C05
Abstract:
Typical Double Auction (DA) models assume that trading agents are one-way traders. With this limitation, they cannot directly reflect the fact that individual traders in financial markets (the most popular application of the double auction) choose their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Based on experiments under both static and dynamic settings, we find that the allocative efficiency of a static continuous BDA market comes from the rational selection of trading directions and is negatively related to the intelligence of the trading strategies. Moreover, we introduce the Kernel trading strategy, designed for the general DA market on the basis of probability density estimation. Our experiments show that it outperforms some intelligent DA market trading strategies. Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
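The kernel-based idea (use a probability density estimate of observed transaction prices to guide quoting) can be sketched very roughly. The mode-quoting rule below is a deliberate simplification for illustration, not the paper's actual Kernel strategy, and the trade-price data are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
# Recent transaction prices observed in the market (synthetic)
trades = np.concatenate([rng.normal(100, 2, 80), rng.normal(105, 1, 20)])

# Kernel density estimate of where trades actually clear
kde = gaussian_kde(trades)
grid = np.linspace(trades.min(), trades.max(), 400)
market_price = grid[np.argmax(kde(grid))]   # mode of the estimated trade-price density

def quote(direction, limit):
    """Shout near the estimated market price, capped by the trader's limit price."""
    return min(market_price, limit) if direction == "buy" else max(market_price, limit)
```

A two-way trader could also compare `market_price` against its private values on each side to choose a trading direction dynamically, which is the behaviour the BDA market is designed to accommodate.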