974 results for Point estimation
Abstract:
Camera motion estimation is one of the most significant steps in structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparative study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" and reconstructed using an off-the-shelf camera capturing imagery from all possible positions that maximally cover the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions were estimated from the corresponding image points by applying the aforementioned algorithms, and the results were evaluated.
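The normalized 8-point algorithm owes much of its robustness to Hartley's normalization of the matched image points before the linear estimation step: translate the points so their centroid is at the origin, then scale them so the mean distance from the origin is √2. A minimal illustrative sketch (not the paper's implementation; function and variable names are ours), assuming 2-D points given as (x, y) tuples:

```python
import math

def hartley_normalize(points):
    """Translate points so their centroid is at the origin and scale
    them so the mean distance from the origin is sqrt(2).
    Returns the normalized points and the 3x3 similarity transform."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mean_dist = sum(math.hypot(x - cx, y - cy) for x, y in points) / n
    s = math.sqrt(2) / mean_dist
    # Similarity transform in homogeneous coordinates: scale then translate.
    T = [[s, 0.0, -s * cx],
         [0.0, s, -s * cy],
         [0.0, 0.0, 1.0]]
    normalized = [((x - cx) * s, (y - cy) * s) for x, y in points]
    return normalized, T
```

The transform T is kept so that a fundamental or essential matrix estimated from the normalized points can be de-normalized afterwards.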
Abstract:
Conventional Hidden Markov models generally consist of a Markov chain observed through a linear map corrupted by additive noise. This general class of model has enjoyed a huge and diverse range of applications, for example, speech processing, biomedical signal processing and, more recently, quantitative finance. However, a lesser-known extension of this general class of model is the so-called Factorial Hidden Markov Model (FHMM). FHMMs also have diverse applications, notably in machine learning, artificial intelligence and speech recognition [13, 17]. FHMMs extend the usual class of HMMs by supposing that the partially observed state process is a finite collection of distinct Markov chains, either statistically independent or dependent. There is also considerable current activity in applying collections of partially observed Markov chains to complex action recognition problems; see, for example, [6]. In this article we consider the Maximum Likelihood (ML) parameter estimation problem for FHMMs. Much of the extant literature concerning this problem presents parameter estimation schemes based on full-data log-likelihood EM algorithms. This approach can be slow to converge and often imposes heavy demands on computer memory; the latter point is particularly relevant for the class of FHMMs whose state-space dimensions are relatively large. The contribution of this article is to develop new recursive formulae for a filter-based EM algorithm that can be implemented online. Our new formulae yield equivalent ML estimators; however, they are purely recursive and so significantly reduce numerical complexity and memory requirements. A computer simulation is included to demonstrate the performance of our results. © Taylor & Francis Group, LLC.
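The recursive, filter-based structure the article advocates can be illustrated with the standard normalized forward filter for a single hidden Markov chain, which updates the state posterior one observation at a time with fixed memory. This sketch is only background for the recursion idea, not the authors' FHMM formulae; all names are ours:

```python
def hmm_filter(pi0, A, obs_lik):
    """Normalized forward filter: recursively computes p(x_k | y_1..y_k).
    pi0: prior distribution over states at time 0,
    A[i][j] = P(x_{k+1} = j | x_k = i),
    obs_lik: per-time-step list of per-state observation likelihoods."""
    n = len(pi0)
    pi = list(pi0)
    history = []
    for lik in obs_lik:
        # Predict: one-step-ahead prior via the transition matrix.
        pred = [sum(pi[i] * A[i][j] for i in range(n)) for j in range(n)]
        # Update: weight by the observation likelihood and renormalize.
        post = [pred[j] * lik[j] for j in range(n)]
        z = sum(post)
        pi = [p / z for p in post]
        history.append(pi)
    return history
```

Because only the current normalized distribution is carried forward, memory use is constant in the length of the observation sequence, which is the property the article's filter-based EM recursions exploit.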
Abstract:
Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one year post-fracture. Although various preventative therapies are available, patient selection is difficult. The current state-of-the-art risk assessment tool (FRAX) ignores focal structural defects, such as cortical bone thinning, a critical component in characterizing hip fragility. Cortical thickness can be measured using CT, but this is expensive and involves a significant radiation dose. Instead, Dual-Energy X-ray Absorptiometry (DXA) is currently the preferred imaging modality for assessing hip fracture risk and is used routinely in clinical practice. Our ambition is to develop a tool to measure cortical thickness using multi-view DXA instead of CT. In this initial study, we work with digitally reconstructed radiographs (DRRs) derived from CT data as a surrogate for DXA scans: this enables us to compare the thickness estimates directly with the gold-standard CT results. Our approach involves a model-based femoral shape reconstruction followed by a data-driven algorithm to extract numerous cortical thickness point estimates. In a series of experiments on the shaft and trochanteric regions of 48 proximal femurs, we validated our algorithm and established its performance limits using 20 views in the range 0°-171°: estimation errors were 0.19 ± 0.53 mm (mean ± one standard deviation). In a more clinically viable protocol using four views in the range 0°-51°, where no other bony structures obstruct the projection of the femur, measurement errors were -0.07 ± 0.79 mm. © 2013 SPIE.
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the approach in the current literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
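The landmark scheme described above can be sketched directly: an offline BFS from each landmark, then an online estimate using the triangle-inequality upper bound min over landmarks of d(u, l) + d(l, v). A minimal illustration for unweighted graphs stored as adjacency-list dicts (all names are ours, not the paper's code):

```python
from collections import deque

def bfs_distances(graph, source):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(graph, landmarks):
    """Offline step: distances from every node to each landmark."""
    return {l: bfs_distances(graph, l) for l in landmarks}

def estimate_distance(index, u, v):
    """Online step: tightest triangle-inequality upper bound over landmarks."""
    return min(d[u] + d[v] for d in index.values())
```

A landmark lying on (or near) the true shortest path gives an exact estimate; a poorly placed landmark only gives a loose upper bound, which is why the selection strategies compared in the paper matter so much.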
Abstract:
In the biological sciences, stereological techniques are frequently used to infer changes in structural parameters (volume fraction, for example) between samples from different populations or subject to differing treatment regimes. Non-homogeneity of these parameters is virtually guaranteed, both between experimental animals and within the organ under consideration. A two-stage strategy is then desirable, the first stage involving unbiased estimation of the required parameter, separately for each experimental unit, the latter being defined as a subset of the organ for which homogeneity can reasonably be assumed. In the second stage, these point estimates are used as data inputs to a hierarchical analysis of variance, to distinguish treatment effects from variability between animals, for example. Techniques are therefore required for unbiased estimation of parameters from potentially small numbers of sample profiles. This paper derives unbiased estimates of linear properties in one special case—the sampling of spherical particles by transmission microscopy, when the section thickness is not negligible and the resulting circular profiles are subject to lower truncation. The derivation uses the general integral equation formulation of Nicholson (1970); the resulting formulae are simplified algebraically, and their efficient computation is discussed. Bias arising from variability in slice thickness is shown to be negligible in typical cases. The strategy is illustrated for data examining the effects, on the secondary lysosomes in the digestive cells, of exposure of the common mussel to hydrocarbons. Prolonged exposure, at 30 μg l−1 total oil-derived hydrocarbons, is seen to increase the average volume of a lysosome, and the volume fraction that lysosomes occupy, but to reduce their number.
Abstract:
This study investigates the superposition-based cooperative transmission system. In this system, a key point is for the relay node to detect the data transmitted from the source node. This issue has received little attention in the existing literature, as the channel is usually assumed to be flat-fading and a priori known. In practice, however, the channel is not only a priori unknown but also subject to frequency-selective fading. Channel estimation is thus necessary. Of particular interest is channel estimation at the relay node, which imposes extra requirements on the system resources. The authors propose a novel turbo least-square channel estimator that exploits the superposition structure of the transmitted data. The proposed channel estimator not only requires no pilot symbols but also has significantly better performance than the classic approach. The soft-in-soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results are presented to verify the proposed algorithm.
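For background, the simplest pilot-aided least-squares channel estimate (flat-fading, known training symbols) is sketched below; the paper's contribution is a pilot-free turbo extension of this idea to frequency-selective channels, which this sketch does not attempt. Names are ours:

```python
def ls_channel_estimate(tx, rx):
    """Least-squares estimate of a flat-fading channel gain h from known
    transmitted symbols tx and received samples rx = h*tx + noise.
    Minimizing sum |rx_k - h*tx_k|^2 gives the closed form
    h = sum(conj(tx)*rx) / sum(|tx|^2)."""
    num = sum(x.conjugate() * y for x, y in zip(tx, rx))
    den = sum(abs(x) ** 2 for x in tx)
    return num / den
```

In the turbo setting, soft symbol decisions from the decoder play the role of the known symbols tx, so the estimate is refined iteratively without dedicating any pilots.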
Abstract:
Consumption of milk and dairy products is considered one of the main routes of human exposure to Mycobacterium avium subsp. paratuberculosis (MAP). Quantitative data on the MAP load in raw cows' milk are an essential starting point for exposure assessment. Our study provides this information on a regional scale, estimating the load of MAP in bulk tank milk (BTM) produced in the Emilia-Romagna region (Italy). The survey was carried out on 2934 BTM samples (88.6% of the farms in the region) using two different target sequences for qPCR (f57 and IS900). Data on the performance of both qPCRs are also reported, highlighting the superior sensitivity of IS900-qPCR. Seven hundred and eighty-nine samples tested MAP-positive (apparent prevalence 26.9%) by IS900 qPCR. However, only 90 of these samples were quantifiable by qPCR. The quantifiable samples contained a median load of 32.4 MAP cells mL−1 (and a maximum load of 1424 MAP cells mL−1). This study has shown that a small proportion (3.1%) of BTM samples from the Emilia-Romagna region contained MAP in excess of the limit of detection (1.5 × 101 MAP cells mL−1), indicating low potential exposure for consumers if the milk subsequently undergoes pasteurization or is destined for typical hard cheese production.
Abstract:
In this paper, we consider the uplink of a single-cell multi-user single-input multiple-output (MU-SIMO) system with in-phase and quadrature-phase imbalance (IQI). In particular, we investigate the effect of receive (RX) IQI on the performance of MU-SIMO systems with large antenna arrays employing maximum-ratio combining (MRC) receivers. In order to study how IQI affects channel estimation, we derive a new channel estimator for the IQI-impaired model and show that the higher the signal-to-noise ratio (SNR), the higher the impact of IQI on the spectral efficiency (SE). Moreover, a novel pilot-based joint estimator of the augmented MIMO channel matrix and the IQI coefficients is described, and a low-complexity IQI compensation scheme is then proposed which is based on the estimated IQI coefficients and is independent of the channel gain. The performance of the proposed compensation scheme is analytically evaluated by deriving a tractable approximation of the ergodic SE, assuming transmission over Rayleigh fading channels with large-scale fading. Furthermore, we investigate how many MSs should be scheduled in massive multiple-input multiple-output (MIMO) systems with IQI and show that the highest SE loss occurs at the optimal operating point. Finally, by deriving asymptotic power scaling laws and proving that the SE loss due to IQI is asymptotically independent of the number of BS antennas, we show that massive MIMO is resilient to the effect of RX IQI.
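As background for the MRC receivers considered above: maximum-ratio combining weights each antenna's sample by the conjugate of its channel gain, so that (absent IQI and with independent noise) the post-combining SNR is the sum of the per-antenna SNRs. A minimal single-user sketch with hypothetical function names, not the paper's impaired-model estimator:

```python
def mrc_combine(h, y):
    """Maximum-ratio combining: weight each antenna sample by the
    conjugate channel gain and sum. Model: y_m = h_m * s + n_m."""
    return sum(g.conjugate() * r for g, r in zip(h, y))

def mrc_output_snr(h, noise_var):
    """Post-combining SNR for unit-power symbols: sum_m |h_m|^2 / noise_var,
    i.e. the per-antenna SNRs add coherently."""
    return sum(abs(g) ** 2 for g in h) / noise_var
```

The additive SNR growth with the number of antennas is what makes the large-array regime attractive, and it is exactly this gain that RX IQI threatens to erode.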
Abstract:
This work investigates new channel estimation schemes for the forthcoming and future generations of cellular systems in which cooperative techniques are considered. The studied cooperative systems are designed to re-transmit the received information to the user terminal via relay nodes, in order to exploit benefits such as high throughput, fairness in access and extra coverage. The cooperative scenarios rely on OFDM-based systems employing classical and pilot-based channel estimators, which were originally designed for point-to-point links. The analytical studies consider two relaying protocols, namely Amplify-and-Forward and Equalise-and-Forward, both for the downlink case. The statistics of the relaying channels show that such channels have specific characteristics that call for proper filter and equalisation designs. Therefore, adjustments in the estimation process are needed in order to obtain the relay channel estimates, refine these initial estimates via iterative processing, and obtain other system parameters required for equalisation. The system performance is evaluated considering standardised specifications and the International Telecommunication Union multipath channel models.
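The pilot-based OFDM estimators mentioned above commonly take least-squares estimates at the pilot subcarriers and interpolate across the data subcarriers. A minimal sketch for a comb-type pilot pattern with linear interpolation (illustrative only, with names of our own choosing; the iterative refinement discussed in the work is not shown):

```python
import bisect

def pilot_ls_interpolate(rx, pilots, n_subcarriers):
    """LS channel estimates at pilot subcarriers, linearly interpolated
    across the remaining subcarriers of one OFDM symbol.
    rx: received frequency-domain symbols.
    pilots: dict mapping pilot subcarrier index -> known pilot symbol."""
    idx = sorted(pilots)
    h_p = {k: rx[k] / pilots[k] for k in idx}  # LS estimate at each pilot
    est = []
    for k in range(n_subcarriers):
        if k <= idx[0]:
            est.append(h_p[idx[0]])    # hold flat before the first pilot
        elif k >= idx[-1]:
            est.append(h_p[idx[-1]])   # hold flat after the last pilot
        elif k in h_p:
            est.append(h_p[k])
        else:
            j = bisect.bisect_left(idx, k)          # bracketing pilots
            k0, k1 = idx[j - 1], idx[j]
            t = (k - k0) / (k1 - k0)
            est.append(h_p[k0] * (1 - t) + h_p[k1] * t)
    return est
```

Linear interpolation is the simplest choice; practical designs often replace it with filters matched to the channel's delay and Doppler statistics, which is exactly the kind of adjustment the relaying channels call for.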
Abstract:
Doctoral thesis, Electronic engineering and computing - Signal processing, Universidade do Algarve, 2008
Abstract:
In this paper, an open-source solution for the measurement of temperature and ultrasonic signals (RF-lines) is proposed. This software is an alternative to expensive commercial data acquisition software, enabling the user to tune applications to particular acquisition architectures. The collected ultrasonic and temperature signals were used for non-invasive temperature estimation using neural networks. Precise temperature estimators are essential for the safe and effective application of thermal therapies in humans; if such estimators exist, then effective controllers can be developed for the therapeutic instrumentation. In previous works, the time-shifts between RF-line echoes were extracted and used to create neural network estimators. The obtained estimators successfully represent the temperature in the time-space domain, achieving a maximum absolute error below the threshold value defined for hyperthermia/diathermia applications.
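The time-shift feature used by the estimators above is typically extracted by cross-correlating an RF-line segment against a reference segment and picking the lag that maximizes the correlation. A minimal integer-lag sketch with hypothetical names (real systems refine this with sub-sample interpolation):

```python
def best_lag(reference, signal, max_lag):
    """Return the integer lag (in samples) that maximizes the
    cross-correlation of `signal` against `reference`."""
    def xcorr(lag):
        total = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(signal):
                total += r * signal[j]
        return total
    # Search the symmetric window [-max_lag, max_lag] for the peak.
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```

Because the speed of sound in tissue varies with temperature, these echo-shifts track local heating, which is why they are a natural input feature for the neural-network temperature estimators.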
Abstract:
Aiming at the time-spatial characterization of tissue temperature when ultrasound is applied for thermal therapeutic purposes, two experiments were developed considering gel-based phantoms, one of them including an artificial blood vessel. The blood vessel mimicked blood flow in a common carotid artery. For each experiment, the phantoms were heated by a therapeutic ultrasound (TU) device emitting at different intensities (0.5, 1, 1.5 and 1.8 W/cm2). Temperature was monitored by thermocouples and estimated from the imaging ultrasound transducer's signals at specific spatial points inside the phantom. The temperature estimation procedure was based on temporal echo-shifts (TES), computed from echo-shifts collected through the imaging ultrasound (IU) transducer. Results show that TES is a reliable non-invasive method of temperature estimation, regardless of the TU intensity applied. The presence of a pulsatile blood-flow vessel at the focal point of the TU transducer reduces thermal variation by more than 50%, also affecting the temperature variation in the surrounding area. In other words, vascularized tissues require longer ultrasound thermal therapeutic sessions or higher TU intensities, and the inclusion of IU in the therapeutic procedure enables non-invasive monitoring of temperature. © 2013 IEEE.
Abstract:
In the last few years, the number of systems and devices that use voice-based interaction has grown significantly. For continued use of these systems, the interface must be reliable and pleasant in order to provide an optimal user experience. However, there are currently very few studies that try to evaluate how pleasant a voice is, from a perceptual point of view, when the final application is a speech-based interface. In this paper we present an objective definition of voice pleasantness based on the composition of a representative feature subset, and a new automatic system for voice pleasantness classification and intensity estimation. Our study is based on a database composed of European Portuguese female voices, but the methodology can be extended to male voices or to other languages. In the objective performance evaluation, the system achieved a 9.1% error rate for voice pleasantness classification and a 15.7% error rate for voice pleasantness intensity estimation.
Abstract:
The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. The early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset.
Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses over faces, partially canceling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent from the ICs that constitute the N170 effect. This suggests that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that have spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a characteristic that is not always desirable for a topic that is highly coupled to ecological validity. Third, by unmixing the constituent processes of the EEG signals, new analysis strategies are made available. In particular, the exploration of the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: Is the face effect a special relationship between low-level and high-level processes along the visual stream?
Abstract:
My thesis comprises three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for a linear Gaussian state-space model. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyse the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm, and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large, or when repeated draws of the states are required for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, used to analyse transaction count data in financial markets. In the second chapter, we propose a new technique for analysing multivariate stochastic volatility models. The proposed method is based on drawing the volatility efficiently from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with specific degrees of freedom to capture the heterogeneity of the returns.
The volatility is drawn as a block in the time dimension and one series at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and for two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error, using stochastic volatility models. We take the point of view of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that carries information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which permit the formulation not only of posterior densities of the volatility, but also of predictive densities of future volatility. We compare the volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility. This approach differs from existing work in the empirical literature, which is mostly limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns on stock indices and exchange rates. The competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
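The stochastic volatility models estimated in the thesis take log-volatility as a stationary AR(1) process, with returns that are conditionally Gaussian given the volatility. A toy simulator of this data-generating process (illustrative only; the thesis concerns Bayesian MCMC estimation of such models, not simulation, and all names here are ours):

```python
import math
import random

def simulate_sv(n, mu, phi, sigma_eta, seed=0):
    """Simulate n returns from a basic stochastic volatility model:
    h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,  eta_t ~ N(0, 1)
    r_t = exp(h_t / 2) * eps_t,                        eps_t ~ N(0, 1).
    Returns the list of returns and the latent log-volatilities."""
    rng = random.Random(seed)
    h = mu  # start the log-volatility at its stationary mean
    returns, log_vols = [], []
    for _ in range(n):
        h = mu + phi * (h - mu) + sigma_eta * rng.gauss(0, 1)
        returns.append(math.exp(h / 2) * rng.gauss(0, 1))
        log_vols.append(h)
    return returns, log_vols
```

From the estimation side, only the returns are observed; the log-volatilities are the latent states whose posterior (and predictive) densities the MCMC methods target.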