973 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability


Relevance:

40.00%

Publisher:

Abstract:

The article focuses on the upper secondary matriculation examination in Finland as a school-leaving and university entrance examination. The presented research addresses the question of whether increased choice of the subject-specific examinations has the potential to undermine the comparability of examination results and to direct students' choices not only in the examination but already beforehand at school. The authors refer to Finland's tradition of more than 160 years of a national examination connecting the academic track of upper secondary schools with universities. The authors explain the Finnish system by describing the adoption of a course-based (vs. class- or year-based) curriculum for the three-year upper secondary education and the subsequent reforms in the matriculation examination. These reforms have increased students' choices considerably with regard to the subject-specific exams included in the examination (a minimum of four). As a result, high-achieving students compete against each other in the more demanding subjects, while the less able share the same normal distribution of grades in the less demanding subjects. As a consequence, students tend toward strategic exam planning, which in turn affects their study choices at school, often to the detriment of the more demanding subjects and, subsequently, of students' career opportunities, endangering the traditional national objective of an all-round pre-academic upper secondary education. This contribution provides an overview of Finnish upper secondary education and of the matriculation examination (cf. Klein, 2013) while studying three separate but related issues using data from several years of Finnish matriculation results: the relation of the matriculation examination and the curriculum; the problems of comparability vis-à-vis university entry due to the increased choice within the examination; and the relations between students' examination choices and their course selection and achievement during upper secondary school. (DIPF/Orig.)

Relevance:

30.00%

Publisher:

Abstract:

Background: High-throughput SNP genotyping has become an essential requirement for molecular breeding and population genomics studies in plant species. Large-scale SNP developments have been reported for several mainstream crops. A growing interest now exists to expand the speed and resolution of genetic analysis to outbred species with highly heterozygous genomes. When nucleotide diversity is high, a refined diagnosis of the target SNP sequence context is needed to convert queried SNPs into high-quality genotypes using the Golden Gate Genotyping Technology (GGGT). This issue becomes exacerbated when attempting to transfer SNPs across species, a scarcely explored topic in plants, and likely to become significant for population genomics and interspecific breeding applications in less domesticated and less funded plant genera. Results: We have successfully developed the first set of 768 SNPs assayed by the GGGT for the highly heterozygous genome of Eucalyptus from a mixed Sanger/454 database with 1,164,695 ESTs and the preliminary 4.5X draft genome sequence for E. grandis. A systematic assessment of in silico SNP filtering requirements showed that stringent constraints on the SNP surrounding sequences have a significant impact on SNP genotyping performance and polymorphism. SNP assay success was high for the 288 SNPs selected with more rigorous in silico constraints; 93% of them provided high-quality genotype calls and 71% of them were polymorphic in a diverse panel of 96 individuals of five different species. SNP reliability was high across nine Eucalyptus species belonging to three sections within subgenus Symphyomyrtus and still satisfactory across species of two additional subgenera, although polymorphism declined as phylogenetic distance increased. Conclusions: This study indicates that the GGGT performs well both within and across species of Eucalyptus notwithstanding its nucleotide diversity >= 2%. The development of a much larger array of informative SNPs across multiple Eucalyptus species is feasible, although strongly dependent on having a representative and sufficiently deep collection of sequences from many individuals of each target species. A higher-density SNP platform will be instrumental to undertake genome-wide phylogenetic and population genomics studies and to implement molecular breeding by Genomic Selection in Eucalyptus.
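To make the notion of "stringent in silico constraints on the SNP surrounding sequences" concrete, here is a hypothetical filtering sketch; the flank length, the ambiguity-code rule, and the minimum distance to neighbouring polymorphisms are illustrative assumptions, not the thresholds used in the study.

```python
# Hypothetical sketch of in silico SNP context filtering before assay design.
# Thresholds (60 bp clean flanks, no IUPAC ambiguity codes, no neighbouring
# SNP closer than 60 bp) are assumptions chosen for illustration only.

UNAMBIGUOUS = set("ACGT")

def passes_flank_filter(flank_5p, flank_3p, neighbour_snp_offsets,
                        min_flank=60, min_gap=60):
    """Return True if the SNP context satisfies the (assumed) constraints."""
    # 1. Enough sequence on both sides of the target SNP.
    if len(flank_5p) < min_flank or len(flank_3p) < min_flank:
        return False
    # 2. No ambiguity codes (N, R, Y, ...) in the assay design region.
    region = (flank_5p[-min_flank:] + flank_3p[:min_flank]).upper()
    if any(base not in UNAMBIGUOUS for base in region):
        return False
    # 3. No other known polymorphism too close to the target site.
    if any(abs(offset) < min_gap for offset in neighbour_snp_offsets):
        return False
    return True

# Example: a SNP with a second known variant only 12 bp away is rejected.
print(passes_flank_filter("A" * 80, "C" * 80, neighbour_snp_offsets=[12]))
```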

Relevance:

30.00%

Publisher:

Abstract:

We present a novel array RLS algorithm with forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority as compared to alternative algorithms in the context of adaptive beamforming.
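For context, a minimal sketch of the standard exponentially-weighted RLS recursion (not the array algorithm proposed in the abstract) illustrates the fading-regularization effect it addresses: the initial regularization enters only through P(0) and is progressively forgotten as the forgetting factor down-weights old data. Parameter values are illustrative.

```python
# Standard exponentially-weighted RLS; the initial regularization delta*I
# only appears through P(0) = I/delta and fades at the rate lambda^n.
import numpy as np

def ewrls(X, d, lam=0.99, delta=1e2):
    """X: (n_samples, n_taps) regressors, d: (n_samples,) desired signal."""
    n, m = X.shape
    w = np.zeros(m)
    P = np.eye(m) / delta              # inverse of the (fading) regularized covariance
    for i in range(n):
        x = X[i]
        k = P @ x / (lam + x @ P @ x)  # gain vector
        e = d[i] - w @ x               # a priori error
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
    return w

# Toy example: estimate a known 4-tap filter from noisy data.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
w_true = np.array([1.0, -0.5, 0.25, 0.0])
d = X @ w_true + 0.01 * rng.standard_normal(500)
print(np.round(ewrls(X, d), 3))        # should be close to w_true
```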

Relevance:

30.00%

Publisher:

Abstract:

The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which modifies an original solution containing negative values. The technique yields stable and converged solutions, and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
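As a rough illustration of the regularized least-squares inversion described here (not the RIDFEC collocation scheme itself), a Tikhonov-type sketch with a crude non-negativity repair might look as follows; the kernel, grids, and regularization parameter are purely illustrative.

```python
# Generic regularized inversion of an adsorption-type integral equation
# (Fredholm first kind): discretize, solve Tikhonov-regularized least
# squares, then clip to enforce non-negativity.
import numpy as np

def regularized_psd(K, q, alpha=1e-3):
    """K: (n_pressures, n_pore_sizes) kernel of local isotherms,
    q: measured isotherm, alpha: regularization parameter."""
    n = K.shape[1]
    # Minimize ||K f - q||^2 + alpha ||f||^2  ->  (K^T K + alpha I) f = K^T q
    f = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ q)
    return np.clip(f, 0.0, None)       # crude non-negativity repair

# Tiny synthetic example; the regularization keeps the solution stable
# despite the smooth (ill-conditioned) kernel.
r = np.linspace(1, 10, 60)             # pore radii (arbitrary units)
p = np.linspace(0.01, 1.0, 40)         # relative pressures
K = 1.0 / (1.0 + np.exp(-(p[:, None] * 10 - r[None, :])))   # toy local isotherm
f_true = np.exp(-0.5 * ((r - 4.0) / 0.8) ** 2)
q = K @ f_true + 1e-3 * np.random.default_rng(1).standard_normal(p.size)
f_est = regularized_psd(K, q, alpha=1e-2)
```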

Relevance:

30.00%

Publisher:

Abstract:

Spaceborne/airborne synthetic aperture radar (SAR) systems provide high-resolution two-dimensional terrain imagery. The paper proposes a technique for combining multiple SAR images, acquired on flight paths slightly separated in the elevation direction, to generate high-resolution three-dimensional imagery. The technique can be viewed as an extension of interferometric SAR (InSAR) in that it generates topographic imagery with an additional dimension of resolution. The 3-D multi-pass SAR imaging system is typically characterised by a relatively short ambiguity length in the elevation direction. To minimise the associated ambiguities, we exploit the relative phase information within the set of images to track the terrain landscape. The SAR images are then coherently combined, via a nonuniform DFT, over a narrow (in elevation) volume centred on the 'dominant' terrain ground plane. The paper includes a detailed description of the technique, background theory, including achievable resolution, and the results of an experimental study.
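A simplified sketch of the core idea, coherently combining the multi-pass samples of one pixel with a nonuniform DFT over candidate elevations, is given below; the geometry, wavelength, and baselines are assumed values, and the paper's phase tracking of the terrain (which restricts the search to a narrow volume around the dominant ground plane) is omitted.

```python
# Per-pixel elevation focusing of multi-pass SAR data via a nonuniform DFT.
import numpy as np

def elevation_focus(pixels, baselines, heights, wavelength=0.03, slant_range=5e3):
    """pixels: (N,) complex samples of one pixel across passes,
    baselines: (N,) elevation baselines [m], heights: (M,) candidate heights [m]."""
    kz = 4.0 * np.pi * baselines / (wavelength * slant_range)   # elevation spatial freqs
    steering = np.exp(-1j * np.outer(heights, kz))              # (M, N) nonuniform DFT matrix
    return steering @ pixels                                    # (M,) elevation profile

# Toy example: a single scatterer at 12 m appears as a peak in |profile|.
rng = np.random.default_rng(2)
baselines = np.sort(rng.uniform(-100, 100, 8))
kz = 4.0 * np.pi * baselines / (0.03 * 5e3)
pixels = np.exp(1j * kz * 12.0)
heights = np.linspace(-30, 30, 241)
profile = elevation_focus(pixels, baselines, heights)
print(heights[np.argmax(np.abs(profile))])
```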

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Electrical Engineering and Computers

Relevance:

30.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
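As an illustration of the constrained least-squares route mentioned above, the following sketch estimates abundances for a single pixel under the linear mixing model with non-negativity and a softly enforced sum-to-one constraint; the augmentation trick and the parameter delta are assumptions of this example, not a method advocated in the chapter.

```python
# Abundance estimation for one pixel under the linear mixing model:
# non-negativity via NNLS, sum-to-one enforced approximately by augmenting
# the system with a heavily weighted extra equation.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(M, y, delta=1e3):
    """M: (n_bands, n_endmembers) signature matrix, y: (n_bands,) pixel spectrum."""
    n_bands, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])   # extra row encodes sum(a) = 1
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)                         # a >= 0 by construction
    return a

# Toy example with three synthetic endmembers.
rng = np.random.default_rng(3)
M = rng.uniform(0.1, 1.0, size=(50, 3))
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(50)
print(np.round(unmix_pixel(M, y), 3))                 # close to [0.6, 0.3, 0.1]
```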
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
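A minimal sketch of the dimensionality-reduction step mentioned above, a plain PCA/SVD projection with an arbitrary choice of the subspace dimension k, might look as follows.

```python
# Project the spectral vectors onto the first k principal components of the
# mean-removed data matrix; purely illustrative, not the method of [49].
import numpy as np

def pca_reduce(Y, k):
    """Y: (n_bands, n_pixels) data matrix; returns (k, n_pixels) scores."""
    mean = Y.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Y - mean, full_matrices=False)
    return U[:, :k].T @ (Y - mean)     # coordinates in the k-dim signal subspace

Y = np.random.default_rng(0).random((200, 1000))   # 200 bands, 1000 pixels
Y_reduced = pca_reduce(Y, k=10)                    # shape (10, 1000)
```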
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
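The appeal of Dirichlet-distributed abundances can be seen directly: every draw satisfies the positivity and full-additivity constraints. The snippet below only illustrates this property with arbitrary parameters; it is not the EM inference scheme described in the chapter.

```python
# Dirichlet samples are non-negative and sum to one, matching the abundance
# constraint set exactly.
import numpy as np

rng = np.random.default_rng(4)
abundances = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=5)   # 5 pixels, 3 endmembers
print(abundances)
print(abundances.sum(axis=1))    # each row sums to 1
```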

Relevance:

30.00%

Publisher:

Abstract:

In the current socioeconomic landscape, expense containment and cuts in the funding of resource-consuming secondary services are driving public institutions to reformulate their processes and methods, as they seek to maintain their citizens' quality of life through programs that prove more efficient and economical. The sustained growth of mobile technologies, together with the emergence of new human-machine interaction paradigms relying on sensors and context-aware systems, has created business opportunities in the development of applications with a civic dimension for individuals and companies, raising their awareness of the value of providing citizen-oriented services. These business opportunities prompted the project team to develop an urban problem reporting platform, built on its geographic information system, for municipal entities. The main objective of this research is the conception, design, and implementation of a complete solution for reporting non-urgent urban problems, distinguished from competing systems by the ease with which citizens can report situations that affect their daily lives. To achieve this distinction from the rest of the offering, several studies were carried out to determine innovative features to implement, as well as all the basic functionality expected in this type of system. These studies led to the implementation of techniques for manual demarcation of problem areas and automatic recognition of the type of problem reported in the images, both developed within the scope of this project. For the correct implementation of the demarcation and image recognition modules, state-of-the-art surveys of these areas were conducted, grounding the choice of methods and technologies to integrate into the project. In this context, the various phases that made up the platform's development process are presented in detail, from the study and comparison of tools, methodologies, and techniques for each of the concepts addressed, through the proposal of a resolution model, to the detailed description of the implemented algorithms. Finally, a performance evaluation of the developed algorithm/classifier pair is carried out, through the definition of metrics that estimate the success or failure of the object classifier. The evaluation is based on a set of test images, collected manually from public problem-reporting platforms, comparing the results obtained by the algorithm against the expected results.
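A hypothetical illustration of the kind of evaluation described above: compare the classifier's predicted problem categories against the expected labels of a manually collected test set and report per-class precision, recall, and overall accuracy. The category names and data are invented for the example.

```python
# Per-class precision/recall and overall accuracy from predicted vs expected labels.
from collections import Counter

def evaluate(predicted, expected):
    classes = sorted(set(expected) | set(predicted))
    tp, fp, fn = Counter(), Counter(), Counter()
    for p, e in zip(predicted, expected):
        if p == e:
            tp[p] += 1
        else:
            fp[p] += 1
            fn[e] += 1
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        print(f"{c:>10}: precision={prec:.2f} recall={rec:.2f}")
    print(f"accuracy={sum(tp.values()) / len(expected):.2f}")

evaluate(predicted=["pothole", "graffiti", "pothole", "trash"],
         expected=["pothole", "pothole", "pothole", "trash"])
```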

Relevance:

30.00%

Publisher:

Abstract:

Introduction: In past decades, the leishmaniasis burden has been low across Egypt; however, changing environment and land use have placed several parts of the country at risk. As a consequence, leishmaniasis has become a particularly difficult health problem, both for local inhabitants and for multinational military personnel. Methods: To evaluate coarse-resolution aspects of the ecology of leishmaniasis transmission, collection records for sandflies and Leishmania species were obtained from diverse sources. To characterize environmental variation across the country, we used multitemporal Land Surface Temperature (LST) and Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) for 2005-2011. Ecological niche models were generated using MaxEnt, and results were analyzed using background similarity tests to assess whether associations among vectors and parasites (i.e., niche similarity) can be detected across broad geographic regions. Results: We found niche similarity only between one vector species and its corresponding parasite species (i.e., Phlebotomus papatasi with Leishmania major), suggesting that the geographic ranges of zoonotic cutaneous leishmaniasis and its potential vector may overlap, but under distinct environmental associations. Other associations (e.g., P. sergenti with L. major) were not supported. Mapping suitable areas for each species suggested that northeastern Egypt is particularly at risk because both parasites have the potential to circulate there. Conclusions: Ecological niche modeling approaches can be used as a first-pass assessment of vector-parasite interactions, offering useful insights into constraints on the geography of transmission patterns of leishmaniasis.
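As a generic illustration of how overlap between two modeled suitability surfaces can be quantified, the sketch below computes Schoener's D on toy grids; it is not the MaxEnt workflow or the background similarity test used in the study.

```python
# Schoener's D between two suitability grids: 1 = identical, 0 = no overlap.
import numpy as np

def schoeners_d(s1, s2):
    """s1, s2: non-negative suitability grids of equal shape."""
    p1 = s1 / s1.sum()
    p2 = s2 / s2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()

rng = np.random.default_rng(5)
vector_suit = rng.random((50, 50))
parasite_suit = 0.7 * vector_suit + 0.3 * rng.random((50, 50))   # partly shared niche
print(round(schoeners_d(vector_suit, parasite_suit), 3))
```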

Relevance:

30.00%

Publisher:

Abstract:

Envenoming snakebites are thought to be a particularly important threat to public health worldwide, especially in rural areas of tropical and subtropical countries. The true magnitude of the public health threat posed by snakebites is unknown, making it difficult for public health officials to optimize prevention and treatment. The objective of this work was to conduct a systematic review of the literature to gather data on snakebite epidemiology in the Amazon region and to describe a case series of snakebites from epidemiological surveillance in the State of Amazonas (1974-2012). Only 11 articles regarding snakebites were found. In the State of Amazonas, information regarding incidents involving snakes is scarce. Historical trends show an increasing number of cases after the second half of the 1980s. Snakebites predominated among adults (20-39 years old; 38%), among males (78.9%) and among those living in rural areas (85.6%). The predominant snake envenomation type was bothropic. The incidence reported by the epidemiological surveillance in the State of Amazonas, reaching up to 200 cases/100,000 inhabitants in some areas, is among the highest annual snakebite incidence rates of any region in the world. The majority of the cases were reported in the rainy season, with a case-fatality rate of 0.6%. Snakebite envenomation imposes a substantial disease burden in the State of Amazonas, representing a challenge for future investigations, including approaches to estimating incidence under-notification and case-fatality rates as well as the factors related to severity and disabilities.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation for the Integrated Master's degree in Psychology

Relevance:

30.00%

Publisher:

Abstract:

In the literature the outcome of contests is either interpreted as win probabilities or as shares of the prize. With this in mind, we examine two approaches to contest success functions. In the first we analyze the implications of contestants' incomplete information concerning the "type" of the contest administrator. While in the case of two contestants this approach can rationalize prominent contest success functions, we show that it runs into difficulties when there are more agents. Our second approach interprets contest success functions as sharing rules and establishes a connection to bargaining and claims problems which is independent of the number of contestants. Both approaches provide foundations for popular contest success functions and guidelines for the definition of new ones. Keywords: Endogenous Contests, Contest Success Function. JEL Classification: C72 (Noncooperative Games), D72 (Economic Models of Political Processes: Rent-Seeking, Elections), D74 (Conflict; Conflict Resolution; Alliances).
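For reference, the canonical example of such a "prominent" contest success function is the ratio-form (Tullock) function, which can be read either as a win probability or as a prize share; the snippet below states it with a decisiveness parameter r > 0 and a uniform tie-breaking rule when all efforts are zero (an assumed, standard convention).

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Ratio-form (Tullock) contest success function with efforts x_1,...,x_n
% and decisiveness parameter r > 0.
\[
  p_i(x_1,\dots,x_n) =
  \begin{cases}
    \dfrac{x_i^{\,r}}{\sum_{j=1}^{n} x_j^{\,r}}, & \text{if } \sum_{j} x_j > 0,\\[2ex]
    \dfrac{1}{n}, & \text{otherwise,}
  \end{cases}
\]
so the same expression can be interpreted either as contestant $i$'s win
probability or as contestant $i$'s share of the prize.
\end{document}
```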

Relevance:

30.00%

Publisher:

Abstract:

Diffusion MRI is a well-established imaging modality providing a powerful way to probe the structure of the white matter non-invasively. Despite its potential, the intrinsic long scan times of these sequences have hampered their use in clinical practice. For this reason, a large variety of methods have been recently proposed to shorten the acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities of deconvolution, these methods use regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered in each voxel is sparse. The well-known Constrained Spherical Deconvolution (CSD) approach resorts to Tikhonov regularization, based on an ℓ2-norm prior, which promotes a weak version of sparsity. Also, in the last few years compressed sensing has been advocated to further accelerate the acquisitions, and ℓ1-norm minimization is generally employed as a means to promote sparsity in the recovered FODs. In this paper, we provide evidence that the use of an ℓ1-norm prior to regularize this class of problems is somewhat inconsistent with the fact that the fiber compartments all sum up to unity. To overcome this ℓ1 inconsistency while simultaneously exploiting sparsity more optimally than through an ℓ2 prior, we reformulate the reconstruction problem as a constrained formulation between a data term and a sparsity prior consisting of an explicit bound on the ℓ0-norm of the FOD, i.e., on the number of fibers. The method has been tested both on synthetic and real data. Experimental results show that the proposed ℓ0 formulation significantly reduces modeling errors compared to the state-of-the-art ℓ2 and ℓ1 regularization approaches.
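To make an ℓ0-bounded formulation concrete, here is a generic greedy sketch (not the paper's algorithm): atoms of an assumed dictionary of single-fiber responses are selected one at a time, up to a hard limit k on the number of fibers, with a non-negative refit on the selected support at each step.

```python
# Greedy reconstruction with a hard bound on the number of non-zero
# coefficients (an l0 constraint); A is a generic non-negative dictionary.
import numpy as np
from scipy.optimize import nnls

def l0_constrained_fit(A, y, k):
    """A: (n_meas, n_dirs) dictionary, y: (n_meas,) signal, k: max non-zeros."""
    support = []
    x = np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(k):
        idx = int(np.argmax(A.T @ residual))   # direction best explaining the residual
        if idx not in support:
            support.append(idx)
        coef, _ = nnls(A[:, support], y)       # non-negative refit on the support
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Tiny example: signal generated by 2 of 20 dictionary atoms.
rng = np.random.default_rng(6)
A = np.abs(rng.standard_normal((30, 20)))
x_true = np.zeros(20)
x_true[[3, 11]] = [0.7, 0.3]
y = A @ x_true
print(np.nonzero(l0_constrained_fit(A, y, k=2))[0])   # recovered support (may differ if atoms are correlated)
```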

Relevance:

30.00%

Publisher:

Abstract:

Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance, can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior-probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
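As an example of the posterior-probability family, a minimal "breaking ties" heuristic ranks unlabeled samples by the gap between their two largest class posteriors and queries the most ambiguous ones; the classifier producing the posteriors and the batch size are left unspecified here and are illustrative choices.

```python
# Breaking-ties uncertainty sampling: small margin between the two highest
# class posteriors means high uncertainty, so those samples are queried first.
import numpy as np

def breaking_ties_query(probabilities, n_queries=10):
    """probabilities: (n_samples, n_classes) posterior estimates."""
    top_two = np.sort(probabilities, axis=1)[:, -2:]
    margin = top_two[:, 1] - top_two[:, 0]
    return np.argsort(margin)[:n_queries]      # indices to hand to the user for labeling

# Example with fake posteriors for 5 pixels and 3 classes.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.34, 0.33, 0.33],
                  [0.60, 0.30, 0.10],
                  [0.50, 0.45, 0.05]])
print(breaking_ties_query(probs, n_queries=2))     # the two most uncertain pixels
```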

Relevance:

30.00%

Publisher:

Abstract:

There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth. Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
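A simplified sketch of band-limited spectral balancing of a single trace, with whitening as the special case of a zero "blueing" exponent, is given below; the band edges, sampling interval, blueing exponent, and test pulse are illustrative assumptions, not the processing parameters used in the study.

```python
# Band-limited spectral balancing of one trace: keep the phase, reshape the
# amplitude spectrum inside the chosen band (flat = whitening, ramped toward
# high frequencies = "blueing"), leave the spectrum outside the band untouched.
import numpy as np

def spectral_balance(trace, dt, f_lo, f_hi, blue_exponent=0.0):
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=dt)
    amp, phase = np.abs(spec), np.angle(spec)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    target = np.ones_like(freqs)
    target[band] = (freqs[band] / f_hi) ** blue_exponent   # 0 -> white, >0 -> blue
    new_amp = amp.copy()
    new_amp[band] = target[band] * amp[band].mean()        # balance within the band
    return np.fft.irfft(new_amp * np.exp(1j * phase), n=trace.size)

# Example: whiten a Gaussian-windowed 100 MHz test pulse between 50 and 250 MHz.
dt = 1e-9                                  # 1 ns sampling interval
t = np.arange(512) * dt
trace = np.exp(-((t - 100e-9) / 10e-9) ** 2) * np.cos(2 * np.pi * 100e6 * t)
balanced = spectral_balance(trace, dt, f_lo=50e6, f_hi=250e6)
```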