50 results for MAXIMUM LIKELIHOOD
Abstract:
Land cover classification is a key research field in remote sensing and land change science as thematic maps derived from remotely sensed data have become the basis for analyzing many socio-ecological issues. However, land cover classification remains a difficult task and it is especially challenging in heterogeneous tropical landscapes, where such maps are nonetheless of great importance. The present study aims to establish an efficient classification approach to accurately map all broad land cover classes in a large, heterogeneous tropical area of Bolivia, as a basis for further studies (e.g., land cover-land use change). Specifically, we compare the performance of parametric (maximum likelihood), non-parametric (k-nearest neighbour and four different support vector machines - SVM), and hybrid classifiers, using both hard and soft (fuzzy) accuracy assessments. In addition, we test whether the inclusion of a textural index (homogeneity) in the classifications improves their performance. We classified Landsat imagery for two dates corresponding to dry and wet seasons and found that non-parametric, and particularly SVM classifiers, outperformed both parametric and hybrid classifiers. We also found that the use of the homogeneity index along with reflectance bands significantly increased the overall accuracy of all the classifications, but particularly of SVM algorithms. We observed that improvements in producer’s and user’s accuracies through the inclusion of the homogeneity index varied depending on the land cover class. Early-growth/degraded forests, pastures, grasslands and savanna were the classes most improved, especially with the SVM radial basis function and SVM sigmoid classifiers, though with both classifiers all land cover classes were mapped with producer’s and user’s accuracies of around 90%. Our approach seems very well suited to accurately map land cover in tropical regions, thus having the potential to contribute to conservation initiatives, climate change mitigation schemes such as REDD+, and rural development policies.
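To make the setup concrete, here is a minimal sketch (not the authors' code; the feature matrix, class labels and parameter values are stand-ins) of training a radial basis function SVM on per-pixel reflectance bands plus a homogeneity texture band with scikit-learn:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 6 reflectance bands plus a GLCM homogeneity band per pixel,
# with 5 hypothetical land cover classes; replace with real training samples.
rng = np.random.default_rng(0)
X = rng.random((500, 7))
y = rng.integers(0, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Radial basis function kernel, one of the four SVM variants compared in the study.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
print("overall (hard) accuracy:", clf.score(X_test, y_test))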
Abstract:
We focus on full-rate, fast-decodable space–time block codes (STBCs) for 2 x 2 and 4 x 2 multiple-input multiple-output (MIMO) transmission. We first derive conditions and design criteria for reduced-complexity maximum-likelihood (ML) decodable 2 x 2 STBCs, and we apply them to two families of codes that were recently discovered. Next, we derive a novel reduced-complexity 4 x 2 STBC, and show that it outperforms all previously known codes with certain constellations.
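For context, the sketch below shows the baseline exhaustive maximum-likelihood detection rule, argmin over s of ||y - H s||^2, for a generic 2 x 2 spatial-multiplexing system with QPSK symbols (a hypothetical setup, not the codes studied in the paper); fast-decodable STBCs are designed so that this joint search can be split into lower-dimensional sub-searches:

import itertools
import numpy as np

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def ml_detect(y, H, constellation=qpsk):
    # Brute-force ML: return the symbol vector s minimising ||y - H s||^2.
    best, best_metric = None, np.inf
    for s in itertools.product(constellation, repeat=H.shape[1]):
        s = np.array(s)
        metric = np.linalg.norm(y - H @ s) ** 2
        if metric < best_metric:
            best, best_metric = s, metric
    return best

# Toy example: random Rayleigh channel, known transmitted pair, small noise.
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s_true = qpsk[[0, 3]]
y = H @ s_true + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(ml_detect(y, H), "vs", s_true)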
Abstract:
We design powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Since optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding, we introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block lengths. This family competes favorably with multiplexed parallel turbo codes for nonergodic channels.
Abstract:
Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has a non-vanishing minimum determinant of 1/7, slightly lower than the Golden code, but is fast-decodable, i.e., it allows reduced-complexity maximum likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed by using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very closely to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decoding property.
Abstract:
A new debate over the speed of convergence in per capita income across economies is under way. Cross-sectional estimates support the idea of slow convergence of about two percent per year. Panel data estimates support the idea of fast convergence of five, ten or even twenty percent per year. This paper shows that, if you "do it right", even the panel data estimation method yields the result of slow convergence of about two percent per year.
Abstract:
In this paper we analyse the observed systematic differences in costs for teaching hospitals (TH henceforth) in Spain. Concern has been voiced regarding the existence of a bias in the financing of THs once prospective budgets are in the arena for hospital finance, and claims for adjusting to take into account the legitimate extra costs of teaching on hospital expenditure are well grounded. We focus on the estimation of the impact of teaching status on average cost. We used a version of a multiproduct hospital cost function taking into account some relevant factors from which to derive the observed differences. We assume that the relationship between the explanatory and the dependent variables follows a flexible form for each of the explanatory variables. We also model the underlying covariance structure of the data. We assumed two qualitatively different sources of variation: random effects and serial correlation. Random variation refers to both general level variation (through the random intercept) and the variation specifically related to teaching status. We postulate that the impact of the random effects is predominant over the impact of the serial correlation effects. The model is estimated by restricted maximum likelihood. Our results show that costs are 9% higher (15% in the case of median costs) in teaching than in non-teaching hospitals. That is, teaching status legitimately explains no more than half of the observed difference in actual costs. The impact on costs of the teaching factor depends on the number of residents, with an increase of 51.11% per resident for hospitals with fewer than 204 residents (third quartile of the number of residents) and 41.84% for hospitals with more than 204 residents. In addition, the estimated dispersion is higher among teaching hospitals. As a result, due to the considerable observed heterogeneity, results should be interpreted with caution. From a policy-making point of view, we conclude that since a higher relative burden for medical training is under public hospital command, an explicit adjustment to the extra costs that the teaching factor imposes on hospital finance is needed before hospital competition for inpatient services takes place.
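As a rough illustration of the estimation strategy only (a random-intercept sketch with hypothetical column names and synthetic data; the paper's flexible functional forms, random teaching effect and serial correlation are omitted), a mixed-effects cost model can be fitted by restricted maximum likelihood with statsmodels:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel of hospitals with a hospital-level random intercept.
rng = np.random.default_rng(1)
n_hosp, n_years = 60, 5
df = pd.DataFrame({
    "hospital": np.repeat(np.arange(n_hosp), n_years),
    "teaching": np.repeat(rng.integers(0, 2, n_hosp), n_years),
    "beds": np.repeat(rng.normal(400, 100, n_hosp), n_years),
})
u = np.repeat(rng.normal(0, 0.1, n_hosp), n_years)  # random intercept per hospital
df["log_cost"] = 8 + 0.09 * df["teaching"] + 0.001 * df["beds"] + u + rng.normal(0, 0.05, len(df))

# REML estimation of the fixed effects and the variance components.
model = smf.mixedlm("log_cost ~ teaching + beds", df, groups=df["hospital"])
result = model.fit(reml=True)
print(result.summary())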
Abstract:
The aim of this paper is to simulate the effects of the Spanish 1999 tax reform on married women's labour behaviour and welfare in a partial equilibrium context. We estimate by maximum likelihood two models of labour supply which take into account the characteristics of the budget constraint. The simulation exercises suggest that the new tax can have significant effects on females' labour supply decisions and seems to increase individuals' welfare.
Abstract:
We set up a dynamic model of firm investment in which liquidity constraints enter explicitly into the firm's maximization problem. The optimal policy rules are incorporated into a maximum likelihood procedure which estimates the structural parameters of the model. Investment is positively related to the firm's internal financial position when the firm is relatively poor. This relationship disappears for wealthy firms, which can reach their desired level of investment. Borrowing is an increasing function of financial position for poor firms. This relationship is reversed as a firm's financial position improves, and large firms hold little debt. Liquidity-constrained firms may have unused credit lines and the capacity to invest further if they so desire. However, the fear that liquidity constraints will become binding in the future induces them to invest only when internal resources increase. We estimate the structural parameters of the model and use them to quantify the importance of liquidity constraints on firms' investment. We find that liquidity constraints matter significantly for the investment decisions of firms. If firms can finance investment by issuing fresh equity, rather than with internal funds or debt, average capital stock is almost 35% higher over a period of 20 years. Transitory shocks to internal funds have a sustained effect on the capital stock. This effect lasts for several periods and is more persistent for small firms than for large firms. A 10% negative shock to firm fundamentals reduces the capital stock of firms which face liquidity constraints by almost 8% over a period, as opposed to only 3.5% for firms which do not face these constraints.
Abstract:
Several estimators of the expectation, median and mode of the lognormal distribution are derived. They aim to be approximately unbiased, efficient, or to have a minimax property in the class of estimators we introduce. The small-sample properties of these estimators are assessed by simulations and, when possible, analytically. Some of these estimators of the expectation are far more efficient than the maximum likelihood or the minimum-variance unbiased estimator, even for substantial sample sizes.
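For reference, the maximum likelihood estimators of these quantities follow directly from the log-scale ML estimates: for X ~ LogNormal(mu, sigma^2) the expectation is exp(mu + sigma^2/2), the median is exp(mu) and the mode is exp(mu - sigma^2). A small illustrative sketch (plug-in ML estimators only, not the paper's proposed estimators):

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 0.8, 30
x = rng.lognormal(mu, sigma, size=n)

logs = np.log(x)
mu_hat = logs.mean()
sig2_hat = logs.var()  # ML variance estimate (divides by n)

est_mean = np.exp(mu_hat + sig2_hat / 2)  # ML estimator of the expectation
est_median = np.exp(mu_hat)               # ML estimator of the median
est_mode = np.exp(mu_hat - sig2_hat)      # ML estimator of the mode
print(est_mean, est_median, est_mode, "sample mean:", x.mean())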
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimations in samples which have the typical length of actual experiments. Better small sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are considered random variables, and the distribution of those random variables is estimated.
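A toy sketch of the unobserved-heterogeneity strategy (an assumed Gaussian setup, not the paper's learning model): individual parameters theta_i are treated as draws from a common distribution, integrated out of the likelihood, and only the parameters of that distribution are estimated by maximum likelihood:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Assumed setup: theta_i ~ N(mu, tau^2) is unobserved; we see x_ij ~ N(theta_i, 1).
rng = np.random.default_rng(0)
n_ind, n_obs = 100, 8
theta = rng.normal(0.5, 1.0, n_ind)
x = theta[:, None] + rng.standard_normal((n_ind, n_obs))

def neg_loglik(params):
    mu, log_tau = params
    tau2 = np.exp(log_tau) ** 2
    # With the within-individual variance fixed at 1, the individual mean is
    # sufficient for (mu, tau): xbar_i ~ N(mu, tau^2 + 1/n_obs) marginally.
    xbar = x.mean(axis=1)
    return -norm.logpdf(xbar, loc=mu, scale=np.sqrt(tau2 + 1.0 / n_obs)).sum()

res = minimize(neg_loglik, x0=[0.0, 0.0])
print("mu_hat, tau_hat:", res.x[0], np.exp(res.x[1]))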
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) Algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
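For orientation, the classical ML-EM update referred to above (the MLE algorithm, not FMAPE itself) can be sketched as follows; A is an assumed system matrix, y the measured counts, and the iteration starts from the uniform image the abstract recommends:

import numpy as np

def mlem(A, y, n_iter=50):
    # Multiplicative EM update x <- x * A^T(y / Ax) / A^T 1; preserves positivity.
    x = np.ones(A.shape[1])           # uniform initial image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float), where=proj > 0)
        x *= (A.T @ ratio) / sens
    return x

# Toy example with a random system matrix and Poisson-distributed counts.
rng = np.random.default_rng(0)
A = rng.random((40, 16))
x_true = rng.random(16) * 10
y = rng.poisson(A @ x_true).astype(float)
print(mlem(A, y)[:4], x_true[:4])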
Abstract:
A new statistical parallax method using the Maximum Likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, kinematic characteristics and spatial distribution of a given sample. This method has been developed for the exploitation of the Hipparcos data and presents several improvements with respect to the previous ones: the effects of the selection of the sample, the observational errors, the galactic rotation and the interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors is based on the extensive use of numerical methods, so avoiding the need for simplification of the equations and thus the bias they could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.
Abstract:
The absolute K magnitudes and kinematic parameters of about 350 oxygen-rich Long-Period Variable stars are calibrated, by means of an up-to-date maximum-likelihood method, using HIPPARCOS parallaxes and proper motions together with radial velocities and, as additional data, periods and V-K colour indices. Four groups, differing by their kinematics and mean magnitudes, are found. For each of them, we also obtain the distributions of magnitude, period and de-reddened colour of the base population, as well as de-biased period-luminosity-colour relations and their two-dimensional projections. The SRa semiregulars do not seem to constitute a separate class of LPVs. The SRb appear to belong to two populations of different ages. In a PL diagram, they constitute two evolutionary sequences towards the Mira stage. The Miras of the disk appear to pulsate on a lower-order mode. The slopes of their de-biased PL and PC relations are found to be very different from the ones of the Oxygen Miras of the LMC. This suggests that a significant number of so-called Miras of the LMC are misclassified. This also suggests that the Miras of the LMC do not constitute a homogeneous group, but include a significant proportion of metal-deficient stars, suggesting a relatively smooth star formation history. As a consequence, one may not trivially transpose the LMC period-luminosity relation from one galaxy to the other.
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope. The method can also be applied to ground-based images.
Abstract:
Background: The RPS4 gene encodes ribosomal protein S4, a very well-conserved protein present in all kingdoms. In primates, RPS4 is encoded by two functional genes located on both sex chromosomes: the RPS4X and RPS4Y genes. In humans, RPS4Y is duplicated and the Y chromosome therefore carries a third functional paralog: RPS4Y2, which presents a testis-specific expression pattern. Results: DNA sequence analysis of the intronic and cDNA regions of RPS4Y genes from species covering the entire primate phylogeny showed that the duplication event leading to the second Y-linked copy occurred after the divergence of New World monkeys, about 35 million years ago. Maximum likelihood analyses of the synonymous and non-synonymous substitutions revealed that positive selection was acting on the RPS4Y2 gene in the human lineage, which represents the first evidence of positive selection on a ribosomal protein gene. Putative positive amino acid replacements affected the three domains of the protein: one of these changes is located in the KOW protein domain and affects the unique invariable position of this motif, and might thus have a dramatic effect on the protein function. Conclusion: Here, we shed new light on the evolutionary history of the RPS4Y gene family, especially on that of RPS4Y2. The results suggest that the RPS4Y1 gene might be maintained to compensate gene dosage between sexes, while RPS4Y2 might have acquired a new function, at least in the lineage leading to humans.