902 results for Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood
Abstract:
Spatial heterogeneity, spatial dependence and spatial scale constitute key features of spatial analysis of housing markets. However, the common practice of modelling spatial dependence as being generated by spatial interactions through a known spatial weights matrix is often not satisfactory. While existing estimators of spatial weights matrices are based on repeat sales or panel data, this paper takes this approach to a cross-section setting. Specifically, based on an a priori definition of housing submarkets and the assumption of a multifactor model, we develop maximum likelihood methodology to estimate hedonic models that facilitate understanding of both spatial heterogeneity and spatial interactions. The methodology, based on statistical orthogonal factor analysis, is applied to the urban housing market of Aveiro, Portugal, at two different spatial scales.
Abstract:
Monoclonal IgG are commonly observed in various B cell disorders, of which multiple myeloma is the most clinically relevant. In a series of serum samples, we identified by immunofixation 73 monoclonal IgG, including 63 IgG(1), 4 IgG(2), 5 IgG(3), and 1 IgG(4). The light chains were of kappa type in 45 cases, and of lambda type in 28 cases. These monoclonal IgG were further characterized by high resolution two-dimensional polyacrylamide gel electrophoresis (2-DE) in various isoelectric focusing conditions, as well as by 3-DE (2-DE of the proteins extracted from agarose after serum protein agarose electrophoresis). After 2-DE, 38 out of 73 monoclonal gamma chains (52%) were visualized using immobilized pH 3-10 gradients for isoelectric focusing. In 6 cases (8%), gamma chains were only detected using alkaline immobilized pH 6-11 gradients. In 3 cases (4%), 3-DE revealed monoclonal gamma chains hidden by polyclonal gamma chains. Finally, in 26 cases (36%), no monoclonal gamma chains were clearly visualized. Sixty-one monoclonal light chains (84%) were detected using immobilized pH 3-10 gradients, whereas 12 (16%) were not. Monoclonal gamma chains and light chains were highly heterogeneous in terms of pI and M(r). However, a statistically significant correlation (P<0.05) was observed between the position of the monoclonal IgG in agarose gel and the pI of their heavy and light chains (R=0.733, multiple linear regression). Because of the extreme diversity of their heavy and light chains, it appears that a classification of monoclonal IgG based only on their electrophoretic properties is not possible.
Abstract:
BACKGROUND: An objective measurement of surgical procedure outcomes is inherent to professional practice quality control; this especially applies in orthopaedics to joint replacement outcomes. A self-administered questionnaire offers an attractive alternative to the surgeon's judgement but is infrequently used in France for these purposes. The British questionnaire, the 12-item Oxford Hip Score (OHS), was selected for this study because of its ease of use. HYPOTHESIS: The objective of this study was to validate the French translation of the self-assessment 12-item Oxford Hip Score and compare its results with those of the reference functional scores: the Harris Hip Score (HHS) and the Postel-Merle d'Aubigné (PMA) score. MATERIALS AND METHODS: Based on a clinical series of 242 patients who were candidates for total hip arthroplasty, the French translation of this questionnaire was validated. Its coherence was also validated by comparing the preoperative data with the data obtained from the two other reference clinical scores. RESULTS: The translation was validated using the forward-backward translation procedure from French to English, with correction of all differences or mistranslations after systematized comparison with the original questionnaire in English. The mean overall OHS score was 43.8 points (range, 22-60 points) with similarly good distribution of the overall values of the three scores compared. The correlation was excellent between the OHS and the HHS, but an identical correlation between the OHS and the PMA was only obtained for the association of the pain and function parameters, after excluding the mobility criterion, which is relatively over-represented in the PMA score. DISCUSSION AND CONCLUSION: Subjective questionnaires that contribute a personal appreciation of the results of arthroplasty by the patient can easily be applied on a large scale.
This study made a translated and validated version of an internationally recognized, reliable self-assessment score available to French orthopaedic surgeons. The results obtained encourage us to use this questionnaire as a complement to the classical evaluation scores and methods.
Abstract:
Gene duplication and neofunctionalization are known to be important processes in the evolution of phenotypic complexity. They account for important evolutionary novelties that confer ecological adaptation, such as the major histocompatibility complex (MHC), a multigene family crucial to the vertebrate immune system. In birds, two MHC class II β (MHCIIβ) exon 3 lineages have been recently characterized, and two hypotheses for the evolutionary history of MHCIIβ lineages were proposed. These lineages could have arisen either by 1) an ancient duplication and subsequent divergence of one paralog or by 2) recent parallel duplications followed by functional convergence. Here, we compiled a data set consisting of 63 MHCIIβ exon 3 sequences from six avian orders to distinguish between these hypotheses and to understand the role of selection in the divergent evolution of the two avian MHCIIβ lineages. Based on phylogenetic reconstructions and simulations, we show that a unique duplication event preceding the major avian radiations gave rise to two ancestral MHCIIβ lineages that were each likely lost once later during avian evolution. Maximum likelihood estimation shows that following the ancestral duplication, positive selection drove a radical shift from basic to acidic amino acid composition of a protein domain facing the α-chain in the MHCII α β-heterodimer. Structural analyses of the MHCII α β-heterodimer highlight that three of these residues are potentially involved in direct interactions with the α-chain, suggesting that the shift following duplication may have been accompanied by coevolution of the interacting α- and β-chains. These results provide new insights into the long-term evolutionary relationships among avian MHC genes and open interesting perspectives for comparative and population genomic studies of avian MHC evolution.
Abstract:
Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has a non-vanishing minimum determinant of 1/7, slightly lower than that of the Golden code, but is fast-decodable, i.e., it allows reduced-complexity maximum likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed by using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very closely to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decoding property.
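The full SST-TCM decoder (sphere-decoded branch metrics over lattice partitions) is beyond a short sketch, but the Viterbi recursion it relies on is the standard dynamic program. A minimal sketch on a toy hidden-state problem, with all probability tables invented for illustration:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence via the Viterbi dynamic program (log domain)."""
    # V[t][s]: best log-probability of any path ending in state s at time t
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + math.log(trans_p[prev][s] * emit_p[s][obs[t]])
            back[t][s] = prev
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

In the paper's setting the "states" are trellis nodes and each branch metric comes from a sphere decoder rather than from log-probabilities, but the recursion and traceback are the same.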
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimations in samples which have the typical length of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are considered random variables, and the distribution of those random variables is estimated.
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
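For reference, the maximum likelihood iteration underlying reconstructions of this kind is usually implemented as the multiplicative MLEM update, in which the current image is forward projected, compared ratio-wise with the measured counts, and back projected. A minimal sketch on an invented 3-detector, 2-pixel system (the system matrix and data below are illustrative, not from the paper):

```python
def mlem(A, y, n_iter=500):
    """MLEM reconstruction. A[i][j] = prob. that detector i records an emission from pixel j."""
    n_det, n_pix = len(A), len(A[0])
    # sensitivity of each pixel (column sums of A)
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    # uniform initial image, matching the abstract's recommended starting point
    lam = [1.0] * n_pix
    for _ in range(n_iter):
        # forward projection of the current image estimate
        proj = [sum(A[i][j] * lam[j] for j in range(n_pix)) for i in range(n_det)]
        # multiplicative update: back-project the data/projection ratios
        lam = [lam[j] * sum(A[i][j] * y[i] / proj[i] for i in range(n_det)) / sens[j]
               for j in range(n_pix)]
    return lam
```

With noiseless, consistent data the iteration converges to the image that reproduces the measurements exactly; the FMAPE algorithm of the abstract additionally folds an entropy prior into this update.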
Abstract:
If there are large extra dimensions and the fundamental Planck scale is at the TeV scale, then the question arises of whether ultrahigh energy cosmic rays might probe them. We study the neutrino-nucleon cross section in these models. The elastic forward scattering is analyzed in some detail, hoping to clarify earlier discussions. We also estimate the black hole production rate. We study energy loss from graviton mediated interactions and conclude that they cannot explain the cosmic ray events above the GZK energy limit. However, these interactions could start horizontal air showers with characteristic profile and at a rate higher than in the standard model.
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which the claims are settled and the variability between the severity of the claims from different accident years. Large changes in these processes will generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which firstly identifies and quantifies these two influences and secondly determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of the stochastic models is that they provide measures of variability of the reserves estimates. The first model (PDM) combines a Dirichlet–Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves the first one by combining two conjugate families: Poisson–Gamma (for the distribution of the ultimate amounts) and Dirichlet–Multinomial (for the distribution of the incremental claims payments). It was found that the second model makes it possible to express the speed variability in the reporting process and the development of the claims severity as a function of two of the above-mentioned distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them we can decide on the adequacy of the claims reserve estimation method. The parameters have been estimated by the method of moments and maximum likelihood. The results were tested using chosen simulation data and then using real data originating from three lines of business: Property/Casualty, General Liability, and Accident Insurance.
These data include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation will imply high expectations for the future payments, resulting in high claims reserves estimates. The negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation when claims are reported rapidly and fewer claims remain expected subsequently. The extreme case appears when all claims are reported at the same time, leading to expectations for the future payments of zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
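For context, the Chain-Ladder method discussed above projects cumulative payments to ultimate with volume-weighted age-to-age development factors. A minimal sketch on an invented 3-year run-off triangle (not data from the thesis):

```python
def chain_ladder(triangle):
    """triangle: rows of cumulative payments; row i is observed for the first n - i development years."""
    n = len(triangle)
    # volume-weighted age-to-age development factors
    f = []
    for k in range(n - 1):
        num = sum(row[k + 1] for row in triangle if len(row) > k + 1)
        den = sum(row[k] for row in triangle if len(row) > k + 1)
        f.append(num / den)
    # roll each accident year's latest diagonal forward to ultimate
    ultimates = []
    for row in triangle:
        u = row[-1]
        for k in range(len(row) - 1, n - 1):
            u *= f[k]
        ultimates.append(u)
    # reserve = projected ultimate minus payments to date
    reserves = [u - row[-1] for u, row in zip(ultimates, triangle)]
    return f, ultimates, reserves
```

On the toy triangle [[100, 200, 220], [110, 220], [120]] the factors are 2.0 and 1.1, giving reserves of 0, 22 and 144 for the three accident years.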
Abstract:
The objective of this work was to verify the existence of a lethal locus in a eucalyptus hybrid population, and to quantify the segregation distortion in linkage group 3 of the Eucalyptus genome. An E. grandis x E. urophylla hybrid population, which segregates for rust resistance, was genotyped with 19 microsatellite markers belonging to linkage group 3 of the Eucalyptus genome. To quantify the segregation distortion, maximum likelihood (ML) models, specific to outbreeding populations, were used. These models consider the observed marker genotypes and the lethal locus viability as parameters. The ML solutions were obtained using the expectation-maximization algorithm. A lethal locus in linkage group 3 was verified and mapped, with high confidence, between the microsatellites EMBRA 189 and EMBRA 122. This lethal locus causes intense gametic selection from the male side. Its map position is 25 cM from the locus which controls rust resistance in this population.
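The expectation-maximization machinery used above is specific to the authors' outbreeding-population models, but its general mechanics can be shown on the classic two-coin mixture toy problem, where the coin used in each session is the unobserved variable (data and starting values follow the standard textbook example, not the paper):

```python
def em_two_coins(draws, theta_a, theta_b, n_iter=20):
    """EM for a mixture of two biased coins; which coin produced each session is unobserved."""
    for _ in range(n_iter):
        a_h = a_t = b_h = b_t = 0.0
        for heads, tails in draws:
            # E-step: posterior probability that this session used coin A
            like_a = theta_a ** heads * (1 - theta_a) ** tails
            like_b = theta_b ** heads * (1 - theta_b) ** tails
            p_a = like_a / (like_a + like_b)
            a_h += p_a * heads
            a_t += p_a * tails
            b_h += (1 - p_a) * heads
            b_t += (1 - p_a) * tails
        # M-step: maximum likelihood update given the expected assignments
        theta_a = a_h / (a_h + a_t)
        theta_b = b_h / (b_h + b_t)
    return theta_a, theta_b
```

In the paper's setting the unobserved quantity is the unknown marker-genotype configuration and the viability parameter takes the place of the coin biases, but the alternation of expected counts and ML updates is the same.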
Abstract:
This paper addresses the estimation of the code-phase (pseudorange) and the carrier-phase of the direct signal received from a direct-sequence spread-spectrum satellite transmitter. The signal is received by an antenna array in a scenario with interference and multipath propagation. These two effects are generally the limiting error sources in most high-precision positioning applications. A new estimator of the code- and carrier-phases is derived by using a simplified signal model and the maximum likelihood (ML) principle. The simplified model consists essentially of gathering all signals, except for the direct one, in a component with unknown spatial correlation. The estimator exploits the knowledge of the direction-of-arrival of the direct signal and is much simpler than other estimators derived under more detailed signal models. Moreover, we present an iterative algorithm that is adequate for a practical implementation and explores an interesting link between the ML estimator and a hybrid beamformer. The mean squared error and bias of the new estimator are computed for a number of scenarios and compared with those of other methods. The presented estimator and the hybrid beamforming outperform the existing techniques of comparable complexity and attain, in many situations, the Cramér–Rao lower bound of the problem at hand.
Abstract:
The evolution of the economic environment and of the value chains and business models of organizations increases the importance of coordination, which can be defined as the management of interdependencies between tasks carried out by different actors contributing to a common goal. In organizations, a considerable number of means are put into action in order to manage such interdependencies. In this regard, information and communication technologies (ICT), whose various forms are nowadays disseminated, integrated and connected in both private and professional environments, offer important support to coordination activities. In this work, we have investigated the following research question: how do the ubiquity and interconnectivity of ICT modify coordination mechanisms? Through four information systems studies conducted according to a design science methodology, we have examined this question at two levels: that of strategic alignment between business and information systems strategy, where coordination concerns interdependencies between activities; and that of tasks, where coordination concerns interdependencies between individual interactions. At the strategic level, we observe that ubiquity and interconnectivity allow coordination mechanisms to be transposed from one field to another. By facilitating various forms of copresence and visibility, they also increase proximity in asynchronous or distant coordination situations. At the task level, ICT offer actors a very high potential for participation and proximity. Such technologies make it possible to establish accountability, improve common understanding and anticipate the unfolding and integration of tasks. The main contribution emerging from these four studies is that practitioners can use the ubiquity and interconnectivity of ICT to allow individuals to communicate and adjust their actions in order to define, reach and redefine the goals of common work.
Abstract:
The present thesis investigated the importance of semantics in generating inferences during discourse processing. Three aspects of semantics, gender stereotypes, implicit causality information and proto-role properties, were used to investigate whether semantics is activated elaboratively during discourse comprehension and what its relative importance is in backward inferencing compared to discourse/structural cues. Visual world eye-tracking studies revealed that semantics plays an important role in both backward and forward inferencing: Gender stereotypes and implicit causality information are activated elaboratively during online discourse comprehension. Moreover, gender stereotypes, implicit causality and proto-role properties of verbs are all used in backward inferencing. Importantly, the studies demonstrated that semantic cues are weighed against discourse/structural cues. When the structural cues consist of a combination of cues that have been independently shown to be important in backward inferencing, semantic effects may be masked, whereas when the structural cues consist of a combination of fewer prominent cues, semantics can have an earlier effect than structural factors in pronoun resolution. In addition, the type of inference matters, too: During anaphoric inferencing semantics has a prominent role, while discourse/structural salience attains more prominence during non-anaphoric inferencing. Finally, semantics exhibits a strong role in inviting new inferences to revise inferences made earlier, even when the additional inference is not needed to establish coherence in discourse. The findings are generally in line with the Mental Model approaches. Two extended model versions are presented that incorporate the current findings into the earlier literature.
These models allow both forward and backward inferencing to occur at any given moment during the course of processing; they also allow semantic and discourse/structural cues to contribute to both of these processes. However, while Mental Model 1 does not assume interactions between semantic and discourse/structural factors in forward inferencing, Mental Model 2 does assume such a link.
Abstract:
The Kalman filter is a powerful recursive mathematical tool that plays an increasingly vital role in innumerable fields of study. The filter has been put to service in a multitude of studies involving both time series modelling and financial time series modelling. Modelling time series data in Computational Market Dynamics (CMD) can be accomplished using the Jablonska-Capasso-Morale (JCM) model. The maximum likelihood approach has traditionally been utilised to estimate the parameters of the JCM model. The purpose of this study is to discover whether the Kalman filter can be effectively utilized in CMD. An ensemble Kalman filter (EnKF) with 50 ensemble members, applied to US sugar prices spanning January 1960 to February 2012, was employed for this work. The real data and Kalman filter trajectories showed no significant discrepancies, indicating satisfactory performance of the technique. Since only US sugar prices were utilized, it would be interesting to discover the nature of the results if other data sets are employed.
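The EnKF referenced above propagates an ensemble of states and updates each member against a perturbed observation. The thesis pairs it with the JCM price model; the update step itself can be sketched on a toy static scalar state (ensemble size, noise levels and data below are illustrative, not the thesis's configuration):

```python
import random

def enkf_scalar(obs, n_ens=30, obs_var=1.0, init_mean=0.0, init_spread=4.0, seed=1):
    """Stochastic ensemble Kalman filter for a static scalar state observed directly (H = 1)."""
    rng = random.Random(seed)
    ens = [init_mean + rng.gauss(0.0, init_spread) for _ in range(n_ens)]
    for y in obs:
        m = sum(ens) / n_ens
        var = sum((x - m) ** 2 for x in ens) / (n_ens - 1)  # ensemble forecast variance
        gain = var / (var + obs_var)                        # scalar Kalman gain
        # each member assimilates its own perturbed copy of the observation
        ens = [x + gain * (y + rng.gauss(0.0, obs_var ** 0.5) - x) for x in ens]
    return sum(ens) / n_ens  # posterior mean estimate
```

A real application such as the thesis's would insert a model forecast step between updates (here the state is static) and typically add covariance inflation to counter ensemble collapse.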
Abstract:
Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss the optimal designs under three optimality criteria: D-, A- and Q-optimal designs. We note that the classical designs are optimal only if the assumed model is correct. Due to the nature of prediction made from ALT experimental data, obtained under stress levels higher than the normal condition, extrapolation is encountered. In such a case, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, the method of construction of robust designs is also explored.
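Of the three criteria mentioned, D-optimality (maximizing the determinant of the information matrix) is the easiest to illustrate. A toy sketch for the simple linear model y = b0 + b1*x on [-1, 1], which is a generic textbook illustration and not the thesis's step-stress PH design problem:

```python
def d_criterion(xs):
    """Determinant of the information matrix X'X for y = b0 + b1*x at design points xs."""
    n = len(xs)
    s1 = sum(xs)
    s2 = sum(x * x for x in xs)
    # X'X = [[n, s1], [s1, s2]], so det = n*s2 - s1^2
    return n * s2 - s1 * s1

# Candidate 4-point designs on [-1, 1]
endpoint_design = [-1.0, -1.0, 1.0, 1.0]   # all mass at the interval endpoints
spread_design = [-1.0, -1 / 3, 1 / 3, 1.0]  # points spread across the interval
```

Here the endpoint design maximizes the determinant, reflecting the classic result that the D-optimal design for a straight line puts all observations at the extremes; in ALT this corresponds to the tension the thesis notes, since extreme stress levels are informative only insofar as the assumed model extrapolates correctly.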