902 results for "Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood"


Relevance: 100.00%

Abstract:

Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis for simultaneously determining the CP and selecting the most likely group members and a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
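
The geometric core of the method lends itself to a compact sketch. Below is a minimal least-squares version of the great-circle idea (not the paper's full maximum-likelihood machinery, which also selects members and propagates uncertainties): the pole of each star's proper-motion great circle must be orthogonal to the CP direction, so the CP is the unit vector minimizing the summed squared projections onto the poles. All names are illustrative.

```python
import numpy as np

def convergent_point(ra, dec, pm_ra, pm_dec):
    """Estimate a convergent point from proper motions (least-squares sketch).

    ra, dec in radians; pm_ra = mu_alpha* (includes cos(dec)), pm_dec.
    """
    # Unit vector towards each star.
    r = np.stack([np.cos(dec) * np.cos(ra),
                  np.cos(dec) * np.sin(ra),
                  np.sin(dec)], axis=-1)
    # Local tangent-plane basis: east and north unit vectors.
    east = np.stack([-np.sin(ra), np.cos(ra), np.zeros_like(ra)], axis=-1)
    north = np.stack([-np.sin(dec) * np.cos(ra),
                      -np.sin(dec) * np.sin(ra),
                      np.cos(dec)], axis=-1)
    # Proper-motion direction on the sky, then the pole of its great circle.
    v = pm_ra[:, None] * east + pm_dec[:, None] * north
    poles = np.cross(r, v / np.linalg.norm(v, axis=1, keepdims=True))
    # The CP is (anti)parallel to the eigenvector of sum(n n^T) with the
    # smallest eigenvalue: it minimizes sum_i (n_i . c)^2 over unit vectors c.
    # Note the sign of c is ambiguous (CP vs. its antipode).
    M = poles.T @ poles
    _, V = np.linalg.eigh(M)
    c = V[:, 0]
    return np.arctan2(c[1], c[0]), np.arcsin(c[2])
```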

Relevance: 100.00%

Abstract:

Study I: Real Wage Determination in the Swedish Engineering Industry. This study uses the monopoly union model to examine the determination of real wages, and in particular the effects of active labour market programmes (ALMPs) on real wages, in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created in order to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect of rises in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect of positive changes in the participation rates in ALMPs, relief jobs and labour market training. This could be interpreted as meaning that the possibility of participating in an ALMP increases the utility for workers of not being employed in the industry, which in turn could raise real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect of positive changes in the unemployment rate.

Study II: Intersectoral Wage Linkages in Sweden. The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2. Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed in the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability of wages between manufacturing, construction, the wholesale and retail trade, the central government sector, and the municipalities and county councils sector. This is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability of wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence of a wage-leading role, as assumed by the Scandinavian model, for any of the sectors.

Study III: Wage and Price Determination in the Private Sector in Sweden. The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the "imperfect competition model of inflation", which assumes imperfect competition in the labour and product markets. According to the model, wages and prices are determined as a result of a "battle of mark-ups" between trade unions and firms. The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two. Thus, two cointegration relations are assumed: one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent, while an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement of 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The "offensive" devaluation of the Swedish krona by 16 per cent in 1982:4, and the move to a floating krona with its substantial depreciation at that time, affected the determination of import prices.
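
As a rough illustration of the Johansen procedure used throughout the three studies, the sketch below runs a trace test and fits a VEC model with statsmodels; the file name, column contents and lag choice are assumptions, not taken from the thesis.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Hypothetical quarterly series standing in for the study's data
# (consumer prices, import prices, nominal wages, productivity,
# unemployment); the file name is illustrative.
data = pd.read_csv("swedish_quarterly.csv", index_col=0, parse_dates=True)

# Johansen trace test: det_order=0 includes a constant,
# k_ar_diff is the number of lagged differences.
jres = coint_johansen(data, det_order=0, k_ar_diff=4)
print("trace statistics:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# Fit a VECM with the rank suggested by the trace test
# (Study III uses cointegration rank 2).
model = VECM(data, k_ar_diff=4, coint_rank=2, deterministic="ci")
res = model.fit()
print(res.beta)   # long-run cointegration vectors
print(res.alpha)  # loading (adjustment) coefficients
```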

Relevance: 100.00%

Abstract:

Using a ground-truth database covering three growing seasons, the information content of multitemporal ERS-1/-2 Synthetic Aperture Radar (SAR) data is evaluated for mapping the species inventories and the condition of agriculturally used soils and vegetation in agricultural regions of Bavaria. For this purpose, a multitemporal, per-field classification procedure adapted to radar data is developed, based on image-statistical parameters of the ERS time series. As supervised classifiers, the maximum-likelihood classifier and a neural backpropagation network are compared. The overall accuracies based on radar image channels vary between 75 and 85%. It is further shown that interferometric coherence and the combination with image channels from optical sensors (Landsat-TM, SPOT-PAN and IRS-1C-PAN) improve the classification. Likewise, the classification results can be improved by a preliminary coarse segmentation of the study area into physiographically homogeneous spatial units. Beyond land-use classification, further bio- and soil-physical parameters are derived from the SAR data using regression models. The focus is on near-surface soil moisture of bare or sparsely vegetated fields and on the biomass of agricultural crops. The results show that soil moisture can be measured with ERS-1/-2 SAR data if information on surface roughness is available. Regarding the biophysical parameters, significant relationships between the fresh and dry mass of the vegetation canopy of various cereals and the radar signal can be demonstrated. The biomass information can be used to correct growth models and can help to increase the accuracy of yield estimates.
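
The supervised maximum-likelihood step can be sketched compactly. The following is a minimal per-class Gaussian classifier of the kind described (the thesis additionally works per field and compares against a backpropagation network); the feature layout and names are illustrative.

```python
import numpy as np

def gml_train(X, y):
    """Fit per-class Gaussians for a maximum-likelihood classifier.

    X holds one multitemporal feature vector per sample (e.g. backscatter
    per ERS acquisition), y the class labels from the ground-truth data.
    """
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def gml_classify(X, params):
    """Assign each sample to the class with maximal Gaussian log-likelihood."""
    classes = list(params)
    scores = np.empty((len(X), len(classes)))
    for j, c in enumerate(classes):
        mu, cov = params[c]
        d = X - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        # Log of the multivariate normal density (constant term dropped).
        scores[:, j] = -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet)
    return np.array(classes)[np.argmax(scores, axis=1)]
```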

Relevance: 100.00%

Abstract:

The phylogeny of the Western Palaearctic long-eared bats (Mammalia, Chiroptera, Plecotus) – a molecular analysis. The long-eared bats are a bat genus that colonizes almost all Western Palaearctic habitats up to the Arctic Circle and that is puzzling in many respects. Numerous forms and varieties have been described in the past, yet for a long time only two species were recognized in Europe. Further species were known from North Africa, the Canary Islands and Asia, but their species status was also repeatedly questioned. In this dissertation I used molecular data – partial sequences of the mitochondrial genes 16S rRNA and ND1, as well as the mitochondrial control region – to analyse the phylogenetic relationships within and between the lineages of the Western Palaearctic long-eared bats. The best-fitting substitution models were computed, and phylogenetic trees were constructed with four different methods: neighbor joining (NJ), maximum likelihood (ML), maximum parsimony (MP) and Bayesian inference. Six lineages of long-eared bats are genetically differentiated at the species level: Plecotus auritus, P. austriacus, P. balensis, P. christii, P. sardus and P. macrobullaris. For the taxa P. teneriffae, P. kolombatovici and P. begognae, the genetic data from single mitochondrial genes alone are not sufficient to settle their taxonomic rank. In this dissertation I describe three new taxa: Plecotus sardus, P. kolombatovici gaisleri (=Plecotus teneriffae gaisleri, Benda et al. 2004) and P. macrobullaris alpinus (=Plecotus alpinus, Kiefer & Veith 2002). Morphological characters, especially for identification in the field, are presented. Three of the seven species are polytypic: P. auritus (a western and an eastern European lineage, a Sardinian lineage and a recently discovered Caucasian lineage), Plecotus kolombatovici (P. k. kolombatovici, P. k. gaisleri and P. k. ssp.) and P. macrobullaris (P. m. macrobullaris and P. m. alpinus). The distribution ranges of most species are mapped here for the first time exclusively on the basis of genetically assigned individuals. The ecological niche differentiation of the now recognized forms, especially in areas of sympatry, offers an exciting and rewarding field for future research. Not least, the discovery of a considerable amount of cryptic diversity within the Western Palaearctic long-eared bats must also be reflected in the development of dedicated conservation concepts.
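
For readers unfamiliar with the four tree-building methods, the sketch below shows the simplest of them (neighbor joining) with Biopython; the input file name is hypothetical, and the identity distance model merely stands in for the best-fitting substitution models selected in the thesis.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical alignment of the partial 16S rRNA sequences.
alignment = AlignIO.read("plecotus_16s.fasta", "fasta")

# Pairwise distances under a simple identity model.
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, "nj")

nj_tree = constructor.build_tree(alignment)
Phylo.draw_ascii(nj_tree)
```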

Relevance: 100.00%

Abstract:

This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A frequently considered model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs; afterwards the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe the diffusion X between spikes and to estimate the coefficients beta(·) and sigma(·) of the SDE. Nevertheless, the thresholds x_0 and S must be determined to fix the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. This thesis discusses four cases, assuming in turn that the membrane potential X between spikes is a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we regard as i.i.d. hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly; moreover, using LAN theory, the optimality of these estimators is shown. For the OU and CIR processes we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normal. In the final chapter the efficiency of the minimum-distance estimators is examined on simulated data; applications to real data sets and their results are also discussed in detail.
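
In the Brownian-motion-with-drift case, the hitting-time likelihood is explicit: the interspike intervals are inverse Gaussian. The sketch below simplifies the setting described above (it treats the drift and volatility as known and estimates only the threshold distance S − x_0) to illustrate the resulting maximum-likelihood fit; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(a, t, mu, sigma):
    """Negative log-likelihood of the threshold distance a = S - x_0.

    For Brownian motion with drift mu > 0 and volatility sigma started
    at x_0, the hitting time of S is inverse Gaussian with density
    f(t) = a / (sigma * sqrt(2*pi*t^3)) * exp(-(a - mu*t)^2 / (2*sigma^2*t)).
    """
    if a <= 0:
        return np.inf
    return -np.sum(np.log(a) - 0.5 * np.log(2 * np.pi * sigma**2 * t**3)
                   - (a - mu * t) ** 2 / (2 * sigma**2 * t))

# Illustrative use: mu and sigma would come from the observed path
# between spikes; t holds the i.i.d. interspike intervals.
rng = np.random.default_rng(0)
mu, sigma, a_true = 1.0, 0.5, 2.0
t = rng.wald(a_true / mu, a_true**2 / sigma**2, size=500)  # IG samples
res = minimize_scalar(neg_loglik, bounds=(1e-6, 20.0),
                      args=(t, mu, sigma), method="bounded")
print("estimated S - x_0:", res.x)
```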

Relevance: 100.00%

Abstract:

In this thesis, we study the phenomenology of selected observables in the context of the Randall-Sundrum scenario of a compactified warped extra dimension. Gauge and matter fields are assumed to live in the whole five-dimensional space-time, while the Higgs sector is localized on the infrared boundary. An effective four-dimensional description is obtained via Kaluza-Klein decomposition of the five-dimensional quantum fields. The symmetry-breaking effects due to the Higgs sector are treated exactly, and the decomposition of the theory is performed in a covariant way. We develop a formalism which allows for a straightforward generalization to scenarios with an extended gauge group compared to the Standard Model of elementary particle physics. As an application, we study the so-called custodial Randall-Sundrum model and compare the results to those of the original formulation. We present predictions for electroweak precision observables, the Higgs production cross section at the LHC, the forward-backward asymmetry in top-antitop production at the Tevatron, as well as the width difference, the CP-violating phase, and the semileptonic CP asymmetry in B_s decays.

Relevance: 100.00%

Abstract:

In this thesis we investigate several phenomenologically important properties of top-quark pair production at hadron colliders. We calculate double differential cross sections in two different kinematical setups, pair invariant-mass (PIM) and single-particle inclusive (1PI) kinematics. In pair invariant-mass kinematics we present results for the double differential cross section with respect to the invariant mass of the top-quark pair and the top-quark scattering angle. Working in the threshold region, where the pair invariant mass M is close to the partonic center-of-mass energy sqrt{hat{s}}, we factorize the partonic cross section into different energy regions. We use renormalization-group (RG) methods to resum large threshold logarithms to next-to-next-to-leading-logarithmic (NNLL) accuracy. On a technical level this is done using effective field theories, such as heavy-quark effective theory (HQET) and soft-collinear effective theory (SCET). The same techniques are applied in 1PI kinematics, leading to a calculation of the double differential cross section with respect to the transverse momentum pT and the rapidity of the top quark. We restrict the phase space such that only soft emission of gluons is possible, and perform a NNLL resummation of threshold logarithms. The resulting analytical expressions enable us to predict several observables precisely, and a substantial part of this thesis is devoted to their detailed phenomenological analysis. Matching our results in the threshold regions to the exact ones at next-to-leading order (NLO) in fixed-order perturbation theory allows us to make predictions at NLO+NNLL order in RG-improved perturbation theory, and at approximate next-to-next-to-leading order (NNLO) in fixed-order perturbation theory. We give numerical results for the invariant-mass distribution of the top-quark pair, and for the top-quark transverse-momentum and rapidity spectra. We predict the total cross section separately for both kinematics. Using these results, we analyze subleading contributions to the total cross section in 1PI and PIM kinematics originating from power corrections to the leading terms in the threshold expansions, and compare them to previous approaches. We then combine our PIM and 1PI results for the total cross section, thereby eliminating uncertainties due to these corrections. The combined predictions for the total cross section are presented as a function of the top-quark mass in the pole, minimal-subtraction (MS), and 1S mass schemes. In addition, we calculate the forward-backward (FB) asymmetry at the Tevatron in the laboratory and ttbar rest frames as a function of the rapidity and the invariant mass of the top-quark pair at NLO+NNLL. We also give binned results for the asymmetry as a function of the invariant mass and the rapidity difference of the ttbar pair, and compare these to recent measurements. As a last application we calculate the charge asymmetry at the LHC as a function of a lower rapidity cut-off for the top and anti-top quarks.

Relevance: 100.00%

Abstract:

In the year 2013, the detection of a diffuse astrophysical neutrino flux with the IceCube neutrino telescope – constructed at the geographic South Pole – was announced by the IceCube collaboration. However, the origin of these neutrinos is still unknown, as no sources have been identified to this day. Promising neutrino source candidates are blazars, a subclass of active galactic nuclei with radio jets pointing towards the Earth. In this thesis, the neutrino flux from blazars is tested with a maximum likelihood stacking approach, analyzing the combined emission from uniform groups of objects. Stacking enhances the sensitivity relative to the hitherto unsuccessful single-source searches. The analysis utilizes four years of IceCube data, including one year from the completed detector. As all results presented in this work are compatible with background, upper limits on the neutrino flux are given. It is shown that, under certain conditions, some hadronic blazar models can be challenged or even rejected. Moreover, the sensitivity of this analysis – and of any future IceCube point-source search – was enhanced by the development of a new angular reconstruction method based on a detailed simulation of photon propagation in the Antarctic ice. The median resolution for muon tracks induced by high-energy neutrinos is improved for all neutrino energies above IceCube's lower threshold of 0.1 TeV. By reprocessing the detector data and simulation from the year 2010, it is shown that the new method improves IceCube's discovery potential by 20% to 30%, depending on the declination.
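
The core of such a stacking search is a one-parameter likelihood ratio. The sketch below shows the standard unbinned form (not IceCube's production code); the per-event signal and background PDF values are assumed to be precomputed, with the signal PDF already summed over the source group with its weights.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def stacking_ts(S, B):
    """Test statistic of a maximum-likelihood stacking search.

    S[i]: stacked signal PDF value of event i (summed over sources);
    B[i]: background PDF value. The only fit parameter is the number
    of signal events ns.
    """
    N = len(S)

    def neg_logl(ns):
        return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

    res = minimize_scalar(neg_logl, bounds=(0.0, N), method="bounded")
    # TS = 2 * log(L(ns_hat) / L(0)); large values disfavour background.
    return res.x, 2.0 * (neg_logl(0.0) - res.fun)
```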

Relevance: 100.00%

Abstract:

Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. The approach produces land-use maps with sharp interregional boundaries and homogeneous regions, and proceeds in five steps. First, a GIS layer - ATKIS data - was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated using a hybrid spectral classifier combining the Gaussian maximum likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by means of a region-growing algorithm with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. This package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as a test site, with high-resolution IRS-1C imagery as the principal input data.
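
Step three, the probabilistic relaxation, can be sketched in a few lines. The version below uses the simplest possible compatibility model, in which a class only supports itself in the 4-neighbourhood; the actual package implements a full relaxation algorithm, so this is only illustrative.

```python
import numpy as np

def relax_labels(prob, n_iter=5):
    """Smooth a per-pixel class-probability map by relaxation labelling.

    prob has shape (rows, cols, n_classes) and sums to one per pixel;
    the GML classifier's class posteriors would be the natural input.
    """
    p = prob.copy()
    for _ in range(n_iter):
        # Average class probabilities of the 4-neighbourhood
        # (np.roll wraps at the borders; acceptable for a sketch).
        support = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) +
                   np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1)) / 4.0
        p = p * support
        p /= p.sum(axis=2, keepdims=True)  # renormalize per pixel
    return p

# The smoothed thematic map is then the per-pixel argmax:
# thematic = relax_labels(posteriors).argmax(axis=2)
```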

Relevance: 100.00%

Abstract:

This thesis presents the EU ETS (European Union Emission Trading Scheme) model for the reduction of greenhouse-gas emissions, which is formalized mathematically as a system of FBSDEs (Forward Backward Stochastic Differential Equations). From this system a nonlinear differential equation with a discontinuous terminal-time condition is derived, which is studied by means of the theory of viscosity solutions. The model is also implemented numerically to obtain simulations of the processes involved.
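
For orientation, backward schemes for such systems are often implemented by least-squares Monte Carlo. The sketch below discretizes a generic decoupled FBSDE with a discontinuous terminal condition; all coefficients and functions are illustrative stand-ins, not the EU ETS model's.

```python
import numpy as np

# A decoupled FBSDE:
#   dX_t = mu dt + sigma dW_t,   Y_T = g(X_T),
#   -dY_t = f(X_t, Y_t) dt - Z_t dW_t,
# with conditional expectations approximated by polynomial regression.
rng = np.random.default_rng(1)
mu, sigma, T, n_steps, n_paths = 0.05, 0.3, 1.0, 50, 20_000
dt = T / n_steps

# Forward Euler simulation of X.
X = np.empty((n_steps + 1, n_paths))
X[0] = 1.0
for i in range(n_steps):
    X[i + 1] = X[i] + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

g = lambda x: np.where(x > 1.0, 1.0, 0.0)   # discontinuous terminal condition
f = lambda x, y: -0.1 * y                   # illustrative driver

# Backward induction: Y_i ~ E[Y_{i+1} + f(X_{i+1}, Y_{i+1}) dt | X_i].
Y = g(X[-1])
for i in range(n_steps - 1, -1, -1):
    target = Y + f(X[i + 1], Y) * dt
    coeffs = np.polynomial.polynomial.polyfit(X[i], target, deg=4)
    Y = np.polynomial.polynomial.polyval(X[i], coeffs)

print("Y_0 estimate:", Y.mean())
```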

Relevance: 100.00%

Abstract:

Iterative Closest Point (ICP) is a widely exploited method for point registration that is based on binary point-to-point assignments, whereas the Expectation Conditional Maximization (ECM) algorithm tries to solve the problem of point registration within the framework of maximum likelihood with point-to-cluster matching. In this paper, by implementing both algorithms and conducting experiments in a scenario where dozens of model points must be registered with thousands of observation points on a pelvis model, we investigate and compare the performance (e.g. accuracy and robustness) of ICP and ECM for point registration in cases without noise and with Gaussian white noise. The experimental results reveal that the ECM method is much less sensitive to initialization and achieves more consistent estimates of the transformation parameters than the ICP algorithm, since the latter easily becomes trapped in local minima and produces quite different registration results for different initializations. Both algorithms can reach the same high level of registration accuracy; however, the ICP method usually requires an appropriate initialization to converge globally. In the presence of Gaussian white noise, the experiments show that ECM is less efficient but more robust than ICP.
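
For reference, the ICP baseline discussed here fits in a few lines. The sketch below alternates nearest-neighbour assignment with the closed-form SVD (Kabsch) rigid alignment; it is a generic textbook version, not the implementation evaluated in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=50):
    """Minimal rigid ICP sketch for 2D/3D point sets.

    Alternates binary point-to-point assignment (nearest neighbours)
    with the closed-form least-squares rigid transform.
    src: (n, d) model points, dst: (m, d) observation points.
    Returns R, t such that src @ R.T + t aligns with dst.
    """
    d = src.shape[1]
    R, t = np.eye(d), np.zeros(d)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        moved = src @ R.T + t
        _, idx = tree.query(moved)          # nearest-neighbour assignment
        matched = dst[idx]
        # Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.eye(d)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R_step = Vt.T @ D @ U.T
        # Compose the incremental step with the running transform.
        R, t = R_step @ R, R_step @ (t - mu_s) + mu_d
    return R, t
```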

Relevance: 100.00%

Abstract:

OBJECTIVES: Widespread central hypersensitivity and altered conditioned pain modulation (CPM) have been documented in chronic pain conditions. Information on their prognostic values is limited. This study tested the hypothesis that widespread central hypersensitivity (WCH) and altered CPM, assessed during the chronic phase of low back and neck pain, predict poor outcome. METHODS: A total of 169 consecutive patients with chronic low back or neck pain, referred to the pain clinic during 1 year, were analyzed. Pressure pain tolerance threshold at the second toe and tolerance time during cold pressor test at the hand assessed WCH. CPM was measured by the change in pressure pain tolerance threshold (test stimulus) after cold pressor test (conditioning stimulus). A structured telephone interview was performed 12 to 15 months after testing to record outcome parameters. Linear regression models were used, with average and maximum pain intensity of the last 24 hours at follow-up as endpoints. Multivariable analyses included sex, age, catastrophizing scale, Beck Depression Inventory, pain duration, intake of opioids, and type of pain syndrome. RESULTS: Statistically significant reductions from baseline to follow-up were observed in pain intensity (P<0.001). No evidence for an association between the measures of WCH or CPM and intensity of chronic pain at follow-up was found. DISCUSSION: A major predictive value of the measures that we used is unlikely. Future studies adopting other assessment modalities and possibly standardized treatments are needed to further elucidate the prognostic value of WCH and altered CPM in chronic pain.

Relevance: 100.00%

Abstract:

A systematic analysis of New Physics impacts on the rare decays KL → π0 ℓ+ ℓ− is performed. Thanks to their different sensitivities to flavor-changing local effective interactions, these two modes could provide valuable information on the nature of the possible New Physics at play. In particular, a combined measurement of both modes could disentangle scalar/pseudoscalar from vector or axial-vector contributions. For the latter, model-independent bounds are derived. Finally, the KL → π0 μ+ μ− forward-backward CP asymmetry is considered, and shown to give interesting complementary information.

Relevance: 100.00%

Abstract:

This paper discusses estimation of the tumor incidence rate, the death rate given that a tumor is present, and the death rate given that a tumor is absent, using a discrete multistage model. The model was originally proposed by Dewanji and Kalbfleisch (1986), and the maximum likelihood estimate of the tumor incidence rate was obtained using the EM algorithm. In this paper, we use a reparametrization to simplify the estimation procedure. The resulting estimates are not always the same as the maximum likelihood estimates but are asymptotically equivalent. In addition, explicit expressions for the asymptotic variance and bias of the proposed estimators are derived. These results can be used to compare the efficiency of different sacrifice schemes in carcinogenicity experiments.

Relevance: 100.00%

Abstract:

Riparian ecology plays an important part in the filtration of sediments from upland agricultural lands. This work makes use of multispectral high-spatial-resolution remote sensing imagery (Quickbird by Digital Globe) and geographic information systems (GIS) to characterize significant riparian attributes in the USDA's experimental watershed, Goodwin Creek, located in northern Mississippi. Significant riparian filter characteristics include the width of the strip, vegetation properties, soil properties, topography, and upland land use practices. The land use and vegetation classes are extracted from the remotely sensed image with a supervised maximum likelihood classification algorithm. Accuracy assessment yielded an acceptable overall accuracy of 84 percent. In addition to sensing riparian vegetation characteristics, this work addresses the issue of concentrated flow bypassing a riparian filter. Results indicate that Quickbird multispectral remote sensing and GIS data are capable of determining riparian impact on filtering sediment. Quickbird imagery is a practical solution for land managers to monitor the effectiveness of riparian filtration in an agricultural watershed.
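
The reported accuracy assessment boils down to a confusion matrix. A minimal sketch, assuming ground-truth and classified labels are encoded as integers in 0..n_classes-1:

```python
import numpy as np

def overall_accuracy(truth, predicted, n_classes):
    """Accuracy assessment of a supervised classification.

    Builds the confusion matrix from ground-truth and classified labels,
    then takes the trace over the total (the 84-percent figure above is
    this kind of overall accuracy).
    """
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (truth, predicted), 1)  # rows: truth, cols: prediction
    return np.trace(cm) / cm.sum(), cm
```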