963 results for laser communications satellite-based laser submerged platform Monte Carlo simulation


Relevance:

100.00%

Publisher:

Abstract:

The AMANDA-II detector is primarily designed for the direction-resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective rise in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae as well as experimental data from supernova SN1987A were studied. In addition, the sensitivities of the optical modules were re-determined. To this end, the energy losses of charged particles in the South Polar ice had to be investigated and a simulation of photon propagation developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. As part of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It contains various improvements over the version previously used by the AMANDA collaboration. Owing to an optimization for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. Optical modules showing unsuitable behaviour are disqualified in real time; however, their behaviour has to be judged from buffered data, so the analysis of the data from the qualified modules incurs a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals over several minutes for later evaluation.
Since the noise data of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis can be chosen freely in units of 500 ms. Within this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s, and one with a time base of 10 s. This maximizes the sensitivity for signals with a characteristic exponential decay time of 3 s while preserving good sensitivity over a wide range of exponential decay times. These analyses were studied in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and were correspondingly well understood. Based on the measured data, the expected supernova signals were simulated. From a comparison between this simulation, the measured data of the years 2000 to 2003, and the simulation of the expected statistical background, it can be concluded at a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. The number of expected events from the statistical background at this level is less than one in a million; nevertheless, one such event was measured. With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the last result published by the AMANDA collaboration, an upper limit of only 2.6 supernovae per year is obtained.
In the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate excess before an alert reporting the detection of a supernova candidate is sent. At this threshold, the monitored fraction of stars in the Galaxy is 81%, but the false-alarm rate rises to about 2 events per week. The alert messages are transmitted to the northern hemisphere via an Iridium modem and will soon contribute to SNEWS, the worldwide network for the early detection of supernovae.
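The significance thresholds quoted above can be translated into Gaussian tail probabilities. A minimal sketch, assuming the summed module noise counts fluctuate approximately Gaussian (only the two thresholds are taken from the abstract):

```python
import math

def tail_probability(z):
    """One-sided Gaussian tail probability of a rate excess of z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_alarm  = tail_probability(5.5)   # real-time alert threshold
p_detect = tail_probability(7.4)   # supernova identification threshold

print(p_alarm, p_detect)  # the 7.4 sigma tail is well below one in a million
```

The background expectation of "less than one millionth" at 7.4 standard deviations follows directly from the Gaussian tail; the higher real-world false-alarm rate at 5.5 sigma reflects non-Gaussian detector behaviour rather than this idealized statistic.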

Relevance:

100.00%

Publisher:

Abstract:

The three-spectrometer facility at the Mainz Institut für Kernphysik has been complemented by an additional spectrometer, which is distinguished by its short overall length and is therefore called the Short-Orbit Spectrometer (SOS). At the nominal distance of the SOS from the target (66 cm), the detected particles travel a mean path length of 165 cm between the reaction point and the detector. For pion production near threshold, this raises the survival probability of charged pions with a momentum of 100 MeV/c from 15% to 73% compared with the large spectrometers. Accordingly, the systematic error ("muon contamination"), for example in the planned measurement of the weak form factors G_A(Q²) and G_P(Q²), is reduced significantly. The focus of this work is the drift chamber of the SOS. Its low mass coverage (0.03% X_0), which reduces small-angle scattering, is optimized for the detection of low-energy pions. Owing to the novel geometry of the detector, dedicated software for track reconstruction, efficiency determination, etc. had to be developed. A convenient procedure for calibrating the drift-distance/drift-time relation, represented by cubic splines, was implemented. The resolution of the tracking detector in the dispersive plane is 76 µm for the spatial coordinate and 0.23° for the angular coordinate (most probable error), and correspondingly 110 µm and 0.29° in the non-dispersive plane. To trace the detector coordinates back to the reaction point, the inverse transfer matrix of the spectrometer was determined. For this purpose, electrons quasi-elastically scattered off protons in the ¹²C nucleus were used, whose initial angles were defined by a hole collimator. This yields experimental values for the mean angular resolution at the target of sigma_phi = 1.3 mrad and sigma_theta = 10.6 mrad.
Since the momentum calibration of the SOS can only be performed by means of quasi-elastic scattering (a two-arm experiment), the proton arm's contribution to the width of the missing-mass peak must be estimated in a Monte Carlo simulation and folded out. For now, it can only be estimated that the momentum resolution is certainly better than 1%.
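The quoted jump in pion survival probability follows from the relativistic decay law P = exp(-L/(γβcτ)) with γβ = p/m. A quick numerical check, using standard pion mass and lifetime values; the ~10.6 m path used for the large spectrometers is inferred from the 15% figure, not a number from the abstract:

```python
import math

M_PI = 139.57    # charged pion mass [MeV/c^2]
TAU  = 26.03e-9  # charged pion lifetime [s]
C    = 2.998e8   # speed of light [m/s]

def pion_survival(p_mev, path_m):
    """P = exp(-L / (gamma*beta*c*tau)), with gamma*beta = p/m."""
    decay_length = (p_mev / M_PI) * C * TAU   # [m]; about 5.6 m at 100 MeV/c
    return math.exp(-path_m / decay_length)

print(round(pion_survival(100.0, 1.65), 2))   # ~0.74 for the 165 cm path in the SOS
print(round(pion_survival(100.0, 10.6), 2))   # ~0.15 for an assumed ~10.6 m path
```

The 165 cm value reproduces the quoted 73% up to averaging over actual trajectories; solving the same formula for 15% survival gives a flight path of roughly 10-11 m in the large spectrometers.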

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we consider three different models for strongly correlated electrons, namely a multi-band Hubbard model as well as the spinless Falicov-Kimball model, both with a semi-elliptical density of states in the limit of infinite dimensions d, and the attractive Hubbard model on a square lattice in d=2. In the first part, we study a two-band Hubbard model with unequal bandwidths and anisotropic Hund's rule coupling (J_z model) in the limit of infinite dimensions within the dynamical mean-field theory (DMFT). Here, the DMFT impurity problem is solved with the use of quantum Monte Carlo (QMC) simulations. Our main result is that the J_z model describes the occurrence of an orbital-selective Mott transition (OSMT), in contrast to earlier findings. We investigate the model with a high-precision DMFT algorithm, which was developed as part of this thesis and which supplements QMC with a high-frequency expansion of the self-energy. The main advantage of this scheme is the extraordinary accuracy of the numerical solutions, which can be obtained already with moderate computational effort, so that studies of multi-orbital systems within DMFT+QMC are strongly improved. We also found that a suitably defined Falicov-Kimball (FK) model exhibits an OSMT, revealing the close connection of the Falicov-Kimball physics to the J_z model in the OSM phase. In the second part of this thesis we study the attractive Hubbard model in two spatial dimensions within second-order self-consistent perturbation theory. This model is considered on a square lattice at finite doping and at low temperatures. Our main result is that the predictions of first-order perturbation theory (Hartree-Fock approximation) are renormalized by a factor of the order of unity even at arbitrarily weak interaction (U->0). The renormalization factor q can be evaluated as a function of the filling n for 0 < n < 1; in the limit n -> 0, the q-factor vanishes, signaling the divergence of self-consistent perturbation theory in this limit.
Thus we present the first asymptotically exact results at weak coupling for the negative-U Hubbard model in d=2 at finite doping.

Relevance:

100.00%

Publisher:

Abstract:

This thesis is concerned with the adsorption and detachment of polymers at planar, rigid surfaces. We have carried out a systematic investigation of polymer adsorption using analytical techniques as well as Monte Carlo simulations with a coarse-grained off-lattice bead-spring model. The investigation was carried out in three stages. In the first stage, the adsorption of a single multiblock AB copolymer on a solid surface was investigated by means of simulations and scaling analysis. It was shown that the problem can be mapped onto an effective homopolymer problem. Our main result was the phase diagram of regular multiblock copolymers, which shows an increase in the critical adsorption potential of the substrate with decreasing block size. We also considered the adsorption of random copolymers, which was found to be well described within the annealed disorder approximation. In the next phase, we studied the adsorption kinetics of a single polymer on a flat, structureless surface in the regime of strong physisorption. The idea of a 'stem-flower' polymer conformation and the mechanism of 'zipping' during the adsorption process were used to derive a Fokker-Planck equation with reflecting boundary conditions for the time-dependent probability distribution function (PDF) of the number of adsorbed monomers. The numerical solution of the time-dependent PDF, obtained from a discrete set of coupled differential equations, was shown to be in perfect agreement with Monte Carlo simulation results. Finally, we studied force-induced desorption of a polymer chain adsorbed on an attractive surface. We approached the problem within the framework of two different statistical ensembles: (i) by keeping the pulling force fixed while measuring the position of the polymer chain end, and (ii) by measuring the force necessary to keep the chain end at a fixed distance above the adsorbing plane.
In the first case, we treated the problem within the framework of the Grand Canonical Ensemble approach and derived analytic expressions for the various conformational building blocks characterizing the structure of an adsorbed linear polymer chain subject to a pulling force of fixed strength. The main result was the phase diagram of a polymer chain under pulling. We demonstrated a novel first-order phase transformation which is dichotomic, i.e. phase coexistence is not possible. In the second case, we carried out our study in the "fixed height" statistical ensemble, where one measures the fluctuating force exerted by the chain on the last monomer when the chain end is kept fixed at height h over the solid plane at different adsorption strengths ε. The phase diagram in the h-ε plane was calculated both analytically and by Monte Carlo simulations. We demonstrated that in the vicinity of the polymer desorption transition, a number of properties, such as fluctuations and probability distributions of various quantities, behave differently if h, rather than the force f, is used as the independent control parameter.

Relevance:

100.00%

Publisher:

Abstract:

The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to supply a review of the main tools of spatial econometrics and to show an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of availability of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite-sample properties of the estimators that have been proposed are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed-effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit-root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland values in the Midwestern U.S.A. in the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.

Relevance:

100.00%

Publisher:

Abstract:

Marking the final explosive burning stage of massive stars, supernovae are one of the most energetic celestial events. Apart from their enormous optical brightness, they are also known to be associated with strong emission of MeV neutrinos—up to now the only proven source of extrasolar neutrinos. Although it was designed for the detection of high-energy neutrinos, the recently completed IceCube neutrino telescope in the Antarctic ice will have the highest sensitivity of all current experiments to measure the shape of the neutrino light curve, which is in the MeV range. This measurement is crucial for the understanding of supernova dynamics. In this thesis, the development of a Monte Carlo simulation for a future low-energy extension of IceCube, called PINGU, is described that investigates the response of PINGU to a supernova. Using this simulation, various detector configurations are analysed and optimised for supernova detection. The prospects of extracting not only the total light curve but also the direction of the supernova and the mean neutrino energy from the data are discussed. Finally, the performance of PINGU is compared to the current capabilities of IceCube.

Relevance:

100.00%

Publisher:

Abstract:

In this work, computer simulations of nucleation and crystallization processes in colloidal systems were carried out. A combination of Monte Carlo simulation methods and the forward-flux-sampling technique was implemented to study the homogeneous and heterogeneous nucleation of crystals of monodisperse hard spheres. For the moderately supercooled bulk hard-sphere system, we predict the homogeneous nucleation rates and compare the results with other theoretical results and experimental data. Furthermore, we analyse the crystalline clusters in the nucleation and growth zones, finding that crystalline clusters form in the system in different shapes: small clusters tend to be elongated in an arbitrary direction, while larger clusters are more compact and ellipsoidal in shape.
In the next part, we investigate heterogeneous nucleation at structured bcc (100) walls. The 2d analysis of the crystalline layers at the wall shows that the structure of the wall plays a decisive role in the crystallization of hard-sphere colloids. We also predict the heterogeneous crystal nucleation rates at various degrees of supersaturation. By analysing the largest clusters at the wall, we additionally estimate the contact angle between crystal cluster and wall. It turns out that such systems are far from the wetting regime and that the crystallization process proceeds via heterogeneous nucleation.
In the last part of the work, we consider the crystallization of Lennard-Jones colloidal systems confined between two planar walls. To study the solidification processes in such a system, we performed an analysis of the bond-orientational order parameter within the layers. The results show that there is no hexatic order within a layer, which would indicate a Kosterlitz-Thouless melting scenario. The hysteresis in the heating-freezing curves furthermore shows that the crystallization is an activated process.

Relevance:

100.00%

Publisher:

Abstract:

I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two observation volumes, laterally shifted along the flow direction and created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information on the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing on hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task, because the complexity of the problem makes it impossible to derive the analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data.
I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method for retrieving the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments; this provides the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PCs and massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of aqueous electrolyte solutions near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while on a hydrophobic surface some slippage may exist. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
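The double-focus cross-correlation principle can be sketched numerically: a tracer burst seen in the first observation volume reappears in the second one after the transit time d/v, so the lag of the cross-correlation peak yields the flow velocity. All parameter values below are illustrative assumptions, not the experimental ones:

```python
import numpy as np

rng = np.random.default_rng(0)

d, v, fs = 1.0e-6, 0.5e-3, 1.0e6   # focus separation [m], velocity [m/s], sampling [Hz]
n = 100_000
lag_true = int(round(d / v * fs))  # transit time in samples

s1 = rng.poisson(2.0, n).astype(float)                 # bursty fluorescence, volume 1
s2 = np.roll(s1, lag_true) + rng.normal(0.0, 0.5, n)   # delayed copy + noise, volume 2

# Cross-correlate the two intensity traces and locate the peak lag.
lags = np.arange(1, 3000)
xcorr = [np.dot(s1[:n - L], s2[L:]) for L in lags]
lag_est = lags[int(np.argmax(xcorr))]
v_est = d / (lag_est / fs)
print(lag_est, v_est)   # the peak lag equals the transit time, recovering v
```

In the real method the correlation curves are shaped by diffusion, the evanescent intensity profile, and the shear gradient, which is why fitting against Brownian Dynamics simulations (as described above) replaces this simple peak reading.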

Relevance:

100.00%

Publisher:

Abstract:

In recent years it has become increasingly important to handle credit risk. Credit risk is the risk associated with the possibility of bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but the counterparty defaults before that time, at maturity the payment cannot be effectively performed, so the owner of the contract loses it entirely or in part. This means that the payoff of the derivative, and consequently its price, depends on the underlying of the basic derivative and on the risk of bankruptcy of the counterparty. To value and hedge credit risk in a consistent way, one needs to develop a quantitative model. We have studied analytical approximation formulas and numerical methods such as the Monte Carlo method in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call price with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate. We have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
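To illustrate the Monte Carlo side of such a comparison, here is a sketch for a zero-coupon bond under the Vasicek short-rate model—chosen here only because it has a closed form to validate the simulation against; the thesis itself works with the JDCEV model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-coupon bond price P(0,T) = E[exp(-integral of r_t dt)] under the
# Vasicek short-rate model dr = kappa*(theta - r) dt + sigma dW.
kappa, theta, sigma, r0, T = 0.5, 0.03, 0.01, 0.02, 1.0

def vasicek_bond_mc(n_paths=100_000, n_steps=200):
    """Euler-discretized Monte Carlo estimate of the bond price."""
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += r * dt
        r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.exp(-integral).mean()

def vasicek_bond_exact():
    """Closed-form Vasicek bond price P = A * exp(-B * r0)."""
    B = (1 - np.exp(-kappa * T)) / kappa
    A = np.exp((theta - sigma**2 / (2 * kappa**2)) * (B - T) - sigma**2 * B**2 / (4 * kappa))
    return A * np.exp(-B * r0)

print(vasicek_bond_mc(), vasicek_bond_exact())  # agree to a few 1e-4
```

The same structure—simulate the short rate, average the discounted payoff, compare with an analytic benchmark—carries over to the JDCEV setting, where the analytic side is replaced by the Taylor-expansion approximations described above.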

Relevance:

100.00%

Publisher:

Abstract:

In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This allows us to take into account unobserved common factors that contemporaneously affect all units of the panel and provides, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited when the number of individuals in the panel is small relative to the number of time series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small-sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test through a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreements (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
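The "properties of the test" studied by Monte Carlo are typically its empirical size and power. A minimal sketch of a size study under the null hypothesis, with an ordinary t-test standing in for the panel bounds test (which would require the full SUR machinery):

```python
import numpy as np

rng = np.random.default_rng(11)

def mc_size(n_obs=50, n_rep=5000, alpha=0.05):
    """Empirical rejection rate of a two-sided t-test when H0 is true."""
    rejections = 0
    for _ in range(n_rep):
        x = rng.normal(0, 1, n_obs)                       # data generated under H0
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n_obs))
        if abs(t) > 2.0096:                               # 5% two-sided critical value, df=49
            rejections += 1
    return rejections / n_rep

print(mc_size())  # empirical size should be close to the nominal 5%
```

A well-sized test rejects close to 5% of the time under the null; repeating the exercise with data generated under the alternative gives the power curve, which is how the small-sample advantage over the single-equation counterpart would be quantified.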

Relevance:

100.00%

Publisher:

Abstract:

The available results of deep imaging searches for planetary companions around nearby stars provide useful constraints on the frequency of giant planets in very wide orbits. Here we present some preliminary results of a Monte Carlo simulation which compares the published detection limits with generated planetary masses and orbital parameters. This allows us to consider the implications of the null detection from the direct imaging techniques for the distributions of mass and semimajor axis derived from the results of the other search techniques, and also to check the agreement of the observations with the available planetary formation models.
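This kind of comparison can be miniaturized as follows: draw planetary masses and semimajor axes from assumed distributions, apply a survey detection limit, and read off the detectable fraction, from which a null detection constrains the planet frequency. Every number below is a made-up assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 100_000
mass = rng.pareto(1.3, n) + 1.0    # masses in M_Jup, power-law tail (assumed index)
sma = 10 ** rng.uniform(0, 3, n)   # semimajor axis 1-1000 AU, log-uniform (assumed)

# Hypothetical detection limit: planets above 5 M_Jup beyond 30 AU are detectable.
detectable = (mass > 5.0) & (sma > 30.0)
frac = detectable.mean()
print(f"detectable fraction: {frac:.3f}")

# A null detection among N surveyed stars then constrains the planet frequency f via
# P(no detection) = (1 - f*frac)**N, e.g. solving for the 95% upper limit at 0.05.
```

In the actual analysis, the step-function limit is replaced by the published per-star contrast curves converted to mass limits, but the bookkeeping is the same.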

Relevance:

100.00%

Publisher:

Abstract:

Little is known about the learning of the skills needed to perform ultrasound- or nerve stimulator-guided peripheral nerve blocks. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance versus residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience of using ultrasound received ultrasound training, and another ten residents with no previous experience of using nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of data from our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique in combination with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster and lead to a higher final success rate compared to nerve stimulator-guided axillary brachial plexus block.
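The pooling-plus-bootstrap idea can be sketched as follows, with simulated success/failure records standing in for the database data (the "true" learning curve below is an invented assumption):

```python
import numpy as np

rng = np.random.default_rng(7)

# Each row: one resident's success (1) / failure (0) over consecutive blocks.
n_residents, n_blocks = 10, 30
p_learning = np.clip(0.5 + 0.015 * np.arange(n_blocks), 0, 0.95)  # assumed true curve
data = rng.binomial(1, p_learning, size=(n_residents, n_blocks))

def bootstrap_curve(data, n_boot=2000):
    """Resample residents with replacement; return mean curve and 95% band."""
    n = data.shape[0]
    curves = np.empty((n_boot, data.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # Monte Carlo draw of residents
        curves[b] = data[idx].mean(axis=0)   # pooled success rate per block
    return curves.mean(axis=0), np.percentile(curves, [2.5, 97.5], axis=0)

mean_curve, (lo, hi) = bootstrap_curve(data)
print(mean_curve[0], mean_curve[-1])  # pooled success rate in first and last block
```

Resampling residents (rather than individual blocks) respects the fact that repeated attempts by the same trainee are correlated, which is the point of pooling at the institutional level.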

Relevance:

100.00%

Publisher:

Abstract:

The present study was conducted to estimate the direct losses due to Neospora caninum in Swiss dairy cattle and to assess the costs and benefits of different potential control strategies. A Monte Carlo simulation spreadsheet module was developed to estimate the direct costs caused by N. caninum, with and without control strategies, and to estimate the costs of these control strategies in a financial analysis. The control strategies considered were "testing and culling of seropositive female cattle", "discontinued breeding with offspring from seropositive cows", "chemotherapeutical treatment of female offspring" and "vaccination of all female cattle". Each parameter in the module that was considered uncertain was described using a probability distribution. The simulations were run with 20,000 iterations over a time period of 25 years. The median annual losses due to N. caninum in the Swiss dairy cow population were estimated to be 9.7 million euros. All control strategies that required yearly serological testing of all cattle in the population produced high costs and thus were not financially profitable. Among the other control strategies, two showed benefit-cost ratios (BCR) >1 and positive net present values (NPV): "discontinued breeding with offspring from seropositive cows" (BCR=1.29, NPV=25 million euros) and "chemotherapeutical treatment of all female offspring" (BCR=2.95, NPV=59 million euros). In economic terms, the best control strategy currently available would therefore be "discontinued breeding with offspring from seropositive cows".
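The structure of such a stochastic spreadsheet model can be sketched in a few lines: uncertain inputs are drawn from probability distributions, and the benefit-cost ratio (BCR) and net present value (NPV) are computed per iteration. Apart from the 9.7-million-euro loss estimate, every number below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_iter, years, discount = 20_000, 25, 0.03

# Uncertain inputs as probability distributions (illustrative, not the study's):
annual_loss    = rng.normal(9.7e6, 1.5e6, n_iter)   # losses without control [EUR/yr]
loss_reduction = rng.uniform(0.2, 0.5, n_iter)      # fraction of losses avoided
control_cost   = rng.normal(1.0e6, 0.2e6, n_iter)   # cost of the strategy [EUR/yr]

# Present value of a constant annual amount over the simulation horizon.
annuity = sum((1 + discount) ** -t for t in range(1, years + 1))
benefit = annual_loss * loss_reduction * annuity
cost    = control_cost * annuity

bcr = benefit / cost
npv = benefit - cost
print(f"median BCR: {np.median(bcr):.2f}, median NPV: {np.median(npv)/1e6:.1f} M EUR")
```

Reporting medians and percentile bands over the 20,000 iterations, rather than single point values, is what distinguishes this stochastic analysis from a deterministic spreadsheet calculation.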

Relevance:

100.00%

Publisher:

Abstract:

Smoothing splines are a popular approach to non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal plus noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that provides motivation for the creation of criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and a generalization of the maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing criteria.
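The role of the smoothness parameter can be illustrated in the Fourier basis, where a periodic smoothing spline acts diagonally, shrinking frequency k by 1/(1 + λk⁴); this diagonal form is also what underlies the connection to REACT estimators. The sketch below selects λ by generalized cross-validation, one of the existing criteria compared above (signal shape and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 256
t = np.arange(n) / n
signal = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)   # circadian-like pattern
y = signal + rng.normal(0, 0.3, n)

k = np.fft.rfftfreq(n, d=1/n)   # integer frequencies 0..n/2
Y = np.fft.rfft(y)

def gcv(lam):
    """Generalized cross-validation score of the spline-type shrinkage smoother."""
    shrink = 1.0 / (1.0 + lam * k**4)
    fit = np.fft.irfft(shrink * Y, n)
    df = shrink.sum() + shrink[1:-1].sum()   # trace of the smoother matrix
    rss = np.sum((y - fit)**2)
    return rss / (1 - df / n)**2 / n

lams = 10.0 ** np.arange(-8, 0, 0.25)
best = lams[int(np.argmin([gcv(l) for l in lams]))]
print(best)   # selected smoothness parameter
```

Too small a λ leaves the noise in (effective degrees of freedom near n); too large a λ shrinks away the true harmonics; GCV balances the two by inflating the residual sum of squares by the effective degrees of freedom.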

Relevance:

100.00%

Publisher:

Abstract:

In many clinical trials to evaluate treatment efficacy, it is believed that there may exist latent treatment effectiveness lag times, after which the medical procedure or chemical compound would be in full effect. In this article, semiparametric regression models are proposed and studied to estimate the treatment effect accounting for such latent lag times. The new models take advantage of the invariance property of the additive hazards model in marginalizing over random effects, so the parameters in the models are easy to estimate and interpret, while the flexibility of not specifying the baseline hazard function is retained. Monte Carlo simulation studies demonstrate the appropriateness of the proposed semiparametric estimation procedure. Data collected in an actual randomized clinical trial, which evaluated the effectiveness of biodegradable carmustine polymers for the treatment of recurrent brain tumors, are analyzed.