27 results for "verification"
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were led to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates the ones obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in attempting to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
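The two dominance checks described above can be sketched in code. This is a minimal illustration using empirical CDFs and a shortfall-based absolute Lorenz curve, not the authors' implementation; all function names are ours.

```python
import numpy as np

def first_order_dominates(x, y):
    """x first-order stochastically dominates y if the empirical CDF
    of x lies (weakly) below that of y at every evaluation point."""
    grid = np.union1d(x, y)
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return bool(np.all(Fx <= Fy))

def absolute_lorenz(x, quantiles):
    """Absolute Lorenz curve: for each quantile p, the cumulative
    (per-observation) sum of the lowest ceil(p*n) returns, i.e. an
    expected-shortfall-type quantity."""
    xs = np.sort(x)
    n = len(xs)
    return np.array([xs[:max(1, int(np.ceil(p * n)))].sum() / n
                     for p in quantiles])

def second_order_dominates(x, y, quantiles=np.linspace(0.01, 1.0, 100)):
    """x second-order stochastically dominates y if x's absolute
    Lorenz curve lies (weakly) above y's at every quantile."""
    return bool(np.all(absolute_lorenz(x, quantiles)
                       >= absolute_lorenz(y, quantiles)))
```

In practice a two-sample Kolmogorov-Smirnov test (e.g. `scipy.stats.ks_2samp`) would precede these checks to establish that the two return distributions differ at all, as the chapter describes.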
Abstract:
PURPOSE: We investigated the influence of beam modulation on treatment planning by comparing four available stereotactic radiosurgery (SRS) modalities: Gamma Knife Perfexion, Novalis Tx Dynamic Conformal Arc (DCA) and Dynamic Multileaf Collimation Intensity-Modulated Radiotherapy (DMLC-IMRT), and CyberKnife. MATERIAL AND METHODS: Patients with arteriovenous malformations (n = 10) or acoustic neuromas (n = 5) were planned with the different treatment modalities. The Paddick conformity index (CI), dose heterogeneity (DH), gradient index (GI) and beam-on time were used as dosimetric indices. RESULTS: Gamma Knife Perfexion can achieve a high degree of conformity (CI = 0.77 ± 0.04) with limited low-dose spread (GI = 2.59 ± 0.10) surrounding an inhomogeneous dose distribution (DH = 0.84 ± 0.05), at the cost of treatment time (68.1 ± 27.5 min). Novalis Tx DCA improved this inhomogeneity (DH = 0.30 ± 0.03) and treatment time (16.8 ± 2.2 min) at the cost of conformity (CI = 0.66 ± 0.04), and Novalis Tx DMLC-IMRT improved on the DCA conformity (CI = 0.68 ± 0.04) and inhomogeneity (DH = 0.18 ± 0.05) at the cost of low-dose spread (GI = 3.94 ± 0.92) and treatment time (21.7 ± 3.4 min) (p < 0.01). CyberKnife achieved comparable conformity (CI = 0.77 ± 0.06) at the cost of low-dose spread (GI = 3.48 ± 0.47) surrounding a homogeneous (DH = 0.22 ± 0.02) dose distribution and of treatment time (28.4 ± 8.1 min) (p < 0.01). CONCLUSIONS: Gamma Knife Perfexion complies with all SRS constraints (high conformity while minimizing low-dose spread). Multiple focal entries (Gamma Knife Perfexion and CyberKnife) achieve better conformity than the High-Definition MLC of Novalis Tx, at the cost of treatment time. Non-isocentric beams (CyberKnife) or IMRT beams (Novalis Tx DMLC-IMRT) spread more low dose than multiple isocenters (Gamma Knife Perfexion) or dynamic arcs (Novalis Tx DCA). Inverse planning and modulated fluences (Novalis Tx DMLC-IMRT and CyberKnife) deliver the most homogeneous treatment.
Furthermore, Linac-based systems (Novalis and Cyberknife) can perform image verification at the time of treatment delivery.
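For reference, the two geometric indices used in this comparison have simple closed forms: Paddick's conformity index and the gradient index. The sketch below assumes volumes are supplied in consistent units (e.g. cm³); the variable names are ours.

```python
def paddick_ci(tv, piv, tv_piv):
    """Paddick conformity index CI = (TV ∩ PIV)^2 / (TV × PIV), where
    TV is the target volume and PIV the prescription isodose volume.
    CI = 1 means perfect conformity; lower values indicate
    under-coverage and/or spill outside the target."""
    return tv_piv ** 2 / (tv * piv)

def gradient_index(piv_half, piv):
    """Gradient index GI = PIV_half / PIV: the volume enclosed by half
    the prescription isodose divided by the prescription isodose
    volume. A lower GI means a steeper dose fall-off."""
    return piv_half / piv
```

For example, a 10 cm³ target covered by a 12 cm³ prescription isodose volume with 9 cm³ of overlap gives CI = 81/120 = 0.675.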
Abstract:
Because of their low incidence, mixed study populations and a paucity of clinical and histological data, the management of adult brainstem gliomas (BSGs) remains non-standardized. Here we describe the characteristics, treatment and outcome of patients with exclusively histologically confirmed adult BSGs. A retrospective chart review of adults (age > 18 years) was conducted. BSG was defined as a glial tumor located in the midbrain, pons or medulla. Characteristics, management and outcome were analyzed. Twenty-one patients (17 males; median age 41 years) were diagnosed between 2004 and 2012 by biopsy (n = 15), partial resection (n = 4) or complete resection (n = 2). Diagnoses were glioblastoma (WHO grade IV, n = 6), anaplastic astrocytoma (WHO grade III, n = 7), diffuse astrocytoma (WHO grade II, n = 6) and pilocytic astrocytoma (WHO grade I, n = 2). Diffuse gliomas were mainly located in the pons and frequently showed MRI contrast enhancement. Endophytic growth was common (16 vs. 5). Postoperative therapy in low-grade (WHO grade I/II) and high-grade gliomas (WHO grade III/IV) consisted of radiotherapy alone (three in each group), radiochemotherapy (2 vs. 6), chemotherapy alone (0 vs. 2) or no postoperative therapy (3 vs. 1). Median PFS (24.1 vs. 5.8 months; log-rank, p = 0.009) and median OS (30.5 vs. 11.5 months; log-rank, p = 0.028) were significantly better in WHO grade II than in WHO grade III/IV tumors. Second-line therapy varied considerably. Histological verification of adult BSGs is feasible and has an impact on postoperative treatment. Low-grade gliomas can simply be followed or treated with radiotherapy alone. Radiochemotherapy with temozolomide can safely be prescribed for high-grade gliomas without additional CNS toxicities.
Abstract:
Pentecostalism has made the miracle the heart of its theology and the central element of its evangelization activities. Catholicism, by contrast, has always sought to control all declared divine manifestations: apparitions and miraculous healings have therefore been systematically, and increasingly, subjected to slow and rigorous verification procedures. Pentecostals see God as an external force that breaks into the world to drive out the evil invading it; every convert is therefore entitled to deliverance, and nobody should meekly accept suffering. The Catholic pilgrims we studied, however, do not share these Pentecostal convictions: God acts from within, not by delivering them but by supporting them in their daily trials.
Physical healing, rare and little sought after, gives way to spiritual healing, which is accessible to everyone. It seems to us that these two types of representation place believers in very divergent frames of mind, giving rise, in one case or the other, to hopes matched to the group's capacity to produce miracles.
Abstract:
3D dose reconstruction is a means of verifying the delivered absorbed dose. Our aim was to describe and evaluate a 3D dose reconstruction method applied to phantoms in the context of narrow beams. A solid water phantom and a phantom containing a bone-equivalent material were irradiated on a 6 MV linac. The transmitted dose was measured using one array of a 2D ion chamber detector. The dose reconstruction was obtained by an iterative algorithm. A phantom set-up error and interfraction organ motion were simulated to test the algorithm's sensitivity. In all configurations, convergence was obtained within three iterations. Local reconstructed dose agreement of at least 3%/3 mm with respect to the planned dose was obtained, except at a few points in the penumbra. The reconstructed primary fluences were consistent with the planned ones, which validates the whole reconstruction process. The results validate our method in a simple geometry and for narrow beams. The method is sensitive to a set-up error of a heterogeneous phantom and to interfraction motion of a heterogeneous organ.
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT, based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can then be expressed through a linear matrix equation relating the MUs and the dose per MU of every beamlet. Because the absorbed dose and the MU values are non-negative, this equation is solved for the MU values using a non-negative least-squares (NNLS) optimization algorithm. The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU by the Monte Carlo/NNLS MUs. Treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution that is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow the technique to be used for other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
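The core linear-algebra step described here, solving d = D·m for non-negative monitor units m, where each column of D holds one beamlet's dose per MU, can be sketched with SciPy's NNLS solver. The dose matrix below is synthetic; only the solver usage mirrors the method described in the abstract.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic dose-per-MU matrix: rows = dose grid points (voxels),
# columns = beamlets. In the described method each column would come
# from a Monte Carlo simulation of one beamlet.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(200, 5))

mu_true = np.array([120.0, 0.0, 45.0, 80.0, 10.0])  # hypothetical MUs
d_planned = D @ mu_true                              # planned dose vector

# Non-negative least-squares fit of the monitor units.
mu_fit, residual = nnls(D, d_planned)

# The Monte Carlo plan would then be D @ mu_fit, compared voxel-wise
# with the TPS dose distribution.
```

Because the true weights are themselves non-negative and the system is consistent, the fit recovers them exactly here; with real TPS doses the residual quantifies the disagreement between the two dose engines.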
Abstract:
OBJECTIVE: Accuracy studies of Patient Safety Indicators (PSIs) are critical but limited by the large samples required owing to the low occurrence of most events. We tested a sampling design based on test results (verification-biased sampling [VBS]) that minimizes the number of subjects to be verified. METHODS: We considered 3 real PSIs, whose rates were calculated using 3 years of discharge data from a university hospital, and a hypothetical screen for very rare events. Sample size estimates, based on the expected sensitivity and precision, were compared across 4 study designs: random sampling and VBS, each with and without constraints on the size of the population to be screened. RESULTS: Over sensitivities ranging from 0.3 to 0.7 and PSI prevalence levels ranging from 0.02 to 0.2, the optimal VBS strategy makes it possible to reduce the sample size by up to 60% compared with simple random sampling. For PSI prevalence levels below 1%, the minimal sample size required was still over 5000. CONCLUSIONS: Verification-biased sampling permits substantial savings in the required sample size for PSI validation studies. However, sample sizes still need to be very large for many of the rarer PSIs.
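The abstract does not give the authors' design formulas, but the simple-random-sampling baseline they compare against can be illustrated with the standard binomial sample-size calculation: estimating sensitivity within a half-width d requires enough verified true events, and at low prevalence that count must be inflated by 1/prevalence. This is our own illustrative sketch, not the paper's method.

```python
from math import ceil

def n_random_sampling(se, half_width, prevalence, z=1.96):
    """Approximate number of records to verify under simple random
    sampling so that the sensitivity estimate has the given half-width
    at ~95% confidence: z^2 * Se * (1 - Se) / d^2 events are needed,
    and only a `prevalence` fraction of sampled records are events."""
    n_events = z * z * se * (1.0 - se) / (half_width * half_width)
    return ceil(n_events / prevalence)
```

At prevalence 0.01 with Se = 0.5 and a 0.1 half-width this already exceeds 5000 records, consistent with the abstract's remark about rare PSIs; verification-biased sampling cuts the verification burden by oversampling screen-positive records.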
Abstract:
OBJECTIVES: This study aimed to investigate post-mortem magnetic resonance imaging (pmMRI) for the assessment of myocardial infarction, and hypointensities on post-mortem T2-weighted images as a possible means of visualizing the myocardial origin of arrhythmic sudden cardiac death. BACKGROUND: Sudden cardiac death has challenged clinical and forensic pathologists for decades because verification at post-mortem autopsy is not possible. pmMRI, as an autopsy-supporting examination technique, has been shown to visualize different stages of myocardial infarction. METHODS: In 136 human forensic corpses, a post-mortem cardiac MR examination was carried out prior to forensic autopsy. Short-axis and horizontal long-axis images were acquired in situ on a 3-T system. RESULTS: In 76 cases, myocardial findings could be documented and correlated with the autopsy findings. Within these 76 study cases, a total of 124 myocardial lesions were detected on pmMRI (chronic: 25; subacute: 16; acute: 30; peracute: 53). Chronic, subacute and acute infarction cases correlated excellently with the myocardial findings at autopsy. Peracute infarctions (age range: minutes to approximately 1 h) were not visible on macroscopic autopsy or histological examination. Peracute infarction areas detected on pmMRI could be verified by targeted histological investigation in 62.3% of cases and could be related to a matching coronary finding in 84.9%. A total of 15.1% of peracute lesions on pmMRI lacked a matching coronary finding but presented with severe myocardial hypertrophy or cocaine intoxication, facilitating cardiac death without verifiable coronary stenosis. CONCLUSIONS: 3-T pmMRI visualizes chronic, subacute and acute myocardial infarction in situ. In peracute infarction, as a possible cause of sudden cardiac death, it demonstrates affected myocardial areas that are not visible at autopsy.
pmMRI should be considered a feasible post-mortem investigation technique for the deceased patient when no consent for a clinical autopsy can be obtained.
Abstract:
Survival statistics for the incident cases of the Vaud Cancer Registry over the period 1974-1980 were computed on the basis of an active follow-up with verification of vital status as of December 31, 1984. Crude and relative 5- and 10-year product-moment rates are presented in separate strata of sex, age and area of residence (urban or rural). Most of the rates are comparable with those in other published series from North America or Europe, but survival from gastric cancer (24% 5-year relative rate) tended to be higher, and that from bladder cancer (about 30%) lower, than in most other datasets. No significant difference in survival emerged according to residence in urban Lausanne vs. the surrounding (rural) areas. Interesting indications according to subsite (higher survival for the pyloric region vs. the gastric fundus, but no substantial differences across colon subsites), histology (higher rates for squamous carcinomas of the lung, seminomas of the testis and chronic lymphatic leukemias compared with other histotypes) and site of origin (higher survival for lower-limb melanomas) require further quantitative assessment in other population-based series. A Cox proportional hazards model applied to melanomatous skin cancers showed an independent favorable effect of female gender on long-term prognosis, and adverse implications of advanced age, advanced stage at diagnosis and tumor site other than the lower limb.
Abstract:
Abstract: Intensity-Modulated RadioTherapy (IMRT) is a treatment technique that uses beams with modulated fluence. IMRT is now widespread in industrialized countries, owing to its improved dose conformation around the target volume and its ability to lower doses to organs at risk in complex clinical cases. One way to carry out beam modulation is to sum smaller beams (beamlets) with the same incidence. This technique is called step-and-shoot IMRT. In a clinical context, treatment plans must be verified before the first irradiation. Plan verification is still an open issue for this technique. An independent monitor unit calculation (the monitor units being representative of the weight of each beamlet) cannot be performed for step-and-shoot IMRT, because the beamlet weights are not known a priori but are calculated by inverse planning.
Besides, treatment plan verification by comparison with measured data is time consuming and is performed in a simple geometry, usually a cubic water phantom with all machine angles set to zero. In this work, an independent method of monitor unit calculation for step-and-shoot IMRT is described. This method is based on the Monte Carlo code EGSnrc/BEAMnrc. The Monte Carlo model of the head of the linear accelerator was validated by comparing simulated and measured dose distributions in a large range of situations. The beamlets of an IMRT treatment plan are calculated individually by Monte Carlo, in the exact geometry of the treatment. The dose distributions of the beamlets are then converted into absorbed dose to water per monitor unit. The dose of the whole treatment in each volume element (voxel) can be expressed through a linear matrix equation of the monitor units and the dose per monitor unit of every beamlet. This equation is solved by a Non-Negative Least-Squares fit (NNLS) algorithm. However, not all voxels inside the patient volume can be used to solve this equation, because of computational limitations. Several ways of selecting voxels were tested, and the best choice consists in using the voxels inside the Planning Target Volume (PTV). The method presented in this work was tested on eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to global dose distributions that are clinically equivalent to those of the treatment planning system. This independent monitor unit calculation method for step-and-shoot IMRT is thus validated and can be used in clinical routine. A similar method could be applied to other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
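The voxel-selection step described above, restricting the linear system to voxels inside the PTV before the non-negative least-squares solve, can be sketched with a boolean mask. The dose grids and mask below are synthetic stand-ins, not data from the thesis.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic per-beamlet dose-to-water-per-MU grids on an 8x8x8 volume.
rng = np.random.default_rng(1)
shape = (8, 8, 8)
beamlet_doses = rng.uniform(0.0, 1.0, size=(4, *shape))

# Hypothetical PTV mask: a central cube of the dose grid.
ptv = np.zeros(shape, dtype=bool)
ptv[2:6, 2:6, 2:6] = True

# Keep only PTV voxels: each row of D is one selected voxel,
# each column one beamlet.
D = beamlet_doses[:, ptv].T

mu_true = np.array([100.0, 60.0, 0.0, 30.0])  # hypothetical MUs
planned = D @ mu_true
mu_fit, _ = nnls(D, planned)
```

Restricting the rows of D to the PTV keeps the system small enough to solve while still constraining the fit where dose accuracy matters most, which is the design choice the thesis reports as best.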
Dosimetric comparison of different treatment modalities for stereotactic radiosurgery of meningioma.
Abstract:
BACKGROUND: The objective of this study was to compare the three most prominent systems for stereotactic radiosurgery in terms of dosimetric characteristics: the Cyberknife system, the Gamma Knife Perfexion and the Novalis system. METHODS: Ten patients treated for recurrent grade I meningioma after surgery using the Cyberknife system were identified; the Cyberknife contours were exported and comparative treatment plans were generated for the Novalis system and Gamma Knife Perfexion. Dosimetric values were compared with respect to coverage, conformity index (CI), gradient index (GI) and beam-on time (BOT). RESULTS: All three systems showed comparable results in terms of coverage. The Gamma Knife and the Cyberknife system showed significantly higher levels of conformity than the Novalis system (Cyberknife vs Novalis, p = 0.002; Gamma Knife vs Novalis, p = 0.002). The Gamma Knife showed significantly steeper gradients compared with the Novalis and the Cyberknife system (Gamma Knife vs Novalis, p = 0.014; Gamma Knife vs Cyberknife, p = 0.002) and significantly longer beam-on times than the other two systems (BOT = 66 ± 21.3 min, Gamma Knife vs Novalis, p = 0.002; Gamma Knife vs Cyberknife, p = 0.002). CONCLUSIONS: The multiple focal entry systems (Gamma Knife and Cyberknife) achieve higher conformity than the Novalis system. The Gamma Knife delivers the steepest dose gradient of all examined systems. However, the Gamma Knife is known to require long beam-on times, and despite worse dose gradients, LINAC-based systems (Novalis and Cyberknife) offer image verification at the time of treatment delivery.
Abstract:
After incidentally learning about a hidden regularity, participants can either continue to solve the task as instructed or, alternatively, apply a shortcut. Past research suggests that the amount of conflict implied by adopting a shortcut biases the decision for vs. against continuing instruction-coherent task processing. We explored whether this decision might transfer from one incidental learning task to the next. Theories that conceptualize strategy change in incidental learning as a learning-plus-decision phenomenon suggest that high demands to adhere to instruction-coherent task processing in Task 1 will impede shortcut usage in Task 2, whereas low control demands will foster it. We sequentially applied two established incidental learning tasks differing in stimuli, responses and hidden regularity (the alphabet verification task followed by the serial reaction task, SRT). While some participants experienced complete redundancy in the task material of the alphabet verification task (low demands to adhere to instructions), for others the redundancy was only partial, so that shortcut application would have led to errors (high demands to follow instructions). The low-control-demand condition showed the strongest usage of the fixed, repeating response sequence in the SRT. The transfer results are in line with the learning-plus-decision view of strategy change in incidental learning rather than with resource theories of self-control.