226 results for Small-error approximation
Abstract:
The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Ionization chambers are therefore calibrated in absorbed dose to water, the same quantity calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation on the scale of the secondary-particle range. The measured quantity is then close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water $\bar{D}^{\mathrm{low}}_{w,Q,V_1}$, averaged over a scoring volume $V_1$ in a geometry where $V_1$ is filled with the low-density medium, to the absorbed dose to water $\bar{D}^{\mathrm{low}}_{w,Q,V_2}$, averaged over a volume $V_2$ in a geometry where $V_2$ is filled with water. In the Monte Carlo simulations, $\bar{D}^{\mathrm{low}}_{w,Q,V_2}$ is obtained by replacing the volume of the ionization chamber with an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in phantom, photon beam energy, phantom density, and field size.
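As an illustration of the ratio just described, here is a minimal sketch (all arrays and names are hypothetical placeholders; a real study would score dose with a Monte Carlo code such as EGSnrc or GEANT4) computing a correction factor from two sets of per-voxel dose scores:

```python
import numpy as np

def volume_averaged_dose(dose_per_voxel, voxel_mask):
    """Average the scored dose over the voxels of a scoring volume."""
    return dose_per_voxel[voxel_mask].mean()

# Hypothetical per-voxel dose scores from two Monte Carlo runs:
# run 1: scoring volume V1 filled with the low-density medium,
# run 2: the chamber volume replaced by an equivalent volume of water (V2).
rng = np.random.default_rng(0)
dose_run_low = rng.normal(1.00, 0.01, size=1000)    # placeholder scores
dose_run_water = rng.normal(0.80, 0.01, size=1000)  # placeholder scores
mask_V1 = np.arange(1000) < 200   # voxels belonging to V1
mask_V2 = mask_V1                 # V2 occupies the same location as V1

D_low = volume_averaged_dose(dose_run_low, mask_V1)
D_water = volume_averaged_dose(dose_run_water, mask_V2)

# Correction factor: ratio of the two volume-averaged doses.
k = D_low / D_water
print(f"correction factor k = {k:.3f}")
```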
Abstract:
The weak selection approximation of population genetics has made possible the analysis of social evolution under a considerable variety of biological scenarios. Despite its extensive usage, the accuracy of weak selection in predicting the emergence of altruism under limited dispersal as selection intensity increases remains unclear. Here, we derive the condition for the spread of an altruistic mutant in the infinite island model of dispersal under a Moran reproductive process and arbitrary strength of selection. The simplicity of the model allows us to compare the weak and strong selection regimes analytically. Our results demonstrate that the weak selection approximation is robust to moderate increases in selection intensity and therefore provides a good approximation for understanding the invasion of altruism in spatially structured populations. In particular, we find that the weak selection approximation is excellent even when selection is very strong, provided either that migration is much stronger than selection or that patches are large. Importantly, we emphasize that the weak selection approximation provides the ideal condition for the invasion of altruism: increasing selection intensity will impede the emergence of altruism. We argue that this should also hold for more complicated life cycles and for culturally transmitted altruism. Using the weak selection approximation is therefore unlikely to miss any demographic scenario that leads to the evolution of altruism under limited dispersal.
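As a toy illustration of the weak- versus strong-selection comparison (a well-mixed Moran process with constant selection, not the island model analyzed in the abstract), the sketch below contrasts the exact fixation probability of a single mutant of fitness 1 + s with its first-order weak-selection expansion:

```python
def fixation_exact(s, N):
    """Exact fixation probability of one mutant with fitness 1 + s
    in a well-mixed Moran process of population size N."""
    r = 1.0 + s
    return (1 - 1 / r) / (1 - r ** (-N))

def fixation_weak(s, N):
    """First-order (weak selection) expansion around s = 0:
    rho ~ 1/N + s * (N - 1) / (2 * N)."""
    return 1 / N + s * (N - 1) / (2 * N)

N = 100
for s in [0.001, 0.01, 0.1, 0.5]:
    exact, weak = fixation_exact(s, N), fixation_weak(s, N)
    print(f"s = {s:5.3f}: exact = {exact:.4f}, weak = {weak:.4f}, "
          f"rel. error = {abs(weak - exact) / exact:.1%}")
```

The linearization tracks the exact value closely for small s and degrades as selection strengthens, which is the kind of robustness question the abstract addresses in the structured-population setting.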
Abstract:
PURPOSE: The goal of this study was to compare magnetic resonance enterography (MRE) and video capsule endoscopy (VCE) in suspected small bowel disease. MATERIALS AND METHODS: Nineteen patients with suspected small bowel disease participated in a prospective clinical comparison of MRE versus VCE. Both methods were evaluated separately and in conjunction with respect to a combined diagnostic endpoint based on clinical, laboratory, surgical, and histopathological findings. Fisher's exact and kappa (κ) tests were used to compare MRE and VCE. RESULTS: Small bowel pathologies were found in 15 of 19 patients: Crohn's disease (n = 5), lymphoma (n = 4), lymphangioma (n = 1), adenocarcinoma (n = 1), postradiation enteropathy (n = 1), NSAID-induced enteropathy (n = 1), angiodysplasia (n = 1), and small bowel adhesions (n = 1). VCE and MRE, separately and in conjunction, showed sensitivities of 92.9%, 71.4%, and 100% and specificities of 80%, 60%, and 80% (κ = 0.73 vs. κ = 0.29; P = 0.31 / κ = 0.85), respectively. In four patients, VCE depicted mucosal pathologies missed by MRE. MRE revealed 19 extraenteric findings in 11 patients as well as small bowel adhesions not detected on VCE (n = 1). CONCLUSION: VCE can readily depict and characterize subtle mucosal lesions missed at MRE, whereas MRE yields additional mural, perienteric, and extraenteric information. Thus, VCE and MRE appear to be complementary methods which, when used in conjunction, may better characterize suspected small bowel disease.
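For reference, the reported statistics follow from standard 2×2 contingency calculations; the counts below are placeholders chosen only to reproduce a 92.9% sensitivity and 80% specificity, not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for the 2x2 agreement table [[a, b], [c, d]],
    where a = both tests positive and d = both tests negative."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Placeholder counts: 13 of 14 diseased detected, 4 of 5 disease-free ruled out.
sens, spec = sensitivity_specificity(tp=13, fn=1, tn=4, fp=1)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
print(f"kappa = {cohens_kappa(12, 2, 1, 4):.2f}")  # illustrative agreement table
```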
Abstract:
This thesis aims to investigate the extremal properties of certain risk models of interest in various applications from insurance, finance, and statistics. It develops along two principal lines. In the first part, we focus on two univariate risk models, namely deflated risk and reinsurance risk models. We investigate their tail expansions under certain tail conditions on the common risks. The main results are illustrated by typical examples and numerical simulations. Finally, the findings are formulated into applications in insurance, for instance approximations of Value-at-Risk and conditional tail expectations. The second part of this thesis is devoted to three bivariate models. The first model concerns bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability. These estimators are flexible due to a tuning parameter, and their asymptotic distributions are obtained under certain second-order bivariate slowly varying conditions on the model. We then give some examples and present a small Monte Carlo simulation study, followed by an application to a real insurance data set.
The objective of our second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are widely useful in statistical applications; they are generated mainly by the normal mean-variance mixture and the scaled skew-normal mixture, which distinguish the tail dependence structure, as shown by our principal results. The third bivariate risk model concerns the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
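The abstract does not specify the proposed estimator class; as a generic illustration, the sketch below implements the standard nonparametric estimator of the upper tail dependence coefficient, in which the number of retained order statistics k plays the role of a tuning parameter:

```python
import numpy as np

def upper_tail_dependence(x, y, k):
    """Nonparametric estimate of the upper tail dependence coefficient:
    (1/k) * sum of 1{x_i > x_(n-k), y_i > y_(n-k)}, i.e. the fraction of
    the k largest x-values whose paired y-value is also among the k largest."""
    n = len(x)
    x_thresh = np.sort(x)[n - k - 1]
    y_thresh = np.sort(y)[n - k - 1]
    return np.mean((x > x_thresh) & (y > y_thresh)) * n / k

# Gaussian pairs have asymptotically zero tail dependence; Student-t pairs do not.
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=20000)
w = rng.chisquare(3, size=20000) / 3
t = z / np.sqrt(w)[:, None]          # bivariate t with 3 degrees of freedom
print("Gaussian: ", upper_tail_dependence(z[:, 0], z[:, 1], k=200))
print("Student-t:", upper_tail_dependence(t[:, 0], t[:, 1], k=200))
```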
Abstract:
Summary: Forensic science - both as a source of and as a remedy for error potentially leading to judicial error - was studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories/units in Switzerland and abroad. The literature review covers, among other areas, the quality of forensic science work in general, the complex interaction between science and law, and specific propositions on error sources not directly related to that interaction. A list of potential error sources, all the way from the crime scene to the writing of the report, was also established. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, owing to its heavy reliance on visual observation and to recent cases of misidentification. Results of the tests performed with forensic science students suggest that the decision-making stages are the most vulnerable to stimuli inducing observational biases. In the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, for example potential and existing error sources in their work, the limitations of what can be done with forensic science, and the possibilities and tools for minimising errors. Training and education to raise the quality of forensic science were discussed, together with possible solutions to minimise the risk of errors in forensic science. In addition, the time for which samples of physical evidence are kept was determined. The results show considerable agreement on most subjects among the international participants. Their opinions on possible explanations for the occurrence of such problems, and on the relative weight of such errors in the three stages of crime scene, laboratory, and report writing, nevertheless disagree with opinions widely represented in the existing literature. This research therefore made it possible to obtain a better view of the interaction between forensic science and judicial error and to propose practical recommendations to minimise their occurrence.
Abstract:
The development of orally active small-molecule inhibitors of the epidermal growth factor receptor (EGFR) has led to new treatment options for non-small cell lung cancer (NSCLC). Patients with activating mutations of the EGFR gene show sensitivity to, and clinical benefit from, treatment with EGFR tyrosine kinase inhibitors (EGFR-TKIs). First-generation reversible ATP-competitive EGFR-TKIs, gefitinib and erlotinib, are effective as first-line, second-line, or maintenance therapy. Despite initial benefit, most patients develop resistance within a year, with 50-60% of cases related to the appearance of a T790M gatekeeper mutation. Newer, irreversible EGFR-TKIs - afatinib and dacomitinib - covalently bind to and inhibit multiple receptors in the ErbB family (EGFR, HER2, and HER4). These agents have been evaluated mainly for first-line treatment but also in the setting of acquired resistance to first-generation EGFR-TKIs. Afatinib is the first ErbB family blocker approved for patients with NSCLC with activating EGFR mutations; dacomitinib is in late-stage clinical development. Mutant-selective EGFR inhibitors (AZD9291, CO-1686, HM61713) that specifically target the T790M resistance mutation are in early development. The EGFR-TKIs differ in their spectrum of target kinases, reversibility of binding to the EGFR receptor, pharmacokinetics, and potential for drug-drug interactions, as discussed in this review. For the clinician, these differences are relevant in the setting of polymedicated patients with NSCLC, as well as from the perspective of innovative anticancer drug combination strategies.
Abstract:
OBJECTIVES: This study sought to establish an accurate and reproducible T2-mapping cardiac magnetic resonance (CMR) methodology at 3 T and to evaluate it in healthy volunteers and patients with myocardial infarction. BACKGROUND: Myocardial edema affects the T2 relaxation time on CMR. Therefore, T2-mapping has been established to characterize edema at 1.5 T. A 3 T implementation designed for longitudinal studies and aimed at guiding and monitoring therapy remained to be implemented, thoroughly characterized, and evaluated in vivo. METHODS: A free-breathing, navigator-gated radial CMR pulse sequence with an adiabatic T2-preparation module and an empirical fitting equation for T2 quantification was optimized using numerical simulations and validated at 3 T in a phantom study. Its reproducibility for myocardial T2 quantification was then ascertained in healthy volunteers and improved using an external reference phantom with known T2. In a small cohort of patients with established myocardial infarction, the local T2 value and the extent of the edematous region were determined and compared with conventional T2-weighted CMR and x-ray coronary angiography, where available. RESULTS: The numerical simulations and phantom study demonstrated that the empirical fitting equation is significantly more accurate for T2 quantification than the more conventional exponential decay. The volunteer study consistently demonstrated a reproducibility error as low as 2 ± 1% using the external reference phantom and an average myocardial T2 of 38.5 ± 4.5 ms. Intraobserver and interobserver variability in the volunteers were -0.04 ± 0.89 ms (p = 0.86) and -0.23 ± 0.91 ms (p = 0.87), respectively. In the infarction patients, the T2 in edema was 62.4 ± 9.2 ms and was consistent with the x-ray angiographic findings. Simultaneously, the extent of the edematous region by T2-mapping correlated well with that from the T2-weighted images (r = 0.91). CONCLUSIONS: The new, well-characterized methodology enables robust and accurate cardiac T2-mapping at 3 T with high spatial resolution, while the addition of a reference phantom improves reproducibility. This technique may be well suited for longitudinal studies in patients with suspected or established heart disease.
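The paper's empirical fitting equation is not given in the abstract; for context, the sketch below fits the conventional mono-exponential decay S(TE) = S0 · exp(-TE/T2) that it is compared against, using synthetic data at hypothetical T2-preparation times:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(te, s0, t2):
    """Conventional T2 signal model: S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

# Synthetic signal at hypothetical T2-preparation times (ms), with noise.
te = np.array([0.0, 25.0, 45.0, 65.0])
true_t2 = 38.5                       # ms, on the scale of healthy myocardium
rng = np.random.default_rng(42)
signal = mono_exponential(te, 1.0, true_t2) + rng.normal(0, 0.01, te.size)

(fit_s0, fit_t2), _ = curve_fit(mono_exponential, te, signal, p0=(1.0, 50.0))
print(f"fitted T2 = {fit_t2:.1f} ms (true {true_t2} ms)")
```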
Abstract:
The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous (elliptic or parabolic) problems; it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned to the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and to accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow an arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the peculiar characteristic that the residual on the coarse grid is zero at any iteration (so that conservative fluxes can be obtained).
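The MsFV operators themselves are beyond the scope of an abstract, but the overall construction, a Krylov method (GMRES) wrapped around a two-level local-plus-coarse approximate solver, can be illustrated generically; the aggregation-based coarse operator and Jacobi smoother below are stand-ins, not the actual MsFV operators:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, splu

n, nc = 256, 16                     # fine-scale and coarse-scale cells
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)                      # 1D Poisson model problem

# Piecewise-constant restriction over blocks of n // nc fine cells; its
# transpose serves as prolongation. The coarse operator is Galerkin-type.
R = sp.kron(sp.eye(nc), np.ones((1, n // nc)), format="csc")
Ac_lu = splu((R @ A @ R.T).tocsc())
Dinv = 1.0 / A.diagonal()

def two_level(r):
    """Additive two-level preconditioner: coarse-grid correction plus a
    Jacobi-type smoother (a stand-in for the MsFV local solves)."""
    return R.T @ Ac_lu.solve(R @ r) + Dinv * r

M = LinearOperator((n, n), matvec=two_level)
x, info = gmres(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(b - A @ x))
```

The design mirrors the paper's viewpoint: the two-level operator alone is a fixed-point scheme of limited robustness, but used as a preconditioner inside GMRES it yields a convergent linear solver.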
Abstract:
Peripheral T-cell lymphoma (PTCL) is a rare, heterogeneous type of non-Hodgkin lymphoma (NHL) that, in general, is associated with a poor clinical outcome. A current major challenge is therefore the discovery of new prognostic tools for this disease. In the present study, a cohort of 122 patients with PTCL was collected from a multicentric T-cell lymphoma consortium (TENOMIC). We analyzed the expression of 80 small nucleolar RNAs (snoRNAs) using high-throughput quantitative PCR. We demonstrate that snoRNA expression analysis may be useful both in the diagnosis of some subtypes of PTCL and in the prognostication of PTCL-not otherwise specified (PTCL-NOS; n = 26) and angioimmunoblastic T-cell lymphoma (AITL; n = 46) patients treated with chemotherapy. Like miRNAs, snoRNAs are globally down-regulated in tumor cells compared with their normal counterparts. In the present study, the snoRNA signature was robust enough to differentiate anaplastic large cell lymphoma (n = 32) from other PTCLs. For PTCL-NOS and AITL, we obtained two distinct prognostic signatures with a reduced set of three genes. Of particular interest was the prognostic value of the HBII-239 snoRNA, which was significantly over-expressed in cases of AITL and PTCL-NOS with favorable outcomes. Our results suggest that snoRNA expression profiles may have diagnostic and prognostic significance for PTCL, offering new tools for patient care and follow-up.
Abstract:
The genome size, complexity, and ploidy of the arbuscular mycorrhizal fungus (AMF) Glomus intraradices were determined using flow cytometry, reassociation kinetics, and genomic reconstruction. Nuclei of G. intraradices from in vitro culture were analyzed by flow cytometry. The estimated average length of DNA per nucleus was 14.07 ± 3.52 Mb. Reassociation kinetics on G. intraradices DNA indicated a haploid genome size of approximately 16.54 Mb, comprising 88.36% single-copy DNA, 1.59% repetitive DNA, and 10.05% fold-back DNA. To determine ploidy, the DNA content per nucleus measured by flow cytometry was compared with the genome size estimated by reassociation kinetics. G. intraradices was found to have a DNA index (DNA per nucleus divided by haploid genome size) of approximately 0.9, indicating that it is haploid. Genomic DNA of G. intraradices was also analyzed by genomic reconstruction using four genes (Malate synthase, RecA, Rad32, and Hsp88). Because flow cytometry and reassociation kinetics revealed the genome size of G. intraradices and showed that it is haploid, a similar value for genome size should be found using genomic reconstruction, as long as the genes studied are single copy. The average genome size estimate was 15.74 ± 1.69 Mb, indicating that these four genes are single copy per haploid genome and per nucleus of G. intraradices. Our results show that the genome size of G. intraradices is much smaller than estimates for other AMF, and that the unusually high within-spore genetic variation seen in this fungus cannot be due to high ploidy.
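The ploidy inference rests on a simple ratio; the following lines reproduce the DNA index from the abstract's own numbers:

```python
# DNA index: DNA content per nucleus (flow cytometry) divided by the
# haploid genome size (reassociation kinetics). A value near 1 => haploid.
dna_per_nucleus_mb = 14.07   # Mb, flow cytometry estimate
haploid_genome_mb = 16.54    # Mb, reassociation kinetics estimate

dna_index = dna_per_nucleus_mb / haploid_genome_mb
print(f"DNA index = {dna_index:.2f}")   # ~0.85, i.e. approximately 0.9
```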
Abstract:
BACKGROUND: Mediastinal lymph-node dissection was compared with systematic mediastinal lymph-node sampling in patients undergoing complete resection for non-small cell lung cancer with respect to morbidity, duration of chest tube drainage and hospitalization, survival, disease-free survival, and site of recurrence. METHODS: A consecutive series of one hundred patients with non-small cell lung cancer, clinical stage T1-3 N0-1 after standardized staging, was divided into two groups of 50 patients each, according to the technique of intraoperative mediastinal lymph-node assessment (dissection versus sampling). Mediastinal lymph-node dissection consisted of removal of all lymphatic tissue within defined anatomic landmarks of stations 2-4 and 7-9 on the right side, and stations 4-9 on the left side, according to the classification of the American Thoracic Society. Systematic mediastinal lymph-node sampling consisted of harvesting one or more representative lymph nodes from stations 2-4 and 7-9 on the right side, and stations 4-9 on the left side. RESULTS: All patients had complete resection. A mean follow-up time of 89 months was achieved in 92 patients. The two groups of patients were comparable with respect to age, gender, performance status, tumor stage, histology, extent of lung resection, and follow-up time. No significant difference was found between the groups regarding the duration of chest tube drainage, hospitalization, or morbidity. However, dissection required a longer operation time than sampling (179 ± 38 min versus 149 ± 37 min, p < 0.001). There was no significant difference in overall survival between the two groups; however, patients with stage I disease had a significantly longer disease-free survival after dissection than after sampling (60.2 ± 7 versus 44.8 ± 8 months, p < 0.03). Local recurrence was significantly higher after sampling than after dissection in patients with stage I tumors (45% versus 12.5%, p = 0.02) and in patients with a tumor-negative mediastinum (N0/N1 disease) (46% versus 13%, p = 0.004). CONCLUSION: Our results suggest that mediastinal lymph-node dissection may provide longer disease-free survival in stage I non-small cell lung cancer and, most importantly, better local tumor control than mediastinal lymph-node sampling after complete resection for N0/N1 disease, without leading to increased morbidity.