898 results for Patterns of specialization
Abstract:
OBJECTIVE In patients with a long life expectancy and high-risk (HR) prostate cancer (PCa), the chance of dying from PCa is not negligible and may change significantly with the time elapsed since surgery. The aim of this study was to evaluate long-term survival patterns in young patients treated with radical prostatectomy (RP) for HRPCa. MATERIALS AND METHODS Within a multi-institutional cohort, 600 young patients (≤59 years) treated with RP between 1987 and 2012 for HRPCa (defined as at least one of the following adverse characteristics: prostate-specific antigen >20, cT3 or higher, biopsy Gleason sum 8-10) were identified. Smoothed cumulative incidence plots were used to assess cancer-specific mortality (CSM) and other-cause mortality (OCM) rates at 10, 15, and 20 years after RP. The same analyses were performed to assess the 5-year probability of CSM and OCM in patients who survived 5, 10, and 15 years after RP. A multivariable competing-risks regression model was fitted to identify predictors of CSM and OCM. RESULTS The 10-, 15-, and 20-year CSM and OCM rates were 11.6% and 5.5% vs. 15.5% and 13.5% vs. 18.4% and 19.3%, respectively. The 5-year probabilities of CSM and OCM among patients who survived 5, 10, and 15 years after RP were 6.4% and 2.7% vs. 4.6% and 9.6% vs. 4.2% and 8.2%, respectively. Year of surgery, pathological stage and Gleason score, surgical margin status, and lymph node invasion were the major determinants of CSM (all P≤0.03). Conversely, none of the covariates was significantly associated with OCM (all P≥0.09). CONCLUSIONS Very long-term cancer control in young high-risk patients after RP is highly satisfactory. PCa is the leading cause of death in these young patients during the first 10 years of survivorship after RP; thereafter, mortality unrelated to PCa becomes the main cause of death. Consequently, surgery should be considered among young patients with high-risk disease, and strict PCa follow-up should be enforced during the first 10 years of survivorship after RP.
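The mortality estimates above come from a competing-risks analysis; as a rough illustration, the sketch below implements an unsmoothed Aalen-Johansen cumulative incidence estimator in Python. The follow-up times, event codes, and patient data are hypothetical placeholders, and this stands in for, rather than reproduces, the authors' smoothed method.

```python
import numpy as np

def cumulative_incidence(time, event, cause):
    """Aalen-Johansen cumulative incidence of one cause under competing risks.

    time  : follow-up in years since RP
    event : 0 = censored, 1 = PCa death (CSM), 2 = other-cause death (OCM)
    cause : the event code whose incidence is estimated
    """
    surv, cif = 1.0, 0.0              # all-cause survival just before t; cumulative incidence
    times, cifs = [], []
    for t in np.unique(time[event > 0]):          # distinct event times, ascending
        at_risk = np.sum(time >= t)
        cif += surv * np.sum((time == t) & (event == cause)) / at_risk
        surv *= 1.0 - np.sum((time == t) & (event > 0)) / at_risk
        times.append(t)
        cifs.append(cif)
    return np.array(times), np.array(cifs)

# Hypothetical data for ten patients
t = np.array([2.1, 5.3, 7.8, 10.2, 12.5, 15.0, 16.4, 18.9, 20.0, 20.0])
e = np.array([1,   0,   2,   1,    0,    2,    1,    2,    0,    0])
print(cumulative_incidence(t, e, cause=1))   # CSM curve; cause=2 gives OCM
```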
Abstract:
INTRODUCTION External beam radiotherapy (EBRT), with or without androgen deprivation therapy (ADT), is an established treatment option for nonmetastatic prostate cancer. Despite high-level evidence from several randomized trials, risk group stratification and treatment recommendations vary due to contradictory or inconclusive data, particularly with regard to EBRT dose prescription and ADT duration. Our aim was to investigate current patterns of practice in primary EBRT for prostate cancer in Switzerland. MATERIALS AND METHODS Treatment recommendations on EBRT and ADT for localized and locally advanced prostate cancer were collected from 23 Swiss radiation oncology centers. Written recommendations were converted into center-specific decision trees, and analyzed for consensus and differences using a dedicated software tool. Additionally, specific radiotherapy planning and delivery techniques from the participating centers were assessed. RESULTS The most commonly prescribed radiation dose was 78 Gy (range 70-80 Gy) across all risk groups. ADT was recommended for intermediate-risk patients for 6 months in over 80% of the centers, and for high-risk patients for 2 or 3 years in over 90% of centers. For recommendations on combined EBRT and ADT treatment, consensus levels did not exceed 39% in any clinical scenario. Arc-based intensity-modulated radiotherapy (IMRT) is implemented for routine prostate cancer radiotherapy by 96% of the centers. CONCLUSION Among Swiss radiation oncology centers, considerable ranges of radiotherapy dose and ADT duration are routinely offered for localized and locally advanced prostate cancer. In the vast majority of cases, doses and durations are within the range of those described in current evidence-based guidelines.
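As a rough illustration of the center-specific decision trees described above, a single center's policy can be encoded as a risk-group lookup. The values below echo the consensus figures from the abstract (78 Gy; 6 months of ADT for intermediate risk; 2-3 years for high risk), but the mapping is hypothetical and represents no actual center's protocol.

```python
# Hypothetical encoding of one center's EBRT/ADT policy as a risk-group lookup;
# doses and durations echo the consensus values in the abstract, not a real protocol.
policy = {
    "low":          {"ebrt_gy": 78, "adt_months": 0},
    "intermediate": {"ebrt_gy": 78, "adt_months": 6},
    "high":         {"ebrt_gy": 78, "adt_months": 24},   # 2-3 years in >90% of centers
}

def recommend(risk_group: str) -> str:
    rec = policy[risk_group]
    adt = f"{rec['adt_months']} months ADT" if rec["adt_months"] else "no ADT"
    return f"{rec['ebrt_gy']} Gy EBRT + {adt}"

print(recommend("high"))   # -> "78 Gy EBRT + 24 months ADT"
```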
Abstract:
Despite moderate improvements in the outcome of glioblastoma after first-line treatment with chemoradiation, recent clinical trials have failed to improve the prognosis of recurrent glioblastoma. In the absence of a standard of care, we aimed to investigate institutional treatment strategies to identify similarities and differences in the pattern of care for recurrent glioblastoma. We investigated re-treatment criteria and therapeutic pathways for recurrent glioblastoma at eight neuro-oncology centres in Switzerland, each with an established multidisciplinary tumour-board conference. Decision algorithms, differences and consensus were analysed using the objective consensus methodology. A total of 16 different treatment recommendations were identified, based on combinations of eight different decision criteria. The set of criteria implemented, as well as the set of treatments offered, differed across centres. For specific situations, up to six different treatment recommendations were provided by the eight centres. The only wide-ranging consensus identified was to offer best supportive care to unfit patients. A majority recommendation was identified for fit patients with large, non-operable early recurrence and unmethylated MGMT promoter status: here, bevacizumab was offered. In fit patients with late, non-operable recurrence of MGMT promoter-methylated glioblastoma, temozolomide was recommended by most. No other majority recommendations were present. In the absence of strong evidence, we identified few consensus recommendations for the treatment of recurrent glioblastoma. This contrasts with the limited number of available single drugs and treatment modalities. Clinical situations showing the greatest heterogeneity may be best suited for clinical trials, and second-opinion referrals are likely to yield diverging recommendations.
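The consensus levels reported here amount to tallying the modal recommendation per clinical scenario across the eight centres; a minimal sketch, where the recommendation lists are invented except for the two findings named in the abstract (best supportive care for unfit patients, a bevacizumab majority for fit patients with early non-operable MGMT-unmethylated recurrence):

```python
from collections import Counter

# Invented per-centre recommendations for two scenarios (8 centres each);
# only the overall pattern is taken from the abstract.
recommendations = {
    "unfit patient": ["best supportive care"] * 8,
    "fit, early, non-operable, MGMT unmethylated":
        ["bevacizumab"] * 5 + ["temozolomide rechallenge", "re-irradiation",
                               "best supportive care"],
}

for scenario, recs in recommendations.items():
    modal, votes = Counter(recs).most_common(1)[0]
    print(f"{scenario}: {modal} ({100 * votes / len(recs):.0f}% consensus)")
```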
Abstract:
The development of topography depends mainly on the interplay between uplift and erosion. These processes are controlled by various factors including climate, glaciers, lithology, seismic activity and short-term variables, such as anthropogenic impact. Many studies in orogens all over the world have shown how these controlling variables may affect the landscape's topography. In particular, it has been hypothesized that lithology exerts a dominant control on erosion rates and landscape morphology. However, clear demonstrations of this influence are rare and difficult to disentangle from the overprint of other signals such as climate or tectonics. In this study, we focus on the upper Rhône Basin in the Central Swiss Alps in order to explore the relation between topography and possible controlling variables, lithology in particular. The Rhône Basin has been affected by spatially variable uplift, heavy, orographically driven rainfall and multiple glaciations. Furthermore, lithology and erodibility vary substantially within the basin. Thanks to high-resolution geological, climatic and topographic data, the Rhône Basin is a suitable laboratory to explore these complexities. Elevation, relief, slope and hypsometric data, as well as river profile information from digital elevation models, are used to characterize the topography of around 50 tributary basins. Additionally, uplift over different timescales, glacial inheritance, precipitation patterns and erodibility of the underlying bedrock are quantified for each basin. Results show that the chosen topographic and controlling variables vary markedly between tributary basins. We investigate the link between observed topographic differences and the possible controlling variables through statistical analyses. Variations in elevation, slope and relief appear linked to differences in long-term uplift rate, whereas elevation distributions (hypsometry) and river profile shapes may be related to glacial imprint. This confirms that the landscape of the Rhône Basin has been strongly preconditioned by (past) uplift and glaciation. Linear discriminant analyses (LDAs), however, suggest a stronger link between observed topographic variations and differences in erodibility. We therefore conclude that despite evident glacial and tectonic conditioning, a lithologic control is still preserved and measurable in the landscape of the Rhône tributary basins.
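Two of the topographic metrics used above can be sketched directly from a DEM raster: mean slope from finite-difference gradients, and the hypsometric integral as (mean - min) / (max - min) elevation. The synthetic grid and the 25 m cell size below are placeholders.

```python
import numpy as np

def hypsometric_integral(elev):
    """Hypsometric integral: (mean - min) / (max - min) elevation.
    High values suggest high-standing, weakly dissected topography;
    glacially overdeepened basins tend to plot lower."""
    return (elev.mean() - elev.min()) / (elev.max() - elev.min())

def mean_slope_deg(elev, cellsize):
    """Mean slope in degrees from finite-difference gradients of the DEM."""
    dy, dx = np.gradient(elev, cellsize)
    return np.degrees(np.arctan(np.hypot(dx, dy))).mean()

# Synthetic 100 x 100 DEM of one tributary basin (placeholder elevations, 25 m cells)
rng = np.random.default_rng(0)
dem = 1500.0 + 800.0 * rng.random((100, 100))
print(hypsometric_integral(dem), mean_slope_deg(dem, cellsize=25.0))
```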
Abstract:
After the collapse of the Soviet Union and Yugoslavia, a number of actors entered the power struggle over the opportunity to shape the new order in the successor nation-states. In Serbia and Georgia, the historically hegemonic Orthodox Christian churches were among the first on the front lines of the contest for political and economic power. More than a decade has passed since the so-called Coloured Revolutions in Georgia and Serbia, and the Orthodox churches remain participants in the ongoing socio-political transition of these states. The revival of the public role of religion proved temporary in Serbia, followed by a gradual decline in the Orthodox Church's influence over political life and the legal process. In Georgia, however, the public and political role of religion increased rather than declined, albeit in changed form. Examining the degree to which the two Orthodox churches can influence the political agenda in Serbia and Georgia, the paper attempts to understand how church-state relations work in practice. Drawing on rich empirical data from the field (70 interviews with (arch)bishops, priests and religious clerics in Georgia and Serbia, together with field observations), the paper reflects on the themes around which the two Orthodox churches mobilize public protest in Serbia and Georgia. The paper further looks at the varying state responses and their broader implications for the church-state problematique.
Abstract:
Theoretical and empirical studies were conducted on the pattern of nucleotide and amino acid substitution in evolution, taking into account the effects of mutation at the nucleotide level and purifying selection at the amino acid level. A theoretical model for predicting the evolutionary change in electrophoretic mobility of a protein was also developed using information on the pattern of amino acid substitution. The specific problems studied and the main results obtained are as follows: (1) Estimation of the pattern of nucleotide substitution in nuclear DNA genomes. The patterns of point mutations and nucleotide substitutions among the four nucleotides are inferred from the evolutionary changes of pseudogenes and functional genes, respectively. Both patterns are non-random, the rate of change varying considerably with nucleotide pair, and in both cases transitions occur somewhat more frequently than transversions. In protein evolution, substitution occurs more often between amino acids with similar physico-chemical properties than between dissimilar amino acids. (2) Estimation of the pattern of nucleotide substitution in RNA genomes. The majority of mutations in retroviruses accumulate at the reverse transcription stage. Selection at the amino acid level is very weak, and almost non-existent between synonymous codons. The pattern of mutation is very different from that in DNA genomes. Nevertheless, the pattern of purifying selection at the amino acid level is similar to that in DNA genomes, although selection intensity is much weaker. (3) Evaluation of the determinants of molecular evolutionary rates in protein-coding genes. Analyses based on rates of nucleotide substitution in mammalian genes indicate that the rate of amino acid substitution of a protein is largely determined by its amino acid composition. The glycine content is shown to correlate strongly and negatively with the rate of substitution. Empirical formulae, called indices of mutability, are developed in order to predict the rate of molecular evolution of a protein from data on its amino acid sequence. (4) Studies on the evolutionary patterns of electrophoretic mobility of proteins. A theoretical model was constructed that predicts the electric charge of a protein at any given pH, and its isoelectric point, from data on its primary and quaternary structures. Using this model, the evolutionary change in electrophoretic mobilities of different proteins and the expected amount of electrophoretically hidden genetic variation were studied. In the absence of selection for the pI value, proteins will on average evolve toward a mildly basic pI. (Abstract shortened with permission of author.)
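Item (4) can be sketched in outline: a standard Henderson-Hasselbalch net-charge calculation over the protein's ionizable groups, with the isoelectric point found by bisection (net charge decreases monotonically with pH). The pKa constants and the example residue counts are textbook-style assumptions, not the author's actual model parameters.

```python
# Textbook pKa values (assumed; the author's model may use different constants)
PKA_POS = {"nterm": 9.0, "K": 10.5, "R": 12.5, "H": 6.0}            # basic groups
PKA_NEG = {"cterm": 2.0, "D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}   # acidic groups

def net_charge(counts, ph):
    """Henderson-Hasselbalch net charge at a given pH.
    counts maps group name -> number of that ionizable group."""
    pos = sum(n / (1 + 10 ** (ph - PKA_POS[g]))
              for g, n in counts.items() if g in PKA_POS)
    neg = sum(-n / (1 + 10 ** (PKA_NEG[g] - ph))
              for g, n in counts.items() if g in PKA_NEG)
    return pos + neg

def isoelectric_point(counts, lo=0.0, hi=14.0, tol=1e-4):
    """pH at which net charge crosses zero, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(counts, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical composition: counts of ionizable residues plus the two termini
protein = {"nterm": 1, "cterm": 1, "K": 12, "R": 5, "H": 2, "D": 8, "E": 10, "C": 1, "Y": 3}
print(f"pI = {isoelectric_point(protein):.2f}")
```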
Abstract:
DNA sequence variation is currently a major source of data for studying human origins, evolution, and demographic history, and for detecting linkage association of complex diseases. In this dissertation, I investigated DNA variation in worldwide populations from two ∼10 kb autosomal regions on 22q11.2 (noncoding) and 1q24 (intronic). A total of 75 variant sites were found among 128 human sequences in the 22q11.2 region, yielding an estimate of 0.088% for nucleotide diversity (π), and a total of 52 variant sites were found among 122 human sequences in the 1q24 region, with an estimated π value of 0.057%. The data from these two regions, and from a 10 kb noncoding region on Xq13.3, all show a strong excess of low-frequency variants relative to the expectation for an equilibrium population, indicating a relatively recent population expansion. The effective population sizes estimated from the three regions were 11,000, 12,700, and 8,600, respectively, close to the commonly used value of 10,000. In each of the two autosomal regions, the age of the most recent common ancestor (MRCA) was estimated to be older than 1 million years among all the sequences and ∼600,000 years among non-African sequences, providing the first evidence from autosomal noncoding or intronic regions for a genetic history of humans much more ancient than the emergence of modern humans. This ancient genetic history indicates no severe bottleneck during the evolution of humans in the last half million years; otherwise, much of the ancient genetic history would have been lost. This study strongly suggests that both the “out of Africa” and the multiregional models are too simple to explain the evolution of modern humans. A compilation of genome-wide data revealed that nucleotide diversity is highest in autosomal regions, intermediate in X-linked regions, and lowest in Y-linked regions, suggesting background selection or selective sweeps at Y-linked loci. In general, nucleotide diversity in humans is low compared to that in chimpanzee and Drosophila populations.
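The diversity statistic π used here is the mean pairwise proportion of differing sites in an alignment; under the neutral model it estimates θ = 4Neμ, so Ne ≈ π/(4μ). A minimal sketch with a toy alignment and an assumed per-site mutation rate:

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """pi: mean pairwise proportion of differing sites across aligned sequences."""
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

# Toy alignment (placeholder); the study used ~10 kb regions in >120 sequences
seqs = ["ACGTACGTAC", "ACGTACGTAT", "ACGAACGTAC", "ACGTACGTAC"]
pi = nucleotide_diversity(seqs)          # toy value, not the study's estimate
mu = 2.5e-8                              # assumed per-site, per-generation mutation rate
print(pi, pi / (4 * mu))                 # pi and the implied Ne = pi / (4 * mu)
```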
Abstract:
The purpose of this study is to investigate the effects of predictor-variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average mean, variance, and correlations among the predictor variables are assessed after multiple imputation. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, in part due to the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully observed variable included alongside variables subject to missingness in the multiple imputation process and subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
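A compact Monte Carlo sketch of the study's core design: estimate the Type I error of a logistic-regression coefficient after multiple imputation, pooling with Rubin's rules. The marginal-normal imputation below is a deliberate simplification of the multivariate models named above, and all sample sizes, missingness rates, and counts are placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
M, n_sims, n = 5, 500, 100        # imputations per dataset, simulated datasets, sample size
rejections = 0

for _ in range(n_sims):
    x = rng.normal(size=n)                        # predictor; the null is true (no effect on y)
    y = rng.binomial(1, 0.5, size=n).astype(float)
    x[rng.random(n) < 0.3] = np.nan               # 30% missing completely at random

    obs = x[~np.isnan(x)]
    est, var = [], []
    for _ in range(M):                            # simplified normal-model imputation
        xi = x.copy()
        miss = np.isnan(xi)
        xi[miss] = rng.normal(obs.mean(), obs.std(ddof=1), size=miss.sum())
        fit = sm.Logit(y, sm.add_constant(xi)).fit(disp=0)
        est.append(fit.params[1])
        var.append(fit.bse[1] ** 2)

    qbar = np.mean(est)                           # Rubin's rules: pooled estimate
    total_var = np.mean(var) + (1 + 1 / M) * np.var(est, ddof=1)
    rejections += abs(qbar) / np.sqrt(total_var) > 1.96

print("Estimated Type I error rate:", rejections / n_sims)   # nominal value: 0.05
```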
Abstract:
When the Shakers established communal farms in the Ohio Valley, they encountered a new agricultural environment that was substantially different from the familiar soils, climates, and markets of New England and the Hudson Valley. The ways in which their response to these new conditions differed by region have not been well documented. We examine patterns of specialization among the Shakers using the manuscript schedules of the federal Agricultural Censuses from 1850 through 1880. For each Shaker unit, we also recorded a random sample of five farms in the same township (or all available farms if there were fewer than five). The sample of neighboring farms included 75 in 1850, 70 in the next two census years, and 66 in 1880. A Herfindahl-type index suggested that, although the level of specialization was lower among the Shakers than among their neighbors, trends in specialization by the Shakers and their neighbors were remarkably similar when considered by region. Both Eastern and Western Shakers were more heavily committed to dairy and produce than were their neighbors, while Western Shakers produced more grains than did Eastern Shakers, a pattern mirrored in nearby family farms. Livestock and related production was far more important to the Eastern Shakers than to the Western Shakers, again similar to patterns in the census returns from other farms. We conclude that, despite the obvious scale and organizational differences, Shaker production decisions were based on the same comparative advantages that determined the production decisions of family farms.
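The Herfindahl-type index referred to above is the sum of squared output shares: 1.0 for a farm fully specialized in one product, 1/k for one spread evenly across k products. A minimal sketch with invented census-style output values:

```python
def herfindahl(values):
    """Herfindahl-type specialization index: sum of squared output shares.
    1.0 = fully specialized; 1/k = evenly diversified across k products."""
    total = sum(values)
    return sum((v / total) ** 2 for v in values)

# Hypothetical 1850-style output values (dollars) for two farms
shaker_farm   = {"dairy": 400, "produce": 350, "grain": 150, "livestock": 100}
neighbor_farm = {"dairy": 100, "produce": 50,  "grain": 700, "livestock": 150}

print(herfindahl(shaker_farm.values()))    # 0.315: less specialized
print(herfindahl(neighbor_farm.values()))  # 0.525: more specialized
```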
Abstract:
A retrospective cohort study was conducted among 1542 patients diagnosed with CLL between 1970 and 2001 at the M. D. Anderson Cancer Center (MDACC). Changes in clinical characteristics and the impact of CLL on life expectancy were assessed across three decades (1970–2001), and the role of clinical factors in the prognosis of CLL was evaluated among patients diagnosed between 1985 and 2001 using Kaplan-Meier and Cox proportional hazards methods. Among 1485 CLL patients diagnosed from 1970 to 2001, patients in the recent cohort (1985–2001) were diagnosed at a younger age and an earlier stage than those in the earliest cohort (1970–1984). There was a 44% reduction in mortality among patients diagnosed in 1985–1995 compared to those diagnosed in 1970–1984, after adjusting for age, sex, and Rai stage among patients who ever received treatment. There was an overall loss of 11 years of life expectancy (5 years for stage 0) among the 1485 patients compared with the expected life expectancy of the age-, sex- and race-matched US general population, with a 43% decrease in the 10-year survival rate. Abnormal cytogenetics was associated with shorter progression-free (PF) survival after adjusting for age, sex, Rai stage, and beta-2 microglobulin (beta-2M), whereas older age, abnormal cytogenetics, and a higher beta-2M level were adverse predictors of overall survival. No increased overall risk of second cancers was observed; however, patients who received treatment for CLL had an elevated risk of developing AML and HD. Two out of three patients who developed AML had been treated with alkylating agents. In conclusion, CLL patients had improved survival over time. The identification of clinical predictors of PF/overall survival has important clinical significance, and close surveillance for the development of second cancers is critical to improving the quality of life of long-term survivors.
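The survival comparisons rest on the Kaplan-Meier estimator, which multiplies conditional survival fractions at each observed death time; a minimal sketch with hypothetical follow-up data (the Cox adjustments for age, sex, Rai stage, and beta-2M are not reproduced here):

```python
import numpy as np

def kaplan_meier(time, died):
    """Kaplan-Meier survival estimate S(t) at each distinct event time."""
    times, surv = [], []
    s = 1.0
    for t in np.unique(time[died == 1]):
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (died == 1))
        s *= 1.0 - deaths / at_risk
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

# Hypothetical follow-up (years) and death indicator for eight patients
t = np.array([1.2, 3.4, 4.0, 5.5, 7.1, 8.8, 10.0, 10.0])
d = np.array([1,   1,   0,   1,   0,   1,   0,    0])
times, s = kaplan_meier(t, d)
print(dict(zip(times.round(1), s.round(3))))   # the last value approximates 10-year survival
```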
Abstract:
Usual food choices during the past year, self-reported changes in consumption of three important food groups, and weight change or stability were the questions addressed in this cross-sectional survey and retrospective review. The subjects were 141 patients with Hodgkin's disease or other B-cell types of lymphoma, within their first three years after completion of initial treatment for lymphoma at the University of Texas M. D. Anderson Cancer Center in Houston, Texas. The previously validated Block-98 Food Frequency Questionnaire was used to estimate usual food choices during the past year. Supplementary questions asked about changes in consumption of breads and cereals (white or whole grain) and relative amounts of fruits and vegetables compared with before diagnosis and treatment. Over half of the subjects reported consuming more whole grains, fruits, and/or vegetables, and almost three-quarters of those not reporting such changes had been consuming whole grains before diagnosis and treatment. Various dietary patterns were defined in order to learn whether proportionately more patients who changed in healthy directions fulfilled recognized nutritional guidelines such as the 5-A-Day fruit and vegetable recommendation and the Dietary Reference Intakes (DRIs) for selected nutrients. The small sizes of the dietary-pattern subgroups limited the power of this study to detect differences in meeting recommended dietary guidelines. Nevertheless, insufficient and excessive intakes were detected among individuals with respect to fruits and vegetables, fats, calcium, selenium, iron, folate, and vitamin A. The prevalence of inadequate or excess intakes of foods or nutrients, even among those who perceived that they had increased or continued to eat whole grains and/or fruits and vegetables, is of concern because of recognized effects on general health and potential cancer-related effects. Over half of the subjects were overweight or obese (by BMI category) at their first visit to this cancer center, and that proportion increased to almost three-quarters by their last follow-up visits. Men were significantly heavier than women, but no other significant differences in BMI were found, even after accounting for prescribed steroids and dietary patterns.
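The BMI categories referred to above use weight in kilograms divided by height in meters squared, with the standard adult cut-points; a trivial sketch for one hypothetical patient:

```python
def bmi_category(weight_kg, height_m):
    """Standard adult BMI categories (WHO cut-points)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return bmi, "underweight"
    if bmi < 25.0:
        return bmi, "normal"
    if bmi < 30.0:
        return bmi, "overweight"
    return bmi, "obese"

print(bmi_category(88.0, 1.75))   # hypothetical patient -> (~28.7, 'overweight')
```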
Abstract:
The situational and interpersonal characteristics of homicides occurring in Houston, Texas, during 1987 were investigated. A total of 328 cases were ascertained by linking police computer data, medical examiner's records, and death certificate information. The medical examiner's records contained all of the ascertained cases. The comparability ratios between the medical examiner's records and the police and vital statistics data were 1.03 and 0.966, respectively. Data inconsistencies were found among the three information sources on Spanish surname, age, race/ethnicity, external-cause-of-death coding, alcohol and drug involvement, weapon/method used, and Hispanic immigration status. Recommendations are made for improving the quality of homicide information gathered and for linking homicide surveillance systems. Males constituted 82% of all victims. The age-adjusted homicide rate for Blacks was 31.1 per 100,000 population, for Hispanics 19.2, and for Anglos 5.4. Among males, Blacks had an age-adjusted rate of 54.5, Hispanics 31.0, and Anglos 7.5. Among females, Blacks had an age-adjusted rate of 9.3, Hispanics 6.1, and Anglos 3.1. Black males, ages 25-34, had the highest homicide rate, at 96.5. Half of all homicides occurred in a residence. Among Hispanic males, homicides occurred most often in the street. Firearms were used to commit 64% of the homicides. Arguments preceded 58% of all cases. Nearly two-thirds of the victims knew their assailant. Only 15% of males, compared to 62% of females, were killed by a spouse, an intimate acquaintance, or a family member. Blacks (93%) and Hispanics (88%) were more likely than Anglos (70%) to have been killed by persons of the same race/ethnicity. Nearly three-fourths of all Houston Hispanic homicide victims were foreign born. Alcohol was detected in 47% of the victims tested. Nearly one-third of those tested had blood alcohol concentrations (BACs) greater than 100 mg%. Males (53%) were more likely than females (20%) to have positive BACs. Hispanic males (64%) were more likely to have detectable BACs than either Black (51%) or Anglo (44%) males. Illegal drugs were detected in 20% of the victims tested. One-fourth of the victims who tested positive for drugs had more than one drug in their system at death. The stimulant cocaine was the most commonly detected drug, comprising 53% of all illegal drugs identified. Recommendations for the primary, secondary, and tertiary prevention of homicide and for future homicide research are made.
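The age-adjusted rates reported here come from direct standardization: each age-specific rate is weighted by the standard population's share of that age group. A minimal sketch in which the counts, populations, and standard weights are all invented placeholders (not the standard actually used for the 1987 data):

```python
# Direct standardization: sum of age-specific rates weighted by standard-population shares
age_groups = ["0-14", "15-24", "25-34", "35-44", "45+"]
deaths     = [2, 18, 35, 20, 10]                  # hypothetical homicide counts
population = [60000, 45000, 50000, 40000, 80000]  # hypothetical group populations
std_weight = [0.22, 0.15, 0.17, 0.14, 0.32]       # placeholder shares; must sum to 1.0

adjusted = sum(w * (d / p) * 100000
               for w, d, p in zip(std_weight, deaths, population))
print(f"Age-adjusted rate: {adjusted:.1f} per 100,000")
```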