1000 results for Martín Biedma
Abstract:
INTRODUCTION Radiotherapy outcomes might be further improved by a greater understanding of the individual variations in normal tissue reactions that determine tolerance. Most published studies on radiation toxicity have been performed retrospectively. Our prospective study was launched in 1996 to measure the in vitro radiosensitivity of peripheral blood lymphocytes before treatment with radical radiotherapy in patients with breast cancer, and to assess the early and late radiation skin side effects in the same group of patients. We prospectively recruited consecutive breast cancer patients receiving radiation therapy after breast surgery. To evaluate whether early and late side effects of radiotherapy can be predicted by the assay, we studied the association between the results of the in vitro radiosensitivity tests and acute and late adverse radiation effects. METHODS Intrinsic molecular radiosensitivity was measured using an initial radiation-induced DNA damage assay on lymphocytes obtained from breast cancer patients before radiotherapy. Acute reactions were assessed in 108 of these patients on the last treatment day. Late morbidity was assessed after 7 years of follow-up in a subset of these patients. The Radiation Therapy Oncology Group (RTOG) morbidity score system was used for both assessments. RESULTS Radiosensitivity values obtained using the in vitro test showed no relation to the acute or late adverse skin reactions observed. There was no evidence of a relation between acute and late normal tissue reactions assessed in the same patients. A positive relation was found between the treatment volume and both early and late side effects. CONCLUSION After radiation treatment, a number of cells harbouring major changes may survive for a long time and disappear only slowly, becoming a chronic focus of immune system stimulation. This stimulation can produce, in a stochastic manner, late radiation-related adverse effects of varying severity.
Further research is warranted to identify the major determinants of normal tissue radiation response to make it possible to individualize treatments and improve the outcome of radiotherapy in cancer patients.
Abstract:
Published on the website of the Consejería de Salud: www.juntadeandalucia.es/salud (Consejería de Salud / Profesionales / Nuestro Compromiso por la Calidad / Procesos Asistenciales Integrados)
Abstract:
The CD-ROM includes the transparencies found at the end of each workbook; their content focuses on each of the activities in the work sessions.
Abstract:
Published on the website of the Consejería de Salud y Bienestar Social: www.juntadeandalucia.es/salud (Consejería de Salud y Bienestar Social / Profesionales / Nuestro Compromiso por la Calidad / Procesos Asistenciales Integrados)
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period.
This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually occur in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
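The sub-compositional question above can be made concrete with a minimal sketch (all categories and figures here are hypothetical illustrations, not the study's data): a sub-composition is formed by selecting some parts of a composition and re-closing them so they again sum to one.

```python
import numpy as np

def closure(x):
    """Re-scale a non-negative vector so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def subcomposition(x, keep):
    """Select the parts indexed by `keep` and re-close them."""
    return closure(np.asarray(x, dtype=float)[list(keep)])

# Hypothetical budget shares: [housing, food, alcohol/tobacco, durables]
household = closure([500.0, 300.0, 0.0, 200.0])   # structural zero on alcohol/tobacco
no_alcohol = subcomposition(household, [0, 1, 3])  # spending excluding alcohol/tobacco
print(no_alcohol)  # [0.5 0.3 0.2]
```

Comparing such re-closed sub-compositions between teetotal and non-teetotal households is what a test of sub-compositional independence would operate on.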
Abstract:
Examples of compositional data. The simplex, a suitable sample space for compositional data, and Aitchison's geometry. R, a free language and environment for statistical computing and graphics.
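The basic operations of Aitchison's geometry can be sketched numerically. The abstract mentions R; the following is an equivalent minimal illustration in Python with NumPy, purely as an assumption-free toy example:

```python
import numpy as np

def closure(x):
    """Map a vector of positive parts onto the unit simplex (sum = 1)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, p):
    """Aitchison perturbation: component-wise product, re-closed."""
    return closure(np.asarray(x, dtype=float) * np.asarray(p, dtype=float))

def clr(x):
    """Centred log-ratio transform: logs of the parts minus their mean."""
    lx = np.log(closure(x))
    return lx - lx.mean()

comp = closure([1.0, 3.0, 6.0])   # a point in the 3-part simplex
# clr coordinates sum to zero, so compositions live in a hyperplane
print(clr(comp).sum())
```

Perturbation by the uniform composition leaves a point unchanged, which is the simplex analogue of adding zero in ordinary vector geometry.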
Abstract:
Published on the website of the Consejería de Salud y Bienestar Social: www.juntadeandalucia.es/salud (Consejería de Salud y Bienestar Social / Profesionales / Salud Pública / Prevención / Atención Temprana)
Abstract:
We shall call an n × p data matrix fully-compositional if the rows sum to a constant, and sub-compositional if the variables are a subset of a fully-compositional data set. Such data occur widely in archaeometry, where it is common to determine the chemical composition of ceramic, glass, metal or other artefacts using techniques such as neutron activation analysis (NAA), inductively coupled plasma spectroscopy (ICPS), X-ray fluorescence analysis (XRF), etc. Interest often centres on whether there are distinct chemical groups within the data and whether, for example, these can be associated with different origins or manufacturing technologies.
Abstract:
Presentation at CODAWORK'03, session 4: Applications to archaeometry
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
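The staying-in-the-simplex idea can be sketched in a few lines (a toy illustration with hypothetical data, not the 209 Scottish limestone compositions): clr-transform the closed data matrix, centre it, and take its singular value decomposition.

```python
import numpy as np

def clr(X):
    """Row-wise centred log-ratio transform of a matrix of compositions."""
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

# Hypothetical compositions: 4 specimens, 3 parts, each row closed to 1
X = np.array([[0.20, 0.30, 0.50],
              [0.10, 0.40, 0.50],
              [0.30, 0.30, 0.40],
              [0.25, 0.35, 0.40]])

Z = clr(X)
Zc = Z - Z.mean(axis=0)                 # centre across specimens
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
# Rows of Vt give the dominant perturbational directions of variability;
# s**2 gives their relative contributions to total variability.
```

Because the decomposition acts on clr coordinates, each retained component back-transforms to a perturbation in the simplex, which is what makes the analysis "stay" there.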
Abstract:
The aim of this study was to examine the parasite remains present in rodent coprolites collected from the archaeological site Alero Destacamento Guardaparque (ADG) located in the Perito Moreno National Park (Santa Cruz Province, 47º57'S 72º05'W). Forty-eight coprolites were obtained from layers 7, 6 and 5 of ADG, dated at 6,700 ± 70, 4,900 ± 70 and 3,440 ± 70 years BP, respectively. The faecal samples were processed and examined using paleoparasitological procedures. A total of 582 parasite eggs were found in 47 coprolites. Samples were positive for eggs of Trichuris sp. (Nematoda: Trichuridae), Calodium sp., Eucoleus sp., Echinocoleus sp. and an unidentified capillariid (Nematoda: Capillariidae), and for eggs of Monoecocestus (Cestoda: Anoplocephalidae). Quantitative differences among layers for both coprolites and parasites were recorded. In this study, the specific filiations of the parasites, their zoonotic importance, the identity of the rodents, on the basis of previous zooarchaeological knowledge, and the environmental conditions during the Holocene in the area are discussed.
Abstract:
The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by the use of a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis make it possible to model compositional changes relative to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) have opened new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of different acting physical-chemical processes, thus laying the basis for quantitative modelling.
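A linear process in the simplex of the kind referred to can be sketched as follows (values hypothetical, not the River Arno data): perturbation plays the role of vector addition and the power transformation the role of scalar multiplication, so a trajectory takes the form x(t) = x0 ⊕ (t ⊙ p).

```python
import numpy as np

def closure(x):
    """Re-close a vector of positive parts to sum to 1."""
    return np.asarray(x, dtype=float) / np.sum(x)

def perturb(x, p):
    """Simplex 'addition': component-wise product, re-closed."""
    return closure(np.asarray(x, dtype=float) * np.asarray(p, dtype=float))

def power(p, t):
    """Simplex 'scalar multiplication': component-wise power, re-closed."""
    return closure(np.asarray(p, dtype=float) ** t)

# Hypothetical starting composition and perturbation direction
x0 = closure([0.5, 0.3, 0.2])
p = closure([1.0, 1.2, 0.8])

# Linear compositional process x(t) = x0 (+) t (.) p, sampled at a few times
trajectory = [perturb(x0, power(p, t)) for t in (0.0, 0.5, 1.0, 2.0)]
```

At t = 0 the power transformation yields the uniform composition, so the trajectory starts exactly at the reference point x0, mirroring a straight line through a known origin in ordinary space.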
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions either for observed or for transformed data. This is the case of normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible criteria of optimality induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
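The back-transformation problem described for lognormal kriging rests on a simple fact that can be checked numerically (a sketch, not the paper's method): if Z ~ N(mu, sigma²), then E[exp(Z)] = exp(mu + sigma²/2), so naively exponentiating an optimal estimate made in log space underestimates the mean in the original space.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
z = rng.normal(mu, sigma, 200_000)       # scores in the transformed (log) space

naive = np.exp(z.mean())                 # back-transform of the log-space mean
theoretical = np.exp(mu + sigma**2 / 2)  # true lognormal mean, about 1.6487
empirical = np.exp(z).mean()             # sample mean in the original space

# naive is close to 1.0 while the actual mean is about 1.65:
# optimality does not survive a nonlinear back-transformation.
```

This is why a bias correction involving the (log-space) kriging variance is needed, and why confidence intervals built in one space are not automatically valid in the other.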
Abstract:
In standard multivariate statistical analysis common hypotheses of interest concern changes in mean vectors and subvectors. In compositional data analysis it is now well established that compositional change is most readily described in terms of the simplicial operation of perturbation and that subcompositions replace the marginal concept of subvectors. To motivate the statistical developments of this paper we present two challenging compositional problems from food production processes. Against this background the relevance of perturbations and subcompositions can be clearly seen. Moreover we can identify a number of hypotheses of interest involving the specification of particular perturbations or differences between perturbations, and also hypotheses of subcompositional stability. We identify the two problems as the counterparts of the analysis of paired comparison or split-plot experiments and of separate-sample comparative experiments in the jargon of standard multivariate analysis. We then develop appropriate estimation and testing procedures for a complete lattice of relevant compositional hypotheses.
Abstract:
Published on the website of the Consejería de Igualdad, Salud y Políticas Sociales: www.juntadeandalucia.es/salud (Consejería de Salud / Profesionales / Nuestro Compromiso por la Calidad / Procesos Asistenciales Integrados)