105 results for "Heterogeneous regression"
Abstract:
BACKGROUND: Whole pelvis intensity modulated radiotherapy (IMRT) is increasingly being used to treat cervical cancer with the aim of reducing side effects. Encouraged by this, some groups have proposed the use of a simultaneous integrated boost (SIB) to target the tumor, either to achieve a higher tumoricidal effect or to replace brachytherapy. Nevertheless, physiological organ movement and rapid tumor regression throughout treatment might substantially reduce any benefit of this approach. PURPOSE: To evaluate clinical target volume - simultaneous integrated boost (CTV-SIB) regression and motion during chemo-radiotherapy (CRT) for cervical cancer, and to monitor treatment progress dosimetrically and volumetrically to ensure treatment goals are met. METHODS AND MATERIALS: Ten patients treated with standard doses of CRT and brachytherapy were retrospectively re-planned using a helical tomotherapy SIB technique for the hypothetical scenario of this feasibility study. Target and organs at risk (OAR) were contoured on deformably fused planning computed tomography and megavoltage computed tomography images. The CTV-SIB volume regression was determined. The center of mass (CM) was used to evaluate the degree of motion. Dice's similarity coefficient (DSC) was used to assess the spatial overlap of CTV-SIBs between scans. A cumulative dose-volume histogram was used to model estimated delivered doses. RESULTS: The CTV-SIB relative reduction was between 31 and 70%. The mean maximum CM change was 12.5, 9, and 3 mm in the superior-inferior, antero-posterior, and right-left dimensions, respectively. The CTV-SIB-DSC approached 1 in the first week of treatment, indicating almost perfect overlap. The CTV-SIB-DSC regressed linearly during therapy and by the end of treatment was 0.5, indicating 50% discordance. Two patients received less than 95% of the prescribed dose. Much higher doses to the OAR were observed.
A multiple regression analysis showed a significant interaction between CTV-SIB reduction and OAR dose increase. CONCLUSIONS: The CTV-SIB had important regression and motion during CRT, receiving lower therapeutic doses than expected. The OAR had unpredictable shifts and received higher doses. The use of SIB without frequent adaptation of the treatment plan exposes cervical cancer patients to an unpredictable risk of under-dosing the target and/or overdosing adjacent critical structures. In that scenario, brachytherapy continues to be the gold standard approach.
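The overlap measure used above is a standard formula, so a minimal sketch may help: Dice's similarity coefficient between two binary volumes is DSC = 2|A ∩ B| / (|A| + |B|), equal to 1 for perfect overlap and 0 for complete discordance. The masks and numbers below are illustrative, not the study's data.

```python
# Minimal sketch (not the authors' code): Dice's similarity coefficient (DSC)
# between two binary masks, as used to quantify CTV-SIB overlap between scans.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient of two boolean arrays of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # convention: two empty volumes overlap perfectly
    return 2.0 * intersection / total

# Toy example: a target volume and a shrunken version of it on a later scan.
week0 = np.zeros((10, 10), dtype=bool)
week0[2:8, 2:8] = True            # 36 voxels
week5 = np.zeros((10, 10), dtype=bool)
week5[4:8, 4:8] = True            # 16 voxels, fully inside week0
print(round(dice_coefficient(week0, week5), 3))  # 2*16/(36+16) ≈ 0.615
```

A shrinking, shifting target drives the DSC down even when the residual volume lies entirely inside the original contour, which is why the coefficient falls toward 0.5 over the course of treatment.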
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. 
Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
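The core contrast in this comparison, ordinary versus generalized least squares when regression errors are spatially autocorrelated, is easy to reproduce in miniature. The sketch below is our own toy under simplifying assumptions (a 1-D transect, an exponential covariance with known parameters), not the paper's simulation design.

```python
# Sketch: regression with spatially autocorrelated errors, comparing OLS,
# which ignores the correlation, with GLS using the known error covariance.
import numpy as np

rng = np.random.default_rng(0)
n = 200
coords = np.linspace(0.0, 10.0, n)

# Exponential spatial covariance: Cov(e_i, e_j) = exp(-|s_i - s_j| / range).
dist = np.abs(coords[:, None] - coords[None, :])
Sigma = np.exp(-dist / 1.5)

X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
errors = np.linalg.cholesky(Sigma) @ rng.normal(size=n)
y = X @ beta_true + errors

# OLS: (X'X)^{-1} X'y — treats the errors as independent.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# GLS: (X' S^{-1} X)^{-1} X' S^{-1} y — weights by the inverse covariance.
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

print(beta_ols.round(3), beta_gls.round(3))
```

Both estimators are unbiased here; the practical differences the paper measures, standard-error calibration and Type I error rates under model selection, emerge when this comparison is repeated over many simulated data sets.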
Abstract:
The stable co-existence of two haploid genotypes or two species is studied in a spatially heterogeneous environment subjected to a mixture of soft selection (within-patch regulation) and hard selection (outside-patch regulation) and where two kinds of resource are available. This is analysed both at an ecological time-scale (short term) and at an evolutionary time-scale (long term). At an ecological scale, we show that co-existence is very unlikely if the two competitors are symmetrical specialists exploiting different resources. In this case, the most favourable conditions are met when the two resources are equally available, a situation that should favour generalists at an evolutionary scale. Alternatively, low within-patch density dependence (soft selection) enhances the co-existence between two slightly different specialists of the most available resource. This results from the opposing forces that are acting in hard and soft regulation modes. In the case of unbalanced accessibility to the two resources, hard selection favours the most specialized genotype, whereas soft selection strongly favours the less specialized one. Our results suggest that competition for different resources may be difficult to demonstrate in the wild even when it is a key factor in the maintenance of adaptive diversity. At an evolutionary scale, a monomorphic invasive evolutionarily stable strategy (ESS) always exists. When a linear trade-off exists between survival in one habitat and survival in another, this ESS lies between an absolute adjustment of survival to niche size (for mainly soft-regulated populations) and absolute survival (specialization) in a single niche (for mainly hard-regulated populations). This suggests that environments in agreement with the assumptions of such models should lead to an absence of adaptive variation in the long term.
Abstract:
Intra-specific colour polymorphism provides cryptic camouflage from predators in heterogeneous habitats. The orthopteran species Acrida ungarica (Herbst, 1786) possesses two well-distinguished colour morphs, brown and green, and displays several disruptive colouration patterns within each morph that improve crypsis. This study focused on how features of the background environment relate to the proportion of the two morphs and to the intensity of disruptive colouration patterns in A. ungarica. As the two sexes are very distinct with respect to mass and length, we also tested the relationship separately for each sex. In accordance with the background-matching hypothesis, we found that, for both sexes, the brown morph was in higher proportion at sites with a brown-dominant environment, and green morphs were in higher proportion in green-dominant environments. Globally, individuals in drier sites and in the drier year also had more intense disruptive colouration patterns, and brown morphs and females were also more striped. Colour patterns differed largely between populations and were significantly correlated with relevant environmental features. Even if A. ungarica is a polymorphic specialist, disruptive colouration still appears to provide strong benefits, particularly in some habitats. Moreover, because females are larger, they are less able to flee, which might explain the difference between the sexes.
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the necessary activities in keeping a cell alive. The DNA, on the other hand, stores the information on how to produce the different proteins in the genome. Regulating gene transcription is the first important step that can thus affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, associated with uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This Thesis presents new computational methodologies to study gene regulation. In addition we present applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys like chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use this localization information about TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1.
C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of the binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism in cellular proliferation and, thereby, in cancer. Secondly, we identify the characteristics of the target genes of TF estrogen receptor alpha (hERa) and we study the influence of hERa in regulating transcription. hERa, upon hormone estrogen signaling, binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarce experimental data about the binding sites of other TFs that may interact with hERa, we conduct in silico analysis of the sequences underlying the ChIP sites using the collection of position weight matrices (PWMs) of hERa partners, TFs FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTags (ChIP-pet) data about hERa binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples and also in studies about the response of cells to estrogen. We confirm that hERa binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERa and high occurrence of SP1 motifs, in particular near estrogen-responsive genes. The second group shows strong binding of hERa and significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERa-mediated induction of gene expression.
Our work supports the model of hERa activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERa has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important to better understand how hERa can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem analyzing time-series of biological measurements such as quantification of mRNA levels or protein concentrations. Our approach uses the well-established penalized linear regression models, where we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must coherently behave as an activator, or a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method is tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
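The sign-coherence idea above, an L1-penalized regression whose coefficients are additionally constrained to a fixed sign so that a regulator acts only as an activator or only as a repressor, can be sketched with projected iterative soft-thresholding. This is a hypothetical illustration in our own notation, not the thesis code; the data, penalty value, and sign assignments are invented for the example.

```python
# Hypothetical sketch: L1-penalized regression with per-coefficient sign
# constraints (activator >= 0, repressor <= 0), solved by projected ISTA.
import numpy as np

def sign_constrained_lasso(X, y, lam, signs, n_iter=2000):
    """Minimize 0.5||y - Xw||^2 + lam*||w||_1 subject to sign constraints.

    signs[j] = +1 forces w[j] >= 0; signs[j] = -1 forces w[j] <= 0.
    """
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - grad / L                   # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        w = np.where(signs > 0, np.maximum(w, 0.0), np.minimum(w, 0.0))
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))              # e.g., four TF activity profiles
w_true = np.array([1.5, 0.0, -2.0, 0.0])   # one true activator, one repressor
y = X @ w_true + 0.1 * rng.normal(size=100)

signs = np.array([+1, +1, -1, -1])         # assumed regulatory directions
w_hat = sign_constrained_lasso(X, y, lam=1.0, signs=signs)
print(w_hat.round(2))
```

The soft-threshold step yields sparsity, while the final projection enforces the coherence constraint; together they recover sparse coefficient vectors whose nonzero entries respect the assumed activator/repressor roles.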
Abstract:
A Knudsen flow reactor has been used to quantify surface functional groups on aerosols collected in the field. This technique is based on a heterogeneous titration reaction between a probe gas and a specific functional group on the particle surface. In the first part of this work, the reactivity of different probe gases on laboratory-generated aerosols (limonene SOA, Pb(NO3)2, Cd(NO3)2) and diesel reference soot (SRM 2975) has been studied. Five probe gases have been selected for the quantitative determination of important functional groups: N(CH3)3 (for the titration of acidic sites), NH2OH (for carbonyl functions), CF3COOH and HCl (for basic sites of different strength), and O3 (for oxidizable groups). The second part describes a field campaign that has been undertaken in several bus depots in Switzerland, where ambient fine and ultrafine particles were collected on suitable filters and quantitatively investigated using the Knudsen flow reactor. Results point to important differences in the surface reactivity of ambient particles, depending on the sampling site and season. The particle surface appears to be multi-functional, with the simultaneous presence of antagonistic functional groups which do not undergo internal chemical reactions, such as acid-base neutralization. Results also indicate that the surface of ambient particles was characterized by a high density of carbonyl functions (reactivity towards NH2OH probe in the range 0.26-6 formal molecular monolayers) and a low density of acidic sites (reactivity towards N(CH3)3 probe in the range 0.01-0.20 formal molecular monolayer). Kinetic parameters point to fast redox reactions (uptake coefficient γ0 > 10^-3 for O3 probe) and slow acid-base reactions (γ0 < 10^-4 for N(CH3)3 probe) on the particle surface. [Authors]
Abstract:
Six gases (N(CH3)3, NH2OH, CF3COOH, HCl, NO2, O3) were selected to probe the surface of seven combustion aerosols (amorphous carbon, flame soot) and three types of TiO2 nanoparticles using heterogeneous (i.e., gas-surface) reactions. The gas uptake to saturation of the probes was measured under molecular flow conditions in a Knudsen flow reactor and expressed as a density of surface functional groups on a particular aerosol, namely acidic (carboxylic) and basic (conjugated oxides such as pyrones, N-heterocycles) sites, carbonyl (R1-C(O)-R2) and oxidizable (olefinic, -OH) groups. The limit of detection was generally well below 1% of a formal monolayer of adsorbed probe gas. With few exceptions, most investigated aerosol samples interacted with all probe gases, which points to the coexistence of different functional groups, such as acidic and basic groups, on the same aerosol surface. Generally, the carbonaceous particles displayed significant differences in surface group density: Printex 60 amorphous carbon had the lowest density of surface functional groups throughout, whereas Diesel soot recovered from a Diesel particulate filter had the largest. The presence of basic oxides on carbonaceous aerosol particles was inferred from the ratio of uptakes of CF3COOH and HCl owing to the larger stability of the acetate compared to the chloride counterion in the resulting pyrylium salt. Both soots generated from a rich and a lean hexane diffusion flame had a large density of oxidizable groups similar to amorphous carbon FS 101. TiO2 15 had the lowest density of functional groups among the three studied TiO2 nanoparticles for all probe gases, despite the smallest size of its primary particles. The used technique enabled the measurement of the uptake probability of the probe gases on the various supported aerosol samples.
The initial uptake probability, γ0, of the probe gas onto the supported nanoparticles differed significantly among the various investigated aerosol samples but was roughly correlated with the density of surface groups, as expected. [Authors]
Abstract:
Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distributions and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or down-weighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
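The three-step logic above can be illustrated on a greatly simplified, uncensored toy problem: a robust initial fit, rejection of observations that are unlikely under the fitted model, then maximum likelihood on the retained data. This is our own location-only caricature (median/MAD start, fixed cutoff, Gaussian MLE), not the authors' censored S-estimation procedure.

```python
# Toy reject-then-refit illustration of the three-step idea (not the paper's
# estimator): robust start -> adaptive rejection -> MLE on retained points.
import numpy as np

def robust_location(y, cutoff=3.5):
    loc0 = np.median(y)                              # step 1: robust start
    scale0 = 1.4826 * np.median(np.abs(y - loc0))    # MAD, normal-consistent
    keep = np.abs(y - loc0) / scale0 <= cutoff       # step 2: rejection rule
    return y[keep].mean(), keep                      # step 3: MLE (mean) on rest

rng = np.random.default_rng(2)
clean = rng.normal(loc=5.0, scale=1.0, size=95)
outliers = np.full(5, 50.0)                          # gross contamination
y = np.concatenate([clean, outliers])

loc_hat, keep = robust_location(y)
print(round(loc_hat, 2), int(keep.sum()))
```

Under the model (no contamination), the cutoff almost never rejects anything, so the final step behaves like the plain MLE; under contamination, the robust start keeps the rejection rule itself from being fooled, which is the intuition behind combining full efficiency at the model with a high breakdown point.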
Abstract:
The relationship between hypoxic stress, autophagy, and specific cell-mediated cytotoxicity remains unknown. This study shows that hypoxia-induced resistance of lung tumor to cytolytic T lymphocyte (CTL)-mediated lysis is associated with autophagy induction in target cells. In turn, this correlates with STAT3 phosphorylation on tyrosine 705 residue (pSTAT3) and HIF-1α accumulation. Inhibition of autophagy by siRNA targeting of either beclin1 or Atg5 resulted in impairment of pSTAT3 and restoration of hypoxic tumor cell susceptibility to CTL-mediated lysis. Furthermore, inhibition of pSTAT3 in hypoxic Atg5- or beclin1-targeted tumor cells was found to be associated with the inhibition of Src kinase (pSrc). Autophagy-induced pSTAT3 and pSrc regulation seemed to involve the ubiquitin proteasome system and p62/SQSTM1. In vivo experiments using B16-F10 melanoma tumor cells indicated that depletion of beclin1 resulted in an inhibition of B16-F10 tumor growth and increased tumor apoptosis. Moreover, in vivo inhibition of autophagy by hydroxychloroquine in B16-F10 tumor-bearing mice and mice vaccinated with tyrosinase-related protein-2 peptide dramatically increased tumor growth inhibition. Collectively, this study establishes a novel functional link between hypoxia-induced autophagy and the regulation of antigen-specific T-cell lysis and points to a major role of autophagy in the control of in vivo tumor growth.
Abstract:
The determination of line crossing sequences between rollerball pens and laser printers presents difficulties that may not be overcome using traditional techniques. This research aimed to study the potential of digital microscopy and 3-D laser profilometry to determine line crossing sequences between a toner and an aqueous ink line. Different paper types, rollerball pens, and writing pressures were tested. Correct opinions of the sequence were given for all case scenarios, using both techniques. When the toner was printed before the ink, a light reflection was observed in all crossing specimens, while this was never observed in the other sequence types. 3-D laser profilometry, though more time-consuming, presented the main advantage of providing quantitative results. The findings confirm the potential of 3-D laser profilometry and demonstrate the efficiency of digital microscopy as a new technique for determining the sequence of line crossings involving rollerball pen ink and toner. With the mass marketing of laser printers and the popularity of rollerball pens, the determination of line crossing sequences between such instruments is encountered by forensic document examiners. This type of crossing presents difficulties with the optical microscopic line crossing techniques used for ballpoint pens or gel pens and toner (1-4). Indeed, the rollerball's aqueous ink penetrates through the toner and is absorbed by the fibers of the paper, leaving the examiner with the impression that the toner is above the ink even when it is not (5). Novotny and Westwood (3) investigated the possibility of determining aqueous ink and toner crossing sequences by microscopic observation of the intersection before and after toner removal. A major disadvantage of their study resides in destruction of the sample by scraping off the toner line to see what was underneath.
The aim of this research was to investigate ways to overcome these difficulties through digital microscopy and three-dimensional (3-D) laser profilometry. The former has been used as a technique for the determination of sequences between gel pen and toner printing strokes, but provided less conclusive results than those obtained with an optical stereomicroscope (4). 3-D laser profilometry, which allows one to observe and measure the topography of a surface, has been the subject of a number of recent studies in this area. Berx and De Kinder (6) and Schirripa Spagnolo (7,8) have tested the application of laser profilometry to determine the sequence of intersections of several lines. The results obtained in these studies overcome disadvantages of other methods applied in this area, such as the scanning electron microscope or the atomic force microscope. The main advantages of 3-D laser profilometry include the ease of implementation of the technique and its nondestructive nature, which does not require sample preparation (8-10). Moreover, the technique is reproducible and presents a high degree of freedom in the vertical axis (up to 1000 μm). However, when the paper surface presents a given roughness, if the pen impressions alter the paper with a depth similar to the roughness of the medium, the results are not always conclusive (8). It becomes difficult in this case to distinguish which characteristics can be imputed to the pen impressions or to the quality of the paper surface. This important limitation is assessed by testing different types of paper of variable quality (of different grammage and finishing) and different writing pressures. The authors therefore assess the limits of the 3-D laser profilometry technique and determine whether the method can overcome such constraints. Second, the authors investigate the use of digital microscopy because it presents a number of advantages: it is efficient, user-friendly, and provides an objective evaluation and interpretation.
Abstract:
When researchers introduce a new test they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
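One of the failure modes named above, stepwise-style selection, is easy to demonstrate in miniature with a Monte Carlo run: if the "best" of several pure-noise predictors is chosen post hoc and then tested at the nominal 5% level, the realized Type I error rate is far higher. The simulation below is our own toy demonstration of that selection effect, not the paper's analyses.

```python
# Sketch: Monte Carlo showing how post-hoc selection of the strongest of
# several pure-noise predictors inflates the nominal 5% Type I error rate.
import numpy as np

rng = np.random.default_rng(3)
n, k, n_sims = 50, 10, 2000
t_crit = 2.01  # approximate two-sided 5% critical value for ~48 df

false_positives = 0
for _ in range(n_sims):
    y = rng.normal(size=n)
    X = rng.normal(size=(n, k))          # all predictors are pure noise
    best_t = 0.0
    for j in range(k):                   # screen-and-pick, stepwise style
        r = np.corrcoef(X[:, j], y)[0, 1]
        t = r * np.sqrt((n - 2) / (1 - r**2))
        best_t = max(best_t, abs(t))
    if best_t > t_crit:                  # report the winner as "significant"
        false_positives += 1

print(false_positives / n_sims)          # far above the nominal 0.05
```

With ten independent noise predictors the chance that at least one clears the 5% threshold is roughly 1 - 0.95^10 ≈ 0.40, which is why validity claims based on selected predictors need selection-aware (or robust, cross-validated) procedures.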
Abstract:
The effect of heterogeneous environments upon the dynamics of invasion and the eradication or control of invasive species is poorly understood, although it is a major challenge for biodiversity conservation. Here, we first investigate how the probability and time for invasion are affected by spatial heterogeneity. Then, we study the effect of control program strategies (e.g. species specificity, spatial scale of action, detection and eradication efficiency) on the success and time of eradication. We find that heterogeneity increases both the invasion probability and the time to invasion. Heterogeneity also reduces the probability of eradication but does not change the time taken for successful eradication. We confirm that early detection of invasive species reduces the time until eradication, but we also demonstrate that this is true only if the local control action is sufficiently efficient. The criterion of removal efficiency is even more important for an eradication program than simply ensuring control effort when the invasive species is not abundant.
Abstract:
Cuscuta spp. are holoparasitic plants that can simultaneously parasitise several host plants. It has been suggested that Cuscuta has evolved a foraging strategy based on a positive relationship between pre-uptake investment and subsequent reward on different host species. Here we establish reliable parasite size measures and show that parasitism on individuals of different host species alters the biomass of C. campestris, but that, within host species, size and age also contribute to the heterogeneous resource landscape. We then performed two additional experiments to test whether C. campestris achieves greater resource acquisition by parasitising two host species rather than one, and whether C. campestris forages in communities of hosts offering different rewards (a choice experiment). There was no evidence in either experiment for direct benefits of a mixed host diet. Cuscuta campestris foraged by parasitising the most rewarding hosts the fastest and then investing the most on them. We conclude that our data present strong evidence for foraging in the parasitic plant C. campestris.