136 results for Effective-medium Approximation
Abstract:
PURPOSE: Whole tumor lysates are promising antigen sources for dendritic cell (DC) therapy as they contain many relevant immunogenic epitopes to help prevent tumor escape. Two common methods of tumor lysate preparations are freeze-thaw processing and UVB irradiation to induce necrosis and apoptosis, respectively. Hypochlorous acid (HOCl) oxidation is a new method for inducing primary necrosis and enhancing the immunogenicity of tumor cells. EXPERIMENTAL DESIGN: We compared the ability of DCs to engulf three different tumor lysate preparations, produce T-helper 1 (TH1)-priming cytokines and chemokines, stimulate mixed leukocyte reactions (MLR), and finally elicit T-cell responses capable of controlling tumor growth in vivo. RESULTS: We showed that DCs engulfed HOCl-oxidized lysate most efficiently, stimulated robust MLRs, and elicited strong tumor-specific IFN-γ secretion in autologous T cells. These DCs produced the highest levels of TH1-priming cytokines and chemokines, including interleukin (IL)-12. Mice vaccinated with HOCl-oxidized ID8-ova lysate-pulsed DCs developed T-cell responses that effectively controlled tumor growth. Safety, immunogenicity of autologous DCs pulsed with HOCl-oxidized autologous tumor lysate (OCDC vaccine), clinical efficacy, and progression-free survival (PFS) were evaluated in a pilot study of five subjects with recurrent ovarian cancer. OCDC vaccination produced few grade 1 toxicities and elicited potent T-cell responses against known ovarian tumor antigens. Circulating regulatory T cells and serum IL-10 were also reduced. Two subjects experienced durable PFS of 24 months or more after OCDC. CONCLUSIONS: This is the first study showing the potential efficacy of a DC vaccine pulsed with HOCl-oxidized tumor lysate, a novel approach in preparing DC vaccine that is potentially applicable to many cancers. Clin Cancer Res; 19(17); 4801-15. ©2013 AACR.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. 
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. 
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
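The gradual-deformation perturbation described above can be sketched in a few lines: two independent standard-normal fields are blended with weights cos θ and sin θ, so the proposal keeps the prior mean, variance, and spatial covariance while θ tunes the perturbation strength. This is a minimal illustration of the general gradual-deformation technique under my own assumptions, not the thesis implementation; all names are mine.

```python
import numpy as np

def gradual_deformation(z_current, z_independent, theta):
    """Blend two independent standard-normal fields.

    Because cos(theta)**2 + sin(theta)**2 == 1, the blend is again
    standard normal with the same spatial covariance, making it a valid
    prior-preserving MCMC proposal; theta = 0 leaves z_current unchanged.
    """
    return np.cos(theta) * z_current + np.sin(theta) * z_independent

rng = np.random.default_rng(0)
z_cur = rng.standard_normal(10_000)   # current model realization
z_ind = rng.standard_normal(10_000)   # independent draw from the prior
z_new = gradual_deformation(z_cur, z_ind, theta=0.3)  # mild perturbation
```

Small θ yields highly correlated proposals (high acceptance, slow mixing); large θ approaches independent resampling, which is where the inefficiency noted above arises.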
Abstract:
Synthesis of polyhydroxyalkanoates (PHAs) from intermediates of fatty acid beta-oxidation was used as a tool to study fatty acid degradation in developing seeds of Arabidopsis. Transgenic plants expressing a peroxisomal PHA synthase under the control of a napin promoter accumulated PHA in developing seeds to a final level of 0.06 mg g(-1) dry weight. In plants co-expressing a plastidial acyl-acyl carrier protein thioesterase from Cuphea lanceolata and a peroxisomal PHA synthase, approximately 18-fold more PHA accumulated in developing seeds. The proportion of 3-hydroxydecanoic acid monomer in the PHA was strongly increased, indicating a large flow of capric acid toward beta-oxidation. Furthermore, expression of the peroxisomal PHA synthase in an Arabidopsis mutant deficient in the enzyme diacylglycerol acyltransferase resulted in a 10-fold increase in PHA accumulation in developing seeds. These data indicate that plants can respond to the inadequate incorporation of fatty acids into triacylglycerides by recycling the fatty acids via beta-oxidation and that a considerable flow toward beta-oxidation can occur even in a plant tissue primarily devoted to the accumulation of storage lipids.
Abstract:
RATIONALE AND OBJECTIVES: Dose reduction may compromise patients because of a decrease in image quality. Therefore, the amount of dose savings in new dose-reduction techniques needs to be thoroughly assessed. To avoid repeated studies in one patient, chest computed tomography (CT) scans at different dose levels were performed in corpses, comparing model-based iterative reconstruction (MBIR) as a tool to enhance image quality with current standard full-dose imaging. MATERIALS AND METHODS: Twenty-five human cadavers were scanned (CT HD750) after contrast medium injection at decreasing dose levels D0-D5, each reconstructed with MBIR. The data at the full-dose level, D0, were additionally reconstructed with standard adaptive statistical iterative reconstruction (ASIR), which represented the full-dose baseline reference (FDBR). Two radiologists independently compared image quality (IQ) in 3-mm multiplanar reformations for soft-tissue evaluation of D0-D5 to FDBR (-2, diagnostically inferior; -1, inferior; 0, equal; +1, superior; and +2, diagnostically superior). For statistical analysis, the intraclass correlation coefficient (ICC) and the Wilcoxon test were used. RESULTS: Mean CT dose index values (mGy) were as follows: D0/FDBR = 10.1 ± 1.7, D1 = 6.2 ± 2.8, D2 = 5.7 ± 2.7, D3 = 3.5 ± 1.9, D4 = 1.8 ± 1.0, and D5 = 0.9 ± 0.5. Mean IQ ratings were as follows: D0 = +1.8 ± 0.2, D1 = +1.5 ± 0.3, D2 = +1.1 ± 0.3, D3 = +0.7 ± 0.5, D4 = +0.1 ± 0.5, and D5 = -1.2 ± 0.5. All values demonstrated a significant difference to baseline (P < .05), except mean IQ for D4 (P = .61). ICC was 0.91. CONCLUSIONS: Compared to ASIR, MBIR allowed for a significant dose reduction of 82% without impairment of IQ. This resulted in a calculated mean effective dose below 1 mSv.
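The reported 82% figure follows directly from the mean CT dose index values above: D4 (1.8 mGy) was the lowest dose level whose IQ did not differ significantly from the full-dose baseline D0 (10.1 mGy). A quick check of the arithmetic:

```python
full_dose_mgy = 10.1  # mean CT dose index at D0/FDBR
d4_dose_mgy = 1.8     # mean CT dose index at D4, lowest level with IQ equal to baseline
reduction = 1 - d4_dose_mgy / full_dose_mgy
print(f"dose reduction: {reduction:.0%}")  # prints "dose reduction: 82%"
```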
Abstract:
Whether a higher dose of a long-acting angiotensin II receptor blocker (ARB) can provide as much blockade of the renin-angiotensin system over a 24-hour period as the combination of an angiotensin-converting enzyme inhibitor and a lower dose of ARB has not been formally demonstrated so far. In this randomized double-blind study we investigated renin-angiotensin system blockade obtained with 3 doses of olmesartan medoxomil (20, 40, and 80 mg every day) in 30 normal subjects and compared it with that obtained with lisinopril alone (20 mg every day) or combined with olmesartan medoxomil (20 or 40 mg). Each subject received 2 dose regimens for 1 week according to a crossover design with a 1-week washout period between doses. The primary endpoint was the degree of blockade of the systolic blood pressure response to angiotensin I 24 hours after the last dose after 1 week of administration. At trough, the systolic blood pressure response to exogenous angiotensin I was 58% +/- 19% with 20 mg lisinopril (mean +/- SD), 58% +/- 11% with 20 mg olmesartan medoxomil, 62% +/- 16% with 40 mg olmesartan medoxomil, and 76% +/- 12% with the highest dose of olmesartan medoxomil (80 mg) (P = .016 versus 20 mg lisinopril and P = .0015 versus 20 mg olmesartan medoxomil). With the combinations, blockade was 80% +/- 22% with 20 mg lisinopril plus 20 mg olmesartan medoxomil and 83% +/- 9% with 20 mg lisinopril plus 40 mg olmesartan medoxomil (P = .3 versus 80 mg olmesartan medoxomil alone). These data demonstrate that a higher dose of the long-acting ARB olmesartan medoxomil can produce an almost complete 24-hour blockade of the blood pressure response to exogenous angiotensin in normal subjects. Hence, a higher dose of a long-acting ARB is as effective as a lower dose of the same compound combined with an angiotensin-converting enzyme inhibitor in terms of blockade of the vascular effects of angiotensin.
Abstract:
Specific metabolic pathways are activated by different nutrients to adapt the organism to available resources. Although essential, these mechanisms are incompletely defined. Here, we report that medium-chain fatty acids contained in coconut oil, a major source of dietary fat, induce the liver ω-oxidation genes Cyp4a10 and Cyp4a14 to increase the production of dicarboxylic fatty acids. These dicarboxylic fatty acids in turn activate all ω- and β-oxidation pathways through peroxisome proliferator activated receptor (PPAR) α and PPARγ, an activation loop normally kept under control by dicarboxylic fatty acid degradation by the peroxisomal enzyme L-PBE. Indeed, L-pbe(-/-) mice fed coconut oil overaccumulate dicarboxylic fatty acids, which activate all fatty acid oxidation pathways and lead to liver inflammation, fibrosis, and death. Thus, the correct homeostasis of dicarboxylic fatty acids is a means to regulate the efficient utilization of ingested medium-chain fatty acids, and its deregulation exemplifies the intricate relationship between impaired metabolism and inflammation.
Abstract:
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the scarcity of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was omitted, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model, with a 10 m resolution, was used together with land use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. 
Limitations of the modelling arise mainly from the models applied and the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of the debris flow spreading, provided results suitable for subsequent hazard and risk studies. However, more testing is needed to validate the parameters and results and to transfer them to other study areas.
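The multiple-flow-direction spreading step used by empirical runout models such as Flow-R can be illustrated with a toy routine that distributes a unit of "susceptibility mass" from a source cell to its downslope neighbours in proportion to slope. This is a simplified sketch under my own assumptions, not the Flow-R implementation itself; Flow-R additionally applies persistence weights and energy-based runout limits.

```python
import numpy as np

def mfd_spread(dem, source, exponent=1.0):
    """One multiple-flow-direction step on a regular grid.

    Distributes a unit mass from `source` to all strictly lower
    neighbours, weighted by (drop / distance) ** exponent; a higher
    exponent concentrates flow along the steepest direction.
    """
    rows, cols = dem.shape
    out = np.zeros((rows, cols), dtype=float)
    r, c = source
    slopes = {}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                drop = dem[r, c] - dem[nr, nc]
                if drop > 0:  # only downslope neighbours receive flow
                    dist = np.hypot(dr, dc)  # diagonal neighbours are farther
                    slopes[(nr, nc)] = (drop / dist) ** exponent
    total = sum(slopes.values())
    for (nr, nc), s in slopes.items():
        out[nr, nc] = s / total  # fractions sum to 1
    return out

# Tiny tilted surface: flow from the top-left corner spreads downslope.
dem = np.array([[3, 2, 1],
                [2, 1, 0],
                [1, 0, -1]], dtype=float)
fractions = mfd_spread(dem, source=(0, 0))
```

Iterating this step over every source cell, weighted by the initiation probability map, yields the spreading-probability surface behind the second hazard map described above.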
Abstract:
The weak selection approximation of population genetics has made possible the analysis of social evolution under a considerable variety of biological scenarios. Despite its extensive usage, the accuracy of weak selection in predicting the emergence of altruism under limited dispersal when selection intensity increases remains unclear. Here, we derive the condition for the spread of an altruistic mutant in the infinite island model of dispersal under a Moran reproductive process and arbitrary strength of selection. The simplicity of the model allows us to compare weak and strong selection regimes analytically. Our results demonstrate that the weak selection approximation is robust to moderate increases in selection intensity and therefore provides a good approximation for understanding the invasion of altruism in spatially structured populations. In particular, we find that the weak selection approximation is excellent even if selection is very strong, when either migration is much stronger than selection or when patches are large. Importantly, we emphasize that the weak selection approximation provides the ideal condition for the invasion of altruism, and increasing selection intensity will impede the emergence of altruism. We discuss that this should also hold for more complicated life cycles and for culturally transmitted altruism. Using the weak selection approximation is therefore unlikely to miss any demographic scenario that leads to the evolution of altruism under limited dispersal.
Abstract:
Excessive exposure to solar UV light is the main cause of skin cancers in humans. UV exposure depends on environmental as well as individual factors related to activity. Although outdoor occupational activities contribute significantly to the individual dose received, data on effective exposure are scarce and limited to a few occupations. A study was undertaken to assess effective short-term exposure among building workers and to characterize the influence of individual and local factors on exposure. The effective exposure of construction workers in a mountainous area in the southern part of Switzerland was investigated through short-term dosimetry (97 dosimeters). Three altitudes, of about 500, 1500 and 2500 m, were considered. Individual measurements over 20 working periods were performed using Spore film dosimeters on five body locations. The postural activity of the workers was concomitantly recorded, and static UV measurements were also performed. Effective exposure among building workers was high and exceeded occupational recommendations for all individuals for at least one body location. The mean daily UV dose was 11.9 SED (0.0-31.3 SED) on the plain, 21.4 SED (6.6-46.8 SED) at mid-mountain altitude and 28.6 SED (0.0-91.1 SED) at high-mountain altitude. Measured doses between workers and anatomical locations exhibited high variability, stressing the role of local exposure conditions and individual factors. Short-term effective exposure ranged between 0 and 200% of ambient irradiation, indicating the occurrence of intense, subacute exposures. A predictive irradiation model was developed to investigate the role of individual factors. Posture and orientation were found to account for at least 38% of the total variance of relative individual exposure, and to account for more of the total variance of effective daily exposures than altitude. Targeted sensitization actions through professional information channels and specific prevention messages are recommended. Outdoor workers at altitude should also benefit from preventive medical examinations.
Abstract:
Tobacco-smoking prevalence has been decreasing in many high-income countries, but not in prison. We provide a summary of recent data on smoking in prison (United States, Australia, and Europe), and discuss examples of implemented policies for responding to environmental tobacco smoke (ETS), their health, humanitarian, and ethical aspects. We gathered data through a systematic literature review, and added the authors' ongoing experience in the implementation of smoking policies outside and inside prisons in Australia and Europe. Detainees' smoking prevalence varies between 64 per cent and 91.8 per cent, and can be more than three times as high as in the general population. Few data are available on the prevalence of smoking in women detainees and staff. Policies vary greatly. Bans may either be 'total' or 'partial' (smoking allowed in cells or designated places). A comprehensive policy strategy to reduce ETS needs a harm minimization philosophy, and should include environmental restrictions, information, and support to detainees and staff for smoking cessation, and health staff training in smoking cessation.
Abstract:
OBJECTIVE: To test the ability of a novel phase-shifting medium (PSM) to provide sustained distension of the uterine cavity and produce saline infusion sonography (SIS)-like images in a simplified contrast ultrasound procedure. DESIGN: Prospective pilot feasibility trial of a new diagnostic procedure, contrast ultrasound. SETTING: Clinical reproductive endocrine and infertility unit of a regional teaching hospital. PATIENT(S): Twenty-six asymptomatic infertile women (group I) and 27 women presenting with dysfunctional uterine bleeding (DUB) who were scheduled for exploratory surgery (group II). INTERVENTION(S): All women, who were temporarily on oral contraceptives, first had a regular pelvic ultrasound followed by the intrauterine instillation of up to 3 mL PSM, using a regular insemination catheter, after which all instruments were removed and a regular ultrasound was performed again. RESULT(S): In all 53 women, intrauterine instillation of 1-3 mL PSM resulted in a 3-7 mm uterine distension, sufficient to produce SIS-like images of the uterine cavity that lasted 7-10 min. Contrast ultrasound revealed an endometrial polyp in 3 asymptomatic women of group I. In group II, 12 of 14 women (86%) whose vaginal ultrasounds were positive or dubious had positive findings with contrast ultrasound; 9 of 12 patients whose vaginal ultrasounds were negative also had positive contrast ultrasound findings. All the positive and negative findings of contrast ultrasound made in group II were confirmed anatomically (sensitivity and specificity of 100%), whereas the correlation for standard vaginal ultrasound was markedly lower at 57.1% and 85.7%, respectively. Most patients (46 of 53) reported no discomfort during or after the procedure, and 7 women described the procedure as mildly uncomfortable. 
CONCLUSION(S): Contrast ultrasound, a novel simple diagnostic procedure conducted after intrauterine instillation of 1-3 mL PSM using a simple plastic catheter, delivered SIS-quality images in asymptomatic (group I) and symptomatic (group II) patients while retaining the simplicity of standard ultrasound. We therefore foresee broad application of contrast ultrasound for sensitive and specific assessment of uterine pathologies in the physician's office.