79 results for Multistage stochastic linear programs
Abstract:
This paper shows how to calculate recursively the moments of the accumulated and discounted value of cash flows when the instantaneous rates of return follow a conditional ARMA process with normally distributed innovations. We investigate various moment-based approaches to approximating the distribution of the accumulated value of cash flows, and we assess their performance through Monte Carlo simulations. We discuss potential uses in insurance, especially in the context of asset-liability management for pension funds.
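A minimal Monte Carlo sketch of the quantity studied here, using an AR(1) process as a stand-in for the conditional ARMA rate model; all parameters (mu, phi, sigma, the level cash flows) are illustrative assumptions, not values from the paper:

```python
# Sketch: Monte Carlo distribution of the accumulated, discounted value of
# cash flows when instantaneous rates follow an AR(1) process with normal
# innovations (a simple stand-in for the paper's conditional ARMA model).
import numpy as np

rng = np.random.default_rng(6)
n_years, n_sims = 20, 20_000
cash_flow = np.ones(n_years)           # level annual cash flows (illustrative)
mu, phi, sigma = 0.04, 0.6, 0.02       # AR(1) mean, persistence, innovation sd

vals = np.zeros(n_sims)
for s in range(n_sims):
    delta = mu                         # instantaneous rate of return
    acc = 0.0                          # accumulated value of past cash flows
    for t in range(n_years):
        delta = mu + phi * (delta - mu) + sigma * rng.normal()
        acc = acc * np.exp(delta) + cash_flow[t]   # roll forward, add cash flow
    vals[s] = acc

# Sample moments, which the paper computes recursively in closed form.
print(f"mean={vals.mean():.3f}, sd={vals.std():.3f}, skew~="
      f"{((vals - vals.mean())**3).mean() / vals.std()**3:.3f}")
```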
Abstract:
Linear IgA bullous dermatosis (LABD) is an autoimmune disease characterized by linear deposition of IgA along the basement membrane zone. Drug-induced LABD is rare but increasing in frequency. A new case of drug-induced LABD associated with the administration of furosemide is described.
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive, while the DNA stores the information on how to produce the different proteins in the genome. Regulating gene transcription is the first important step that can affect the life of a cell, modify its functions and shape its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors (TFs). TFs can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to disease. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, characterized by uncontrolled cellular proliferation, invasion of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This thesis presents new computational methodologies to study gene regulation, together with applications of our methods to the understanding of cancer-related regulatory programs. Understanding transcriptional regulation is a major challenge, which we address by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain the expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected in the conservation of their binding sites across mammals and in the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERα) and study the influence of hERα in regulating transcription. Upon estrogen signaling, hERα binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERα, we conduct an in silico analysis of the sequences underlying the ChIP sites using the position weight matrices (PWMs) of the hERα partners FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data on hERα binding to DNA with this sequence information to explain gene expression levels in a large collection of cancer tissue samples, as well as in studies of the response of cells to estrogen. We confirm that hERα binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERα and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes.
The second group shows strong binding of hERα and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERα-mediated induction of gene expression. Our work supports the model of hERα activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERα has been associated with the survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important for better understanding how hERα can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements, such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models, on which we impose sparseness of the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must behave coherently as an activator or a repressor across all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method is tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. These bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have thus enabled us to better understand gene regulation in humans.
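As a rough illustration of the sign-constrained penalized regression described in this abstract (not the thesis's actual implementation), the sketch below fixes an assumed activator/repressor sign per TF and encodes it as bound constraints on the lasso coefficients; all names and data are synthetic:

```python
# Sketch: penalized regression for network inference with sign-coherence
# constraints (assumed formulation; variable names are illustrative).
import numpy as np
from scipy.optimize import minimize

def fit_signed_lasso(X, y, signs, lam=0.1):
    """Fit min ||y - X b||^2 + lam * sum(signs * b), subject to
    signs[j] * b[j] >= 0, i.e. TF j keeps a fixed role
    (+1 activator, -1 repressor) for this target gene."""
    n, p = X.shape

    def objective(b):
        r = y - X @ b
        # With the sign constraints active, signs * b equals |b|,
        # so this term is an ordinary L1 penalty.
        return r @ r + lam * np.sum(signs * b)

    # Bounds encode the sign constraint per coefficient.
    bounds = [(0, None) if s > 0 else (None, 0) for s in signs]
    res = minimize(objective, x0=np.zeros(p), bounds=bounds, method="L-BFGS-B")
    return res.x

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))           # expression of 5 TFs over 50 time points
true_b = np.array([1.5, 0.0, -0.8, 0.0, 0.6])
y = X @ true_b + 0.1 * rng.normal(size=50)
signs = np.where(true_b >= 0, 1.0, -1.0)  # assumed roles; in practice inferred
print(fit_signed_lasso(X, y, signs).round(2))
```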
Abstract:
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of resting state is obscured by the complexity of large-scale models, and a close structure-function relation remains an open problem. Thus, a realistic but sufficiently simple description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low-firing-activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of equations of motion for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the buildup of FC through neuronal dynamics underpinned by anatomical connections, and to drive hypotheses in task-evoked studies and for clinical applications.
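The "structured linear fluctuations" picture can be sketched as follows: for linearized dynamics dx = Ax dt + dW around a stable fixed point, the stationary covariance Σ (the model FC) solves the Lyapunov equation AΣ + ΣAᵀ + Q = 0. The connectivity matrix, coupling and noise below are illustrative stand-ins, not the paper's fitted model:

```python
# Sketch: FC as the covariance of linear fluctuations around a stable state.
# The generative model here is illustrative, not the paper's exact equations.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 6                                      # number of brain areas
C = rng.random((n, n)) * (1 - np.eye(n))   # stand-in anatomical connectivity
g, tau = 0.5, 1.0                          # global coupling and time constant

# Jacobian of the linearized dynamics: leak plus scaled anatomical coupling.
A = -np.eye(n) / tau + g * C / n
assert np.all(np.linalg.eigvals(A).real < 0), "fixed point must be stable"

Q = np.eye(n)                              # isotropic noise covariance
# Stationary covariance Sigma solves A Sigma + Sigma A^T + Q = 0.
Sigma = solve_continuous_lyapunov(A, -Q)

# Model functional connectivity = correlation matrix of the fluctuations.
d = np.sqrt(np.diag(Sigma))
FC = Sigma / np.outer(d, d)
print(FC.round(2))
```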
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and treat them separately in an effective way. It provides better interpretability for non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad hoc topographic indices as covariates in statistical models in complex terrain. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. We also examine the stability of the MKL algorithm with respect to the number of training samples and to the presence of noise. The results of a real case study are presented as well, where MKL is able to exploit a large set of terrain features computed at multiple spatial scales when predicting mean wind speed in an Alpine region.
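A toy sketch of the MKL idea (not the paper's actual optimizer): each group of terrain features gets its own RBF kernel, and the nonnegative kernel weights, here chosen by a crude validation grid search, provide the interpretability the abstract refers to. Data, feature groups and the weight grid are all illustrative:

```python
# Sketch: a toy multiple kernel learning (MKL) regression, where each
# feature group gets its own RBF kernel and a nonnegative weight.
# Illustrative only; real MKL solves a joint convex optimization.
import numpy as np
from itertools import product
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 4))
y = np.sin(X[:, 0]) + 0.3 * X[:, 2] + 0.1 * rng.normal(size=120)
groups = [[0, 1], [2, 3]]              # e.g. terrain features at two scales
tr, va = slice(0, 80), slice(80, 120)  # train / validation split

def combined_kernel(A, B, weights):
    """Weighted sum of per-group RBF kernels."""
    return sum(w * rbf_kernel(A[:, g], B[:, g]) for w, g in zip(weights, groups))

best = None
for weights in product([0.0, 0.5, 1.0], repeat=len(groups)):  # crude grid search
    if sum(weights) == 0:
        continue
    K_tr = combined_kernel(X[tr], X[tr], weights)
    K_va = combined_kernel(X[va], X[tr], weights)
    model = KernelRidge(alpha=0.1, kernel="precomputed").fit(K_tr, y[tr])
    err = np.mean((model.predict(K_va) - y[va]) ** 2)
    if best is None or err < best[0]:
        best = (err, weights)

print("validation MSE %.3f with kernel weights %s" % best)
```

The learned weights indicate which feature group (spatial scale) drives the prediction, which is how MKL sidesteps hand-picked topographic indices.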
Abstract:
We showed earlier how to predict the writhe of any rational knot or link in its ideal geometric configuration, or equivalently the average of the 3D writhe over statistical ensembles of random configurations of a given knot or link (Cerf and Stasiak 2000 Proc. Natl Acad. Sci. USA 97 3795). There is no general relation between the minimal crossing number of a knot and the writhe of its ideal geometric configuration. However, within individual families of knots, linear relations between minimal crossing number and writhe were observed (Katritch et al 1996 Nature 384 142). Here we present a method that allows us to express the writhe as a linear function of the minimal crossing number within Conway families of knots and links in their ideal configuration. The slope of the lines and the shift between any two lines with the same …
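Schematically, the claimed family-wise relation takes the form below, where the slope and shift are specific to each Conway family; the notation is a placeholder, not the paper's:

```latex
% Placeholder notation (not the paper's): within a Conway family F,
% the writhe of the ideal configuration is linear in the minimal
% crossing number n_min, with family-specific slope a_F and shift b_F.
\mathrm{Wr}(K) = a_F \, n_{\min}(K) + b_F , \qquad K \in F .
```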
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major and as yet largely unresolved challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
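A minimal sketch of the central step under stated assumptions: the borehole data, the KDE-based joint density of log hydraulic and electrical conductivity, and the grid-based conditional draw are all illustrative simplifications of the Bayesian sequential simulation:

```python
# Sketch: nonparametric conditional sampling of hydraulic conductivity (K)
# given electrical conductivity (sigma), via a 2D Gaussian KDE.
# Synthetic data and the simple grid-based conditional draw are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Collocated borehole training data: correlated log10(K) and log10(sigma).
log_sigma = rng.normal(0.0, 0.5, size=200)
log_K = -5.0 + 1.2 * log_sigma + 0.3 * rng.normal(size=200)
kde = gaussian_kde(np.vstack([log_sigma, log_K]))

def sample_K_given_sigma(ls, n_draws=1, grid=np.linspace(-8, -2, 200)):
    """Draw log10(K) from p(K | sigma) evaluated on a discrete grid."""
    pts = np.vstack([np.full_like(grid, ls), grid])
    p = kde(pts)
    p /= p.sum()                       # normalize the conditional slice
    return rng.choice(grid, size=n_draws, p=p)

# Downscaling step: at a non-sampled cell, the regional ERT model supplies
# sigma; draw a plausible K value consistent with the borehole relation.
print(sample_K_given_sigma(0.4, n_draws=3))
```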
Abstract:
We present here a nonbiased probabilistic method that allows us to consistently analyze knottedness of linear random walks with up to several hundred noncorrelated steps. The method consists of analyzing the spectrum of knots formed by multiple closures of the same open walk through random points on a sphere enclosing the walk. Knottedness of individual "frozen" configurations of linear chains is therefore defined by a characteristic spectrum of realizable knots. We show that in the great majority of cases this method clearly defines the dominant knot type of a walk, i.e., the strongest component of the spectrum. In such cases, direct end-to-end closure creates a knot that usually coincides with the knot type that dominates the random closure spectrum. Interestingly, in a very small proportion of linear random walks, the knot type is not clearly defined. Such walks can be considered as residing in a border zone of the configuration space of two or more knot types. We also characterize the scaling behavior of linear random knots.
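A sketch of the random-closure procedure only; identifying the knot type of each closed polygon requires a specialized invariant computation (e.g., a HOMFLY polynomial routine), which is stubbed out here as a hypothetical knot_type function:

```python
# Sketch: random-closure spectrum of an open random walk. Each closure
# connects both endpoints to the same random point on a large enclosing
# sphere; the knot classifier is a stub for a real invariant computation.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

def random_walk(n_steps):
    """Open random walk with unit, uncorrelated steps."""
    steps = rng.normal(size=(n_steps, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    return np.cumsum(steps, axis=0)

def random_sphere_point(radius):
    v = rng.normal(size=3)
    return radius * v / np.linalg.norm(v)

def knot_type(closed_curve):
    """Hypothetical placeholder: a real implementation would compute an
    invariant (e.g. the HOMFLY polynomial) of the closed polygon."""
    return "unknot"  # stub

walk = random_walk(300)
radius = 10 * np.abs(walk).max()       # sphere safely enclosing the walk
spectrum = Counter()
for _ in range(100):
    p = random_sphere_point(radius)
    closed = np.vstack([walk, p])      # both ends join at the sphere point
    spectrum[knot_type(closed)] += 1

# The dominant knot type is the strongest component of the spectrum.
print(spectrum.most_common(3))
```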
Abstract:
The role of busulfan (Bu) metabolites in the adverse events seen during hematopoietic stem cell transplantation and in drug interactions has not been explored. The lack of established analytical methods limits our understanding in this area. The present work describes a novel gas chromatography-tandem mass spectrometric assay for the analysis of sulfolane (Su) in the plasma of patients receiving high-dose Bu. Su and Bu were extracted from a single 100 μL plasma sample by liquid-liquid extraction. Bu was separately derivatized with the fluorinated agent 2,3,5,6-tetrafluorothiophenol. Mass spectrometric detection of the analytes was performed in the selected reaction monitoring mode on a triple quadrupole instrument after electron impact ionization. Bu and Su were analyzed with separate chromatographic programs, lasting 5 min each. The assay for Su was found to be linear in the concentration range of 20-400 ng/mL. The method has satisfactory sensitivity (lower limit of quantification, 20 ng/mL) and precision (relative standard deviation less than 15 %) for all the concentrations tested, with good trueness (100 ± 5 %). The method was applied to measure Su in pediatric patients, with samples collected 4 h after dose 1 (n = 46), before dose 7 (n = 56), and after dose 9 (n = 54) of Bu infusion. Su was detectable in the plasma of patients 4 h after dose 1, and higher levels (mean ± SD) were observed after dose 9 (249.9 ± 123.4 ng/mL). This method may be used in clinical studies investigating the role of Su in the adverse events and drug interactions associated with Bu therapy.
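A minimal sketch of how linearity and precision figures of this kind are typically verified; all concentrations and responses below are synthetic, not the study's data:

```python
# Sketch: calibration-curve linearity and precision (RSD) checks of the
# kind reported above; all concentrations and responses are synthetic.
import numpy as np

rng = np.random.default_rng(5)
conc = np.array([20, 50, 100, 200, 400], dtype=float)          # ng/mL calibrators
resp = 0.01 * conc * (1 + 0.02 * rng.normal(size=conc.size))   # detector response

# Linearity: least-squares fit and coefficient of determination.
slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)

# Precision at one level: relative standard deviation of replicates,
# required here to stay below 15 %.
replicates = 100 + 5 * rng.normal(size=6)   # back-calculated ng/mL at 100 ng/mL
rsd = 100 * replicates.std(ddof=1) / replicates.mean()

print(f"slope={slope:.4f}, R^2={r2:.4f}, RSD={rsd:.1f} %")
```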
Abstract:
Training is a crucial tool for building the capacity necessary for prevention and control of cardiovascular diseases (CVDs) in developing countries. This paper summarizes some features of a 2-week workshop aimed at enabling local health professionals to initiate a comprehensive CVD prevention and control program in a context of limited resources. The workshops have been organized in the regions where CVD prevention programs are being contemplated, in cooperation with health authorities of the concerned regions. The workshop's content includes a broad variety of issues related to CVD prevention and control, and to program development. Strong emphasis is placed on "learning by doing," and groups of 5-6 participants conduct a small-scale epidemiological study during the first week; during the second week, they draft a virtual program of CVD prevention and control adapted to the local situation. This practice-oriented workshop focuses on building expertise among anticipated key players, strengthening networks among relevant health professionals, and advocating the urgent need to tackle the emerging CVD epidemic in developing countries.
Abstract:
BACKGROUND: In Switzerland, intravenous drug use (IDU) accounts for 80% of newly acquired hepatitis C virus (HCV) infections. Early HCV treatment has the potential to interrupt the transmission chain and reduce morbidity/mortality due to decompensated liver cirrhosis and hepatocellular carcinoma. Nevertheless, patients in drug substitution programs are often insufficiently screened and treated. OBJECTIVE/METHODS: With the aim of improving HCV management in IDUs, we conducted a cross-sectional chart review in three opioid substitution programs in St. Gallen (125 methadone and 71 heroin recipients). Results were compared with another heroin substitution program in Bern (202 patients) and with SCCS/SHCS data. RESULTS: Among the methadone/heroin recipients in St. Gallen, the diagnostic workup of HCV was better than expected: HCV/HIV status was unknown in only 1% (2/196), HCV RNA testing was not performed in 9% (13/146) of anti-HCV-positives, and the genotype was missing in 15% (12/78) of HCV RNA-positives. In those without spontaneous clearance (two thirds), HCV treatment uptake was 23% (21/91) (HIV-negative: 29% (20/68); HIV-positive: 4% (1/23)), which was lower than in methadone/heroin recipients and particularly non-IDUs within the SCCS/SHCS, but higher than in the mainly psychiatrically focused heroin substitution program in Bern (8%). Sustained virological response (SVR) rates were comparable in all settings (overall: 50%; genotype 1: 35-40%; genotype 3: two thirds). In St. Gallen, the median delay from the estimated date of infection (start of IDU) to first diagnosis was 10 years, and to treatment another 7.5 years. CONCLUSIONS: Future efforts need to focus on earlier HCV diagnosis and improved treatment uptake among patients in drug substitution programs, particularly if patients are HIV-co-infected. New potent drugs might facilitate the decision to initiate treatment.
Abstract:
In addition to the importance of sample preparation and extract separation, MS detection is a key factor in the sensitive quantification of large undigested peptides. In this article, a linear ion trap MS (LIT-MS) and a triple quadrupole MS (TQ-MS) are compared for the detection of large peptides at subnanomolar concentrations. Natural brain natriuretic peptide, C-peptide, substance P and the D-JNK-inhibitor peptide, a full D-amino acid therapeutic peptide, were chosen. They were detected by ESI with simultaneous MS(1) and MS(2) acquisitions. With direct peptide infusion, MS(2) spectra revealed that fragmentation was peptide dependent, milder on the LIT-MS, and required high collision energies on the TQ-MS to obtain high-intensity product ions. Peptide adsorption on surfaces was overcome, and peptide dilutions ranging from 0.1 to 25 nM were injected onto an ultra-high-pressure LC system with a 1 mm i.d. analytical column coupled to the MS instruments. No difference was observed between the two instruments in LC-MS(1) acquisitions. However, in LC-MS(2) acquisitions, better sensitivity for the detection of large peptides was observed with the LIT-MS. Indeed, with the three longer peptides, the typical fragmentation in the TQ-MS resulted in a dramatic loss of sensitivity (≥ 10×).