11 results for decomposition microenvironment
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This thesis investigates Decomposition and Reformulation for solving Integer Linear Programming problems. This method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. Until now, however, the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposing and auto-reformulating the input problem, is applicable as a black-box solution algorithm, and works as a complement and alternative to standard solution techniques. The idea of Decomposition and Reformulation (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP, several decompositions can be defined depending on which sets of constraints we choose to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be expressed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation), and otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a Branch-and-Bound scheme (Branch-and-Price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
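For context, the master/pricing loop described above can be sketched on the classic cutting-stock example, whose knapsack slave has exactly the kind of easy structure mentioned at the end of the abstract. Everything below (data, names, the hard-coded slave) is an illustrative assumption; the thesis's contribution is precisely a framework that derives such decompositions automatically for a generic MIP:

```python
# Minimal column-generation sketch on the cutting-stock LP relaxation
# (illustrative: data and names are assumptions, and the knapsack slave
# is hard-coded, unlike the generic auto-decomposition of the thesis).
import numpy as np
from scipy.optimize import linprog

roll_len = 10                     # raw roll length (assumed data)
sizes = np.array([3, 4, 5])       # piece lengths (assumed data)
demand = np.array([30, 20, 10])   # pieces required (assumed data)

# Initial master columns: one single-size cutting pattern per piece.
cols = [np.eye(1, len(sizes), i).ravel() * (roll_len // s)
        for i, s in enumerate(sizes)]

while True:
    A = np.column_stack(cols)
    # Restricted master: minimize rolls used subject to covering demand.
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals        # dual prices of the demand rows

    # Pricing (slave): knapsack maximizing the dual value of a pattern.
    dp = {}
    def knap(cap):
        if cap not in dp:
            best = (0.0, None)
            for i, s in enumerate(sizes):
                if s <= cap:
                    v = duals[i] + knap(cap - s)[0]
                    if v > best[0]:
                        best = (v, i)
            dp[cap] = best
        return dp[cap]

    if knap(roll_len)[0] <= 1 + 1e-9:     # reduced cost 1 - value >= 0:
        break                             # no improving column, stop
    pattern, cap = np.zeros(len(sizes)), roll_len
    while knap(cap)[1] is not None:       # rebuild the optimal pattern
        i = knap(cap)[1]
        pattern[i] += 1
        cap -= sizes[i]
    cols.append(pattern)                  # add the priced column and re-solve

print("LP bound on the number of rolls:", res.fun)
```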
Abstract:
Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid material in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian (ALE) formulation coupled with a non-linear structure model, treating the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework which solves simultaneously for the fluid and the structure unknowns within a single solver. We use the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result is proved for the Backward-Euler time-stepping scheme applied to both the fluid and the solid part, combined with the finite element method for the space discretization. The resulting linear system is solved by multilevel domain decomposition techniques. Our strategy is to solve several local subproblems over subdomain patches using the Schur-complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present results for a set of two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in a laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we consider the academic numerical test of simulating pressure wave propagation through a straight compliant vessel. All the tests show the applicability and the numerical efficiency of our approach for both two-dimensional and three-dimensional problems.
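The monolithic system described above is commonly written in the following schematic form; the notation is the standard one for ALE fluid-structure interaction and is assumed here rather than taken from the thesis:

```latex
% Schematic monolithic ALE FSI system (standard, assumed notation).
\begin{aligned}
&\rho_f\Big(\frac{\partial\mathbf{u}}{\partial t}\Big|_{\mathcal A}
  + ((\mathbf{u}-\mathbf{w})\cdot\nabla)\mathbf{u}\Big)
  - \nabla\cdot\boldsymbol{\sigma}_f(\mathbf{u},p) = \mathbf{f}_f,
  \qquad \nabla\cdot\mathbf{u}=0 && \text{in } \Omega_f(t),\\
&\rho_s\,\frac{\partial^2\boldsymbol{\eta}}{\partial t^2}
  - \nabla\cdot\big(\mathbf{F}\,\mathbf{S}(\boldsymbol{\eta})\big) = \mathbf{f}_s
  && \text{in } \Omega_s,\\
&\mathbf{u}=\frac{\partial\boldsymbol{\eta}}{\partial t},
  \qquad \boldsymbol{\sigma}_f\,\mathbf{n}=\boldsymbol{\sigma}_s\,\mathbf{n}
  && \text{on } \Gamma(t),
\end{aligned}
```

where u and p are the fluid velocity and pressure, w the ALE mesh velocity, η the structure displacement, F the deformation gradient, S the second Piola-Kirchhoff stress tensor, and Γ(t) the fluid-solid interface on which the kinematic and dynamic coupling conditions hold; the Backward-Euler/finite element discretization mentioned in the abstract is applied to this coupled system as a whole.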
Abstract:
Decomposition-based approaches are recalled from both the primal and the dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required per iteration. This trade-off is explored on several sets of instances and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices, representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
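For context, the Frank-Wolfe iteration alternates a linearized subproblem with a convex-combination update. The sketch below is a generic minimal version in which the feasible set is a probability simplex and all names and data are assumptions; in the route assignment setting of the thesis, the linearized subproblem is itself a (multicommodity) flow problem:

```python
# Generic Frank-Wolfe iteration sketch (illustrative only).
import numpy as np

def frank_wolfe(grad_f, lmo, x0, iters=100):
    """Minimize a smooth convex f over a compact convex set C.

    grad_f : gradient oracle of the objective
    lmo    : linear minimization oracle, returns argmin_{s in C} <g, s>
    x0     : feasible starting point (e.g. an initial flow assignment)
    """
    x = x0
    for k in range(iters):
        s = lmo(grad_f(x))            # solve the linearized subproblem
        gamma = 2.0 / (k + 2.0)       # standard diminishing step size
        x = x + gamma * (s - x)       # convex combination stays feasible
    return x

# Toy usage: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.1, 0.7, 0.4])
grad = lambda x: 2 * (x - b)
# Simplex LMO: all mass on the coordinate with the smallest gradient.
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x = frank_wolfe(grad, lmo, x0=np.ones(3) / 3)
print(x)  # approaches the projection of b onto the simplex
```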
Abstract:
The microenvironment in bone tumors is a dynamic entity composed of cells of different origins (immune cells, stromal cells, mesenchymal stem cells, endothelial cells, pericytes) and vascular structures surrounded by a matrix of different nature (bone, cartilage, myxoid). Interactions between cancer cells and the tumor microenvironment (TME) are complex and can change as the tumor progresses, but they are also crucial in determining response to cancer therapies. Chondrosarcoma is the second most frequent bone cancer in adult age, but its treatment still represents a challenge because of its intrinsic resistance to conventional chemotherapy and radiation therapy. This resistance is mainly due to pathological features such as a dense matrix, scarce mitoses and poor vascularization, sustained by biological mechanisms that are only partially elucidated. Somatic mutations in the Krebs cycle enzyme isocitrate dehydrogenase (IDH) have been described in gliomas, acute myeloid leukemia, cholangiocarcinoma, melanoma, colorectal and prostate cancer, thyroid carcinoma and other cancers. In mesenchymal tumors, IDH mutations are present in about 50% of central chondrosarcomas. IDH mutations are an early event in chondrosarcomagenesis and contribute to the acquisition of malignancy through the block of cellular differentiation, hypoxia induction through HIF stabilization, DNA methylation and alteration of the cellular redox balance. While in gliomas IDH mutations confer a good prognosis, in chondrosarcoma the prognostic role of IDH is controversial across the reported series. The first aim of this project is to define the prevalence and the prognostic role of IDH mutations in high-grade central conventional chondrosarcoma patients treated at Istituto Ortopedico Rizzoli. The second aim is a critical revision of the scientific literature to better understand how a genomic event in the cancer cell can trigger alterations in the TME, through immune infiltrate reshaping, angiogenesis induction, and metabolic and methylation rewiring. The third aim is to screen other sarcoma histotypes for the presence of IDH mutations.
Abstract:
In recent years a great effort has been devoted to the development of new techniques for automatic object classification, owing also to their consequences for many applications such as medical imaging or driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition, the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, unlike other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition, yielding basis vectors used to classify a new object. The second model is a tensor dictionary learning model based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
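For context, the TT decomposition of a d-way tensor can be computed by the standard TT-SVD algorithm, sketched below on an assumed toy tensor; the classification and dictionary-learning models built on top of it in the thesis are not reproduced here:

```python
# TT-SVD sketch: factor a d-way tensor into Tensor-Train cores by
# sequential truncated SVDs (illustrative, assumed toy data).
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Return TT cores G_k with shapes (r_{k-1}, n_k, r_k)."""
    cores, r, mat = [], 1, np.asarray(tensor, dtype=float)
    for n in tensor.shape[:-1]:
        mat = mat.reshape(r * n, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(S > eps * S[0])))   # truncated TT rank
        cores.append(U[:, :rank].reshape(r, n, rank))
        mat = S[:rank, None] * Vt[:rank]             # pass remainder on
        r = rank
    cores.append(mat.reshape(r, tensor.shape[-1], 1))
    return cores

# Toy usage: a rank-1 3x4x5 tensor is recovered exactly from its cores.
a, b, c = np.random.rand(3), np.random.rand(4), np.random.rand(5)
T = np.einsum('i,j,k->ijk', a, b, c)
G1, G2, G3 = tt_svd(T)
R = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)  # contract cores back
print(np.linalg.norm(R - T))                   # ~1e-16
```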
Abstract:
Follicular lymphoma (FL) is a B-cell neoplasm, composed of follicle center cells, that accounts for about 20% of all lymphomas, with the highest incidence reported in the USA and western Europe. FL has been considered a virtually incurable disease, with high response rates alternating with frequent post-therapy relapses or progression towards more aggressive lymphomas. Due to the extreme variability in outcome, many efforts have been made to predict prognosis, the need for therapy, and the likelihood of evolution. Even if clinical scores have proved robust and easy to use in clinical practice for patient risk stratification, marked heterogeneity in outcome remains within each group, and further insight into the biology of FL is needed. Genome-wide approaches have underscored the pivotal role of the FL microenvironment in the evolution of the disease. In 2004, a landmark study by Dave et al. first described the impact of the microenvironment on tumor biology. By gene expression profiling they identified two different immune-response signatures, involving T cells and macrophages, which seemed to independently predict FL outcome; however, their exact role is not completely understood, and different studies have led to variable results. Subsequently, many groups identified, in the amount and distribution pattern of these different cell subsets, features that can impact prognosis, leading to the hypothesis that these parameters could be used as surrogate markers of the molecular signature. We aimed to assess the possible contributions of microenvironmental components to FL transformation or progression, their relevance as a prognostic/predictive tool, and their potential role as an innovative therapeutic target. We used immunohistochemical techniques, focusing specifically on macrophage and T-cell subsets, and then looked for correlations between the presence, proportions, and distribution of these reactive cells and clinical outcomes, with a view to the future development of a reliable tool for upfront risk stratification of patients affected by FL.
Abstract:
Noise is a constant presence in measurements. Its origin is related to the microscopic properties of matter. Since the seminal work of Brown in 1828, the study of stochastic processes has gained increasing interest with the development of new mathematical and analytical tools. In the last decades, the central role that noise plays in chemical and physiological processes has become recognized. The dual role of noise as nuisance/resource pushes towards the development of new decomposition techniques that divide a signal into its deterministic and stochastic components. In this thesis I show how methods based on Singular Spectrum Analysis (SSA) have the right properties to fulfil this requirement. During my work I applied SSA to different signals of interest in chemistry: I developed a novel iterative procedure for the denoising of powder X-ray diffractograms, and I "denoised" two-dimensional images from experiments of electrochemiluminescence (ECL) imaging of micro-beads, obtaining new insight into the ECL mechanism. I also used Principal Component Analysis to investigate the relationship between brain electrophysiological signals and voice emission.
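The basic SSA decomposition underlying this work proceeds by embedding, singular value decomposition, grouping, and diagonal averaging. The following minimal sketch illustrates these four steps on an assumed toy signal; the iterative denoising procedure developed in the thesis goes beyond this outline:

```python
# Basic Singular Spectrum Analysis sketch (illustrative, assumed data).
import numpy as np

def ssa(signal, window, n_components):
    """Reconstruct `signal` from its leading SSA components."""
    n = len(signal)
    k = n - window + 1
    # 1. Embedding: build the Hankel trajectory matrix.
    X = np.column_stack([signal[i:i + window] for i in range(k)])
    # 2. Decomposition: SVD of the trajectory matrix.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # 3. Grouping: keep leading components as the "deterministic" part.
    Xr = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
    # 4. Diagonal averaging (Hankelization) back to a 1-D series.
    rec, counts = np.zeros(n), np.zeros(n)
    for j in range(k):
        rec[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return rec / counts

# Toy usage: a sine buried in noise (two components capture one harmonic).
t = np.linspace(0, 4 * np.pi, 400)
noisy = np.sin(t) + 0.3 * np.random.randn(t.size)
clean = ssa(noisy, window=60, n_components=2)
```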
Abstract:
Primary myelofibrosis (PMF) is the most severe form of the Philadelphia-negative myeloproliferative neoplasms (MPNs), characterized by splenomegaly, extramedullary hematopoiesis and bone marrow (BM) fibrosis, with disease progression to leukemia and poor survival. The best therapy currently available includes treatment with a JAK inhibitor (Ruxolitinib), which only ameliorates symptoms. Unfortunately, the pathogenesis of the disease is still poorly understood. It has been hypothesized that its progression may be driven by inflammatory cytokines produced by the bone marrow microenvironment that promote fibrosis. The three aims of this PhD thesis, using the Gata1low mouse model of myelofibrosis, were to: 1. investigate the presence of different cytokines in the bone marrow microenvironment; 2. test the efficacy of treatment with Reparixin, a CXCR1/2 receptor inhibitor; 3. test the efficacy of treatment with RB40.34 (a P-selectin inhibitor), alone and in combination with Ruxolitinib. In the first study, we demonstrated by immunohistochemistry (IHC) the presence in the BM of Gata1low mice of elevated levels of CXCL1, its receptors CXCR1/2, and TGF-β1. In particular, the cells with the highest expression of these cytokines were the megakaryocytes. In the second study, we found that treatment with Reparixin in Gata1low mice showed dose-dependent efficacy in reducing bone marrow and splenic fibrosis. Furthermore, by IHC analysis we demonstrated that the treatment induced a decrease in the expression of TGF-β1. In the third study, we found that treatment with RB40.34 in combination with Ruxolitinib normalizes the phenotype of Gata1low mice, reducing fibrosis and the content of TGF-β and CXCL1 in the bone marrow, and restoring the architecture of hematopoiesis in the bone marrow and spleen. In summary, these data provide preclinical evidence that treatment with Reparixin, and with RB40.34 in combination with Ruxolitinib, is effective in reversing the myelofibrotic trait in the Gata1low mouse model, and they encourage clinical trials to validate these compounds in patients with PMF.
Abstract:
Mucosal melanoma of the head and neck region (MM-H&N) is a rare disease, characterized by a poor prognosis and limited therapeutic strategies, especially regarding targeted therapy (a lower rate of targetable mutations compared with cutaneous melanoma) and immunotherapy (a lack of diagnostic tools able to predict the response). Meanwhile, bright-field multiplex immunohistochemistry (BF-mIHC) is emerging as a promising tool for characterizing the tumor microenvironment (TME) and predicting response to immunotherapy in several tumors, including melanoma. This PhD project aims to develop a BF-mIHC protocol to evaluate the TME in MM-H&N, analyze the correlation between immune markers/immune profiles and MM-H&N features (clinicopathologic and molecular), and find new biomarkers useful for the prognostic-therapeutic stratification of these patients. The specific aims are to: (I) describe the clinicopathological features of MM-H&N; (II) analyze the molecular status of MM-H&N and correlate it with the clinicopathological features; (III) analyze the molecular status of multiple specimens from the same patient to verify whether the molecular heterogeneity of MM-H&N could affect the results, with relevant prognostic-therapeutic implications; (IV) develop a BF-mIHC protocol to study the TME in MM-H&N; (V) analyze the correlation between immune markers/immune profiles and MM-H&N features (clinicopathologic and molecular) to test whether BF-mIHC could be a promising tool for the prognostic-therapeutic characterization of these patients.
Abstract:
The main contribution of this thesis is the proposal of novel strategies for the selection of parameters arising in variational models employed for the solution of inverse problems with data corrupted by Poisson noise. In light of the importance of using a significantly reduced dose of X-rays in Computed Tomography (CT), and the consequent need for advanced reconstruction techniques to cope with the high level of noise in the data, we will focus on parameter selection principles especially for low photon counts, i.e., low-dose CT. For completeness, since such strategies can be adopted in various scenarios where the noise in the data follows a Poisson distribution, we will also show their performance for other applications such as photography, astronomical and microscopy imaging. More specifically, in the first part of the thesis we will focus on low-dose CT data corrupted only by Poisson noise, extending automatic selection strategies designed for Gaussian noise and improving the few existing ones for Poisson noise. The new approaches will be shown to outperform the state-of-the-art competitors, especially in the low-count regime. Moreover, we will extend the best performing strategy to the hard task of multi-parameter selection, showing promising results. Finally, in the last part of the thesis, we will introduce the problem of material decomposition for hyperspectral CT, whose data encode how the different materials in the target attenuate X-rays in different ways according to the specific energy. We will conduct a preliminary comparative study to obtain accurate material decomposition starting from a few noisy projections.
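One classical ingredient behind such selection principles is the Poisson discrepancy idea: choose the regularization parameter so that the generalized Kullback-Leibler residual of the reconstruction matches its expected value, approximately n/2 for n Poisson-distributed measurements. Below is a minimal 1-D sketch under assumed names and an assumed quadratic smoothing penalty; the thesis's variational models and refined strategies are not reproduced here:

```python
# Minimal 1-D sketch of a Poisson discrepancy principle (illustrative).
import numpy as np
from scipy.optimize import minimize

def kl(y, x):
    """Generalized Kullback-Leibler residual between data y and model x."""
    x = np.maximum(x, 1e-12)
    return np.sum(x - y + y * np.log(np.maximum(y, 1e-12) / x))

def denoise(y, lam):
    """Solve min_x KL(y, x) + lam * sum(diff(x)^2) with positivity bounds."""
    obj = lambda x: kl(y, x) + lam * np.sum(np.diff(x) ** 2)
    res = minimize(obj, y.clip(min=1.0), method="L-BFGS-B",
                   bounds=[(1e-6, None)] * y.size)
    return res.x

def select_lam(y, lams):
    # For n Poisson measurements the expected KL residual is ~ n/2,
    # so pick the lam whose residual is closest to that target.
    target = y.size / 2.0
    return min(lams, key=lambda l: abs(kl(y, denoise(y, l)) - target))

rng = np.random.default_rng(0)
truth = 50 + 30 * np.sin(np.linspace(0, 3, 100))
y = rng.poisson(truth).astype(float)
lam = select_lam(y, np.logspace(-2, 2, 9))
x_hat = denoise(y, lam)
```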
Abstract:
In this thesis, the viability of Dynamic Mode Decomposition (DMD) as a technique to analyze and model complex dynamic real-world systems is assessed. This method derives, directly from data, computationally efficient reduced-order models (ROMs) which can replace overly onerous or unavailable high-fidelity physics-based models. Optimizations and extensions to the standard implementation of the methodology are proposed, investigating diverse case studies related to the decoding of complex flow phenomena. The flexibility of this data-driven technique allows its application to high-fidelity fluid dynamics simulations, as well as to time series of observations of real systems. The resulting ROMs are tested on two tasks: (i) reduction of the storage requirements of high-fidelity simulations or observations; (ii) interpolation and extrapolation of missing data. The capabilities of DMD can also be exploited to alleviate the cost of onerous studies that require many simulations, such as uncertainty quantification analyses, especially when dealing with complex high-dimensional systems. In this context, a novel approach to address parameter variability when modeling systems with space- and time-variant response is proposed. Specifically, DMD is merged with another model-reduction technique, the Polynomial Chaos Expansion, for uncertainty quantification purposes. Useful guidelines for DMD deployment result from the study, together with a demonstration of its potential to ease diagnosis and scenario analysis when complex flow processes are involved.
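The baseline being extended here is standard (exact) DMD: fit a best-fit linear operator between successive snapshots and diagonalize its rank-reduced projection. A minimal sketch follows, with an assumed toy dataset; the names and data are illustrative, not the thesis's implementation:

```python
# Standard (exact) DMD sketch (illustrative, assumed toy data).
import numpy as np

def dmd(snapshots, r):
    """Compute r DMD eigenvalues/modes from a snapshot matrix.

    snapshots : (n_states, n_times) array of equispaced measurements
    r         : truncation rank of the reduced-order model
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]   # shifted snapshot pairs
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vt[:r].conj().T   # rank-r truncation
    A_tilde = U.conj().T @ Y @ V / S             # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / S @ W                        # exact DMD modes
    return eigvals, modes

# Toy usage: two decaying/oscillating spatial patterns on 100 points.
x = np.linspace(0, 10, 100)[:, None]
t = np.linspace(0, 4, 50)[None, :]
data = (np.sin(x) * np.exp((0.3j - 0.1) * t)
        + 0.5 * np.cos(2 * x) * np.exp(1j * t))
eigvals, modes = dmd(data, r=4)
# Continuous-time frequencies recovered from the discrete eigenvalues:
omegas = np.log(eigvals) / (t[0, 1] - t[0, 0])
```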