3 results for Ordered subsets – Expectation maximization (OS-EM)

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Abstract:

The Deep Underground Neutrino Experiment (DUNE) is a long-baseline accelerator experiment designed to make a significant contribution to the study of neutrino oscillations with unprecedented sensitivity. The main goal of DUNE is the determination of the neutrino mass ordering and of the leptonic CP violation phase, key parameters of three-neutrino flavor mixing that have yet to be determined. An important component of the DUNE Near Detector complex is the System for on-Axis Neutrino Detection (SAND) apparatus, which will include GRAIN (GRanular Argon for Interactions of Neutrinos), a novel liquid argon detector aimed at imaging neutrino interactions using only scintillation light. For this purpose, an innovative optical readout system based on Coded Aperture Masks is investigated. This dissertation aims to demonstrate the feasibility of reconstructing particle tracks and the topology of CCQE (Charged Current Quasi Elastic) neutrino events in GRAIN with such a technique. To this end, a reconstruction algorithm based on Maximum Likelihood Expectation Maximization (MLEM) was developed and implemented to directly obtain a three-dimensional distribution proportional to the energy deposited by charged particles crossing the LAr volume. This study includes the evaluation of several camera design configurations and the simulation of a multi-camera optical system in GRAIN.
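The MLEM reconstruction named in the abstract has a standard multiplicative form, and the OS-EM variant this search matches accelerates it by cycling over ordered subsets of the detector data. The sketch below is a minimal, generic Python/NumPy illustration under the assumption of a precomputed system matrix `A` mapping voxel emissions to sensor counts; the function names and shapes are illustrative, not the thesis's actual GRAIN camera model.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Generic MLEM for emission reconstruction (textbook form, not the thesis code).

    A : (n_detectors, n_voxels) system matrix; A[i, j] is the probability that
        a photon emitted in voxel j is detected by sensor i.
    y : (n_detectors,) measured counts.
    Returns an (n_voxels,) non-negative emission estimate.
    """
    sensitivity = A.sum(axis=0)                  # per-voxel detection efficiency
    lam = np.ones(A.shape[1])                    # flat initial guess
    for _ in range(n_iter):
        expected = A @ lam                       # forward projection
        ratio = y / np.maximum(expected, eps)    # measured / expected counts
        lam *= (A.T @ ratio) / np.maximum(sensitivity, eps)  # multiplicative update
    return lam

def os_em(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """OS-EM: the same update, applied while cycling over ordered detector subsets."""
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    lam = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]                  # restrict to this subset's rows
            expected = As @ lam
            lam *= (As.T @ (ys / np.maximum(expected, eps))) / \
                   np.maximum(As.sum(axis=0), eps)
    return lam
```

Each OS-EM sweep performs `n_subsets` updates per pass through the data, which is why it typically converges in far fewer passes than plain MLEM at the cost of some added noise.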

Relevance:

20.00%

Abstract:

Follicular lymphoma (FL) is a B-cell neoplasm, composed of follicle center cells, that accounts for about 20% of all lymphomas, with the highest incidence reported in the USA and western Europe. FL has been considered a virtually incurable disease, with high response rates alternating with frequent post-therapy relapses or progression towards more aggressive lymphomas. Because of the extreme variability in outcome, many efforts have been made to predict prognosis, the need for therapy, and the likelihood of evolution. Although clinical scores have proved robust and easy to use for patient risk stratification in clinical practice, marked heterogeneity in outcome remains within each risk group, and further insights into the biology of FL are needed. Genome-wide approaches underscored the pivotal role of the FL microenvironment in the evolution of the disease. In 2004, a landmark study by Dave et al. first described the impact of the microenvironment on tumor biology: by gene expression profiling, they identified two different immune response signatures, involving T cells and macrophages, which seemed to independently predict FL outcome; however, their exact role is not completely understood, and different studies have led to variable results. Subsequently, several groups identified features of the amount and distribution pattern of these cell subsets that can impact prognosis, leading to the hypothesis that these parameters could serve as surrogate markers of the molecular signature. We aimed to assess the possible contributions of microenvironmental components to FL transformation or progression, their relevance as a prognostic/predictive tool, and their potential role as an innovative therapeutic target. Using immunohistochemical techniques focused specifically on macrophage and T-cell subsets, we correlated the presence, proportions, and distribution of these reactive cells with clinical outcomes, laying the groundwork for a reliable tool for upfront risk stratification of patients affected by FL.

Relevance:

20.00%

Abstract:

Reinforcement Learning (RL) provides a powerful framework for sequential decision-making problems in which the transition dynamics are unknown or too complex to be represented. The RL approach is based on inferring the best decision to make from sample estimates obtained through previous interactions, a recipe that has led to several breakthroughs in domains ranging from game playing to robotics. Despite these successes, current RL methods hardly generalize from one task to another, and the kind of generalization obtained through unsupervised pre-training in non-sequential problems still seems out of reach. Unsupervised RL has recently emerged as a way to improve the generalization of RL methods. Like its non-sequential counterpart, the unsupervised RL framework comprises two phases: an unsupervised pre-training phase, in which the agent interacts with the environment without external feedback, and a supervised fine-tuning phase, in which the agent aims to efficiently solve a task in the same environment by exploiting the knowledge acquired during pre-training. In this thesis, we study unsupervised RL via state entropy maximization, in which the agent uses the unsupervised interactions to pre-train a policy that maximizes the entropy of its induced state distribution. First, we provide a theoretical characterization of the learning problem by considering a convex RL formulation that subsumes state entropy maximization. Our analysis shows that maximizing the state entropy in finite trials is inherently harder than standard RL. Then, we study the state entropy maximization problem from an optimization perspective. In particular, we show that the primal formulation of the corresponding optimization problem can be (approximately) addressed through tractable linear programs. Finally, we provide the first practical methodologies for state entropy maximization in complex domains, both when pre-training takes place in a single environment and when it spans multiple environments.
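To ground the state entropy objective, here is a minimal sketch of the particle-based k-nearest-neighbor entropy estimate commonly used as an intrinsic reward in the unsupervised RL literature; it assumes continuous state vectors, the function name is hypothetical, and this is a generic illustration rather than the specific methodology developed in the thesis. The agent is rewarded in proportion to the log-distance to the k-th nearest visited state, so revisiting crowded regions yields little reward and spreading over the state space raises the entropy of the induced state distribution.

```python
import numpy as np

def knn_entropy_reward(states, k=5):
    """Particle-based intrinsic reward for state entropy maximization (hypothetical helper).

    Uses the k-nearest-neighbor entropy estimator: the entropy of the visited
    state distribution grows with the log-distance from each state to its k-th
    neighbor, so that distance can serve as a per-state reward signal.

    states : (n, d) array of states visited during unsupervised interaction.
    Returns an (n,) array of intrinsic rewards.
    """
    # Pairwise Euclidean distances between all visited states.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)              # exclude each state's self-distance
    knn_dist = np.sort(dists, axis=1)[:, k - 1]  # distance to the k-th nearest neighbor
    # log(1 + d_k) keeps the reward finite when neighboring states coincide.
    return np.log(1.0 + knn_dist)
```

In a pre-training loop, such a reward would be fed to any standard policy optimization method in place of an external task reward; the k-NN form is what keeps the estimator tractable without modeling the state density explicitly.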