3 results for Mean Transit Time
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
This thesis aims to study and follow the construction of a mathematical model capable of solving a real Hub Facility Location logistics problem: identifying the optimal placement of one or more warehouses within a European distribution network and assigning the respective customers to them. The focus is on designing the logistics network to meet customer needs for a multi-product demand. The problem was studied starting from a real industrial case, in order to assess whether it would be convenient to replace four local warehouses with one or two logistics hubs able to serve all areas. The distribution model can also be used to evaluate, from an economic standpoint, the effect of changes in the transport service and in the tariff structure. The optimal location and number of warehouses are determined by a mathematical model that accounts both for the fixed costs of running the warehouses (facility, personnel and inventory costs) and for the costs of transporting and shipping the products to the different geographical areas. In particular, the mathematical formulation is based on an Integer Linear Programming model, solved very quickly by optimization software despite the large amount of input data. The study also integrates different transport tariffs and economies of scale, to give substance to a theoretical model. Finally, in selecting the best among the solutions obtained, factors beyond cost emerged, such as transit time, a key driver of customer satisfaction and loyalty, and the suitability of a geographical area to host a logistics platform, with an eye to future developments.
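The core decision described above, choosing which hubs to open and assigning each customer to an open hub so as to minimise fixed plus transport costs, can be illustrated with a tiny brute-force sketch. This is not the thesis's ILP formulation, and the hub names and cost figures below are invented for illustration; for a handful of candidate hubs, enumerating every subset of at most two hubs is feasible:

```python
from itertools import combinations

def best_hubs(fixed_cost, transport, max_hubs=2):
    """Brute-force uncapacitated hub selection (illustrative only).

    fixed_cost: dict hub -> fixed operating cost (hypothetical values)
    transport:  dict (hub, customer) -> cost of serving that customer
    Returns (total_cost, chosen_hubs): the cheapest configuration with
    at most max_hubs open hubs, each customer served by its cheapest open hub.
    """
    customers = {c for (_, c) in transport}
    best = (float("inf"), None)
    for k in range(1, max_hubs + 1):
        for hubs in combinations(fixed_cost, k):
            cost = sum(fixed_cost[h] for h in hubs)
            # each customer goes to the cheapest currently open hub
            cost += sum(min(transport[h, c] for h in hubs) for c in customers)
            best = min(best, (cost, hubs))
    return best

# Hypothetical data: three candidate hubs, two customer areas.
fixed = {"A": 10, "B": 12, "C": 8}
shipping = {("A", 1): 1, ("A", 2): 5,
            ("B", 1): 4, ("B", 2): 1,
            ("C", 1): 6, ("C", 2): 6}
```

A real instance would instead be handed to an ILP solver, since the number of hub subsets and customer assignments grows combinatorially with network size.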
A Phase Space Box-counting based Method for Arrhythmia Prediction from Electrocardiogram Time Series
Abstract:
Arrhythmia is a class of cardiovascular disease that accounts for a large number of deaths and can pose an irremediable danger. It is a life-threatening condition originating from the disorganized propagation of electrical signals in the heart, resulting in desynchronization among its chambers. Fundamentally, synchronization means that the phase relationship of the electrical activity between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, this coherent phase relationship breaks down, resulting in a chaotic rhythm that affects the regular pumping mechanism of the heart. This phenomenon was explored using phase space reconstruction, a standard technique for analysing time series generated by nonlinear dynamical systems. In this project, a novel index is presented for predicting the onset of ventricular arrhythmias. Continuously captured long-term ECG recordings were analysed up to the onset of arrhythmia by the phase space reconstruction method, yielding 2-dimensional images that were then analysed by the box-counting method. The method was tested on three kinds of ECG data, normal (NR), Ventricular Tachycardia (VT) and Ventricular Fibrillation (VF), extracted from the Physionet ECG database. Statistical measures such as the mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) of the box counts in the phase space diagrams were derived for a sliding window of 10 ECG beats. From these statistical analyses, a threshold was derived as an upper bound on the Coefficient of Variation (CV) of the box counts of the ECG phase portraits, capable of reliably predicting the impending arrhythmia well before its actual occurrence.
As future work, it is planned to validate this prediction tool on a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for each, in order to confirm its clinical applicability.
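The pipeline the abstract describes, a delay embedding of the ECG into a 2-D phase portrait, a box count on a regular grid, and the coefficient of variation of those counts over successive windows, can be sketched in a few lines of Python. This is a minimal illustration with assumed parameters (grid size, embedding delay, window length), not the thesis's actual implementation:

```python
import numpy as np

def delay_embed(x, delay):
    """2-D phase portrait of a 1-D signal: points (x(t), x(t + delay))."""
    return np.column_stack((x[:-delay], x[delay:]))

def box_count(points, n_boxes=16):
    """Count occupied cells of an n_boxes x n_boxes grid over the portrait."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = np.where(hi - lo > 0, hi - lo, 1.0)       # guard flat axes
    idx = np.minimum(((points - lo) / span * n_boxes).astype(int), n_boxes - 1)
    return len(set(map(tuple, idx)))

def cv_of_windows(signal, window, delay=5, n_boxes=16):
    """Coefficient of variation (sigma / mu) of box counts per window."""
    counts = np.array([box_count(delay_embed(signal[i:i + window], delay), n_boxes)
                       for i in range(0, len(signal) - window + 1, window)],
                      dtype=float)
    return counts.std() / counts.mean()
```

In the thesis the window is defined in beats rather than samples, and the CV is compared against the derived threshold to flag an impending arrhythmia; here a synthetic signal would simply yield one scalar CV per run.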
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in computing the matrix function itself, but only its product with a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of certain operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by certain characteristics of the input matrices. Our results suggest that a few factors have a bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
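For small dense symmetric positive definite matrices, one of the equivalent expressions through real matrix powers, A #_t B = L (L^{-1} B L^{-T})^t L^T with A = L L^T, gives a direct way to apply the weighted geometric mean to a vector. The sketch below uses a Cholesky factorization and an eigendecomposition; it is a dense illustration only, since the large sparse case is exactly where the quadrature and Krylov methods studied in the thesis are needed:

```python
import numpy as np

def geo_mean_times_vector(A, B, v, t=0.5):
    """Compute (A #_t B) v for SPD A, B via A #_t B = L (L^{-1} B L^{-T})^t L^T,
    where A = L L^T (dense illustration, O(n^3); not viable for large sparse A, B)."""
    L = np.linalg.cholesky(A)                           # A = L L^T
    M = np.linalg.solve(L, np.linalg.solve(L, B).T).T   # M = L^{-1} B L^{-T}, symmetric
    w, Q = np.linalg.eigh(M)                            # M = Q diag(w) Q^T
    # M^t (L^T v) = Q diag(w**t) Q^T (L^T v); clip guards tiny negative round-off
    y = Q @ (np.clip(w, 0.0, None) ** t * (Q.T @ (L.T @ v)))
    return L @ y
```

With A the identity and B = diag(4, 9), the weighted mean at t = 0.5 is diag(2, 3), so applying it to the all-ones vector returns [2, 3]; at t = 1 the mean reduces to B itself. The quadrature and Krylov approaches in the thesis reach the same vector without ever forming M or its eigendecomposition.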