23 results for Process models


Relevance: 30.00%

Abstract:

Logistics involves planning, managing, and organizing the flows of goods from the point of origin to the point of destination in order to meet given requirements. Logistics and transportation aspects are very important and represent a relevant cost not only for producing and shipping companies, but also for public administration and private citizens. The optimization of resources and the improvement of the organization of operations are crucial for all branches of logistics, from operations management to transportation. As we will have the chance to see in this work, optimization techniques, models, and algorithms are important tools for solving the ever newer and more complex problems arising in the different segments of logistics. Many operations management and transportation problems belong to the class of optimization problems called Vehicle Routing Problems (VRPs). In this work, we consider several real-world deterministic and stochastic problems that fall within the wide class of VRPs, and we solve them by means of exact and heuristic methods. We treat three classes of real-world routing and logistics problems. We deal with one of the most important tactical problems arising in the management of bike sharing systems, namely the Bike sharing Rebalancing Problem (BRP). We propose models and algorithms for real-world earthwork optimization problems. We describe the 3D printing (3DP) process and highlight several optimization issues in 3DP. Among these, we define the problem related to tool path definition in the 3DP process, the 3D Routing Problem (3DRP), which is a generalization of the arc routing problem. We present an ILP model and several heuristic algorithms to solve the 3DRP.
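The greedy construction idea behind many simple routing heuristics can be sketched as follows. This is a generic nearest-neighbour illustration on made-up coordinates for a single-vehicle tour, not one of the thesis's algorithms:

```python
import math

# Made-up depot and customer coordinates; a toy single-vehicle instance.
depot = (0.0, 0.0)
customers = [(2.0, 1.0), (5.0, 0.0), (1.0, 4.0), (4.0, 3.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_route(depot, customers):
    """Greedy construction: repeatedly visit the closest unvisited
    customer, then return to the depot."""
    route, current = [depot], depot
    remaining = list(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: dist(current, c))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)
    return route

route = nearest_neighbor_route(depot, customers)
length = sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))
```

Exact methods (such as ILP solvers) would instead certify an optimal tour; heuristics like this one trade optimality for speed on large instances.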

Relevance: 30.00%

Abstract:

INTRODUCTION: The orthotopic left lung transplantation model in rats has been developed to answer a variety of scientific questions in transplant immunology and in the related fields of respiratory diseases. However, its widespread use has been hampered by the complexity of the procedure. AIM OF THE RESEARCH: Our purpose is to provide a detailed description of this technique, including the complications and difficulties encountered from the very first microsurgical step up to the successful completion of the transplant procedure. MATERIALS AND METHODS: The transplant procedures were performed by two collaborating transplant surgeons with microsurgical and thoracic surgery skills. A total of 150 left lung transplants in rats were performed: twenty-seven syngeneic (Lewis to Lewis) and 123 allogeneic (Brown-Norway to Lewis) lung transplants, all using the cuff technique. RESULTS: In the first 50 transplant procedures, the post-transplant survival rate was 74%, of which 54% reached the end-point of 3 or 7 days post-transplant; the overall complication rate was 66%. In the subsequent 50 transplant surgeries (from 51 to 100), the post-transplant survival rate increased to 88%, of which 56% reached the end-point; the overall complication rate was 32%. In the final 50 transplants (from 101 to 150), the post-transplant survival rate was confirmed at 88%, of which 74% reached the end-point; the overall complication rate was again 32%. CONCLUSIONS: One hundred and fifty transplants can represent a reasonable number of procedures to obtain a satisfactory surgical outcome. A training period with simpler animal models is mandatory to develop the anesthesiological and microsurgical skills required to successfully establish this model. Collaboration between at least two microsurgeons is mandatory to perform the simultaneous procedures required to complete the transplant surgery.

Relevance: 30.00%

Abstract:

Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes outperform many existing methods in terms of both computational burden and solution quality. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models, for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme using a deep-learning-based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of both supervised strategies, such as trained convolutional and recurrent networks, and unsupervised ones, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
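As a minimal illustration of the variational approach described above, the following sketch denoises a 1-D signal by gradient descent on a quadratically regularized least-squares objective, F(x) = 0.5·||x − y||² + 0.5·λ·||Dx||², with D the forward-difference operator. The data, step size, and regularization weight are made up, and the thesis's actual models (sparsity-promoting and learned regularizers) are far richer:

```python
# Noisy 1-D observation (invented values) and hyperparameters.
y = [0.0, 0.1, 1.2, 0.9, 1.1, 0.0, -0.1]
lam = 2.0    # regularization strength (balances fidelity vs. smoothness)
step = 0.1   # gradient step size, below 1/L for this quadratic objective

x = list(y)  # initialize the reconstruction at the observation
for _ in range(500):
    # gradient of the data-fidelity term 0.5*||x - y||^2 is (x - y)
    grad = [xi - yi for xi, yi in zip(x, y)]
    # gradient of the smoothness term 0.5*lam*||Dx||^2 is lam * D^T D x
    for i in range(len(x)):
        left = x[i] - x[i - 1] if i > 0 else 0.0
        right = x[i + 1] - x[i] if i < len(x) - 1 else 0.0
        grad[i] += lam * (left - right)
    x = [xi - step * gi for xi, gi in zip(x, grad)]
```

Because the objective is quadratic and the step size is conservative, the iteration monotonically decreases F, trading some data fidelity for a smoother reconstruction.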

Relevance: 30.00%

Abstract:

Bone disorders have a severe impact on body functions and quality of life, and no satisfactory therapies exist yet. The current models for the study of bone disease are scarcely predictive, and the existing options for therapy fail for complex systems. To mimic and/or restore bone, 3D printing/bioprinting allows the creation of 3D structures with different material compositions, properties, and designs. In this study, 3D printing/bioprinting has been explored for (i) 3D in vitro tumor models and (ii) regenerative medicine. Tumor models have been developed by investigating different bioinks (i.e., alginate, modified gelatin) enriched with hydroxyapatite nanoparticles to increase printing fidelity and the level of biomimicry, thus mimicking the organic and inorganic phases of bone. High Saos-2 cell viability was obtained, and the formation of spheroid clusters, as occurs in vivo, was observed. To develop new synthetic bone grafts, two approaches have been explored. In the first, novel magnesium-phosphate scaffolds have been investigated by extrusion-based 3D printing for spinal fusion. The 3D printing process and parameters have been optimized to obtain custom-shaped structures with competent mechanical properties. The 3D printed structures have been combined with porous alginate structures created by a novel ice-templating technique, to be loaded with an antibiotic drug to address infection prevention. Promising results in terms of planktonic growth inhibition were obtained. In the second strategy, marine waste precursors have been considered for conversion into biogenic hydroxyapatite (HA) using a mild wet conversion method with different parameters. The HA/carbonate conversion efficacy was analysed for each precursor (by FTIR and SEM), and the best conditions were combined with alginate to develop a composite structure. The composite paste was successfully employed in a custom-modified 3D printer to obtain stable 3D printed scaffolds. In conclusion, the osteomimetic materials developed in this study for bone models and synthetic grafts are promising for the bone field.

Relevance: 30.00%

Abstract:

The main topic of this thesis is confounding in linear regression models. It arises when the relationship between an observed process (the covariate) and an outcome process (the response) is influenced by an unmeasured process (the confounder) associated with both. Consequently, the estimators of the regression coefficients of the measured covariates may be severely biased, less efficient, and characterized by misleading interpretations. Confounding is an issue when the primary target of the work is the estimation of the regression parameters. The central point of the dissertation is the evaluation of the sampling properties of the parameter estimators. This work aims to extend the spatial confounding framework to general structured settings and to understand the behaviour of confounding as a function of the structural parameters of the data generating process in several scenarios, focusing on the joint covariate-confounder structure. In line with the spatial statistics literature, our purpose is to quantify the sampling properties of the regression coefficient estimators and, in turn, to identify the most prominent quantities of the generative mechanism impacting confounding. Once the sampling properties of the estimator conditional on the covariate process are derived as ratios of dependent quadratic forms in Gaussian random variables, we provide an analytic expression for the marginal sampling properties of the estimator using Carlson's R function. Additionally, we propose a representative quantity for the magnitude of confounding as a proxy for the bias, namely its first-order Laplace approximation. To conclude, we work under several frameworks considering spatial and temporal data, with specific assumptions regarding the covariance and cross-covariance functions used to generate the processes involved. This study allows us to claim that the variability of the confounder-covariate interaction and of the covariate plays the most relevant role in determining the principal marker of the magnitude of confounding.
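The bias mechanism described above can be illustrated with a small simulation: when the confounder is omitted from the regression, the ordinary least squares slope absorbs part of its effect. This is an i.i.d. toy example with invented coefficients; the thesis itself works with structured spatial and temporal Gaussian processes:

```python
import random

random.seed(0)
n = 2000
beta = 1.0  # true regression coefficient of the covariate

# The unmeasured confounder z drives both the covariate x and the response y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]        # covariate, correlated with z
y = [beta * xi + 2.0 * zi + random.gauss(0, 0.5) # response: z enters directly
     for xi, zi in zip(x, z)]

# OLS slope of y on x with z omitted: cov(x, y) / var(x).
mx, my = sum(x) / n, sum(y) / n
cov_xy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
var_x = sum((xi - mx) ** 2 for xi in x) / n
beta_hat = cov_xy / var_x
# Population value: beta + 2.0 * cov(x, z) / var(x) = 1 + 2 * (1/2) = 2,
# so beta_hat lands near 2.0 rather than the true 1.0.
```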

Relevance: 30.00%

Abstract:

Long-term monitoring of acoustical environments is gaining popularity thanks to the wealth of scientific and engineering insights it provides. The increasing interest is due to the constant growth of storage capacity and of the computational power needed to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level (Leq) is the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach based on the study of the occurrences of sound pressure levels brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, provides more specific information about the activity carried out during the measurements, and the statistical mode of the occurrences can capture the typical behaviour of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate, and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The method is based on clustering analysis: two algorithms, Gaussian Mixture Model and K-means clustering, form the core of a process to investigate different active spaces monitored through sound level meters. The procedure has been applied in two different contexts, university lecture halls and offices. The proposed method shows robust and reliable results in describing the acoustic scenario, and it could represent an important analytical tool for acousticians.
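As a minimal illustration of the clustering idea, the following sketch runs 1-D K-means on made-up sound pressure levels from two coexisting sources (a quiet background around 42-43 dB and a louder activity around 65-66 dB); the actual method works on real sound level meter data and also employs Gaussian Mixture Models:

```python
# Invented stream of sound pressure levels (dB) from two coexisting sources.
levels = [42.1, 43.0, 41.8, 42.5, 65.2, 64.8, 66.0, 42.9, 65.5, 43.3]

def kmeans_1d(data, k=2, iters=20):
    # initialize the centroids at the extremes of the observed range
    centroids = [min(data), max(data)]
    clusters = []
    for _ in range(iters):
        # assignment step: each level joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for v in data:
            idx = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[idx].append(v)
        # update step: each centroid moves to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d(levels)  # one centroid per source
```

Each centroid then serves as a representative level for one source, in the spirit of describing the scene through its most probable levels rather than a single Leq.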

Relevance: 30.00%

Abstract:

Natural events are a widely recognized hazard for industrial sites where relevant quantities of hazardous substances are handled, due to the possible generation of cascading events resulting in severe technological accidents (Natech scenarios). Natural events may damage storage and process equipment containing hazardous substances, which may then be released, leading to major accident scenarios called Natech events. The need to assess the risk associated with Natech scenarios is growing, and methodologies have been developed to quantify Natech risk, considering both point sources and linear sources such as pipelines. A key element of these procedures is the use of vulnerability models providing an estimate of the damage probability of an equipment item or pipeline segment as a result of the impact of the natural event. Therefore, the first aim of the PhD project was to outline the state of the art of vulnerability models for equipment and pipelines subject to natural events such as floods, earthquakes, and wind. Moreover, the project also aimed at the development of new vulnerability models in order to fill some gaps in the literature. In particular, vulnerability models for vertical equipment subject to wind and to flood were developed. Finally, in order to improve the calculation of Natech risk for linear sources, an original quantitative risk assessment methodology was developed for pipelines subject to earthquakes. Overall, the results obtained are a step forward in the quantitative risk assessment of Natech accidents. The tools developed open the way to the inclusion of new equipment in the analysis of Natech events, and the methodology for the assessment of linear risk sources such as pipelines provides an important tool for a more accurate and comprehensive assessment of Natech risk.
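One common way to express such a vulnerability model is a lognormal fragility curve mapping event intensity to damage probability. The sketch below uses invented parameters for a hypothetical vertical tank under flood loading and is not one of the models developed in the project:

```python
import math

def damage_probability(intensity, median, beta):
    """Lognormal fragility curve: probability that an equipment item is
    damaged at a given natural-event intensity (e.g. flood depth in m).
    `median` is the intensity causing damage with 50% probability and
    `beta` the lognormal dispersion; both are illustrative placeholders."""
    z = math.log(intensity / median) / beta
    # standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical vertical tank under flood loading (made-up parameters):
p_low = damage_probability(0.5, median=2.0, beta=0.5)   # shallow flooding
p_mid = damage_probability(2.0, median=2.0, beta=0.5)   # 0.5 at the median
p_high = damage_probability(5.0, median=2.0, beta=0.5)  # deep flooding
```

In a quantitative risk assessment, such a probability would be combined with the natural-event hazard frequency and the consequences of the resulting release.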

Relevance: 30.00%

Abstract:

Cancers of unknown primary site (CUPs) are a rare group of metastatic tumours, with a frequency of 3-5% and an overall survival of 6-10 months. The identification of the tumour primary site is usually reached by a combination of diagnostic investigations and immunohistochemical testing of the tumour tissue; in CUP patients, these investigations are inconclusive. Since international guidelines for treatment are based on the indication of the primary site, CUP treatment requires a blind approach. As a consequence, CUPs are usually treated empirically, with poor effectiveness. In this study, we measured a set of microRNAs using EvaGreen-based Droplet Digital PCR in a retrospective and prospective collection of formalin-fixed paraffin-embedded tissue samples. We assessed miRNA expression in 155 samples, including primary tumours (N=94), metastases of known origin (N=10), and metastases of unknown origin (N=50). Then, we applied the shrunken centroids predictive algorithm to infer the CUP's site(s) of origin. The molecular test was successfully applied to all CUP samples and provided a site-of-origin identification for every sample, potentially within a one-week time frame from sample inclusion. In the second part of the study we derived two CUP cell lines and the corresponding patient-derived xenografts (PDXs). The CUP cell lines and PDXs underwent histological, molecular, and genomic characterization, confirming the features of the original tumour. The tissue-of-origin prediction was obtained from the tumour microRNA expression profile and confirmed by single-cell RNA sequencing. Genomic testing identified FGFR2 amplification in both models. Drug-screening assays were performed to test the activity of an FGFR2-targeting drug and its combination with the MEK inhibitor trametinib, which proved to be synergistic and exceptionally active, both in vitro and in vivo. In conclusion, our study demonstrated that miRNA expression profiling can be employed as a diagnostic test. Moreover, we successfully derived two patient-derived CUP models and used them for therapy testing, bringing personalized therapy closer to CUP patients.
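The centroid-based classification idea underlying the shrunken centroids algorithm can be sketched as follows: each class (tissue of origin) is summarized by the mean of its training expression profiles, and a new sample is assigned to the nearest centroid. The profiles and tissue names below are invented, and the shrinkage step of the actual algorithm (moving centroid components toward the overall mean to select informative features) is omitted:

```python
# Made-up miRNA expression profiles per tissue of origin (toy training set).
train = {
    "lung":  [[5.0, 1.0, 0.2], [4.8, 1.2, 0.1]],
    "liver": [[0.5, 4.9, 2.0], [0.7, 5.1, 1.8]],
}

def centroid(profiles):
    # component-wise mean of a list of equal-length expression vectors
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

centroids = {tissue: centroid(ps) for tissue, ps in train.items()}

def predict(sample):
    # assign the sample to the class with the nearest centroid
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda t: sqdist(sample, centroids[t]))

label = predict([4.5, 1.1, 0.3])  # unknown-origin sample (illustrative)
```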