929 results for Free-ion Model


Relevance: 30.00%

Abstract:

Fabry disease (FD), an X-linked metabolic disorder caused by a deficiency in α-galactosidase A activity, leads to the accumulation of glycosphingolipids, mainly Gb3 and lyso-Gb3, in several organs. Gastrointestinal (GI) symptoms are among the earliest and most common and strongly impact patients' quality of life. However, the origin of these symptoms and the exact mechanisms of pathogenesis are still poorly understood, hence the pressing need to clarify them. Here we aimed to evaluate whether an FD murine model (α-galactosidase A knock-out) captures the functional GI issues experienced by patients. In particular, the potential mechanisms involved in the development and maintenance of GI symptoms were explored by examining the involvement of the microbiota-gut-brain axis. Moreover, we sought to examine the effects of lyso-Gb3 on colonic contractility, the intestinal epithelium, and the enteric nervous system, which together play important roles in regulating intestinal ion transport and fluid and electrolyte homeostasis. Fabry mice displayed visceral hypersensitivity and a diarrhea-like phenotype accompanied by anxiety-like behavior and reduced locomotor activity. They also showed an imbalance of SCFAs and an early compositional and functional dysbiosis of the gut microbiota, which partly persisted with advancing age. Moreover, overexpression of TRPV1 was found in affected mice, together with partial alteration of TRPV4 and TRPA1, identifying them as possible therapeutic targets. Ussing chamber experiments after treatment with lyso-Gb3 showed an increase in Isc (likely mediated by HCO3- ion movement) that affects neuron-mediated secretion, especially capsaicin- and, in part, veratridine-mediated secretion. This first characterization of gut-brain axis dysfunction in the FD mouse provides functional validation of the model, suggesting new targets and possible therapeutic approaches. Furthermore, lyso-Gb3 is confirmed to be not only a marker for the diagnosis and follow-up of FD but also a possible player in the alteration of the FD colonic ion transport process.

Relevance: 30.00%

Abstract:

Ground deformation provides valuable insights into subsurface processes, with patterns reflecting the characteristics of the source at depth. At active volcanic sites, displacements can be observed during unrest phases; a correct interpretation is therefore essential to assess the hazard potential. Inverse modeling is employed to obtain quantitative estimates of the parameters describing the source. However, despite the robustness of the available approaches, realistic imaging of these reservoirs remains challenging. While analytical models return quick but simplistic results, assuming an isotropic and elastic crust, more sophisticated numerical models, which account for the effects of topographic loads, crust inelasticity, and structural discontinuities, require much higher computational effort, and information about crust rheology may be difficult to infer. All these approaches rely on a-priori constraints on the source shape, which influence the reliability of the solution. In this thesis, we present a new approach aimed at overcoming the aforementioned limitations, modeling sources free of a-priori shape constraints with the advantages of FEM simulations but at a lower computational cost. The source is represented as an assembly of elementary units, consisting of cubic elements of a regular FE mesh, each loaded with a unitary stress tensor. The surface response due to each of the six stress-tensor components is computed and linearly combined to obtain the total displacement field. In this way, the source can assume potentially any shape. Our tests prove the equivalence between the deformation field due to our assembly and that of corresponding cavities with uniform boundary pressure. The ability to simulate pressurized cavities in a continuum domain allows surface responses to be pre-computed, avoiding remeshing. A Bayesian trans-dimensional inversion algorithm implementing this strategy is developed: 3D Voronoi cells are used to sample the model domain, selecting the elementary units that contribute to the source solution and those that remain inactive as part of the crust.
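The superposition step described above can be sketched numerically. In this minimal example all names, shapes, and values are hypothetical stand-ins: a pre-computed library G holds the surface response of every mesh cell to a unit load in each of the six independent stress-tensor components, and the field of an arbitrary source is a linear combination of those entries.

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_cells = 50, 100   # surface observation points, candidate mesh cells
# Stand-in for the pre-computed FEM library: response at each observation
# point of each cell to a unit load in each of the 6 stress components.
G = rng.normal(size=(n_obs, n_cells, 6))

def surface_displacement(G, active, stress):
    """Total surface response of a source made of the `active` cells,
    each loaded with the same stress-tensor components `stress` (shape (6,))."""
    return (G[:, active, :] @ stress).sum(axis=1)

stress = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # e.g. isotropic pressurization
u = surface_displacement(G, [3, 7, 12], stress)     # field of a 3-cell assembly
```

Because the combination is linear, the response of an assembly equals the sum of the responses of its cells, which is what makes pre-computation (and thus avoiding remeshing) possible.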

Relevance: 30.00%

Abstract:

Background: There is wide variation in the recurrence risk of non-small-cell lung cancer (NSCLC) within the same Tumor Node Metastasis (TNM) stage, suggesting that other parameters are involved in determining this probability. Radiomics allows the extraction of quantitative information from images that can be used for clinical purposes. The primary objective of this study is to develop a radiomic prognostic model that predicts 3-year disease-free survival (DFS) of resected early-stage (ES) NSCLC patients. Materials and Methods: 56 pre-surgery non-contrast Computed Tomography (CT) scans were retrieved from the PACS of our institution and anonymized. They were then automatically segmented with an open-access deep learning pipeline and reviewed by an experienced radiologist to obtain 3D masks of the NSCLC. Images and masks underwent resampling, normalization, and discretization. Hundreds of Radiomic Features (RF) were extracted from the masks using PyRadiomics, and the RF were then reduced to select the most representative features. The remaining RF were used in combination with clinical parameters to build a DFS prediction model using leave-one-out cross-validation (LOOCV) with Random Forest. Results and Conclusion: Poor agreement between the radiologist and the automatic segmentation algorithm (DICE score of 0.37) was found. Therefore, another experienced radiologist manually segmented the lesions, and only stable and reproducible RF were kept. 50 RF demonstrated a high correlation with the DFS, but only one was confirmed when clinicopathological covariates were added: Busyness, a Neighbouring Gray Tone Difference Matrix feature (HR 9.610). 16 clinical variables (which comprised TNM) were used to build the LOOCV model, which demonstrated a higher Area Under the Curve (AUC) when RF were included in the analysis (0.67 vs 0.60), but the difference was not statistically significant (p=0.5147).
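The LOOCV evaluation scheme described above can be sketched as follows. The data here are synthetic stand-ins (56 "patients", 16 clinical covariates, one radiomic feature), not the study's cohort, and scikit-learn is assumed; the point is only the mechanics of collecting one out-of-fold prediction per patient.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(42)

# Synthetic stand-in cohort: 56 patients, 16 clinical covariates (incl. TNM),
# one retained radiomic feature; outcome = 3-year DFS status (binary).
n = 56
X_clin = rng.normal(size=(n, 16))
X_rad = rng.normal(size=(n, 1))
y = (X_rad[:, 0] + 0.5 * X_clin[:, 0] + rng.normal(size=n) > 0).astype(int)

def loocv_auc(X, y):
    """Collect one out-of-fold probability per patient, then compute a single AUC."""
    preds = np.empty(len(y))
    for train, test in LeaveOneOut().split(X):
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(X[train], y[train])
        preds[test] = clf.predict_proba(X[test])[0, 1]
    return roc_auc_score(y, preds)

auc_clin = loocv_auc(X_clin, y)                      # clinical-only model
auc_full = loocv_auc(np.hstack([X_clin, X_rad]), y)  # clinical + radiomic model
```

Comparing the two AUCs mirrors the study's comparison of the clinical model against the clinical-plus-radiomic model.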

Relevance: 30.00%

Abstract:

Despite the skepticism of many scholars about the possibility of predicting the stock market, several theories hypothesize that known information can be used to predict its future movements. The advent of artificial intelligence in the second half of the last century has produced revolutionary results in many fields, so much so that today the discipline is widely used, in multiple forms, in our daily lives. In particular, machine learning has made it possible to develop intelligent systems that learn from data and can model complex problems. Given the success of these systems, they have also been applied to the arduous task of predicting the stock market, first using historical financial data as the source of knowledge and later, with the development of natural language processing (NLP) techniques, also using natural-language data, such as the text of financial news or the opinions of investors. This dissertation aims to provide an overview of the use of machine learning techniques in stock market prediction, starting from the most elementary techniques and arriving at the complex neural models that today represent the state of the art. It also formalizes how machine learning models work and the techniques used to train and evaluate them, and then carries out an experiment in which, starting from financial and, above all, textual data, we attempt to correctly predict the change in value of the S&P 500 index using a language model based on a neural network.
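The elementary end of the spectrum surveyed above, predicting market direction from news text, can be illustrated with a tiny bag-of-words classifier. The headlines and labels below are invented toy data and scikit-learn is assumed; this is a sketch of the idea, not the dissertation's language-model experiment.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy, hand-labeled corpus (hypothetical data, for illustration only):
headlines = [
    "earnings beat expectations, strong growth outlook",
    "record profits lift market sentiment",
    "fed signals rate cuts, stocks rally",
    "recession fears deepen as factory output falls",
    "bank failure triggers sell-off",
    "inflation surges and the outlook darkens",
]
direction = [1, 1, 1, 0, 0, 0]   # 1 = index up the next day, 0 = down

vec = CountVectorizer()
X = vec.fit_transform(headlines)          # bag-of-words features
clf = LogisticRegression().fit(X, direction)

pred = clf.predict(vec.transform(["profits beat expectations, market rally"]))[0]
```

Modern approaches replace the bag-of-words step with contextual embeddings from a neural language model, but the supervised setup (text in, next-day direction out) is the same.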

Relevance: 30.00%

Abstract:

In the upcoming years, various upgrades and improvements are planned for the CERN Large Hadron Collider (LHC); they represent the mandate of the High-Luminosity (HL-LHC) project. The upgrade will allow a total stored beam energy of about 700 MJ, which will require, among other things, an extremely efficient collimation system. This will be achieved with the addition of a hollow electron lens (HEL) system to help control beam-halo depletion and mitigate the effects of fast beam losses. In this master thesis, we present a diffusion model of the HEL for the HL-LHC. In particular, we explore several scenarios for using such a device, focusing on the halo depletion efficiency obtained under different noise regimes.
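The basic mechanism of HEL-driven halo depletion can be caricatured with a one-dimensional random walk in transverse amplitude. Everything below is a dimensionless toy model with invented parameter values, not the thesis's diffusion model or HL-LHC settings: noise enhances diffusion only beyond the HEL inner radius, and particles reaching the collimator aperture are removed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensionless toy model: x is the transverse amplitude in units of beam sigma.
n_part, n_turns = 5000, 2000
x = np.abs(rng.normal(size=n_part)) * 2.0   # initial distribution with a populated halo
r_hel, r_coll = 3.0, 6.0                    # HEL inner radius and collimator aperture
D_core, D_hel = 1e-4, 1e-2                  # diffusion coefficients: the HEL enhances
                                            # diffusion only beyond its inner radius
halo_before = int(np.sum(x > r_hel))
alive = np.ones(n_part, dtype=bool)
for _ in range(n_turns):
    D = np.where(x > r_hel, D_hel, D_core)            # amplitude-dependent noise
    x = np.abs(x + np.sqrt(2.0 * D) * rng.normal(size=n_part) * alive)
    alive &= x < r_coll                               # lost on the collimator

halo_after = int(np.sum(alive & (x > r_hel)))
```

Varying D_hel relative to D_core mimics the different noise regimes mentioned above: the stronger the HEL-induced diffusion, the faster the halo population is driven onto the collimator while the core is left untouched.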

Relevance: 30.00%

Abstract:

Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner functioning is still lacking. In this work, we try to understand the behavior of neural networks by modelling them in the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as we would a system in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of the training plays a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, the performance degrades drastically. This result could be exploited in a training loop that eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be reduced and, more importantly, this would be done by following a physical model. In any case, besides its practical applications, our analysis shows that new and improved modeling of Deep Learning systems can pave the way to new and more efficient algorithms.
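One way to make the hot/cold-filter idea concrete is sketched below with synthetic numbers. The temperature proxy used here (variance of each filter's weight fluctuations across training snapshots) is an assumption for illustration, not necessarily the thesis's definition; the point is only the ranking-and-pruning mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight snapshots of 8 conv filters over 50 training steps:
# "cold" filters barely move, "hot" filters fluctuate strongly.
n_filters, n_steps, n_weights = 8, 50, 27      # e.g. 3x3x3 kernels
noise = np.linspace(0.001, 0.5, n_filters)     # per-filter fluctuation scale
traj = rng.normal(size=(n_filters, n_steps, n_weights)) * noise[:, None, None]

def temperature(traj):
    """Variance of each filter's weight fluctuations over training steps,
    averaged over the filter's weights: a per-filter 'temperature'."""
    return traj.var(axis=1).mean(axis=1)

T = temperature(traj)
order = np.argsort(T)        # filters ranked from coldest to hottest
prune_hot = order[-2:]       # hottest filters: candidates for removal
```

In a pruning loop, the filters in prune_hot would be dropped after each epoch, on the premise reported above that removing hot filters leaves performance unchanged.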

Relevance: 30.00%

Abstract:

The aim of this dissertation is to present the sequence of events that brought the scientific community of the early 20th century to conceive an expanding Universe born from a single origin. Among the facts reported here, some are well known, while others are little-known backstories, not easy either to obtain or to trust. Indeed, several matters presented in this thesis, now as then, create a battleground among scientists. Among the numerous personalities whose contributions are discussed in this work, the main protagonist is surely Georges Lemaître, who managed to combine, without overlapping, his roles as both priest and scientist. The first chapter is dedicated to his biography, from his childhood in Belgium, to his early adulthood between England and the USA, to his success in the scientific community. The second and third chapters explain how the race developed to understand a Universe that not only expands but also originated from a singularity. The Belgian priest's discoveries, as shown, were challenged by other important scientists, with whom, in several cases, Lemaître had a friendly relationship. Accordingly, the fourth and final chapter deals with the many relationships that the priest managed to build thanks to his politeness and kindness. It also covers Lemaître's personal connection with the Church and religion, without forgetting the personalities who influenced him, above all Saint Thomas Aquinas. To conclude the thesis, two appendices gather a summary of Lemaître's works not already described in the chapters and the biographies of all the characters presented in this dissertation.

Relevance: 30.00%

Abstract:

In this thesis, we state the collision avoidance problem as a vertex covering problem, and then consider a distributed framework in which a team of cooperating Unmanned Vehicles (UVs) solves this optimization problem cooperatively to guarantee collision avoidance between group members. For this purpose, we implement a distributed control scheme based on a robust Set-Theoretic Model Predictive Control (ST-MPC) strategy, where the problem involves vehicles with independent dynamics but coupled constraints that capture the required cooperative behavior.
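The vertex-cover view can be illustrated on a toy conflict graph. The graph below is hypothetical and the classic greedy 2-approximation shown is a generic algorithm, not necessarily the distributed scheme used in the thesis: vertices are vehicles, an edge joins two vehicles whose planned trajectories conflict, and a vertex cover picks a set of vehicles such that every conflict has at least one member that replans.

```python
def greedy_vertex_cover(edges):
    """Classic 2-approximation: scan edges; if one is uncovered, take both endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Hypothetical conflict graph: an edge means the two vehicles' planned
# trajectories intersect, so at least one endpoint must replan.
conflicts = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
yielding = greedy_vertex_cover(conflicts)   # vehicles asked to replan
```

Any vertex cover of the conflict graph guarantees that no conflicting pair keeps both of its original trajectories, which is the collision-avoidance guarantee the formulation encodes.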

Relevance: 30.00%

Abstract:

In the steelmaking industry, galvanizing is a treatment applied to protect steel from corrosion. The air knife effect (AKE) occurs when nozzles blow a stream of air onto the surfaces of a steel strip to remove excess zinc. In our work we formalized the problem of controlling the AKE and, with the R&D department of Marcegaglia SpA, implemented a DL model able to drive the AKE, which we call the controller. It takes as input a tuple of the physical conditions of the process line (t, h, s) together with the target value of the zinc coating (c), and generates the expected tuple (pres, dist) that drives the mechanical nozzles towards the target c. We designed the structure of the network according to the requirements, and collected and explored the data set of historical data from the smart factory. Finally, we designed the loss function as the sum of three components: the error between the coating produced by the network and the target value, and two weighted regularization components for pressure and distance. Our solution includes a second module, named the coating net, which predicts the zinc coating resulting from the AKE when the given conditions are applied to the production line. Its structure combines a linear component and a deep nonlinear "residual" component learned from empirical observations. The predictions of the coating net are used as ground truth in the loss function of the controller. By tuning the weights of the different components of the loss function, it is possible to train models with slightly different optimization purposes. In our tests we compared the regularization of different strategies against the standard one under conditions of optimal estimation for both; the overall accuracy is within ±3 g/m^2 of the target for all of them. Lastly, we analyzed how the controller modeled the current solutions with the new logic: the sub-optimal values of pres and dist can be improved by 50% and 20%, respectively.
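The three-component loss described above can be sketched as a weighted sum of squared errors. The function name, reference commands, and weight values below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def controller_loss(c_pred, c_target, pres, dist, pres_ref, dist_ref,
                    w_c=1.0, w_p=0.1, w_d=0.1):
    """Sum of three weighted squared-error terms: coating tracking error plus
    regularizers keeping pressure and distance near reference commands.
    The weights w_c, w_p, w_d are illustrative, not the thesis's values."""
    return (w_c * np.mean((c_pred - c_target) ** 2)
            + w_p * np.mean((pres - pres_ref) ** 2)
            + w_d * np.mean((dist - dist_ref) ** 2))

# Perfect tracking with reference commands gives zero loss:
z = np.zeros(4)
loss = controller_loss(z, z, z, z, z, z)
```

Retuning w_c, w_p, and w_d shifts the trade-off between coating accuracy and command regularization, which is the mechanism behind training "models with slightly different optimization purposes".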

Relevance: 30.00%

Abstract:

Intermediate-complexity general circulation models are a fundamental tool to investigate the role of internal and external variability within the general circulation of the atmosphere and ocean. The model used in this thesis is an intermediate-complexity atmospheric general circulation model (SPEEDY) coupled to a state-of-the-art modelling framework for the ocean (NEMO). We assess the extent to which the model allows a realistic simulation of the most prominent natural mode of variability at interannual time scales: the El Niño-Southern Oscillation (ENSO). To a good approximation, the model represents the ENSO-induced Sea Surface Temperature (SST) pattern in the equatorial Pacific, despite a cold-tongue-like bias. The model underestimates (overestimates) the typical ENSO spatial variability during the winter (summer) seasons. The mid-latitude response to ENSO reveals that the typical poleward stationary Rossby wave train is reasonably well represented. The spectral decomposition of ENSO features a spectrum that lacks periodicity at high frequencies and is overly periodic at interannual timescales. We then implemented an idealised transient mean-state change in the SPEEDY model. A warmer climate is simulated by altering the parametrized radiative fluxes so as to correspond to doubled carbon dioxide absorptivity. Results indicate that the globally averaged surface air temperature increases by 0.76 K. Regionally, the induced signal on the SST field features significant warming over the central-western Pacific and El Niño-like warming in the subtropics. In general, the model features a weakening of the tropical Walker circulation and a poleward expansion of the local Hadley cell. This response is also detected in a poleward rearrangement of the tropical convective rainfall pattern. The model setup implemented here provides valid theoretical support for future studies on climate sensitivity and forced modes of variability under mean-state changes.

Relevance: 30.00%

Abstract:

Scalar-tensor theories of gravity are a class of alternatives to general relativity in which the gravitational interaction is described both by the metric and by a scalar field. A characteristic example is Brans-Dicke theory, introduced as an extension of general relativity intended to make it conform to Mach's principle. This thesis presents an analysis of the main aspects of this theory, studying its theoretical foundations and the resulting cosmological model and highlighting its limits and critical points; the results of the experiments performed so far to verify the foundations and predictions of the model are then presented.
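For reference, the Brans-Dicke action in its standard (Jordan-frame) form, with the scalar field φ playing the role of an inverse gravitational coupling and ω the dimensionless Brans-Dicke parameter (this is the textbook form, not taken from the thesis):

```latex
S \;=\; \frac{1}{16\pi}\int d^4x \,\sqrt{-g}\,
        \left(\varphi R \;-\; \frac{\omega}{\varphi}\,
        g^{\mu\nu}\,\partial_\mu\varphi\,\partial_\nu\varphi\right)
      \;+\; S_{\mathrm{matter}}
```

For typical matter sources, general relativity is recovered in the limit ω → ∞, which is why solar-system experiments constrain the theory by placing lower bounds on ω.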

Relevance: 30.00%

Abstract:

My thesis focuses on the synthesis and functionalization of silver nanoparticles, studying their interaction, through in vitro experiments, with healthy NIH-3T3 murine fibroblast cells and MCF7 breast-tumor cells. The use of polyelectrolytes such as PDADMAC, PAH, and PSS made it possible to modify the surface properties of the nanoparticles. The new physico-chemical properties were characterized by Dynamic Light Scattering, zeta potential, and UV-vis spectroscopy. The effect of the polyelectrolyte coating was evaluated through cell-viability tests, administering the functionalized nanoparticles to the cell lines mentioned above. Subsequently, the procedure for a further coating of the charged nanoparticles with BSA (Bovine Serum Albumin) was optimized, evaluating several key factors. The albumin-coated nanoparticles were characterized, and the qualitative composition of their protein corona was obtained by SDS-PAGE analysis. Finally, the BSA-coated nanoparticles were administered to the two cell lines to evaluate the effect of the albumin on the biological response through cell-viability and immunofluorescence analyses.

Relevance: 30.00%

Abstract:

The BP (Bundle Protocol) version 7 has recently been standardized by the IETF in RFC 9171, but it is the whole DTN (Delay-/Disruption-Tolerant Networking) architecture, of which BP is the core, that is gaining renewed interest thanks to its planned adoption in future space missions. This is obviously positive, but at the same time it seems to make space agencies more interested in deployment than in research, with new BP implementations that may challenge the central role played until now by the historical BP reference implementations, such as ION and DTNME. To make Unibo research on DTN independent of space agency decisions, the development of an in-house BP implementation was in order. This is the goal of this thesis, which deals with the design and implementation of Unibo-BP: a novel, research-driven BP implementation, to be released as Free Software. Unibo-BP is fully compliant with RFC 9171, as demonstrated by a series of interoperability tests with ION and DTNME, and presents a few innovations, such as the ability to manage remote DTN nodes by means of the BP itself. Unibo-BP is compatible with pre-existing Unibo implementations of CGR (Contact Graph Routing) and LTP (Licklider Transmission Protocol) thanks to interfaces designed during the thesis. The thesis project also includes an implementation of TCPCLv3 (TCP Convergence Layer version 3, RFC 7242), which can be used as an alternative to LTPCL to connect with proximate nodes, especially in terrestrial networks. In summary, Unibo-BP is at the heart of a larger project, Unibo-DTN, which aims to implement the main components of a complete DTN stack (BP, TCPCL, LTP, CGR). Moreover, Unibo-BP is compatible with all DTNsuite applications, thanks to an extension of the Unified API library on which DTNsuite applications are based. The hope is that Unibo-BP and the ancillary programs developed during this thesis will contribute to the growth of DTN's popularity in academia and among space agencies.

Relevance: 20.00%

Abstract:

Protocols for the generation of dendritic cells (DCs) that use serum as a supplement in the culture media can lead to reactions due to animal proteins and to disease transmission. Several types of serum-free media (SFM), based on good manufacturing practice (GMP), have recently been used and seem to be a viable option. The aim of this study was to evaluate the differentiation, maturation, and function of DCs from Acute Myeloid Leukemia (AML) patients, generated in SFM and in medium supplemented with autologous serum (AS). DCs were analyzed for phenotypic characteristics, viability, and functionality. The results showed that viable DCs could be generated under all the conditions tested. In patients, the X-VIVO 15 medium was more efficient than the other media tested in generating DCs that produce IL-12p70 (p=0.05). Moreover, the presence of AS led to a significant increase of IL-10 production by DCs as compared with the CellGro (p=0.05) and X-VIVO 15 (p=0.05) media, both in patients and in donors. We conclude that SFM was efficient for the production of DCs for immunotherapy in AML patients. However, the use of AS appears to interfere with the functional capacity of the generated DCs.

Relevance: 20.00%

Abstract:

Rapidity-odd directed flow (v1) measurements for charged pions, protons, and antiprotons near midrapidity (y=0) are reported for √s_NN = 7.7, 11.5, 19.6, 27, 39, 62.4, and 200 GeV Au+Au collisions as recorded by the STAR detector at the Relativistic Heavy Ion Collider. At intermediate impact parameters, the proton and net-proton slope parameter dv1/dy|_(y=0) shows a minimum between 11.5 and 19.6 GeV. In addition, the net-proton dv1/dy|_(y=0) changes sign twice between 7.7 and 39 GeV. The proton and net-proton results qualitatively resemble predictions of a hydrodynamic model with a first-order phase transition from hadronic matter to deconfined matter, and differ from hadronic transport calculations.