10 results for Simulation with multiple Consumers Profiles

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Introduction and Background: Multiple system atrophy (MSA) is a sporadic, adult-onset, progressive neurodegenerative disease characterized clinically by parkinsonism, cerebellar ataxia, and autonomic failure. We investigated cognitive functions longitudinally in a group of probable MSA patients, matching the data with sleep parameters. Patients and Methods: Ten patients (7 M/3 F) underwent a detailed interview, a general and neurological examination, laboratory exams, MRI scans, a cardiovascular reflexes study, a battery of neuropsychological tests, and video-polysomnographic recording (VPSG). Patients were re-evaluated (T1) a mean of 16±5 (range: 12-28) months after the initial evaluation (T0). At T1, the neuropsychological assessment and VPSG were repeated. Results: The mean patient age was 57.8±6.4 years (range: 47-64), with a mean age at disease onset of 53.2±7.1 years (range: 43-61) and a symptom duration at T0 of 60±48 months (range: 12-144). At T0, 7 patients showed no cognitive deficits while 3 patients showed isolated cognitive deficits. At T1, 1 patient worsened, developing multiple cognitive deficits from a previously normal condition. At both T0 and T1, sleep efficiency was reduced, REM latency was increased, and NREM sleep stages 1-2 were slightly increased. Comparisons between T1 and T0 showed a significant worsening in two tests of attention and no significant differences in VPSG parameters. No correlation was found between neuropsychological results and VPSG findings or REM sleep behaviour disorder (RBD) duration. Discussion and Conclusions: The majority of our patients showed no cognitive deficits at T0 or T1, while isolated cognitive deficits were present in the remaining patients. Attention is the cognitive function that worsened significantly. Our data confirm previous findings concerning the prevalence, type, and evolution of cognitive deficits in MSA. Regarding the development of dementia, our data did not support a clear-cut diagnosis of dementia in any patient. We confirm a mild alteration of sleep structure. RBD duration does not correlate with neuropsychological findings.
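The longitudinal T0-vs-T1 comparison described above comes down to paired tests on repeated neuropsychological scores. As a minimal sketch (not the study's actual statistics; all scores below are invented), a paired t-statistic for ten patients could be computed as:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(before, after):
    """Paired t-statistic for two repeated measurements (T0 vs T1)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical attention-test scores for 10 patients at T0 and T1
t0 = [52, 48, 55, 60, 47, 51, 58, 49, 53, 50]
t1 = [49, 44, 53, 57, 45, 48, 55, 47, 50, 46]
t_stat = paired_t(t0, t1)
```

With 9 degrees of freedom, the statistic would then be compared against a t-distribution (or a non-parametric Wilcoxon signed-rank test would be used if normality is doubtful).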

Abstract:

Dynamical models of stellar systems are a powerful tool to study their internal structure and dynamics, to interpret the observed morphological and kinematical fields, and also to support numerical simulations of their evolution. We present a method especially designed to build axisymmetric Jeans models of galaxies, assumed to be stationary and collisionless stellar systems. The aim is the development of a rigorous and flexible modelling procedure for multicomponent galaxies, composed of different stellar and dark matter distributions and a central supermassive black hole. The stellar components, in particular, are intended to represent different galaxy structures, such as discs, bulges, and halos, and can therefore have different structural (density profile, flattening, mass, scale-length), dynamical (rotation, velocity dispersion anisotropy), and population (age, metallicity, initial mass function, mass-to-light ratio) properties. The theoretical framework supporting the modelling procedure is presented, with the introduction of a suitable nomenclature, and its numerical implementation is discussed, with particular reference to the numerical code JASMINE2, developed for this purpose. We propose an approach for efficiently scaling the contributions in mass, luminosity, and rotational support of the different matter components, allowing for fast and flexible explorations of the model parameter space. We also offer different methods for the computation of the gravitational potentials associated with the density components, chosen for their easier numerical tractability. A few galaxy models are studied, showing the internal and projected structural and dynamical properties of multicomponent galaxies, with a focus on axisymmetric early-type galaxies with complex kinematical morphologies. The application of galaxy models to the study of initial conditions for hydrodynamical and $N$-body simulations of galaxy evolution is also addressed, in particular allowing the investigation of the large number of interesting combinations of the parameters that determine the structure and dynamics of complex multicomponent stellar systems.
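The Jeans-equation machinery can be illustrated in its simplest setting: a spherical, isotropic system, for which the equation reduces to a single quadrature. This is a toy sketch (Plummer sphere, unit toy values), far from the axisymmetric multicomponent models built by JASMINE2:

```python
from math import pi

G, M, a = 1.0, 1.0, 1.0  # toy units: Plummer mass and scale length

def rho(r):
    """Plummer density profile."""
    return 3.0 * M / (4.0 * pi * a**3) * (1.0 + (r / a) ** 2) ** -2.5

def cum_mass(r):
    """Mass enclosed within radius r for the Plummer model."""
    return M * r**3 / (r**2 + a**2) ** 1.5

def sigma2(r, r_max=200.0, n=100000):
    """Velocity dispersion from the isotropic spherical Jeans equation:
    rho(r) * sigma^2(r) = int_r^inf rho(s) * G * M(s) / s^2 ds
    (midpoint rule, truncated at r_max)."""
    h = (r_max - r) / n
    total = sum(rho(s) * G * cum_mass(s) / s**2
                for s in (r + (i + 0.5) * h for i in range(n)))
    return total * h / rho(r)
```

For the Plummer sphere this integral has the closed form GM/(6*sqrt(r^2+a^2)), which makes the numerical sketch easy to verify.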

Abstract:

The objective of this thesis was to improve the commercial CFD software Ansys Fluent to obtain a tool able to perform accurate simulations of flow boiling in the slug flow regime. The achievement of a reliable numerical framework allows a better understanding of the bubble and flow dynamics induced by evaporation and makes it possible to predict wall heat transfer trends. In order to save computational time, the flow is modeled with an axisymmetric formulation. The vapor and liquid phases are treated as incompressible and in laminar flow. By means of a single-fluid approach, the flow equations are written as for a single-phase flow, but discontinuities at the interface and interfacial effects need to be accounted for and discretized properly. Ansys Fluent provides a Volume of Fluid (VOF) technique to advect the interface and to map the discontinuous fluid properties throughout the flow domain. The interfacial effects are dominant in boiling slug flow, and the accuracy of their estimation is fundamental for the reliability of the solver. Self-implemented functions, developed ad hoc, are introduced within the numerical code to compute the surface tension force and the rates of mass and energy exchange at the interface related to evaporation. Several validation benchmarks confirm the improved performance of the modified software. Various adiabatic configurations are simulated in order to test the capability of the numerical framework to model actual flows, and the comparison with experimental results is very positive. The simulation of a single evaporating bubble underlines the dominant effect on the global heat transfer rate of the local transient heat convection in the liquid after the bubble transit. The simulation of multiple evaporating bubbles flowing in sequence shows that their mutual influence can strongly enhance the heat transfer coefficient, up to twice the single-phase flow value.
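The VOF idea at the core of the solver can be caricatured in one dimension: a volume-fraction field marks the liquid, and it is advected with the flow. The sketch below is deliberately minimal (first-order upwind, constant velocity, no interface reconstruction), nothing like the actual Fluent discretization:

```python
def advect_vof(alpha, u, dx, dt, steps):
    """First-order upwind advection of a volume-fraction field (u > 0).
    The left boundary cell is held fixed (inflow)."""
    c = u * dt / dx  # CFL number; must satisfy c <= 1 for stability
    a = list(alpha)
    for _ in range(steps):
        a = [a[i] - c * (a[i] - a[i - 1]) if i > 0 else a[0]
             for i in range(len(a))]
    return a

# Liquid (alpha = 1) initially occupies the left third of a unit tube
n = 60
alpha0 = [1.0 if i < n // 3 else 0.0 for i in range(n)]
alpha1 = advect_vof(alpha0, u=1.0, dx=1.0 / n, dt=0.5 / n, steps=40)
```

With a CFL number of 0.5, 40 steps move the interface 20 cells to the right; because upwinding is monotone, the fraction stays bounded in [0, 1], though the interface smears, which is precisely why production VOF codes add geometric reconstruction.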

Abstract:

The simulation of ultrafast photoinduced processes is a fundamental step towards understanding the underlying molecular mechanism and interpreting/predicting experimental data. Performing a computer simulation of a complex photoinduced process is only possible by introducing some approximations, but, in order to obtain reliable results, the need to reduce complexity must be balanced against the accuracy of the model, which should include all the relevant degrees of freedom and a quantitatively correct description of the electronic states involved in the process. This work presents new computational protocols and strategies for the parameterisation of accurate models of photochemical/photophysical processes based on state-of-the-art multiconfigurational wavefunction-based methods. The required ingredients for a dynamics simulation include potential energy surfaces (PESs) as well as electronic state couplings, which must be mapped across the wide range of geometries visited during the wavepacket/trajectory propagation. The developed procedures make it possible to obtain solid and extended databases while reducing the computational cost as much as possible, thanks to, e.g., specific tuning of the level of theory for different PES regions and/or direct calculation of only the needed components of vectorial quantities (such as gradients or nonadiabatic couplings). The presented approaches were applied to three case studies (azobenzene, pyrene, visual rhodopsin), all requiring an accurate parameterisation but for different reasons. The resulting models and simulations allowed us to elucidate the mechanism and time scale of the internal conversion, reproducing, or even predicting, transient experiments. The general applicability of the developed protocols to systems with different peculiarities, and the possibility of parameterising different types of dynamics on an equal footing (classical vs purely quantum), prove that the developed procedures are flexible enough to be tailored to each specific system, and pave the way for exact quantum dynamics with multiple degrees of freedom.
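The "database of PES points plus lookup" workflow can be illustrated in one dimension. Here a Morse curve stands in for the ab initio energies, and piecewise-linear interpolation for the fitting step; all parameters are invented, and the thesis's real surfaces are of course multidimensional multiconfigurational PESs:

```python
from math import exp

def morse(r, De=1.0, a=1.2, re=1.5):
    """Toy ground-state PES (Morse form) standing in for ab initio points."""
    return De * (1.0 - exp(-a * (r - re))) ** 2

# "Database": energies precomputed on a coarse grid, as in the PES mapping step
step = 0.05
grid = [0.8 + step * i for i in range(60)]
energies = [morse(r) for r in grid]

def pes_lookup(r):
    """Piecewise-linear interpolation of the tabulated PES."""
    if r <= grid[0]:
        return energies[0]
    if r >= grid[-1]:
        return energies[-1]
    i = int((r - grid[0]) / step)
    w = (r - grid[i]) / step
    return (1.0 - w) * energies[i] + w * energies[i + 1]
```

The same pattern (coarse precomputation, cheap interpolation at propagation time) is what makes on-the-fly dynamics on tabulated surfaces affordable; the grid spacing controls the interpolation error.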

Abstract:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, and so on. SoC manufacturers such as STMicroelectronics, Samsung, and Philips, and also universities such as the University of Bologna, M.I.T., and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip. In this thesis we give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed analysis, based on simulation, of the Spidergon NoC, an STMicroelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We present a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model, and analyze any kind of System on Chip;
• a detailed analysis of an STMicroelectronics-proprietary transport-level protocol that the author of this thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows us to reduce the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when the resources are reduced.
This thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
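The Spidergon topology itself is easy to reason about: N nodes on a ring, each with clockwise, counter-clockwise, and across (diametral) links. Because an across hop commutes with ring hops, the minimum hop count has a closed form. The sketch below is a generic topology property, not any of the routing algorithms analyzed in the thesis:

```python
def spidergon_hops(src, dst, n):
    """Minimum hop count between two nodes of a Spidergon NoC with n nodes
    (n even): each node has clockwise, counter-clockwise, and across links."""
    d_cw = (dst - src) % n            # clockwise ring distance
    d_ccw = (src - dst) % n           # counter-clockwise ring distance
    opp = (src + n // 2) % n          # node reached by the across link
    d_across = 1 + min((dst - opp) % n, (opp - dst) % n)
    return min(d_cw, d_ccw, d_across)
```

For example, on an 8-node Spidergon, reaching node 3 from node 0 takes two hops (across to 4, then one ring hop back), beating the three-hop pure-ring route; this is the kind of shortest-path structure an across-first routing scheme exploits.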

Abstract:

Background: Multiple primary lung cancer (MPLC) represents a diagnostic challenge. The central question is whether to classify these patients as having metastatic or multifocal disease. While cases with different histologies leave little doubt, in cases with the same histology it is mandatory to investigate other clinical features to settle the question. Materials and Methods: A retrospective review identified all patients treated surgically for a presumed diagnosis of SPLC. Pre-operative staging was obtained with total-body CT scan, fluorodeoxyglucose positron emission tomography, and mediastinoscopy. Patients with nodal involvement or extra-thoracic locations were excluded from this study. Epidermal growth factor receptor (EGFR) expression was evaluated with a complete immunohistochemical analysis. Survival was estimated using the Kaplan-Meier method, and clinical features were assessed using a log-rank test or Cox proportional hazards model for categorical and continuous variables, respectively. Results: According to American College of Chest Physicians criteria, 18 patients underwent surgical resection for a diagnosis of MPLC. Of these, 8 patients had 3 or more nodules while 10 patients had fewer than 3 nodules. Pathologic examination of the histological types demonstrated that 13/18 (72%) were adenocarcinoma, 2/18 (11%) squamous carcinoma, 2/18 (11%) large cell carcinoma, and 1/18 (6%) adenosquamous carcinoma. Expression of EGFR was evaluated in all nodules: in 7 of 18 patients (39%) the percentage of expression of each nodule was different. Conclusions: MPLC represents a multifocal disease in which combining clinical information with biological studies reinforces the diagnosis. EGFR could contribute to differentiating the nodules. However, further research is necessary to validate this hypothesis.
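Survival in the study was estimated with the Kaplan-Meier method, which handles censored follow-up by multiplying conditional survival probabilities at each event time. A minimal, self-contained sketch of the estimator (with made-up follow-up data, not the study's):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up time per patient; events: 1 = death observed, 0 = censored.
    Returns a list of (event time, survival probability)."""
    data = sorted(zip(times, events))
    surv, curve, seen = 1.0, [], set()
    for t, _ in data:
        if t in seen:
            continue
        seen.add(t)
        at_risk = sum(1 for tt, _ in data if tt >= t)   # still under follow-up
        deaths = sum(e for tt, e in data if tt == t)    # events at this time
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
    return curve

# Hypothetical follow-up (months); 0 marks a censored observation
km = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

Note how the censored patient at month 3 leaves the risk set without dropping the curve, which is exactly what distinguishes Kaplan-Meier from a naive fraction-surviving estimate.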

Abstract:

It is usual to hear a strange short sentence: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, and all of them are related to the topic considered. In this thesis I discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are guaranteed by shaping the second-order statistics of antipodal sequences involved in intermediate signal-processing stages. The first topic is in the area of analog-to-digital conversion, and it is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression, allowing a signal to be acquired directly in compressed form. In this thesis, after an ample description of the CS methodology and its related architectures, I present a new approach that tries to achieve high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option to produce and decode UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple access interference (MAI) and narrowband interference (NBI) on a simple matched filter receiver. I show that, when the spreading sequences' statistical properties are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences minimizing MAI only.
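The CS acquisition stage described above can be sketched with antipodal (±1) sensing waveforms: the signal is measured through a handful of inner products and the sparse support is recovered afterwards. The toy below uses the simplest possible decoder (correlation, valid for a 1-sparse signal); it illustrates the principle, not the thesis's design of the sequences' second-order statistics:

```python
import random

random.seed(7)
n, m = 64, 32                        # signal length vs number of measurements

# One antipodal (+1/-1) sensing waveform per measurement
A = [[random.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]

# A 1-sparse signal: one nonzero sample at an unknown position
x = [0.0] * n
x[23] = 3.0

# Compressed acquisition: m < n inner products
y = [sum(a * xj for a, xj in zip(row, x)) for row in A]

# Support recovery: correlate the measurements with each sensing column
scores = [abs(sum(A[i][j] * y[i] for i in range(m))) for j in range(n)]
support = max(range(n), key=scores.__getitem__)
```

With 32 measurements for a 64-sample signal the acquisition is already 2x compressed; greedy decoders such as orthogonal matching pursuit generalize the same correlation step to signals with more than one nonzero.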

Abstract:

This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures are an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application, where the execution time is dominated by a high number of floating point instructions. Then the thesis touches on the central problem of efficient management of power peaks in heterogeneous computing systems. Finally, it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. The two main benefits of this work follow: the approach reduces the supply cost due to high peak power whilst having negligible impact on the parallelism of the computational nodes; from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit.
Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
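The peak-power idea behind the scheduler can be sketched as a greedy balancing problem: always place the next (largest) task on the node with the lowest accumulated power draw. This is a generic longest-processing-time heuristic with invented numbers, not the thesis's actual algorithm:

```python
import heapq

def schedule_min_peak(task_powers, n_nodes):
    """Greedy LPT-style assignment: each task (a power draw, in watts)
    goes to the node with the lowest accumulated power, flattening the peak."""
    heap = [(0.0, node) for node in range(n_nodes)]   # (accumulated power, node)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for p in sorted(task_powers, reverse=True):       # largest tasks first
        load, node = heapq.heappop(heap)
        assignment[node].append(p)
        heapq.heappush(heap, (load + p, node))
    peak = max(sum(ps) for ps in assignment.values())
    return assignment, peak

# Hypothetical per-task power draws (W) spread over two nodes
assignment, peak = schedule_min_peak([7, 5, 4, 3, 2, 1], n_nodes=2)
```

Sorting the tasks first is what makes the heuristic effective: on the example, both nodes end up at 11 W, the best possible peak for a 22 W total.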

Abstract:

During the last few decades an unprecedented technological growth has been at the center of the embedded systems design landscape, with Moore's Law being the leading factor of this trend. In fact, today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space has exploded, and the exploration needed to find the best design has become a major problem for hardware designers. Virtual Platforms have always been used to enable hardware-software co-design, but today they are faced with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second work exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and overcoming where possible, some of the challenges introduced by the many-core design paradigm.

Abstract:

The field of bone substitutes is actively searching for an innovative material able to fill gaps with high mechanical performance and able to stimulate a cell response, permitting the complete restoration of the bone portion. In this respect, the synthesis of new bioactive materials able to mimic the compositional, morphological, and mechanical features of bone is considered the elective approach for effective tissue regeneration. Hydroxyapatite (HA) is the main component of the inorganic part of bone. Additionally, ionic substitutions can be performed in the apatite lattice, producing different effects depending on the selected ions. Magnesium, in substitution of calcium, and carbonate, in substitution of phosphate, both extensively present in biological bone, are able to improve properties naturally present in the apatitic phase (i.e., biomimicry, solubility, and osteoinductive properties). Other ions can be used to give the apatitic phase new useful properties, such as antiresorptive or antimicrobial properties. This thesis focused on the development of hydroxyapatite nanophases with multiple ionic substitutions, including gallium or zinc ions in association with magnesium and carbonate, with the purpose of providing a double synergistic functionality as an osteogenic and antibacterial biomaterial. Bioactive materials based on Sr-substituted hydroxyapatite were developed in the form of sintered targets. The obtained targets were treated with Pulsed Plasma Deposition (PED), resulting in the deposition of thin film coatings able to improve the roughness and wettability of PEEK, enhancing its osteointegrability. Heterogeneous gas-solid reactions were investigated, aimed at the biomorphic transformation of natural 3D porous structures into bone scaffolds with biomimetic composition and hierarchical organization, for application in load-bearing sites. The kinetics of the different reactions of the process were optimized to achieve complete and controlled phase transformation while maintaining the original 3D morphology. Massive porous scaffolds made of ion-substituted hydroxyapatite with a bone-mimicking structure were developed and tested in 3D cell culture models.