6 results for Granularity

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

10.00%

Publisher:

Abstract:

Complex network analysis has turned out to be a very promising field of research, as testified by the many research projects and works that span different fields. Such analyses have usually focused on characterizing a single aspect of the system, and a study that considers the many informative axes along which a network evolves is lacking. We propose a new multidimensional analysis that inspects networks along the two most important dimensions, space and time. To achieve this goal, we studied each dimension singularly and investigated how variation of the constituent parameters drives changes in the network as a whole. Focusing on the space dimension, we characterized spatial alteration in terms of abstraction levels. We proposed a novel algorithm that, by applying a fuzziness function, can reconstruct networks at different levels of detail. We verified that statistical indicators depend strongly on the granularity with which a system is described and on the class of networks. Keeping the space axis fixed, we then isolated the dynamics behind the network evolution process. We identified new drivers that trigger social network utilization and spread the adoption of novel communities. We formalized this enhanced social network evolution by adopting special nodes (called sirens) that, thanks to their ability to attract new links, construct efficient connection patterns. We simulated the dynamics of the system by considering three well-known growth models. Applying this framework to real and synthetic networks, we showed that the sirens, even when used for a limited time span, effectively shrink the time needed to bring a network to a mature state. To provide a concrete context for our findings, we formalized the cost of setting up such an enhancement and provided the best combinations of the system's parameters, such as the number of sirens, their time span of utilization, and their attractiveness.
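The siren mechanism described above can be sketched as a preferential-attachment growth process in which a few nodes receive an artificial attractiveness boost for a limited time span. Everything here is an illustrative assumption (the boost value, the activity window, the underlying growth model), not the thesis's actual formalization:

```python
import random

def grow_network(n_nodes, m=2, n_sirens=2, siren_boost=50.0,
                 siren_active_until=100, seed=42):
    """Grow a network by degree-based preferential attachment, with
    'siren' nodes whose attractiveness is boosted while active.
    All parameters are hypothetical, for illustration only."""
    rng = random.Random(seed)
    # Start from a small triangle.
    edges = [(0, 1), (0, 2), (1, 2)]
    degree = {0: 2, 1: 2, 2: 2}
    sirens = set(range(n_sirens))  # assume the first nodes act as sirens
    for new in range(3, n_nodes):
        # Attachment weight = degree, plus a boost while sirens are active.
        weights = {
            v: d + (siren_boost if v in sirens and new < siren_active_until else 0.0)
            for v, d in degree.items()
        }
        targets = rng.choices(list(weights), weights=list(weights.values()), k=m)
        for t in set(targets):
            edges.append((new, t))
            degree[t] += 1
            degree[new] = degree.get(new, 0) + 1
    return edges, degree
```

Even in this toy version, the boosted nodes accumulate links much faster than ordinary nodes during their active window, which is the effect the thesis quantifies against the cost of deploying the sirens.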

Relevance:

10.00%

Publisher:

Abstract:

Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. Assuming a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a detailed spatial resolution, which is very computationally intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single-Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC's on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC's temperature sensor readings and power consumption. With this thermal simulation tooling in hand, we propose MiMAPT, an approach for analyzing delay, power, and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows, accounting for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform that models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data-sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
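The temperature-adaptive refresh idea can be illustrated with the common rule of thumb that DRAM retention time roughly halves for every ~10 °C increase, so the refresh period must halve too, while cooler banks can be refreshed less often. The base values below are illustrative assumptions, not the thesis's calibrated model:

```python
def refresh_period_ms(bank_temp_c, base_period_ms=64.0,
                      base_temp_c=45.0, step_c=10.0):
    """Per-bank DRAM refresh period scaled to the bank's temperature.
    Assumes retention halves every `step_c` degrees above `base_temp_c`
    (rule-of-thumb figures, chosen here only for illustration)."""
    return base_period_ms * 2.0 ** ((base_temp_c - bank_temp_c) / step_c)

# A bank at 35 C can be refreshed half as often as one at 45 C,
# while a bank at 55 C must be refreshed twice as often.
```

Applying such a function per bank, driven by the simulated 3D temperature map, is what lets refresh power track the actual thermal state instead of the worst-case corner.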

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study is to investigate some molecular mechanisms contributing to the pathogenesis of osteoarthritis (OA), and in particular to the senescence of articular chondrocytes. It focuses on understanding molecular events downstream of GSK3β inactivation or dependent on the activity of IKKα, a kinase that does not belong to the phenotype of healthy articular chondrocytes. Moreover, the potential of some nutraceuticals to scavenge ROS, thus reducing oxidative stress, DNA damage, and chondrocyte senescence, has been evaluated in vitro. In vitro, LiCl-mediated GSK3β inactivation resulted in increased mitochondrial ROS production, which impacted cellular proliferation, with transient S-phase arrest and increased SA-β-gal and PAS staining, cell size, and granularity. ROS are also responsible for the increased occurrence of two major oxidative lesions: 1) double-strand breaks, tagged by γH2AX, which associate with the activation of GADD45β and p21, and 2) 8-oxo-dG adducts, which associate with increased IKKα and MMP-10 expression. The pattern observed in vitro was confirmed in cartilage from OA patients. As evidenced by silencing strategies, IKKα dramatically affects the intensity of the DNA damage response induced by oxidative stress (H2O2 exposure) in chondrocytes. At early time points, a higher percentage of γH2AX-positive cells and more foci are observed in IKKα-knockdown (KD) cells, but IKKα-KD cells proved to recover almost completely after 24 hours with respect to their controls. Telomere attrition is also reduced in IKKα-KD cells. Finally, the MSH6 and MLH1 genes are up-regulated in IKKα-KD cells but not in control cells. Hydroxytyrosol and spermidine show a great ROS-scavenging capacity in vitro. Both treatments revert the H2O2-dependent increase in cell death, γH2AX-foci formation, and senescence, suggesting an ability to restore cell homeostasis. These data indicate that nutraceuticals represent a promising option in OA management, for both therapeutic and preventive purposes.

Relevance:

10.00%

Publisher:

Abstract:

The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated much like natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scales, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled with learning approaches that speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different levels of code granularity (file and method levels). The exploited data and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to related work are discussed.
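As a toy illustration of text-based programming language identification, a keyword-counting stand-in is sketched below. The profiles and scoring are hand-made assumptions, far simpler than the scalable learned models the thesis builds, but they show the basic idea of treating code tokens like natural-language features:

```python
import re
from collections import Counter

# Tiny hand-made keyword profiles -- illustrative assumptions only,
# not the features learned by the thesis's PLI models.
PROFILES = {
    "python": {"def", "import", "self", "elif", "lambda", "return"},
    "c": {"#include", "int", "printf", "void", "struct", "char"},
    "javascript": {"function", "var", "const", "let", "console", "typeof"},
}

def identify_language(source):
    """Guess the language of a snippet by counting how many of its
    tokens appear in each language's keyword profile."""
    tokens = Counter(re.findall(r"[#\w]+", source))
    return max(PROFILES, key=lambda lang: sum(tokens[k] for k in PROFILES[lang]))
```

A real system replaces the hand-made profiles with features learned from millions of files, which is exactly where the scalability concerns mentioned above come in.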

Relevance:

10.00%

Publisher:

Abstract:

The first topic analyzed in the thesis is Neural Architecture Search (NAS). I will focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first NAS explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer; note that this is the first NAS to explicitly target these networks. The second NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Note that applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, minimizing the number of operations or the memory usage of the network. The second topic described is the optimization of neural network deployment on edge devices. Exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To do so, I will introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. DORY, in different steps, can automatically manage the different levels of memory inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers together with the computation phases. On top of this, I will introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently on edge devices. I conclude the thesis with two different applications in bio-signal analysis: heart rate tracking and sEMG-based gesture recognition.
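For the TCN case, the receptive field that the first NAS reasons about follows directly from the kernel size and the per-layer dilations. A minimal sketch of the standard formula (the NAS's actual cost model is of course richer than this):

```python
def tcn_receptive_field(kernel_size, dilations):
    """Receptive field (in time steps) of a stack of dilated causal
    convolutions: each layer with dilation d extends the receptive
    field by (kernel_size - 1) * d."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# With kernel size 3 and doubling dilations [1, 2, 4, 8] the receptive
# field is 31 steps: it grows exponentially with depth, which is why
# searching over dilation patterns pays off for long sequences.
```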

Relevance:

10.00%

Publisher:

Abstract:

The Short-Baseline Neutrino Program at Fermilab aims to confirm or definitively rule out the existence of sterile neutrinos at the eV mass scale. The program will perform the most sensitive search in both the νe appearance and νμ disappearance channels along the Booster Neutrino Beamline. The far detector, ICARUS-T600, is a high-granularity Liquid Argon Time Projection Chamber located 600 m from the Booster neutrino source and at shallow depth, and is thus exposed to a large flux of cosmic particles. Additionally, ICARUS is located 6 degrees off axis with respect to the neutrino beam from the Main Injector. This thesis presents the construction, installation, and commissioning of the ICARUS Cosmic Ray Tagger (CRT) system, which provides 4π coverage of the active liquid argon volume. By exploiting only the precise nanosecond-scale synchronization of the cosmic tagger and the PMT optical flashes, it is possible to determine whether an event was likely triggered by a cosmic particle. The results show that, using the Top Cosmic Ray Tagger alone, a conservative rejection of more than 65% of the cosmic-induced background can be achieved. Additionally, by requiring the absence of hits in the whole cosmic tagger system, it is possible to perform a pre-selection of contained neutrino events ahead of the full event reconstruction.
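The timing-based cosmic tagging described above can be sketched as a simple coincidence search between cosmic-tagger hits and PMT flashes. The window width here is an illustrative assumption, not the value tuned in the analysis:

```python
import bisect

def tag_cosmic_flashes(crt_times_ns, flash_times_ns, window_ns=100):
    """Flag each PMT optical flash as likely cosmic-induced if any
    cosmic-tagger hit lies within +/- window_ns of it (a toy version
    of the nanosecond-scale matching; window_ns is hypothetical)."""
    crt = sorted(crt_times_ns)
    tags = []
    for t in flash_times_ns:
        # First CRT hit not earlier than the window's lower edge.
        i = bisect.bisect_left(crt, t - window_ns)
        tags.append(i < len(crt) and crt[i] <= t + window_ns)
    return tags
```

Conversely, requiring no tagger hit anywhere near a flash is the kind of veto that pre-selects contained neutrino candidates before the full reconstruction runs.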