12 results for Core

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 20.00%

Abstract:

The application of Concurrency Theory to Systems Biology is in its earliest stage of progress. The metaphor of cells as computing systems by Regev and Shapiro opened the way to the employment of concurrent languages for the modelling of biological systems. Their peculiar characteristics led to the design of many bio-inspired formalisms which achieve higher faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus representing a keystone in this respect, thanks to its expressive capabilities. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to be able to model consistently several phenomena such as the formation of molecular complexes, the hierarchical subdivision of the system into compartments, inter-compartment reactions, and the dynamic reorganisation of the compartment structure consistent with volume variation. The pivotal role of pi@ is evidenced by its capability of encoding in a compositional way several bio-inspired formalisms, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems in pi@ are formalised. The conciseness of their translation into pi@ allows their indirect comparison by means of their encodings. Furthermore, it provides a ready-to-run implementation of minimal effort whose correctness is guaranteed by the correctness of the respective encoding functions. Further important results of general validity are stated on the expressive power of priority. Several impossibility results are described, which clearly state the superior expressiveness of prioritised languages and the problems arising in attempting to provide their parallel implementation. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.
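As a rough, illustrative sketch (not taken from the thesis) of how polyadic synchronisation can yield compartment semantics, a channel can be addressed by a vector of names, so that an interaction on a channel tagged with a compartment name only matches a partner carrying the same tag:

```latex
% Illustrative only: a pi@-style interaction using polyadic synchronisation.
% The composite channel name  react . c  pairs a reaction channel with a
% compartment name c, so the two processes below can communicate only if
% they refer to the same compartment.
\overline{react \cdot c}\langle a \rangle . P
\;\mid\;
react \cdot c(x) . Q
\;\longrightarrow\;
P \mid Q\{a/x\}
```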

Relevance: 20.00%

Abstract:

The concept of "core training" is widespread in modern training methodologies, although it is often linked to field practices that are not fully supported by scientific research. The term "core" identifies the "functional centre of the body", capable of ensuring "adequate proximal stability in order to optimize distal mobility", supporting the lumbo-pelvic complex and allowing the functional connection between limbs and trunk. The core musculature, consisting mainly of the abdominal and paraspinal regions, can be effectively conditioned through the use of unstable surfaces such as the Bosu and the fitball, tools that are common in the sports and rehabilitation fields but about which theories and knowledge are highly heterogeneous. The aim of this thesis is therefore, on the one hand, to compare a variety of exercises that can be performed on both of the aforementioned unstable surfaces, in order to determine the muscle activations at the level of the "core", and, on the other hand, to evaluate the effects induced by an 8-week core training programme on performance-specific parameters. The study, carried out on athletes with extensive experience in this field, shows higher electromyographic values on the fitball and the Bosu compared to the mat, although not significantly so; as regards the link with performance aspects, it emerges instead that a specific, neurally demanding training programme, modulated according to a rational target, can increase the endurance of the abdominal musculature, dynamic functional balance and the capacity to transmit forces between limbs and trunk, justifying its use in various areas of physical activity.

Relevance: 20.00%

Abstract:

The catalytic core of DNA polymerase III, composed of the three subunits α, ε and θ, is the minimal complex responsible for the replication of chromosomal DNA in Escherichia coli. In the holoenzyme, α and ε possess a 5'-3' polymerase activity and a 3'-5' exonuclease activity, respectively, whereas θ has no enzymatic functions. The present study focused on the regions of the core that interact directly with ε, namely θ (which binds the N-terminal end of ε) and the PHP domain of α (which binds the C-terminal end of ε), whose role had not been identified so far. In order to assign them a function, three parallel lines of research were pursued. First, the role of θ was studied using ex vivo and in vivo approaches. The results presented in this study show that θ significantly increases the stability of the intrinsically labile ε subunit. During the experiments, a new dimeric form of ε was also identified. Although the function of the dimer is not defined, it was shown to be actively dissociated by θ, which could therefore act as its regulator. In addition, the first growth-associated phenotype of θ was found and characterized. As for the PHP domain, it was shown to possess a pyrophosphatase activity, using a new assay designed to follow the kinetics of reactions catalysed by phosphate- or pyrophosphate-releasing enzymes. The hydrolysis of pyrophosphate catalysed by the PHP domain was shown to be able to sustain the polymerase activity of α in vitro, suggesting a possible role in vivo during DNA replication. Finally, a new procedure for the co-expression and purification of the α-ε-θ complex was developed.

Relevance: 20.00%

Abstract:

The aim of this thesis was to design, synthesize and develop a nanoparticle-based system to be used as a chemosensor or as a label in bioanalytical applications. A versatile, fluorescent, functionalizable nanoarchitecture has been effectively produced, based on the hydrolysis and condensation of TEOS in direct micelles of Pluronic® F127, obtaining highly monodisperse silica-core/PEG-shell nanoparticles with a diameter of about 20 nm. Surface-functionalized nanoparticles have been obtained in a one-pot procedure by chemical modification of the hydroxyl terminal groups of the surfactant. To make them fluorescent, a whole library of triethoxysilane fluorophores (mainly BODIPY-based) encompassing the whole visible spectrum has been synthesized: this derivatization allows a high degree of doping, but the close proximity of the molecules inside the silica matrix leads to self-quenching processes at high doping levels, with a concomitant fall of the fluorescence signal intensity. In order to bypass this parasitic phenomenon, multichromophoric systems have been prepared, in which highly efficient FRET processes occur, showing that this energy pathway is faster than self-quenching and recovering the fluorescence signal. The FRET efficiency remains very high even in four-dye nanoparticles, increasing the pseudo-Stokes shift of the system, an attractive feature for multiplexing analysis. These optimized nanoparticles have been successfully exploited in molecular imaging applications such as in vitro, in vivo and ex vivo imaging, proving themselves superior to conventional molecular fluorophores as signaling units.
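For context, the standard Förster relation (textbook FRET theory, not a result of the thesis) shows why keeping the dyes tightly co-localized inside the ~20 nm silica matrix makes energy transfer efficient enough to outrun self-quenching: the transfer efficiency decays with the sixth power of the donor-acceptor distance.

```latex
% Standard Förster resonance energy transfer efficiency for a single
% donor-acceptor pair at distance r, with Förster radius R_0
% (typically a few nanometres, i.e. comparable to the silica core size).
E_{\mathrm{FRET}} = \frac{1}{1 + \left( r / R_0 \right)^{6}}
```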

Relevance: 20.00%

Abstract:

This study focuses on the processes of change that firms undertake to overcome conditions of organizational rigidity and develop new dynamic capabilities, thanks to the contribution of external knowledge. When external contingencies highlight firms' core rigidities, external actors can intervene in change projects, providing new competences to firms' managers. Knowledge transfer and organizational learning processes can lead to the development of new dynamic capabilities. The existing literature does not completely explain how these processes develop and how external knowledge providers, such as management consultants, influence them. The dynamic capabilities literature has become very rich in recent years; however, the models that explain how dynamic capabilities evolve have not been thoroughly investigated. Adopting a qualitative approach, this research proposes four relevant case studies in which external actors introduce new knowledge within organizations, activating processes of change. Each case study consists of a management consulting project. Data were collected through in-depth interviews with consultants and managers. A large body of documents supports the evidence from the interviews. A narrative approach is adopted to account for the change processes, and a synthetic approach is proposed to compare the case studies along relevant dimensions. This study presents a model of capabilities evolution, supported by empirical evidence, to explain how external knowledge intervenes in capabilities evolution processes: first, external actors close the gaps between environmental demands and firms' capabilities, changing organizational structures and routines; second, a knowledge transfer between consultants and managers leads to the creation of new ordinary capabilities; third, managers can develop new dynamic capabilities through a deliberate learning process that internalizes new tacit knowledge from the consultants. After the end of the consulting project, two elements can influence the deliberate learning process: new external contingencies and changes in the perceptions about external actors.

Relevance: 20.00%

Abstract:

Spinal cord injury (SCI) results not only in paralysis, but is also associated with a range of autonomic dysregulations that can interfere with cardiovascular, bladder, bowel, temperature, and sexual function. The extent of the autonomic dysfunction is related to the level and severity of the injury to the descending autonomic (sympathetic) pathways. For many years there was limited awareness of these issues, and the attention given to them by the scientific and medical community was scarce. Yet, even though a new system to document the impact of SCI on autonomic function has recently been proposed, the current standard of assessment of SCI (the American Spinal Injury Association (ASIA) examination) evaluates motor and sensory pathways, but not the severity of the injury to autonomic pathways. Besides the severe impact on quality of life, autonomic dysfunction in persons with SCI is associated with an increased risk of cardiovascular disease and mortality. Therefore, obtaining information regarding autonomic function in persons with SCI is pivotal, and clinical examinations and laboratory evaluations to detect the presence of autonomic dysfunction and quantify its severity are mandatory. Furthermore, previous studies demonstrated that there is an intimate relationship between the autonomic nervous system and sleep from anatomical, physiological, and neurochemical points of view. However, even though previous epidemiological studies demonstrated that sleep problems are common in SCI, so far only limited polysomnographic (PSG) data are available. Finally, until now, the circadian and state-dependent autonomic regulation of blood pressure (BP), heart rate (HR) and body core temperature (BcT) had never been assessed in SCI patients. The aim of the current study was to establish the association between the autonomic control of cardiovascular function and thermoregulation, sleep parameters and increased cardiovascular risk in SCI patients.

Relevance: 20.00%

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers' responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed which work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budget are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude gains in speedup and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with it.
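As a minimal sketch of the multi-level and irregular parallelism such a runtime must support (plain OpenMP written against the standard API, not the thesis' embedded runtime), explicit tasks spawned over a linked list inside a parallel region are the canonical pattern:

```c
#include <omp.h>
#include <stdio.h>

typedef struct node { int value; struct node *next; } node_t;

/* Irregular parallelism: one explicit task per list element,
 * created by a single thread and executed by the whole team. */
static void process_list(node_t *head)
{
    #pragma omp parallel
    {
        #pragma omp single
        for (node_t *n = head; n != NULL; n = n->next) {
            #pragma omp task firstprivate(n)
            printf("node %d handled by thread %d\n",
                   n->value, omp_get_thread_num());
        }
    } /* implicit barrier: all tasks have completed here */
}

int main(void)
{
    node_t c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
    process_list(&a);   /* a nested "#pragma omp parallel" inside a task
                           would additionally exercise multi-level parallelism */
    return 0;
}
```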

Relevance: 20.00%

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely undermine the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on a hybrid HW/SW synchronization mechanism. Besides the architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and it mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology are severely limiting the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC, where memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and at the same time improve energy efficiency by means of aggressive voltage scaling when workload requirements allow it. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
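Purely as a generic illustration of the software half of a hybrid HW/SW synchronization scheme (an assumption for exposition, not the mechanism designed in the thesis), a sense-reversing barrier built on one atomic counter captures the semantics that a small hardware semaphore or event unit could implement more cheaply:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define NUM_CORES 16   /* hypothetical number of participating cores */

static atomic_int  arrived = 0;       /* cores that reached the barrier */
static atomic_bool sense   = false;   /* global phase flag              */

/* Each core keeps its own local_sense, initialised to false. */
void barrier_wait(bool *local_sense)
{
    *local_sense = !*local_sense;                   /* flip my phase */
    if (atomic_fetch_add(&arrived, 1) == NUM_CORES - 1) {
        atomic_store(&arrived, 0);                  /* last core resets    */
        atomic_store(&sense, *local_sense);         /* ...and releases all */
    } else {
        while (atomic_load(&sense) != *local_sense)
            ;   /* spin; a HW event unit could replace this busy wait */
    }
}
```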

Relevance: 20.00%

Abstract:

This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities. Hence their price-performance ratio is unparalleled in the world of high performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application, where the execution time is dominated by a high number of floating point instructions. Then the thesis touches the central problem of the efficient management of power peaks in heterogeneous computing systems. Finally, it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This work makes two main contributions here: the approach reduces the supply cost due to high peak power whilst having a negligible impact on the parallelism of the computational nodes; from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit. Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
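As a toy, hedged sketch of the idea behind power-aware workload distribution (illustrative only; the thesis' actual scheduler and its parameters are not reproduced here), a greedy policy can place each job on the least-loaded node while refusing placements that would push the aggregate draw above a peak-power cap:

```c
#include <stdio.h>

#define NODES      4
#define POWER_CAP  900.0   /* watts, hypothetical global peak-power budget */

/* Returns the node chosen for a job drawing job_power watts,
 * or -1 if accepting it now would exceed the peak-power cap. */
int schedule(double node_power[NODES], double job_power)
{
    int best = 0;
    double total = 0.0;
    for (int i = 0; i < NODES; i++) {
        total += node_power[i];
        if (node_power[i] < node_power[best])
            best = i;
    }
    if (total + job_power > POWER_CAP)
        return -1;                    /* defer the job to a later slot */
    node_power[best] += job_power;    /* account for the new draw */
    return best;
}

int main(void)
{
    double nodes[NODES] = { 120.0, 200.0, 90.0, 150.0 };
    printf("job placed on node %d\n", schedule(nodes, 250.0));
    return 0;
}
```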

Relevance: 20.00%

Abstract:

During the last few decades an unprecedented technological growth has been at the center of the embedded systems design landscape, with Moore's Law being the leading factor of this trend. Today, in fact, an ever increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers are in fact facing the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they have to cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first one is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second work exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis is focused on virtualization techniques, with the goal of mitigating, and overcoming when possible, some of the challenges introduced by the many-core design paradigm.

Relevance: 20.00%

Abstract:

Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through the administration of urban centers and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are more and more in demand, in order to find good solutions to complex strategic problems quickly. Resource optimization is nowadays fundamental ground for laying the foundations of successful projects. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations, which allow researchers to face ever more challenging problems. However, from the application point of view, it seems that the rate of theoretical development cannot keep pace with that enjoyed by modern hardware technologies, especially with reference to the processor industry. In this work we propose new parallel algorithms, designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (such as Dynamic Programming), the computational benefits are remarkable, lowering the execution times by more than an order of magnitude and allowing instances of previously intractable size to be addressed. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claim by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to 40x factors.
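As a minimal illustration of the stage-level parallelism a Dynamic Programming recursion exposes (a generic toy knapsack written for this listing, not one of the thesis' algorithms), every entry of a DP stage depends only on the previous stage, so a whole stage can be filled in parallel:

```c
#include <omp.h>
#include <stdio.h>
#include <string.h>

#define CAP 10000   /* knapsack capacity */
#define N   4       /* number of items   */

/* Double-buffered 0/1 knapsack: the entries of stage i depend only on
 * stage i-1, so the loop over capacities is embarrassingly parallel. */
static long prev[CAP + 1], curr[CAP + 1];

long solve(const int w[N], const long v[N])
{
    memset(prev, 0, sizeof prev);
    for (int i = 0; i < N; i++) {
        #pragma omp parallel for schedule(static)
        for (int c = 0; c <= CAP; c++) {
            long best = prev[c];                      /* skip item i */
            if (c >= w[i] && prev[c - w[i]] + v[i] > best)
                best = prev[c - w[i]] + v[i];         /* take item i */
            curr[c] = best;
        }
        memcpy(prev, curr, sizeof prev);
    }
    return prev[CAP];
}

int main(void)
{
    int  w[N] = { 1200, 3400, 2500, 4100 };
    long v[N] = { 10, 40, 30, 50 };
    printf("best value = %ld\n", solve(w, v));
    return 0;
}
```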