Abstract:
The catalytic core of DNA polymerase III, composed of the three subunits α, ε and θ, is the minimal complex responsible for the replication of chromosomal DNA in Escherichia coli. In the holoenzyme, α and ε possess a 5'-3' polymerase activity and a 3'-5' exonuclease activity, respectively, while θ has no enzymatic function. The present study focused on the regions of the core that interact directly with ε, namely θ (which binds the N-terminal region of ε) and the PHP domain of α (which binds the C-terminal region of ε), whose roles had not been identified so far. To assign them a function, three parallel lines of research were pursued. First, the role of θ was studied using ex-vivo and in-vivo approaches. The results presented in this study show that θ significantly increases the stability of the intrinsically labile ε subunit. During these experiments a new dimeric form of ε was also identified. Although the function of the dimer is not defined, it was shown to be actively dissociated by θ, which could therefore act as its regulator. Furthermore, the first growth-associated phenotype of θ was found and characterized. As for the PHP domain, it was shown to possess a pyrophosphatase activity, using a new assay designed to follow the kinetics of reactions catalyzed by phosphate- or pyrophosphate-releasing enzymes. The PHP-catalyzed hydrolysis of pyrophosphate was shown to sustain the polymerase activity of α in vitro, suggesting a possible role in vivo during DNA replication. Finally, a new procedure for the co-expression and purification of the α-ε-θ complex was developed.
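The pyrophosphate-release assay described above follows reaction kinetics over time. As a rough illustration of the kind of progress curve such an assay produces, here is a minimal sketch assuming simple Michaelis-Menten kinetics; all parameter values are hypothetical, not taken from the study:

```python
# Illustrative Michaelis-Menten progress curve for a pyrophosphate-releasing
# enzyme. Vmax, Km and the initial substrate concentration s0 are hypothetical
# values, not figures from the study.
def progress_curve(vmax, km, s0, t_end, dt=0.01):
    """Return (time, product) points from a simple Euler integration."""
    s, p, t = s0, 0.0, 0.0
    points = []
    while t <= t_end:
        points.append((t, p))
        rate = vmax * s / (km + s)   # Michaelis-Menten rate law
        ds = min(rate * dt, s)       # substrate consumed this step
        s -= ds
        p += ds
        t += dt
    return points

curve = progress_curve(vmax=1.0, km=0.5, s0=10.0, t_end=5.0)
```

Product release rises steeply while substrate is saturating and flattens as substrate is consumed, which is the shape a phosphate/pyrophosphate-detection assay would trace.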
Abstract:
The aim of this thesis was to design, synthesize and develop a nanoparticle-based system to be used as a chemosensor or as a label in bioanalytical applications. A versatile fluorescent, functionalizable nanoarchitecture has been effectively produced, based on the hydrolysis and condensation of TEOS in direct micelles of Pluronic® F 127, obtaining highly monodisperse silica-core/PEG-shell nanoparticles with a diameter of about 20 nm. Surface-functionalized nanoparticles have been obtained in a one-pot procedure by chemical modification of the hydroxyl terminal groups of the surfactant. To make them fluorescent, a whole library of triethoxysilane fluorophores (mainly BODIPY-based), encompassing the whole visible spectrum, has been synthesized: this derivatization allows a high degree of doping, but the close proximity of the molecules inside the silica matrix leads to self-quenching processes at high doping levels, with a concomitant fall in the fluorescence signal intensity. In order to bypass this parasitic phenomenon, multichromophoric systems have been prepared in which highly efficient FRET processes occur, showing that this energy pathway is faster than self-quenching and recovers the fluorescence signal. The FRET efficiency remains very high even in four-dye nanoparticles, increasing the pseudo-Stokes shift of the system, an attractive feature for multiplexing analysis. These optimized nanoparticles have been successfully exploited in molecular imaging applications such as in vitro, in vivo and ex vivo imaging, proving themselves superior to conventional molecular fluorophores as signaling units.
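The FRET processes mentioned above follow the standard Förster relation, in which transfer efficiency falls off with the sixth power of the donor-acceptor distance. A minimal sketch of that textbook relation (the distances used below are illustrative, not values from the thesis):

```python
def fret_efficiency(r, r0):
    """Förster energy-transfer efficiency for donor-acceptor distance r
    and Förster radius r0 (both in the same units, e.g. nm)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# At r = r0 the efficiency is 50% by definition of the Förster radius.
assert abs(fret_efficiency(5.0, 5.0) - 0.5) < 1e-12
```

The steep r^6 dependence is why confining dyes at close range inside a silica matrix makes FRET outcompete self-quenching.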
Abstract:
This study focuses on the processes of change that firms undertake to overcome conditions of organizational rigidity and develop new dynamic capabilities, thanks to the contribution of external knowledge. When external contingencies highlight firms' core rigidities, external actors can intervene in change projects, providing new competences to firms' managers. Knowledge transfer and organizational learning processes can then lead to the development of new dynamic capabilities. Existing literature does not completely explain how these processes develop and how external knowledge providers, such as management consultants, influence them. The dynamic capabilities literature has become very rich in recent years; however, the models that explain how dynamic capabilities evolve have not been thoroughly investigated. Adopting a qualitative approach, this research proposes four relevant case studies in which external actors introduce new knowledge within organizations, activating processes of change. Each case study consists of a management consulting project. Data were collected through in-depth interviews with consultants and managers. A large body of documents supports the evidence from the interviews. A narrative approach is adopted to account for the change processes, and a synthetic approach is proposed to compare the case studies along relevant dimensions. This study presents a model of capabilities evolution, supported by empirical evidence, to explain how external knowledge intervenes in capabilities evolution processes: first, external actors close gaps between environmental demands and firms' capabilities by changing organizational structures and routines; second, a knowledge transfer between consultants and managers leads to the creation of new ordinary capabilities; third, managers can develop new dynamic capabilities through a deliberate learning process that internalizes new tacit knowledge from the consultants.
After the end of the consulting project, two elements can influence the deliberate learning process: new external contingencies and changes in the perceptions about external actors.
Abstract:
Spinal cord injury (SCI) results not only in paralysis, but is also associated with a range of autonomic dysregulation that can interfere with cardiovascular, bladder, bowel, temperature, and sexual function. The extent of the autonomic dysfunction is related to the level and severity of injury to the descending autonomic (sympathetic) pathways. For many years there was limited awareness of these issues, and the attention given to them by the scientific and medical community was scarce. Even though a new system to document the impact of SCI on autonomic function has recently been proposed, the current standard assessment of SCI (the American Spinal Injury Association (ASIA) examination) evaluates motor and sensory pathways, but not the severity of injury to autonomic pathways. Besides its severe impact on quality of life, autonomic dysfunction in persons with SCI is associated with an increased risk of cardiovascular disease and mortality. Therefore, obtaining information regarding autonomic function in persons with SCI is pivotal, and clinical examinations and laboratory evaluations to detect the presence of autonomic dysfunction and quantify its severity are mandatory. Furthermore, previous studies demonstrated that there is an intimate relationship between the autonomic nervous system and sleep from anatomical, physiological, and neurochemical points of view. Moreover, although previous epidemiological studies demonstrated that sleep problems are common in SCI, so far only limited polysomnographic (PSG) data are available. Finally, until now, circadian and state-dependent autonomic regulation of blood pressure (BP), heart rate (HR) and body core temperature (BcT) had never been assessed in SCI patients. The aim of the current study was to establish the association between the autonomic control of cardiovascular function and thermoregulation, sleep parameters, and increased cardiovascular risk in SCI patients.
Abstract:
Although functional magnetic resonance imaging (fMRI) of interictal spikes with simultaneous EEG recording has been investigated for several years as a means of localizing the brain structures involved in patients with focal seizure disorders, it remains an experimental method. To obtain reliable results, improving the signal-to-noise ratio in the statistical analysis of the image data is particularly important. Earlier studies on so-called event-related fMRI point to a relationship between the frequency of single stimuli and the subsequent hemodynamic signal response in fMRI. To demonstrate a possible influence of the frequency of interictal spikes on the signal response, 20 children with focal epilepsy were examined with EEG-fMRI. The data of 11 of these patients could be analyzed. In a two-fold analysis with the software package SPM99, the image data were first assigned to the "stimulus" or "rest" condition solely according to the occurrence of interictal spikes, regardless of the number of spikes per measurement time point (on/off analysis). In a second step, the "stimulus" conditions were also analyzed differentiated by the number of individual spikes (frequency-correlated analysis). In 5 of the 11 patients, these analyses showed an increase in the sensitivity and significance of the activations detected in fMRI. A higher specificity, however, could not be demonstrated. These results point to a positive correlation between stimulus frequency and subsequent hemodynamic response for interictal spikes as well, which can be exploited for EEG-fMRI. In 6 patients no fMRI activation could be detected; possible technical and physiological causes for this are discussed.
Abstract:
The efficient emulation of a many-core architecture is a challenging task: each core could be emulated through a dedicated thread, and such threads would be interleaved on either a single-core or a multi-core processor, but the high number of context switches would result in unacceptable performance. To support this kind of application, the GPU computational power is exploited in order to schedule the emulation threads on the GPU cores. This presents a non-trivial divergence issue, since GPU computational power is offered through SIMD processing elements, which are forced to synchronously execute the same instruction on different memory portions. Thus, a new emulation technique is introduced in order to overcome this limitation: instead of providing a routine for each ISA opcode, the emulator mimics the behavior of the micro-architecture level, where instructions are data that a single routine takes as input. Our new technique has been implemented and compared with the classic emulation approach, in order to investigate the viability of a hybrid solution.
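The "instructions as data" idea can be illustrated with a small sketch: a single dispatch routine interprets an opcode table, so every emulated core follows the same code path, which is the property that avoids SIMD divergence on the GPU. The toy ISA below is hypothetical and far simpler than a real emulator:

```python
# Toy micro-architecture-level interpreter: opcodes index a table instead of
# selecting per-opcode routines, so all emulated cores execute the SAME
# routine in lockstep. The three-opcode ISA is a hypothetical illustration.
ALU = {
    0: lambda a, b: a + b,   # ADD
    1: lambda a, b: a - b,   # SUB
    2: lambda a, b: a * b,   # MUL
}

def step_all(cores):
    """Advance every emulated core by one instruction through one shared routine."""
    for core in cores:
        op, a, b = core["program"][core["pc"]]   # instruction fetched as data
        core["acc"] = ALU[op](a, b)              # uniform table dispatch
        core["pc"] += 1

cores = [
    {"pc": 0, "acc": 0, "program": [(0, 2, 3)]},  # computes 2 + 3
    {"pc": 0, "acc": 0, "program": [(2, 4, 5)]},  # computes 4 * 5
]
step_all(cores)
```

On a real GPU the loop body would map to SIMD lanes; because every lane runs the same fetch-dispatch routine, the lanes never branch apart even when the emulated programs differ.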
Abstract:
During the last years great effort has been devoted to the fabrication of superhydrophobic surfaces because of their self-cleaning properties. A water drop on a superhydrophobic surface rolls off even at inclinations of only a few degrees while taking up contaminants encountered on its way. Superhydrophobic, self-cleaning coatings are desirable for convenient and cost-effective maintenance of a variety of surfaces. Ideally, such coatings should be easy to make and apply, mechanically resistant, and long-term stable. None of the existing methods has yet mastered the challenge of meeting all of these criteria. Superhydrophobicity is associated with surface roughness. The lotus leaf, with its dual-scale roughness, is one of the most efficient examples of a superhydrophobic surface. This thesis work proposes a novel technique to prepare superhydrophobic surfaces that introduces the two length scales of roughness by growing silica particles (~100 nm in diameter) onto micrometer-sized polystyrene particles using the well-established Stöber synthesis. Mechanical resistance is conferred to the resulting "raspberries" by the synthesis of a thin silica shell on their surface. Besides being easy to make and handle, these particles offer possibilities for improving suitability for technical applications: since they disperse in water, multilayers can be prepared on substrates by simple drop casting, even on surfaces with grooves and slots. The solution to the main problem – stabilizing the multilayer – also lies in the design of the particles: the shells – although mechanically stable – are porous enough to allow for leakage of polystyrene from the core. Under tetrahydrofuran vapor, polystyrene bridges form between the particles that render the multilayer film stable. Multilayers are good candidates for designing surfaces whose roughness is preserved after scratching: if the top-most layer is removed, the roughness can still be ensured by the underlying layer. After hydrophobization by chemical vapor deposition (CVD) of a semi-fluorinated silane, the surfaces are superhydrophobic with a tilting angle of a few degrees.
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers' responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and these algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude speedups and energy-efficiency gains compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares its memory banks with the cores, and the template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system.
Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with the accelerators.
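One of the data-partitioning algorithms such a runtime must implement is static loop chunking, which splits an iteration space evenly among threads. A minimal sketch of how an OpenMP-style static schedule can be computed (a generic illustration, not the dissertation's actual runtime code):

```python
# Static loop partitioning in the style of OpenMP's schedule(static):
# the first (n_iters % n_threads) threads get one extra iteration so the
# load differs by at most one iteration across threads.
def static_chunks(n_iters, n_threads):
    """Return per-thread half-open iteration ranges [start, end)."""
    base, extra = divmod(n_iters, n_threads)
    chunks, start = [], 0
    for t in range(n_threads):
        size = base + (1 if t < extra else 0)
        chunks.append((start, start + size))
        start += size
    return chunks

print(static_chunks(10, 3))   # three near-equal ranges covering 0..9
```

Because the mapping is a pure function of the loop bounds and the thread count, each thread can compute its own range with no synchronization, which is exactly why static scheduling is cheap on scratchpad-based many-cores.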
Abstract:
The aim of this work was to study the dense cloud structures and to obtain the mass distribution of the dense cores (CMF) within the NGC6357 complex, from observations of the dust continuum at 450 and 850 μm of a 30 × 30 arcmin² region containing the H II regions G353.2+0.9 and G353.1+0.6.
Abstract:
Despite the many issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable with the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely limit the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture.
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high-performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application where the execution time is dominated by a high number of floating-point instructions. Then the thesis addresses the central problem of efficient management of power peaks in heterogeneous computing systems. Finally it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. The two main contributions of this work follow: the approach reduces the supply cost due to high peak power whilst having negligible impact on the parallelism of the computational nodes; from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit.
Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
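The peak-power-reduction idea can be illustrated with a greedy placement sketch: each task is assigned to the node whose current power draw is lowest, flattening the aggregate peak. This is a simplified illustration in the spirit of the described scheduler, not its actual algorithm, and the task power figures are hypothetical:

```python
# Greedy peak-power-aware placement: largest tasks first, each to the node
# currently drawing the least power. Task power values (in watts) are
# hypothetical illustrations.
def schedule(tasks, n_nodes):
    """Return (per-node power totals, placement list aligned with sorted tasks)."""
    nodes = [0.0] * n_nodes
    placement = []
    for power in sorted(tasks, reverse=True):          # largest-first heuristic
        target = min(range(n_nodes), key=lambda i: nodes[i])
        nodes[target] += power
        placement.append(target)
    return nodes, placement

nodes, placement = schedule([90, 80, 30, 20, 10], n_nodes=2)
print(nodes)   # per-node totals; the peak stays close to total/n_nodes
```

Keeping the per-node totals balanced bounds the peak draw of any single node, which is what lets the power supply unit be sized below the sum of all task powers.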
Abstract:
Decomposition-based approaches are recalled from the primal and dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required per iteration. This trade-off is explored for several sets of instances, and the results are compared with the ones obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed, in which an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios. The goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible for every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented.
The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
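The Frank-Wolfe algorithm mentioned above alternates a linear subproblem with a convex-combination step. A minimal sketch on a toy problem, minimizing a quadratic over the probability simplex, where the linear subproblem reduces to picking one vertex; in the route-assignment setting that subproblem would instead be a shortest-path / min-cost flow solve. The target vector c is purely illustrative:

```python
# Frank-Wolfe on a toy problem: minimize f(x) = ||x - c||^2 over the
# probability simplex. The linear minimization over the simplex is solved
# by selecting the vertex e_j with the most negative gradient component.
def frank_wolfe(c, iters=2000):
    n = len(c)
    x = [1.0 / n] * n                                  # feasible starting point
    for k in range(iters):
        grad = [2.0 * (x[i] - c[i]) for i in range(n)]
        j = min(range(n), key=lambda i: grad[i])       # LP over simplex -> vertex
        gamma = 2.0 / (k + 2.0)                        # standard step size
        x = [(1.0 - gamma) * xi for xi in x]           # move toward vertex e_j
        x[j] += gamma
    return x

x = frank_wolfe([0.7, 0.2, 0.1])
```

Every iterate stays feasible by construction (a convex combination of simplex points), which is the property that makes Frank-Wolfe attractive for traffic assignment, where feasibility means a valid flow.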
Abstract:
This thesis is set within the STOCKMAPPING project, one of the studies developed in the framework of the RITMARE Flagship project. The main goals of STOCKMAPPING were the creation of a genomic mapping for stocks of demersal target species and the assembly of a population genomics database, in order to identify stocks and stock boundaries. The thesis focuses on three main objectives, representing the core of the initial assessment of the methodologies and structure to be applied to the entire STOCKMAPPING project: identification of an analytical design to identify and locate stocks and stock boundaries of Mullus barbatus; application of a multidisciplinary approach to validate the biological methods; and an initial assessment and improvement of the genotyping-by-sequencing technique utilized (2b-RAD). The first step is the identification of an analytical design that takes into account the biological characteristics of red mullet and is representative of the STOCKMAPPING commitments. In this framework a reduction and selection step was needed due to budget cuts, and sampling areas were ranked according to four priorities. To guarantee a multidisciplinary approach, the biological data associated with the collected samples were used to investigate differences between sampling areas and GSAs. Genomic techniques were applied to red mullet for the first time, so an initial assessment of the molecular protocols for DNA extraction and 2b-RAD processing was needed. In the end, 192 good-quality DNA samples were extracted and eight samples were processed with 2b-RAD. Using the software Stacks for sequence analysis, a great number of SNP markers were identified among the eight samples. Several tests were performed, changing the main parameters of the Stacks pipeline, in order to identify the most explanatory and functional set of parameters.
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or the software deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute premiere in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we shy away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence matches real-world industrial needs better.
To corroborate our results we present findings from real-world case studies from the avionics industry.
Abstract:
The aim of this work is to provide a precise and accurate measurement of the 238U(n,gamma) reaction cross-section. This reaction is of fundamental importance for the design calculations of nuclear reactors, governing the behaviour of the reactor core. In particular, fast neutron reactors, which are experiencing growing interest for their ability to burn radioactive waste, operate in the high-energy region of the neutron spectrum. In this energy region, inconsistencies of up to 15% are present between the existing measurements, and the most recent evaluations disagree with each other. In addition, the assessment of nuclear data uncertainty performed for innovative reactor systems shows that the uncertainty in the radiative capture cross-section of 238U should be further reduced, to 1-3% in the energy region from 20 eV to 25 keV. To this purpose, identified by the Nuclear Energy Agency as a priority nuclear data need, complementary experiments, one at the GELINA facility and two at the n_TOF facility, were scheduled within the ANDES project under the 7th Framework Programme of the European Commission. The results of one of the 238U(n,gamma) measurements performed at the n_TOF facility at CERN are presented in this work, carried out with a detection system consisting of two liquid scintillators. The very accurate cross-section from this work is compared with the results obtained from the other measurement performed at the n_TOF facility, which exploited a different and complementary detection technique. The excellent agreement between the two data sets shows that they can contribute to the reduction of the cross-section uncertainty down to the required 1-3%.