32 results for categorization IT PFC computational neuroscience model HMAX

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons: portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems; on the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood, both in research and in industry, that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) in order to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more in general, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications for the target MPSoC platforms.

Energy-Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor

Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards the increase of LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating for the resulting luminance reduction, and thus for the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.

Thesis Overview

The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support. We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
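As a rough illustration of the backlight-scaling idea described above, the sketch below dims the backlight and boosts pixel values to preserve perceived luminance; the function name, the simple multiplicative compensation, and the toy frame are illustrative assumptions, not the dissertation's hardware-assisted algorithm.

import numpy as np

def compensate_frame(frame, backlight_scale):
    # Dim the backlight by `backlight_scale` (0 < s <= 1) and boost pixel
    # values to preserve perceived luminance, clipping saturated pixels.
    # `frame` is an 8-bit RGB image as a NumPy array of shape (H, W, 3).
    # Software model for illustration only, not the actual implementation.
    boosted = frame.astype(np.float32) / backlight_scale
    return np.clip(boosted, 0, 255).astype(np.uint8)

# Toy usage: dim the backlight to 70% and compensate a random frame.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
out = compensate_frame(frame, backlight_scale=0.7)
# Pixels already brighter than 0.7 * 255 saturate; their fraction bounds the
# visible quality loss for a given backlight level.
saturated = np.mean(frame >= 0.7 * 255)
print(f"saturated pixels: {saturated:.1%}")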

Relevance: 100.00%

Abstract:

The role of mitochondrial dysfunction in cancer has long been a subject of great interest. In this study, such dysfunction has been examined with regard to thyroid oncocytoma, a rare form of cancer accounting for less than 5% of all thyroid cancers. A peculiar characteristic of thyroid oncocytic cells is the presence of an abnormally large number of mitochondria in the cytoplasm. Such mitochondrial hyperplasia has also been observed in cells derived from patients suffering from mitochondrial encephalomyopathies, where mutations in the mitochondrial DNA (mtDNA) encoding the respiratory complexes result in oxidative phosphorylation dysfunction. An increase in the number of mitochondria occurs in the latter in order to compensate for the respiratory deficiency. This fact spurred the investigation into the presence of analogous mutations in thyroid oncocytic cells. In this study, the only available cell model of thyroid oncocytoma was utilised, the XTC-1 cell line, established from an oncocytic thyroid metastasis to the breast. In order to assess the energetic efficiency of these cells, they were incubated in a medium lacking glucose and supplemented instead with galactose. When subjected to such conditions, glycolysis is effectively inhibited and the cells are forced to use the mitochondria for energy production. Cell viability experiments revealed that XTC-1 cells were unable to survive in galactose medium. This was in marked contrast to the TPC-1 control cell line, a thyroid tumour cell line which does not display the oncocytic phenotype. In agreement with these findings, subsequent experiments assessing the levels of cellular ATP over incubation time in galactose medium showed a drastic and continual decrease in ATP levels only in the XTC-1 cell line. Furthermore, experiments on digitonin-permeabilised cells revealed that the respiratory dysfunction in the latter was due to a defect in complex I of the respiratory chain. Subsequent experiments using cybrids demonstrated that this defect could be attributed to the mitochondrially-encoded subunits of complex I as opposed to the nuclear-encoded subunits. Confirmation came with mtDNA sequencing, which detected the presence of a novel mutation in the ND1 subunit of complex I. In addition, a mutation in the cytochrome b subunit of complex III of the respiratory chain was detected. The fact that XTC-1 cells are unable to survive when incubated in galactose medium is consistent with the fact that many cancers are largely dependent on glycolysis for energy production. Indeed, numerous studies have shown that glycolytic inhibitors are able to induce apoptosis in various cancer cell lines. Subsequent experiments were therefore performed in order to identify the mode of XTC-1 cell death when subjected to the metabolic stress imposed by the forced use of the mitochondria for energy production. Cell shrinkage and mitochondrial fragmentation were observed in the dying cells, which would indicate an apoptotic type of cell death. Analysis of additional parameters however revealed a lack of both DNA fragmentation and caspase activation, thus excluding a classical apoptotic type of cell death. Interestingly, cleavage of the actin component of the cytoskeleton was observed, implicating the action of proteases in this mode of cell demise. However, experiments employing protease inhibitors failed to identify the specific protease involved.
It has been reported in the literature that overexpression of Bcl-2 is able to rescue cells presenting a respiratory deficiency. As the XTC-1 cell line is not only respiration-deficient but also exhibits a marked decrease in Bcl-2 expression, it is a perfect model with which to study the relationship between Bcl-2 and oxidative phosphorylation in respiratory-deficient cells. Contrary to the reported literature studies on various cell lines harbouring defects in the respiratory chain, Bcl-2 overexpression was not shown to increase cell survival or rescue the energetic dysfunction in XTC-1 cells. Interestingly however, it had a noticeable impact on cell adhesion and morphology. Whereas XTC-1 cells shrank and detached from the growth surface under conditions of metabolic stress, Bcl-2-overexpressing XTC-1 cells appeared much healthier and were up to 45% more adherent. The target of Bcl-2 in this setting appeared to be the actin cytoskeleton, as the cleavage observed in XTC-1 cells expressing only endogenous levels of Bcl-2 was inhibited in Bcl-2-overexpressing cells. Thus, although unable to rescue XTC-1 cells in terms of cell viability, Bcl-2 is somehow able to stabilise the cytoskeleton, resulting in modifications in cell morphology and adhesion. The mitochondrial respiratory deficiency observed in cancer cells is thought not only to cause an increased dependency on glycolysis but also to blunt cellular responses to anticancer agents. The effects of several therapeutic agents were thus assessed for their death-inducing ability in XTC-1 cells. Cell viability experiments clearly showed that the cells were more resistant to stimuli which generate reactive oxygen species (tert-butylhydroperoxide) and to mitochondrial calcium-mediated apoptotic stimuli (C6-ceramide), as opposed to stimuli inflicting DNA damage (cisplatin) and damage to protein kinases (staurosporine). Various studies in the literature have reported that the peroxisome proliferator-activated receptor coactivator 1 (PGC-1α), which plays a fundamental role in mitochondrial biogenesis, is also involved in protecting cells against apoptosis caused by the former two types of stimuli. In accordance with these observations, real-time PCR experiments showed that XTC-1 cells express higher mRNA levels of this coactivator than do the control cells, implicating its importance in drug resistance. In conclusion, this study has revealed that XTC-1 cells, like many cancer cell lines, are characterised by a reduced energetic efficiency due to mitochondrial dysfunction. Said dysfunction has been attributed to mutations in respiratory genes encoded by the mitochondrial genome. Although the mechanism of cell demise in conditions of metabolic stress is unclear, the potential of targeting thyroid oncocytic cancers using glycolytic inhibitors has been illustrated. In addition, the discovery of mtDNA mutations in XTC-1 cells has enabled the use of this cell line as a model with which to study the relationship between Bcl-2 overexpression and oxidative phosphorylation in cells harbouring mtDNA mutations, and also to investigate the significance of such mutations in establishing resistance to apoptotic stimuli.

Relevance: 100.00%

Abstract:

Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since such models intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax - e.g., two tuples containing the same data may not match due to differences in the tuple structure - and (mostly) of semantics - e.g., two tuples representing the same information may not match because of the different syntax adopted. Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within a Semantic Web middleware (analogous approaches have been surveyed in the literature). However, such solutions appear to be designed to tackle the design of coordination for specific application contexts like the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space, where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. Then, the tuple centre model was semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible.
By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
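As a rough illustration of the tuple-space abstraction extended here, the sketch below implements a minimal Linda-like space with template matching and a toy hook for reacting to interaction events; the operation names follow Linda conventions, and the Python-callback "reaction" is only a stand-in for the programmable behaviour of tuple centres, not the infrastructure actually used in the thesis.

class TupleCentre:
    # Minimal Linda-like tuple space with a toy reaction hook (illustrative
    # only; real tuple centres program reactions in a dedicated language).
    def __init__(self):
        self.tuples = []
        self.reactions = []          # callbacks fired on each 'out'

    def out(self, tpl):
        self.tuples.append(tpl)
        for react in self.reactions:
            react(self, tpl)         # behaviour attached to the medium

    def _match(self, template, tpl):
        # None in the template acts as a wildcard for that field.
        return len(template) == len(tpl) and all(
            t is None or t == v for t, v in zip(template, tpl))

    def rd(self, template):
        return next((t for t in self.tuples if self._match(template, t)), None)

    def in_(self, template):
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

tc = TupleCentre()
# Reaction: whenever a Celsius reading is emitted, silently add a Fahrenheit
# copy to the space (appended directly to avoid re-triggering reactions).
tc.reactions.append(lambda c, t: c.tuples.append(("temp_f", t[1] * 9 / 5 + 32))
                    if t[0] == "temp_c" else None)
tc.out(("temp_c", 20))
print(tc.rd(("temp_f", None)))       # -> ('temp_f', 68.0)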

Relevance: 100.00%

Abstract:

This work focuses on the study of saltwater intrusion in coastal aquifers, and in particular on the realization of conceptual schemes to evaluate the risk associated with it. Saltwater intrusion depends on different natural and anthropic factors, both exhibiting a strongly aleatory behaviour, which should be considered for an optimal management of the territory and water resources. Given the uncertainty of the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological problem parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion, able to provide an accurate description of the model without a great computational burden. When the assumptions of classical analytical models are not respected, as happens several times in applications to real case studies, including the area analyzed in the present work, one can adopt data-driven techniques, based on the analysis of the data characterizing the system under study. It follows that a model can be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
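A minimal sketch of the surrogate-plus-sensitivity workflow described above: a polynomial expansion is fitted by regression to a toy stand-in model, and first-order Sobol indices are estimated on the surrogate by pick-freeze sampling. The "toe position" function, the two input parameters and the degree-2 basis are assumptions for illustration, not the thesis's actual model or PCE setup.

import numpy as np

rng = np.random.default_rng(0)

def toe_position(q_fresh, q_pump):
    # Hypothetical stand-in for a sharp-interface model: distance of the
    # saltwater toe as a function of freshwater flux and pumping rate,
    # both scaled to [-1, 1]. Not the thesis model.
    return 100.0 + 30.0 * q_fresh - 20.0 * q_pump + 10.0 * q_fresh * q_pump

# Regression-based polynomial surrogate (a minimal PCE-like expansion).
n_train = 200
X = rng.uniform(-1, 1, size=(n_train, 2))
y = toe_position(X[:, 0], X[:, 1])

def basis(x):
    # Total-degree-2 polynomial basis in two variables.
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
surrogate = lambda x: basis(x) @ coef

# First-order Sobol indices via pick-freeze sampling on the cheap surrogate.
n = 20_000
A, B = rng.uniform(-1, 1, (n, 2)), rng.uniform(-1, 1, (n, 2))
fA = surrogate(A)
var = fA.var()
for i, name in enumerate(["freshwater flux", "pumping rate"]):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                      # freeze variable i
    S_i = np.mean(fA * (surrogate(ABi) - surrogate(B))) / var
    print(f"S_{name}: {S_i:.2f}")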

Relevance: 100.00%

Abstract:

Study Objectives. The use of mouse models in sleep apnea research is limited by the belief that central (CSA) but not obstructive sleep apneas (OSA) occur in rodents. With this study we wanted to develop a protocol to look for the presence of OSAs in wild-type mice and, then, to apply it to a mouse model of Down Syndrome (DS), a human pathology characterized by a high incidence of OSAs. Methods. Nine C57Bl/6J wild-type mice were implanted with electrodes for electroencephalography (EEG), neck electromyography (nEMG) and diaphragmatic activity (DIA), and then placed in a whole-body plethysmographic (WBP) chamber for 8 h during the resting (light) phase to simultaneously record sleep and breathing activity. The concomitant analysis of the WBP and DIA signals allowed the discrimination between CSA and OSA. The same protocol was then applied to 12 Ts65Dn mice (a validated model of DS) and 14 euploid controls. Results. OSAs represented about half of the apneic events recorded during rapid-eye-movement sleep (REMS) in each experimental group, while almost only CSAs were found during non-REMS. Ts65Dn mice had a rate of apneic events similar to that of euploid controls, but a significantly higher occurrence of OSAs during REMS. Conclusions. We demonstrated for the first time that mice physiologically exhibit both CSAs and OSAs, and that the latter are more prevalent in the Ts65Dn mouse model of DS. These findings indicate that mice can be used as a valid tool to accelerate the comprehension of the pathophysiology of all kinds of sleep apnea and for the development of new therapeutic approaches to counteract these respiratory disorders.

Relevance: 100.00%

Abstract:

The full exploitation of the multi-hop, multi-path connectivity opportunities offered by heterogeneous wireless interfaces could enable innovative Always Best Served (ABS) deployment scenarios where mobile clients dynamically self-organize to offer/exploit Internet connectivity at best. Only novel middleware solutions based on heterogeneous context information can seamlessly enable this scenario: middleware solutions should i) provide translucent access to low-level components, to achieve both fully aware and simplified pre-configured interactions, ii) permit full exploitation of communication interface capabilities, i.e., not only obtaining but also providing connectivity in a peer-to-peer fashion, thus relieving end users and application developers from the burden of directly managing wireless interface heterogeneity, and iii) consider user mobility as crucial context information, evaluating at provision time the suitability of the available Internet points of access differently depending on whether the mobile client is still or in motion. The novelty of this research work resides in three primary points. First of all, it proposes a novel model and taxonomy providing a common vocabulary to easily describe and position solutions in the area of context-aware autonomic management of preferred network opportunities. Secondly, it presents PoSIM, a context-aware middleware for the synergic exploitation and control of heterogeneous positioning systems that facilitates the development and portability of location-based services. PoSIM is translucent, i.e., it can provide application developers with differentiated visibility of the data characteristics and control possibilities of the available positioning solutions, thus dynamically adapting to application-specific deployment requirements and enabling cross-layer management decisions. Finally, it provides the MMHC solution for the self-organization of multi-hop, multi-path heterogeneous connectivity. MMHC considers a limited set of practical indicators on node mobility and wireless network characteristics for a coarse-grained estimation of the expected reliability/quality of the multi-hop paths available at runtime. In particular, MMHC manages the durability/throughput-aware formation and selection of different multi-hop paths simultaneously. Furthermore, MMHC provides a novel solution based on adaptive buffers, proactively managed based on handover prediction, to support continuous services, especially by pre-fetching multimedia contents to avoid streaming interruptions.
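The coarse-grained path evaluation performed by MMHC can be pictured as combining per-hop indicators into a path score; the specific indicators, thresholds and weighting below are illustrative placeholders, not MMHC's actual metric.

from dataclasses import dataclass

@dataclass
class Hop:
    rssi_dbm: float        # received signal strength of the wireless link
    node_speed_mps: float  # mobility of the node providing the hop
    nominal_mbps: float    # nominal throughput of the interface

def hop_reliability(hop: Hop) -> float:
    # Very rough per-hop reliability in [0, 1]: strong links offered by still
    # nodes score high, weak links from fast-moving nodes score low.
    # Purely illustrative thresholds.
    signal = min(max((hop.rssi_dbm + 90) / 40, 0.0), 1.0)   # -90..-50 dBm -> 0..1
    stability = 1.0 / (1.0 + hop.node_speed_mps)            # mobility penalty
    return signal * stability

def path_score(path):
    # Durability/throughput-aware score: reliability multiplies along the
    # path (a single bad hop breaks it), throughput is the bottleneck hop.
    reliability = 1.0
    for hop in path:
        reliability *= hop_reliability(hop)
    throughput = min(h.nominal_mbps for h in path)
    return reliability * throughput

paths = {
    "2-hop via still laptop": [Hop(-55, 0.0, 54), Hop(-60, 0.0, 54)],
    "1-hop via walking phone": [Hop(-70, 1.5, 11)],
}
best = max(paths, key=lambda k: path_score(paths[k]))
print({k: round(path_score(v), 2) for k, v in paths.items()}, "->", best)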

Relevance: 100.00%

Abstract:

One of the most interesting challenges of the next years will be the automation of Air Space Systems. This process will involve different aspects, such as Air Traffic Management, Aircraft and Airport Operations, and Guidance and Navigation Systems. The use of UAS (Uninhabited Aerial Systems) for civil missions will be one of the most important steps in this automation process. In civil air space, Air Traffic Controllers (ATC) manage the air traffic, ensuring that a minimum separation between the controlled aircraft is always maintained. For this purpose ATCs use several operative avoidance techniques, like holding patterns or rerouting. The use of UAS in these contexts will require the definition of strategies for a common management of piloted and unpiloted air traffic that allow the UAS to self-separate. As a first employment in civil air space we consider a UAS surveillance mission that consists in departing from a ground base, taking pictures over a set of mission targets, and coming back to the same ground base. During the whole mission a set of piloted aircraft fly in the same airspace, and thus the UAS has to self-separate using the ATC avoidance techniques mentioned above. We consider two objectives: the first consists in the minimization of the impact of the air traffic on the mission, the second consists in the minimization of the impact of the mission on the air traffic. A particular version of the well-known Travelling Salesman Problem (TSP), called the Time-Dependent TSP (TDTSP), has been studied to deal with traffic problems in big urban areas. Its basic idea is that the cost of the route between two clients depends on the period of the day in which it is crossed. Our thesis argues that this idea can be applied to air traffic too, using a convenient time horizon compatible with aircraft operations. The cost of a UAS sub-route will depend on the air traffic that it will meet when starting such a route at a specific moment, and consequently on the avoidance maneuver that it will use to avoid that conflict. Conflict avoidance is a topic that has been intensively studied in past years using different approaches. In this thesis we propose a new approach based on the use of ATC operative techniques, which makes it possible both to model the UAS problem using a TDTSP framework and to adopt an Air Traffic Management perspective. Starting from this kind of mission, the problem of UAS insertion in civil air space is formalized as the UAS Routing Problem (URP). For this purpose we introduce a new structure, called the Conflict Graph, that makes it possible to model the avoidance maneuvers and to define the arc cost as a function of the departing time. Two Integer Linear Programming formulations of the problem are proposed. The first is based on a TDTSP formulation that, unfortunately, is weaker than the TSP formulation. Thus a new formulation based on a TSP variation that uses specific penalties to model the holdings is proposed. Different algorithms are presented: exact algorithms, simple heuristics used as upper bounds on the number of time steps used, and metaheuristic algorithms such as a Genetic Algorithm and Simulated Annealing. Finally, an air traffic scenario has been simulated using real air traffic data in order to test our algorithms. Graphic tools have been used to represent the Milano Linate air space and its air traffic during different days. Such data have been provided by ENAV S.p.A. (the Italian Agency for Air Navigation Services).
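The time-dependent ingredient of the TDTSP can be illustrated with a few lines of code: each leg of the tour is priced according to the time slot in which it is started, so the total duration depends on the visiting order in a way a plain TSP cannot capture. The cost matrix, slot factors and brute-force search are toy assumptions for illustration, not the ILP formulations or metaheuristics developed in the thesis.

from itertools import permutations

# Toy time-dependent cost: travel time between targets depends on the time
# slot in which the leg is started (e.g., heavier traffic in slot 1).
# Node 0 is the ground base; values are illustrative.
base_cost = [
    [0, 10, 15, 20],
    [10, 0, 12, 18],
    [15, 12, 0, 9],
    [20, 18, 9, 0],
]
slot_factor = [1.0, 1.5, 1.2]     # congestion multiplier per time slot
slot_length = 20                  # duration of one time slot

def leg_cost(i, j, departure_time):
    slot = min(int(departure_time // slot_length), len(slot_factor) - 1)
    return base_cost[i][j] * slot_factor[slot]

def tour_cost(order):
    # Total duration of base -> targets in `order` -> base, with each leg
    # priced according to its departure time (the TD-TSP ingredient).
    t, current = 0.0, 0
    for nxt in list(order) + [0]:
        t += leg_cost(current, nxt, t)
        current = nxt
    return t

best = min(permutations([1, 2, 3]), key=tour_cost)
print(best, round(tour_cost(best), 1))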

Relevance: 100.00%

Abstract:

The thesis deals with the patch loading of I-girders with two longitudinal stiffeners. The configuration with two longitudinal stiffeners is often an excellent solution for girders higher than 3 meters, but it is not yet covered by EN 1993-1-5. A resistance model is proposed, harmonized with the methods used in the Eurocodes for the other buckling problems. The model contains three significant parts: the yield resistance, the elastic critical load used to determine the slenderness parameter, and a reduction factor that relates the resistance to the slenderness. The thesis is structured into eight chapters, in addition to the Preface and Table of Contents. Chapter 3 is a list of all symbols used. Chapter 4 presents a review of earlier works. Chapter 5 details the experimental investigations conducted by Gozzi (2007) on three samples without longitudinal stiffeners. Due to the difficulty of completing personal physical model testing during the doctorate, it was decided to carefully study the laboratory work by Gozzi and use it as a basis for the calibration of the numerical study. Chapter 6 presents the first part of the numerical study; at this stage, the laboratory tests conducted by Gozzi were reproduced through a finite element model, and a good agreement of the numerical results with the test data is observed. Chapter 7 summarizes the results of the numerical analysis of the girder with two longitudinal stiffeners. Chapter 8 presents the procedure proposed for calculating the ultimate patch loading resistance of the girder with two longitudinal stiffeners. Chapter 9 contains a summary of the work done in this thesis with suggestions for the most important issues for future development. Chapter 10 lists the references. There are also three appendices with test data by Gozzi and data obtained from the literature.
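The three-part structure just described follows the general buckling verification format used in EN 1993-1-5; a generic sketch of that format, using standard symbols which may differ from the notation adopted in the thesis, is:

\bar{\lambda}_F = \sqrt{\frac{F_y}{F_{cr}}}, \qquad \chi_F = f(\bar{\lambda}_F) \le 1, \qquad F_R = \chi_F \, F_y

where F_y is the yield resistance, F_cr the elastic critical load, \bar{\lambda}_F the slenderness parameter, and \chi_F the reduction factor that relates the resistance F_R to the slenderness.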

Relevance: 100.00%

Abstract:

The work presented here concerns the three-dimensional reconstruction of the city of Bologna in its Renaissance phase. It aims to provide a 3D model of the architecture and urban spaces that can be used both for research in the field of urban history and for educational and outreach purposes in the cultural tourism sector. The work is based on an iconographic source of great importance: the fresco depicting Bologna, dating back to 1575 and located in the Vatican. It is a large bird's-eye view of the entire urban fabric of Bologna within the third circle of walls. It represents in detail the civil and ecclesiastical architecture, the garden and courtyard spaces inside the blocks, and some important urban structures present in the city at the end of the sixteenth century, such as the port area and the canals running through the city, no longer visible today. The three-dimensional reconstruction was carried out with Blender, an open-source 3D modelling software, through the phases of modelling, texturing and material creation (by sampling the main colours present in the fresco), lighting and animation. Part of the model was then tested within a GIS to verify the use of the 3D geometries as elements that can be linked to other historical sources on urban development, and therefore exploited for historical research. Finally, great attention was paid to the use of the virtual models for educational and outreach purposes and for cultural tourism. The model was used within a 3D graphics engine to build an interactive virtual environment in which even a non-expert user can move around and explore the urban spaces of sixteenth-century Bologna. Lastly, the development of an application for mobile devices (iPhone and iPad) was set up, in order to provide a tool for learning about the historical city while on the move, through the comparison of the current state with the virtually reconstructed one.

Relevance: 100.00%

Abstract:

The aim of this thesis, included within the THESEUS project, is the development of a two-phase 2DV mathematical model, based on the existing IH-2VOF code developed by the University of Cantabria, able to represent together the overtopping phenomenon and sediment transport. Several numerical simulations were carried out in order to analyze the flow characteristics on a dike crest. The results show that the seaward/landward slope does not affect the evolution of the flow depth and velocity over the dike crest, whereas the most important parameter is the relative submergence. Wave heights decrease and flow velocities increase while waves travel over the crest. In particular, by increasing the submergence, the wave height decay and the increase of the velocity are less marked. Besides, an appropriate curve able to fit the variation of the wave height/velocity over the dike crest was found. Both for the wave height and for the wave velocity, different fitting coefficients were determined on the basis of the submergence and of the significant wave height. An equation describing the trend of the dimensionless coefficient c_h for the wave height was derived. These conclusions could be taken into consideration for the design criteria and the upgrade of the structures. In the second part of the thesis, new equations for the representation of sediment transport were introduced in the IH-2VOF model in order to represent beach erosion while waves run up and overtop the sea banks during storms. The new model makes it possible to calculate the sediment fluxes in the water column together with the sediment concentration. Moreover, it is possible to model the bed profile evolution. Different tests were performed under low-intensity regular waves with a homogeneous layer of sand on the bottom of a channel, in order to analyze the erosion-deposition patterns and verify the model results.
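As an illustration of the kind of fitting mentioned above, one can regress the wave-height decay along the crest with a simple exponential law; the functional form, the synthetic data and the coefficient values below are assumptions for illustration, not the curves actually derived in the thesis.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic example: wave height at positions x along the dike crest
# (x normalised by the crest width). Data are made up for illustration only.
x = np.linspace(0.0, 1.0, 11)
H = 0.30 * np.exp(-0.8 * x) + np.random.default_rng(1).normal(0, 0.005, x.size)

def decay(x, H0, c_h):
    # Exponential decay of wave height over the crest; c_h plays the role of
    # a dimensionless decay coefficient (analogous in spirit to c_h above).
    return H0 * np.exp(-c_h * x)

(H0_fit, c_h_fit), _ = curve_fit(decay, x, H, p0=(0.3, 1.0))
print(f"H0 = {H0_fit:.3f} m, c_h = {c_h_fit:.2f}")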

Relevance: 100.00%

Abstract:

This thesis reports five studies that may contribute to understanding how weaning affects the immune and intestinal microbiota maturation of the piglet, and proposes some possible nutritional strategies to attenuate its negative effects. The first study showed that weaning is associated, in Peyer's patches, with the activation of the MHC class I antigen response and of responses related to IFN-γ stimulation, and showed, for the first time, that piglet blood at weaning remains dominated by immature blood cells. In the second study we tested whether the use of a live vaccine against a conditionally but also genetically based intestinal disease, such as post-weaning diarrhoea (PWD), could have an impact on the growth performance of pigs and on their intestinal microbiota, and whether it could provide a model to test the response to nutritional strategies under conditions of immune and intestinal stimulation for animals susceptible to ETEC. In this study, we demonstrated how a vaccinal strain of F4/F18 E. coli can affect the gut microbial composition of piglets, regardless of their genetic susceptibility to ETEC infection. In the third study we showed how a nucleotide supplementation can favor the proliferation of jejunal Peyer's patches and anticipate the maturation of the fecal microbiota. In the fourth study we reported how xylanase can favor the proliferation of Lactobacillus reuteri. Finally, we showed some first results on muscle fiber development in fast- and slow-growing suckling pigs and its relationship with the intestinal microbiota. Taken together, the results presented in this thesis provide new insight into the interplay between host genetics, gut microbial composition, and host physiological status. Furthermore, they provide confirmation that the use of known genetic markers for ETEC F4 and F18 could represent a potential tool to stratify the animals in trials, both in healthy and in challenge-based protocols.

Relevance: 100.00%

Abstract:

Service-learning in higher education is gaining attention as a reliable tool to support students' learning and fulfil the mission of higher education institutions (HEIs). This dissertation addresses existing gaps in the literature by examining the effects and perspectives of service-learning in HEIs through three studies. The first study compares the effects of a voluntary semester-long service-learning course with those of traditional courses. A survey completed by 110 students before and after the lectures found no significant group differences in the psychosocial variables under inspection. Nevertheless, service-learning students showed higher scores concerning the quality of participation. Factors such as students' perception of competence, duration of service-learning, and self-reported measures may have influenced the results. The second study explores the under-researched perspective of community partners in higher education and European settings. Twelve semi-structured interviews were conducted with community partners from various community organisations across Europe. The results highlight positive effects on community members and organisations, intrinsic motivations, organisational empowerment, different forms of reciprocity, the co-educational role of community partners, and the significant role of a sense of community and belonging. The third study focuses on faculty perspectives on service-learning in the European context. Twenty-two semi-structured interviews were conducted in 14 European countries. The findings confirm the transformative impact of service-learning on the community, students, teachers, and HEIs, emphasising the importance of motivation and institutionalisation processes in sustaining engaged scholarship. The study also identifies the relevance of the community experience, sense of community, and community responsibility to the service-learning experience; relatedness is proposed as the fifth pillar of service-learning. Overall, this dissertation provides new insights into the effects and perspectives of service-learning in higher education. It integrates the 4Rs model with the addition of relatedness, guiding the theoretical and practical implications of the findings. The dissertation also discusses limitations and areas for further research.

Relevance: 50.00%

Abstract:

Cancer is a challenging disease that involves multiple types of biological interactions across different time and space scales. Computational modelling often faces problems that, at the current technology level, are impracticable to represent in a single space-time continuum. To handle this sort of problem, complex orchestrations of multiscale models are frequently used. PRIMAGE is a large EU project that aims to support personalized childhood cancer diagnosis and prognosis. The goal is to do so by predicting the growth of the solid tumour using multiscale in-silico technologies. The project proposes an open cloud-based platform to support decision making in the clinical management of paediatric cancers. The orchestration of predictive models is in general complex and requires a software framework that supports and facilitates such a task. The present work proposes the development of an updated framework, referred to herein as VPH-HFv3, as part of the PRIMAGE project. This framework, a complete re-writing with respect to the previous versions, aims to orchestrate several models, which are in concurrent development, using an architecture as simple as possible, easy to maintain and with high reusability. This sort of problem generally requires unfeasible execution times: to overcome this, a strategy of particularisation, which maps the upper-scale model results onto a smaller number of instances, and homogenisation, which performs the inverse mapping, was developed, and the accuracy of this approach was analysed.
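The particularisation/homogenisation idea can be pictured as a pair of mappings around an expensive lower-scale model; the function names, the binning rule and the toy "cell-scale model" below are invented for illustration and do not reflect the VPH-HFv3 API or the PRIMAGE models.

import numpy as np

def particularise(coarse_field, n_representatives=4):
    # Map the upper-scale result (here, a field of tumour-region states) onto
    # a small number of representative lower-scale instances by binning the
    # values. Illustrative only.
    bins = np.linspace(coarse_field.min(), coarse_field.max(), n_representatives + 1)
    labels = np.clip(np.digitize(coarse_field, bins) - 1, 0, n_representatives - 1)
    representatives = np.array([coarse_field[labels == k].mean()
                                for k in range(n_representatives)])
    return labels, representatives

def cell_scale_model(state):
    # Stand-in for the expensive lower-scale model, run once per representative.
    return 1.1 * state + 0.05

def homogenise(labels, fine_results):
    # Inverse mapping: scatter the representative results back onto the grid.
    return fine_results[labels]

coarse = np.random.default_rng(0).uniform(0.0, 1.0, size=(10, 10))
labels, reps = particularise(coarse)
updated = homogenise(labels, cell_scale_model(reps))
# Accuracy of the reduction: compare with running the fine model everywhere.
error = np.abs(updated - cell_scale_model(coarse)).mean()
print(f"mean particularisation error: {error:.4f}")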

Relevance: 40.00%

Abstract:

Ion channels are protein molecules, embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements switching chemical-physical stimuli into ion fluxes. At a glance, ion channels are water-filled pores, which can open and close in response to different stimuli (gating) and, once open, select the permeating ion species (selectivity). They play a crucial role in several physiological functions, like nerve transmission, muscular contraction, and secretion. Besides, ion channels can be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally solved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction, and on the numerical methodologies to compute the channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels, because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on the electrodiffusion theory, is computationally inexpensive, and was used for an extensive analysis of the molecular determinants of channel conductance. The first record of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single-channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels and the experimental techniques used to measure channel currents. The abundance of functional data (channel currents) does not match an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally solved (1998), and after KcsA the structures of four different potassium channels were revealed. These experimental data inspired a new era in ion channel modeling. Once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. These physically based models can provide an atomic description of ion channel functioning, and predict the effect of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level. In Chapter 3 and Chapter 4 the ion conduction through potassium channels is analyzed, by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory ion conduction is modeled by the drift-diffusion equations, thus describing the ion distributions by continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3), and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects the conductance (Chapter 4). As a major result, a correlation between the channel conductance and the potassium concentration in the intracellular vestibule emerged.
The atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a sub-family of potassium channels with high single-channel conductance. The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinity of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs was compared. Both experimental measurements and molecular modeling were used in order to identify differences in the blocking mechanism of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different role of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule. Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5), and can also be useful to predict the structure of ion channels (Chapters 6-7). In Chapter 6 the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured. All the available crystallographic structures of KcsA refer to a channel occupied by potassium ions. To conduct water molecules, potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation are still unknown, and have been investigated by numerical simulations. Molecular dynamics simulations of KcsA identified a possible atomic structure of the potassium-depleted KcsA channel, and a mechanism for water permeation. The depletion of potassium ions is an extreme situation for potassium channels, unlikely in physiological conditions. However, the simulation of such an extreme condition could help to identify the structural conformations, and so the functional states, accessible to potassium ion channels. The last chapter of the thesis deals with the atomic structure of the α-Hemolysin channel. α-Hemolysin is the major determinant of Staphylococcus aureus toxicity, and is also the prototype channel for possible usage in technological applications. The atomic structure of α-Hemolysin was revealed by X-ray crystallography, but several experimental evidences suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single-channel currents and numerical simulations. This thesis is organized in two parts: the first part provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second part describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.
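For reference, the steady-state drift-diffusion (Nernst-Planck) and Poisson equations underlying the electrodiffusion approach of Chapters 3 and 4 can be written, in a generic one-dimensional form along the pore axis (an illustrative simplification, not necessarily the exact form solved in the thesis), as:

J_i = -D_i \left( \frac{\mathrm{d}c_i}{\mathrm{d}x} + \frac{z_i F}{RT}\, c_i \frac{\mathrm{d}\phi}{\mathrm{d}x} \right), \qquad \frac{\mathrm{d}J_i}{\mathrm{d}x} = 0, \qquad \frac{\mathrm{d}}{\mathrm{d}x}\!\left( \varepsilon \frac{\mathrm{d}\phi}{\mathrm{d}x} \right) = -F \sum_i z_i c_i

where c_i, D_i and z_i are the concentration, diffusion coefficient and valence of ion species i, \phi is the electric potential, \varepsilon the permittivity, F the Faraday constant, and the single-channel current follows as I = F A \sum_i z_i J_i over the pore cross-section A.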

Relevance: 40.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still deserve an interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure of the molecular functions and of the structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
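The contact-map and small-world statistics discussed earlier in this abstract (characteristic path length and clustering coefficient of the protein contact network) can be computed along the following lines; the 8 Å Cα-Cα cutoff, the sequence-separation threshold and the toy coordinates are common illustrative choices, not necessarily the settings used in the thesis.

import numpy as np
import networkx as nx

def contact_map(ca_coords, cutoff=8.0, min_sep=3):
    # Binary contact map from C-alpha coordinates: residues i, j are in
    # contact if their distance is below `cutoff` (angstroms) and they are
    # at least `min_sep` positions apart in sequence. Cutoff is an assumption.
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    n = len(ca_coords)
    seps = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return (d < cutoff) & (seps >= min_sep)

# Toy chain: a compact random walk standing in for real C-alpha coordinates.
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=2.5, size=(120, 3)), axis=0)

cmap = contact_map(coords)
G = nx.from_numpy_array(cmap.astype(int))
if nx.is_connected(G):
    L = nx.average_shortest_path_length(G)   # characteristic path length
else:
    L = nx.average_shortest_path_length(
        G.subgraph(max(nx.connected_components(G), key=len)))
C = nx.average_clustering(G)                  # clustering coefficient
print(f"contacts: {cmap.sum() // 2}, path length L = {L:.2f}, clustering C = {C:.2f}")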