19 results for multi-factor models
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the peculiar emergent structures of the individual phenotype. Being able to reproduce the system dynamics at different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie’s direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of the application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data, with spatial and temporal resolution, acquired from free online sources.
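Since the simulation engine builds on Gillespie's direct method, a minimal sketch of that algorithm may help fix ideas. This is generic stochastic simulation (SSA) code; the two-reaction system and its rate constants are illustrative assumptions, not taken from the thesis:

```python
import math
import random

def gillespie_direct(x, reactions, t_end):
    """Minimal Gillespie direct method (SSA).

    x         -- dict of species counts, e.g. {"A": 100, "B": 0}
    reactions -- list of (propensity_fn, state_change) pairs
    t_end     -- simulation horizon
    """
    t, trajectory = 0.0, [(0.0, dict(x))]
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0:                                   # no reaction can fire
            break
        t += -math.log(1.0 - random.random()) / a0    # exponential waiting time
        r = random.random() * a0                      # pick a reaction proportionally
        for (a, change), p in zip(reactions, props):
            r -= p
            if r <= 0:
                for species, delta in change.items():
                    x[species] += delta
                break
        trajectory.append((t, dict(x)))
    return trajectory

# Illustrative system: A -> B at rate k1, B -> A at rate k2 (assumed values).
k1, k2 = 0.5, 0.2
reactions = [
    (lambda x: k1 * x["A"], {"A": -1, "B": +1}),
    (lambda x: k2 * x["B"], {"A": +1, "B": -1}),
]
print(gillespie_direct({"A": 100, "B": 0}, reactions, 10.0)[-1])
```

The many-species/many-channels optimisation mentioned in the abstract replaces the linear scan over reactions with indexed data structures, but the logic above is the underlying method.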
Abstract:
Nowadays, cities deal with unprecedented pollution and overpopulation problems, and Internet of Things (IoT) technologies are supporting them in facing these issues and becoming increasingly smart. IoT sensors embedded in public infrastructure can provide granular data on the urban environment and help public authorities make their cities more sustainable and efficient. Nonetheless, this pervasive data collection also raises serious surveillance risks, jeopardizing privacy and data protection rights. Against this backdrop, this thesis addresses how IoT surveillance technologies can be implemented in a legally compliant and ethically acceptable fashion in smart cities. An interdisciplinary approach is embraced to investigate this question, combining doctrinal legal research (on privacy, data protection, and criminal procedure) with insights from philosophy, governance, and urban studies. The fundamental normative argument of this work is that surveillance constitutes a necessary feature of modern information societies. Nonetheless, as the complexity of surveillance phenomena increases, there emerges a need to develop more finely attuned proportionality assessments to ensure a legitimate implementation of monitoring technologies. This research tackles this gap from different perspectives, analyzing EU data protection legislation and United States and European case law on privacy expectations and surveillance. Specifically, a coherent multi-factor test assessing privacy expectations in public IoT environments and a surveillance taxonomy are proposed to inform proportionality assessments of surveillance initiatives in smart cities. These insights are also applied to four use cases: facial recognition technologies, drones, environmental policing, and smart nudging. Lastly, the investigation examines competing data governance models in the digital domain and the smart city, reviewing the EU's upcoming data governance framework. It is argued that, despite the stated policy goals, the balance of interests may often favor corporate strategies in data sharing, to the detriment of common-good uses of data in the urban context.
Abstract:
In silico methods, such as musculoskeletal modelling, may aid the selection of the optimal surgical treatment for highly complex pathologies such as scoliosis. Many musculoskeletal models use a generic, simplified representation of the intervertebral joints, which are fundamental to the flexibility of the spine. Therefore, to model and simulate the spine, a suitable representation of the intervertebral joint is crucial. The aim of this PhD was to characterise specimen-specific models of the intervertebral joint for multi-body models from experimental datasets. First, the project investigated the characterisation of a specimen-specific lumped-parameter model of the intervertebral joint from an experimental dataset of a four-vertebra lumbar spine segment. Specimen-specific stiffnesses were determined with an optimisation method, and the sensitivity of the parameters to the joint pose was investigated. Results showed that the stiffnesses and predicted motions were highly dependent on the joint pose. Following the first study, the method was reapplied to another dataset that included six complete lumbar spine segments under three different loading conditions. Specimen-specific stiffnesses, both uniform across joint levels and level-dependent, were calculated by optimisation. Specimen-specific stiffnesses showed high inter-specimen variability and were also specific to the loading condition. Level-dependent stiffnesses are necessary for accurate kinematic predictions and should be determined independently of one another. Secondly, a framework to create subject-specific musculoskeletal models of individuals with severe scoliosis was developed, resulting in a robust codified pipeline for creating subject-specific, severely scoliotic spine models from CT data. In conclusion, this thesis showed that specimen-specific intervertebral joint stiffnesses are highly sensitive to the joint pose definition, and demonstrated the importance of level-dependent optimisation. Further, an open-source codified pipeline to create patient-specific scoliotic spine models from CT data was released. These studies and this pipeline can facilitate the specimen-specific characterisation of the scoliotic intervertebral joint and its incorporation into scoliotic musculoskeletal spine models.
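At its core, the stiffness characterisation described above is an optimisation over joint parameters: minimise the distance between predicted and measured kinematics. A minimal sketch of that idea follows; the linear-bushing stand-in for the multi-body simulator, the log-parametrisation, and all numbers are hypothetical illustrations, not the thesis's model or data:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the multi-body simulator: maps one rotational
# stiffness per intervertebral level to predicted rotations under a load.
# A real pipeline would call the musculoskeletal model here instead.
def simulate_rotations(stiffness, moments):
    return moments / stiffness            # linear bushing: angle = M / k

def objective(log_k, moments, measured):
    k = np.exp(log_k)                     # log-parametrisation keeps k > 0
    predicted = simulate_rotations(k, moments)
    return np.sum((predicted - measured) ** 2)

# Illustrative data: applied moments (N m) and measured rotations (rad)
# at three lumbar levels -- assumed values, not experimental results.
moments = np.array([5.0, 5.0, 5.0])
measured = np.array([0.050, 0.034, 0.041])

result = minimize(objective, x0=np.zeros(3), args=(moments, measured))
print("level-dependent stiffnesses (N m/rad):", np.exp(result.x))
```

Fitting one stiffness per level, as here, corresponds to the level-dependent case the abstract argues for; forcing a single shared value would give the uniform case.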
Abstract:
The main purpose of this thesis is to go beyond two usual assumptions that accompany theoretical analysis in spin-glasses and inference: the i.i.d. (independently and identically distributed) hypothesis on the noise elements and the finite-rank regime. The first has been present since the early days of spin-glasses; the second instead concerns the inference viewpoint. Disordered systems and Bayesian inference have a well-established relation, evidenced by their continuous cross-fertilization. The thesis makes use of techniques coming both from the rigorous mathematical machinery of spin-glasses, such as the interpolation scheme, and from Statistical Physics, such as the replica method. The first chapter contains an introduction to the Sherrington-Kirkpatrick and spiked Wigner models. The first is a mean-field spin-glass where the couplings are i.i.d. Gaussian random variables. The second instead amounts to establishing the information-theoretic limits in the reconstruction of a fixed low-rank matrix, the “spike”, blurred by additive Gaussian noise. In chapters 2 and 3, the i.i.d. hypothesis on the noise is broken by assuming a noise with an inhomogeneous variance profile. In spin-glasses this leads to multi-species models; the inferential counterpart is called spatial coupling. All the previous models are usually studied in the Bayes-optimal setting, where everything is known about the generating process of the data. In chapter 4, instead, we study the spiked Wigner model where the prior on the signal to reconstruct is ignored. In chapter 5 we analyze the statistical limits of a spiked Wigner model where the noise is no longer Gaussian, but drawn from a random matrix ensemble, which makes its elements dependent. The thesis ends with chapter 6, where the challenging problem of high-rank probabilistic matrix factorization is tackled. Here we introduce a new procedure called "decimation" and show that it is theoretically possible to perform matrix factorization through it.
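For readers unfamiliar with the second model, the rank-one spiked Wigner observation is usually written as follows (a common convention, not necessarily the thesis's exact notation):

```latex
Y \;=\; \sqrt{\frac{\lambda}{N}}\, \mathbf{x}\mathbf{x}^{\intercal} + W,
\qquad \mathbf{x}\in\mathbb{R}^{N},\ \lambda \ge 0,
```

where \(W\) is an \(N\times N\) symmetric Wigner matrix with i.i.d. Gaussian entries and \(\lambda\) sets the signal-to-noise ratio; the inferential task is to recover the spike \(\mathbf{x}\mathbf{x}^{\intercal}\) from \(Y\). The thesis's generalisations then replace the i.i.d. entries of \(W\) with a variance profile or with a correlated random matrix ensemble, and let the rank grow beyond the finite regime.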
Abstract:
The motivation for the work presented in this thesis is to retrieve profile information for the atmospheric trace constituents nitrogen dioxide (NO2) and ozone (O3) in the lower troposphere from remote sensing measurements. The remote sensing technique used, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS), is a recent technique that represents a significant advance on the well-established DOAS, especially as regards the study of tropospheric trace constituents. NO2 is an important trace gas in the lower troposphere because it is involved in the production of tropospheric ozone; ozone and nitrogen dioxide are key factors in determining air quality, with consequences, for example, for human health and the growth of vegetation. To understand NO2 and ozone chemistry in more detail, not only the ground-level concentrations but also the vertical distribution must be acquired. In fact, the budget of nitrogen oxides and ozone in the atmosphere is determined both by local emissions and by non-local chemical and dynamical processes (i.e. diffusion and transport at various scales) that greatly impact their vertical and temporal distribution: a tool to resolve the vertical profile information is therefore very important. Useful measurement techniques for atmospheric trace species should fulfil at least two main requirements. First, they must be sufficiently sensitive to detect the species under consideration at their ambient concentration levels. Second, they must be specific, which means that the results of the measurement of a particular species must be neither positively nor negatively influenced by any other trace species simultaneously present in the probed volume of air. Air monitoring by spectroscopic techniques has proven to be a very useful tool for fulfilling these requirements, while also offering a number of other important properties. During the last decades, many such instruments have been developed based on the absorption properties of the constituents in various regions of the electromagnetic spectrum, ranging from the far infrared to the ultraviolet. Among them, Differential Optical Absorption Spectroscopy (DOAS) has played an important role. DOAS is an established remote sensing technique for probing atmospheric trace gases, which identifies and quantifies the trace gases in the atmosphere by taking advantage of their molecular absorption structures in the near-UV and visible wavelengths of the electromagnetic spectrum (from 0.25 μm to 0.75 μm). Passive DOAS, in particular, can detect the presence of a trace gas in terms of its concentration integrated over the atmospheric path from the sun to the receiver (the so-called slant column density). The receiver can be located at ground level, as well as on board an aircraft or a satellite platform. Passive DOAS therefore has a flexible measurement configuration that allows multiple applications. The ability to properly interpret passive DOAS measurements of atmospheric constituents depends crucially on how well the optical path of the light collected by the system is understood. This is because the final product of DOAS is the concentration of a particular species integrated along the path that radiation covers in the atmosphere. This path is not known a priori and can only be evaluated by Radiative Transfer Models (RTMs).
These models are used to calculate the so-called vertical column density of a given trace gas, obtained by dividing the measured slant column density by the so-called air mass factor, which quantifies the enhancement of the light path length within the absorber layers. In the case of the standard DOAS set-up, in which radiation is collected along the vertical direction (zenith-sky DOAS), calculations of the air mass factor have been made using “simple” single-scattering radiative transfer models. This configuration has its highest sensitivity in the stratosphere, in particular during twilight. This is the result of the large enhancement in the stratospheric light path at dawn and dusk combined with a relatively short tropospheric path. In order to increase the sensitivity of the instrument towards tropospheric signals, measurements with the telescope pointing towards the horizon (off-axis DOAS) have to be performed. In these circumstances, the light path in the lower layers can become very long, necessitating radiative transfer models that include multiple scattering and a full treatment of atmospheric sphericity and refraction. In this thesis, a recent development of the well-established DOAS technique is described, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS). MAX-DOAS consists of the simultaneous use of several off-axis directions near the horizon: with this configuration, not only is the sensitivity to tropospheric trace gases greatly improved, but vertical profile information can also be retrieved by combining the simultaneous off-axis measurements with sophisticated RTM calculations and inversion techniques. In particular, there is a need for an RTM capable of dealing with all the processes intervening along the light path, supporting all the DOAS geometries used, and treating multiple scattering events with the varying phase functions involved. To achieve these multiple goals, a statistical approach based on the Monte Carlo technique should be used. A Monte Carlo RTM generates an ensemble of random photon paths between the light source and the detector, and uses these paths to reconstruct a remote sensing measurement. Within the present study, the Monte Carlo radiative transfer model PROMSAR (PROcessing of Multi-Scattered Atmospheric Radiation) has been developed and used to correctly interpret the slant column densities obtained from MAX-DOAS measurements. In order to derive the vertical concentration profile of a trace gas from its slant column measurement, the AMF is only one part of the quantitative retrieval process. One indispensable requirement is a robust approach to invert the measurements and obtain the unknown concentrations, the air mass factors being known. For this purpose, in the present thesis, we have used the Chahine relaxation method. Ground-based Multiple AXis DOAS, combined with appropriate radiative transfer models and inversion techniques, is a promising tool for atmospheric studies in the lower troposphere and boundary layer, including the retrieval of profile information with a good degree of vertical resolution. This thesis has presented an application of this powerful and comprehensive tool to the study of a preserved natural Mediterranean area (the Castel Porziano Estate, located 20 km south-west of Rome) where pollution is transported from remote sources.
Application of this tool to densely populated or industrial areas appears particularly promising and represents an important subject for future studies.
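In compact form, the retrieval relations described above read as follows; this is standard DOAS notation and a generic statement of the Chahine iteration, not the thesis's exact formulation:

```latex
\mathrm{VCD} \;=\; \frac{\mathrm{SCD}}{\mathrm{AMF}},
\qquad
x_i^{(n+1)} \;=\; x_i^{(n)}\,\frac{y_i^{\mathrm{meas}}}{y_i^{(n)}},
```

where the vertical column density follows from the measured slant column density once the RTM provides the air mass factor, and the Chahine relaxation iteratively rescales each profile element \(x_i\) by the ratio of the measured quantity to the one computed from the current profile estimate, until the two agree.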
Abstract:
Natural hazards related to volcanic activity represent a potential risk factor, particularly in the vicinity of human settlements. Besides the risk related to explosive and effusive activity, the instability of volcanic edifices may develop into large, often catastrophically destructive landslides, as shown by the collapse of the northern flank of Mount St. Helens in 1980. A combined approach was applied to analyse slope failures that occurred at Stromboli volcano. The stability of the Sciara del Fuoco (SdF) slope was evaluated by using high-resolution multi-temporal DTMs and performing limit equilibrium stability analyses. High-resolution topographic data collected with remote sensing techniques and three-dimensional slope stability analysis play a key role in understanding the instability mechanism and the related risks. Analyses carried out on the 2002–2003 and 2007 Stromboli eruptions, starting from high-resolution data acquired through airborne remote sensing surveys, permitted the estimation of the lava volumes emplaced on the SdF slope and contributed to the investigation of the link between magma emission and slope instabilities. Limit equilibrium analyses were performed on the 2001 and 2007 3D models in order to simulate the slope behaviour before the 2002–2003 landslide event and after the 2007 eruption. Stability analyses were conducted to understand the mechanisms that controlled the slope deformations which occurred shortly after the 2007 eruption onset, involving the upper part of the slope. Limit equilibrium analyses applied to both cases yielded results congruent with observations and monitoring data. The results presented in this work clearly indicate that hazard assessment for the island of Stromboli should take into account the fact that a new magma intrusion could lead to further destabilisation of the slope, which may be more significant than the one recently observed because it would affect an already disturbed deposit and a fractured, loosened crater area. The two-pronged approach based on the analysis of 3D multi-temporal mapping datasets and on the application of LE methods contributed to a better understanding of volcano flank behaviour and to preparedness for actions aimed at risk mitigation.
Abstract:
Unlike traditional wireless networks, characterized by the presence of last-mile, static and reliable infrastructures, Mobile Ad Hoc Networks (MANETs) are dynamically formed by collections of mobile and static terminals that exchange data by enabling each other's communication. Supporting multi-hop communication in a MANET is a challenging research area because it requires cooperation between different protocol layers (MAC, routing, transport). In particular, MAC and routing protocols could be considered mutually cooperative protocol layers. When a route is established, the exposed- and hidden-terminal problems at the MAC layer may decrease the end-to-end performance in proportion to the length of each route. Conversely, contention at the MAC layer may cause a routing protocol to respond by initiating new route queries and routing table updates. Multi-hop communication may also benefit from the presence of pseudo-centralized virtual infrastructures obtained by grouping nodes into clusters. Clustering structures may facilitate the spatial reuse of resources, increasing the system capacity; at the same time, the clustering hierarchy may be used to coordinate transmission events inside the network and to support intra-cluster routing schemes. Again, MAC and clustering protocols could be considered mutually cooperative protocol layers: the clustering scheme could support MAC-layer coordination among nodes, shifting the distributed MAC paradigm towards a pseudo-centralized MAC paradigm. On the other hand, the system benefits of the clustering scheme could be emphasized by the pseudo-centralized MAC layer with support for differentiated access priorities and controlled contention. In this thesis, we propose cross-layer solutions involving the joint design of MAC, clustering and routing protocols in MANETs. As the main contribution, we study and analyze the integration of MAC and clustering schemes to support multi-hop communication in large-scale ad hoc networks. A novel clustering protocol, named Availability Clustering (AC), is defined under general node-heterogeneity assumptions in terms of connectivity, available energy and relative mobility. On this basis, we design and analyze a distributed and adaptive MAC protocol, named Differentiated Distributed Coordination Function (DDCF), whose focus is to implement adaptive access differentiation based on the node roles assigned by the upper-layer clustering scheme. We extensively simulate the proposed clustering scheme, showing its effectiveness in dominating the network dynamics under stressing mobility models and different mobility rates. Based on these results, we propose a possible application of the cross-layer MAC+clustering scheme to support the fast propagation of alert messages in a vehicular environment. At the same time, we investigate the integration of MAC and routing protocols in large-scale multi-hop ad hoc networks. A multipath routing scheme is proposed, extending the AOMDV protocol with a novel load-balancing approach that concurrently distributes the traffic among the multiple paths. We also study the composition effect of an IEEE 802.11-based enhanced MAC forwarding mechanism called Fast Forward (FF), used to reduce the effects of self-contention among frames at the MAC layer. The protocol framework is modelled and extensively simulated for a large set of metrics and scenarios.
For both schemes, the simulation results reveal the benefits of the cross-layer MAC+routing and MAC+clustering approaches over single-layer solutions.
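The AC protocol's actual election rules are the thesis's own contribution; purely as an illustration of availability-style cluster-head election under the three heterogeneity dimensions named above (connectivity, energy, mobility), here is a minimal sketch with hypothetical weights:

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    degree: int        # current number of one-hop neighbours (connectivity)
    energy: float      # residual energy, normalised to [0, 1]
    mobility: float    # relative mobility estimate, normalised to [0, 1]

def availability_score(n: Node, w_deg=0.4, w_en=0.4, w_mob=0.2):
    """Hypothetical weighted score: prefer well-connected, energy-rich,
    slow-moving nodes as cluster heads. Weights are illustrative only,
    not the AC protocol's actual parameters."""
    return w_deg * n.degree + w_en * n.energy - w_mob * n.mobility

def elect_cluster_head(neighbourhood):
    # Each node can compute this locally from periodic hello messages.
    return max(neighbourhood, key=availability_score)

nodes = [Node(1, 5, 0.9, 0.1), Node(2, 8, 0.4, 0.7), Node(3, 6, 0.8, 0.2)]
print("cluster head:", elect_cluster_head(nodes).node_id)
```

A pseudo-centralized MAC such as DDCF can then grant the elected head a differentiated (higher-priority) access role, which is the cross-layer coupling the abstract describes.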
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing, and managing typically distributed software systems at runtime. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies has become a central point of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and remain largely “academic” approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models thus becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some function) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the system representation complexity. These principles lead to the adoption of a multi-layered description, which could be used by designers to provide different levels of abstraction over multi-agent systems.
Research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
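Purely as an illustration of the two environment ingredients and of layering (the class names below are hypothetical and are not SODA's notation), a minimal object sketch:

```python
class EnvironmentAbstraction:
    """Entity of the environment encapsulating some function (e.g. a shared map)."""
    def __init__(self, name):
        self.name = name

class TopologyAbstraction:
    """Entity representing the (logical or physical) spatial structure;
    nesting one topology inside another gives a multi-layered description."""
    def __init__(self, name, contains=()):
        self.name = name
        self.contains = list(contains)

class Agent:
    def __init__(self, name, uses=()):
        self.name = name
        self.uses = list(uses)   # environment abstractions the agent relies on

# Two layers of description: the 'building' topology abstracts over its rooms.
sensor_map = EnvironmentAbstraction("sensor-map")
room = TopologyAbstraction("room-1", contains=[sensor_map])
building = TopologyAbstraction("building", contains=[room])
guide = Agent("guide-agent", uses=[sensor_map])

print(building.name, "->", [t.name for t in building.contains])
print(guide.name, "uses", [e.name for e in guide.uses])
```

The point of the sketch is structural: agents and environment entities are distinct first-class design abstractions, and the topology hierarchy is what designers would traverse to move between levels of abstraction.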
Abstract:
The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is widely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
Abstract:
It is well known that the deposition of gaseous pollutants and aerosols plays a major role in causing the deterioration of monuments and built cultural heritage in European cities. Despite the many studies dedicated to environmental damage to cultural heritage, in the case of cement mortars, commonly used in 20th-century architecture, the deterioration due to the impact of air multi-pollutants, especially the formation of black crusts, is still not well explored, making this issue a challenging area of research. This work centers on cement mortar–environment interactions, focusing on the diagnosis of the damage to the modern built heritage due to air multi-pollutants. For this purpose, three sites, exposed to different urban environments in Europe, were selected for sampling and subsequent laboratory analyses: Centennial Hall, Wroclaw (Poland); Chiesa dell'Autostrada del Sole, Florence (Italy); and Casa Galleria Vichi, Florence (Italy). The sampling sessions were performed taking into account the height from ground level and the protection from rain run-off (sheltered, partly sheltered and exposed areas). The complete characterization of the collected damage layers and underlying materials was performed using a range of analytical techniques: optical and scanning electron microscopy, X-ray diffractometry, differential and gravimetric thermal analysis, ion chromatography, flash combustion/gas chromatographic analysis, and inductively coupled plasma-optical emission spectrometry. The data were elaborated using statistical methods (i.e. principal component analysis), and the enrichment factor for cement mortars was calculated for the first time. The results obtained from the experimental activity performed on the damage layers indicate that gypsum, due to the deposition of atmospheric sulphur compounds, is the main damage product at surfaces sheltered from rain run-off at Centennial Hall and Casa Galleria Vichi. By contrast, gypsum has not been identified in the samples collected at Chiesa dell'Autostrada del Sole. This is connected to the restoration works, particularly surface cleaning, regularly performed for the maintenance of the building. Moreover, the results obtained demonstrated the correlation between the location of the building and the composition of the damage layer: Centennial Hall is mainly subject to the impact of pollutants emitted from the nearby coal power stations, whilst Casa Galleria Vichi is principally affected by pollutants from vehicular exhaust in front of the building.
Abstract:
The grape berry is considered a non-climacteric fruit, but there is some evidence that ethylene plays a role in the control of berry ripening. This PhD thesis aimed to give insights into the role of ethylene and ethylene-related genes in the regulation of grape berry ripening. During this study, a small increase in ethylene concentration one week before véraison was measured in Vitis vinifera L. ‘Pinot Noir’ grapes, confirming previous findings in ‘Cabernet Sauvignon’. In addition, ethylene-related genes have been identified in the grapevine genome sequence. Similarly to other species, biosynthesis and ethylene receptor genes are present in grapevine as multi-gene families, and their expression appeared tissue- or development-specific. All the other elements of the ethylene signal transduction cascade were also identified in the grape genome. Among them were the ethylene response factors (ERFs), which modulate the transcription of many effector genes in response to ethylene. In this study, seven grapevine ERFs were characterized, showing tissue- and berry-development-specific expression profiles. Two sequences, VvERF045 and VvERF063, seemed likely to be involved in berry ripening control, owing to their expression profiles and sequence annotation. VvERF045 was induced before véraison and was specific to the ripe berry; by sequence similarity, it was likely a transcription activator. VvERF063 displayed high sequence similarity to repressors of transcription, and its expression, very high in green berries, was lowest at véraison and during ripening. To functionally characterize VvERF045 and VvERF063, a stable transformation strategy was chosen. Both sequences were cloned into vectors for over-expression and silencing and transferred into grape by Agrobacterium-mediated or biolistic gene transfer. In vitro, transgenic plants over-expressing VvERF045 displayed an epinastic phenotype whose extent correlated with the transgene expression level. Four pathogen stress response genes were significantly induced in the transgenic plants, suggesting a putative function of VvERF045 in biotic stress defense during berry ripening. Further molecular analysis of the transgenic plants will help identify the actual VvERF045 target genes and, together with the phenotypic characterization of the adult transgenic plants, will make it possible to define the role of VvERF045 in berry ripening in detail.
Abstract:
The thesis is organized in three parts, according to the respective experimental models. First, the domestic pig (Sus scrofa) is the subject of the study on reproductive biotechnologies: the Sperm-Mediated Gene Transfer transgenesis technique is studied extensively, starting from semen quality, through the study of multiple uptakes of exogenous DNA, and lastly its use in the production of multi-transgenic blastocysts. Finally, we coupled the transgenesis pipeline with sperm sorting and thereby produced transgenic embryos of predetermined sex. In the second part of the thesis, attention turns to the fruit fly (Drosophila melanogaster) and its derived cell line, the S2 cells. The in vitro and in vivo models are used to develop and validate an efficient way to knock down the myc gene. First, an efficient in vitro protocol is described; then we demonstrate how the decrease in myc transcript markedly affects ribosome biogenesis, through the study of polysome gradients, rRNA content and qPCR. In vivo, we identified two optimal drivers for the conditional silencing of myc once the flies are fed RU486: the first acts throughout the whole body (Tubulin), while the second is a head fat-body driver (S32). With these results we present a very efficient model for studying the role of myc in multiple aspects of translation. In the third and last part, the focus is on human-derived lung fibroblasts (hLF-1), mouse tail fibroblasts and mouse tissues. We developed an efficient assay to quantify the total protein content of the nucleus at the single-cell level via fluorescence. We coupled the protocol with classical immunofluorescence so as to obtain both general and specific information at the same time, demonstrating that during senescence nuclear proteins increase 1.8-fold in human cells, mouse cells and mouse tissues alike.
Abstract:
Biomedical analyses are becoming increasingly complex, with respect to both the type of data to be produced and the procedures to be executed. This trend is expected to continue in the future. The development of information and protocol management systems that can sustain this challenge is therefore becoming an essential enabling factor for all actors in the field. The use of custom-built solutions that require the biology domain expert to acquire or procure software engineering expertise in the development of the laboratory infrastructure is not fully satisfactory, because it incurs undesirable mutual knowledge dependencies between the two camps. We propose instead an infrastructure concept that enables domain experts to express laboratory protocols using proper domain knowledge, free from the incidence and mediation of software implementation artefacts. In the system that we propose, this is made possible by basing the modelling language on an authoritative domain-specific ontology and then using modern model-driven architecture technology to transform the user models into software artefacts ready for execution in a multi-agent-based execution platform specialized for biomedical laboratories.
Abstract:
Alzheimer’s disease (AD), the most prevalent form of age-related dementia, is a multifactorial and heterogeneous neurodegenerative disease. The molecular mechanisms underlying the pathogenesis of AD are still largely unknown. However, the etiopathogenesis of AD likely resides in the interaction between genetic and environmental risk factors. Among the different factors that contribute to the pathogenesis of AD, amyloid-beta peptides and the genetic risk factor apoE4 are prominent on the basis of genetic evidence and experimental data. ApoE4 transgenic mice have deficits in spatial learning and memory associated with inflammation and brain atrophy. Evidence suggests that apoE4 is implicated in amyloid-beta accumulation, in an imbalance of the cellular antioxidant system, and in apoptotic phenomena. The mechanisms by which apoE4 interacts with other AD risk factors, leading to an increased susceptibility to dementia, are still unknown. The aim of this research was to provide new insights into the molecular mechanisms of AD neurodegeneration, investigating the effect of amyloid-beta peptides and the apoE4 genotype on the modulation of genes and proteins differently involved in cellular processes related to aging and oxidative balance, such as PIN1, SIRT1, PSEN1, BDNF, TRX1 and GRX1. In particular, we used human neuroblastoma cells exposed to amyloid-beta or to apoE3 and apoE4 proteins at different time points, and selected brain regions of human apoE3 and apoE4 targeted-replacement mice, as in vitro and in vivo models, respectively. All the genes and proteins studied in the present investigation are modulated by amyloid-beta and apoE4 in different ways, suggesting their involvement in the neurodegenerative mechanisms underlying AD. Finally, these proteins might represent novel potential diagnostic and therapeutic targets in AD.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is divided accordingly, in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous work through the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.