18 results for Functions of complex variables.

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Piezoelectrics present an interactive electromechanical behaviour that, especially in recent years, has generated much interest, since it makes these materials suitable for use in a variety of electronic and industrial applications such as sensors, actuators, transducers and smart structures. Both mechanical and electric loads are generally applied to these devices and can cause high concentrations of stress, particularly in the proximity of defects or inhomogeneities such as flaws, cavities or included particles. A thorough understanding of their fracture behaviour is crucial in order to improve their performance and avoid unexpected failures. Therefore, a considerable number of research works have addressed this topic in the last decades. Most of the theoretical studies on this subject find their analytical background in the complex variable formulation of plane anisotropic elasticity. This theoretical approach originates in the pioneering works of Muskhelishvili and Lekhnitskii, who obtained the solution of the elastic problem in terms of independent analytic functions of complex variables. In the present work, the expressions of the stresses and of the elastic and electric displacements are obtained as functions of complex potentials through an analytical formulation which applies to the static piezoelectric case an approach originally introduced to solve elastodynamic problems in orthotropic materials. This method can be considered an alternative to other formalisms currently in use, such as Stroh’s formalism. The equilibrium equations are reduced to a first-order system involving a six-dimensional vector field. A similarity transformation is then applied to obtain three independent Cauchy-Riemann systems, thus justifying the introduction of the complex variable notation. Closed-form expressions of the near-tip stress and displacement fields are therefore obtained. In the theoretical study of cracked piezoelectric bodies, the issue of assigning consistent electric boundary conditions on the crack faces is of central importance and has been addressed by many researchers. Three different boundary conditions are commonly accepted in the literature: the permeable, the impermeable and the semipermeable (“exact”) crack model. This thesis considers all three models, comparing the results obtained and analysing the effects of the choice of boundary condition on the solution. The influence of load biaxiality and of the application of a remote electric field has been studied, showing that both can affect, to varying extents, the stress fields and the angle of initial crack extension, especially when non-singular terms are retained in the expressions of the electro-elastic solution. Furthermore, two different fracture criteria are applied to the piezoelectric case, and their outcomes are compared and discussed. The work is organized as follows. Chapter 1 briefly introduces the fundamental concepts of Fracture Mechanics. Chapter 2 describes plane elasticity formalisms for an anisotropic continuum (Eshelby-Read-Shockley and Stroh) and introduces, for the simplified orthotropic case, the alternative formalism we propose. Chapter 3 outlines the Linear Theory of Piezoelectricity, its basic relations and electro-elastic equations. Chapter 4 introduces the proposed method for obtaining the expressions of the stresses and of the elastic and electric displacements, given as functions of complex potentials. The solution is obtained in closed form, and non-singular terms are retained as well.
Chapter 5 presents several numerical applications aimed at estimating the effects of load biaxiality, of the applied electric field and of the assumed crack permittivity. Through the application of the fracture criteria, the influence of the conditions listed above on the response of the system, and in particular on the direction of crack branching, is thoroughly discussed.
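
For orientation, a standard reference point (the classical Lekhnitskii representation for plane anisotropic elasticity, not the thesis's piezoelectric extension, which involves additional potentials) expresses the in-plane stresses through analytic functions of complex variables z_k = x + mu_k y, where the mu_k are the roots of the material's characteristic equation:

    % Classical Lekhnitskii complex-potential representation (plane anisotropic elasticity);
    % the piezoelectric case discussed in the thesis involves additional potentials.
    \sigma_{xx} = 2\,\mathrm{Re}\!\left[\mu_1^{2}\,\Phi_1'(z_1) + \mu_2^{2}\,\Phi_2'(z_2)\right],\qquad
    \sigma_{yy} = 2\,\mathrm{Re}\!\left[\Phi_1'(z_1) + \Phi_2'(z_2)\right],
    \sigma_{xy} = -2\,\mathrm{Re}\!\left[\mu_1\,\Phi_1'(z_1) + \mu_2\,\Phi_2'(z_2)\right],\qquad
    z_k = x + \mu_k\, y .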

Relevance: 100.00%

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and remain largely "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented only by focusing on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some functions - and topology abstractions - entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
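
As a purely illustrative sketch (class names are hypothetical and not taken from SODA or any existing framework), the idea of treating environment and topology abstractions as first-class building blocks alongside agents, organised into layers, could be rendered as follows:

    # Illustrative sketch: environment and topology abstractions as first-class
    # design entities alongside agents (hypothetical names, not SODA's actual API).
    from abc import ABC, abstractmethod
    from dataclasses import dataclass, field
    from typing import List

    class EnvironmentAbstraction(ABC):
        """An environment entity encapsulating some function (e.g. a shared resource)."""
        @abstractmethod
        def provide(self, request: str) -> str: ...

    class TopologyAbstraction(ABC):
        """An environment entity representing the (logical or physical) spatial structure."""
        @abstractmethod
        def neighbours(self, node: str) -> List[str]: ...

    @dataclass
    class Agent:
        name: str

    @dataclass
    class Layer:
        """One abstraction level of the multi-layered system description."""
        agents: List[Agent] = field(default_factory=list)
        environment: List[EnvironmentAbstraction] = field(default_factory=list)
        topology: List[TopologyAbstraction] = field(default_factory=list)

    @dataclass
    class MultiAgentSystemModel:
        layers: List[Layer] = field(default_factory=list)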

Relevance: 100.00%

Abstract:

DNA topology is an important modifier of DNA functions. Torsional stress is generated when right-handed DNA is either over- or underwound, producing structural deformations which drive or are driven by processes such as replication, transcription, recombination and repair. DNA topoisomerases are molecular machines that regulate the topological state of the DNA in the cell. These enzymes accomplish this task either by passing one strand of the DNA through a break in the opposing strand or by passing a region of the duplex from the same or a different molecule through a double-stranded cut generated in the DNA. Because of their ability to cut one or two strands of DNA, they are also targets of some of the most successful anticancer drugs used in standard combination therapies of human cancers. An effective anticancer drug is Camptothecin (CPT), which specifically targets DNA topoisomerase 1 (TOP 1). The research project of the present thesis has been focused on the role of human TOP 1 during transcription and on the transcriptional consequences associated with TOP 1 inhibition by CPT in human cell lines. Previous findings demonstrate that TOP 1 inhibition by CPT perturbs RNA polymerase II (RNAP II) density at promoters and along transcribed genes, suggesting an involvement of TOP 1 in RNAP II promoter-proximal pausing. Within the transcription cycle, promoter pausing is a fundamental step whose importance has been well established as a means of coupling elongation to RNA maturation. By measuring nascent RNA transcripts bound to chromatin, we demonstrated that TOP 1 inhibition by CPT can enhance RNAP II escape from the promoter-proximal pausing site of the human Hypoxia Inducible Factor 1 (HIF-1) and c-MYC genes in a dose-dependent manner. This effect is dependent on Cdk7/Cdk9 activities, since it can be reversed by the kinase inhibitor DRB. Since CPT affects RNAP II by promoting the hyperphosphorylation of its Rpb1 subunit, the findings suggest that TOP 1 inhibition by CPT may increase the activity of Cdks, which in turn phosphorylate the Rpb1 subunit of RNAP II, enhancing its escape from pausing. Interestingly, the transcriptional consequences of CPT-induced topological stress are wider than expected. CPT increased co-transcriptional splicing of exons 1 and 2 and markedly affected alternative splicing at exon 11. Surprisingly, despite its well-established transcription inhibitory activity, CPT can trigger the production of a novel long RNA (5’aHIF-1) antisense to the human HIF-1 mRNA and of a known antisense RNA at the 3’ end of the gene, while decreasing mRNA levels. These effects require TOP 1 and are independent of CPT-induced DNA damage. Thus, when the supercoiling imbalance promoted by CPT occurs at a promoter, it may trigger deregulation of RNAP II pausing, increased chromatin accessibility and activation/derepression of antisense transcripts in a Cdk-dependent manner. A changed balance of antisense transcripts and mRNAs may regulate the activity of HIF-1 and contribute to the control of tumor progression. After focusing our TOP 1 investigations at the single-gene level, we extended the study to the whole genome by developing the “Topo-Seq” approach, which generates a genome-wide map of TOP 1 activity sites in human cells. The preliminary data revealed that TOP 1 preferentially localizes at intragenic regions and in particular at the 5’ and 3’ ends of genes.
Surprisingly, upon TOP 1 downregulation, which impairs protein expression by 80%, TOP 1 molecules are mostly localized around the 3’ ends of genes, suggesting that its activity is essential at these regions and can be compensated for at the 5’ ends. The developed procedure is a pioneering tool for the detection of TOP 1 cleavage sites across the genome and can open the way to further investigations of the enzyme’s roles in different nuclear processes.

Relevance: 100.00%

Abstract:

The present PhD thesis was focused on the development and application of an analytical methodology (Py-GC-MS) and of data-processing methods based on multivariate data analysis (chemometrics). The chromatographic and mass spectrometric data obtained with this technique are particularly suitable for interpretation by chemometric methods such as PCA (Principal Component Analysis) for data exploration and SIMCA (Soft Independent Modelling of Class Analogy) for classification. As a first approach, some issues related to the field of cultural heritage were discussed, with particular attention to the differentiation of binders used in paintings. A marker of egg tempera, esterified phosphoric acid, a pyrolysis product of lecithin, was determined using HMDS (hexamethyldisilazane) rather than TMAH (tetramethylammonium hydroxide) as the derivatizing reagent. The validity of analytical pyrolysis as a tool to characterize and classify different types of bacteria was then verified. The FAME chromatographic profiles represent an important tool for bacterial identification. Because of the complexity of the chromatograms, it was possible to characterize the bacteria only at the genus level, while differentiation at the species level was achieved by means of chemometric analysis. To perform this study, the normalized peak areas of the fatty acids were taken into account, and chemometric methods were applied to the experimental datasets. The results obtained demonstrate the effectiveness of analytical pyrolysis and chemometric analysis for the rapid characterization of bacterial species. An application to samples of bacterial (Pseudomonas mendocina), fungal (Pleurotus ostreatus) and mixed biofilms was also performed. A comparison of the chromatographic profiles established the possibility to:
• differentiate the bacterial and fungal biofilms according to their FAME profiles;
• characterize the fungal biofilm by means of the typical pattern of pyrolytic fragments derived from the saccharides present in the cell wall;
• identify the markers of bacterial and fungal biofilms in the same mixed-biofilm sample.
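
A minimal sketch of the chemometric side, assuming scikit-learn and synthetic data (neither the software nor the thresholds actually used in the thesis): PCA for the exploration of normalized peak areas, and a SIMCA-style per-class PCA model that assigns a new sample to the class whose model reconstructs it with the smallest residual.

    # Minimal sketch: PCA exploration and a SIMCA-like per-class PCA model
    # on normalized peak areas (hypothetical data and parameters).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # rows = samples, columns = normalized peak areas of selected FAMEs
    X_class_a = rng.normal(loc=1.0, scale=0.1, size=(20, 8))
    X_class_b = rng.normal(loc=1.5, scale=0.1, size=(20, 8))

    # Exploration: PCA on the pooled, mean-centred data
    X = np.vstack([X_class_a, X_class_b])
    scores = PCA(n_components=2).fit_transform(X - X.mean(axis=0))
    print("score matrix shape:", scores.shape)

    # SIMCA idea: one PCA model per class, classification by residual distance
    def simca_residual(model, mean, x):
        t = model.transform((x - mean).reshape(1, -1))
        x_hat = model.inverse_transform(t) + mean
        return np.linalg.norm(x - x_hat)

    models = {}
    for label, Xc in {"a": X_class_a, "b": X_class_b}.items():
        mean = Xc.mean(axis=0)
        models[label] = (PCA(n_components=2).fit(Xc - mean), mean)

    x_new = X_class_a[0]
    prediction = min(models, key=lambda k: simca_residual(*models[k], x_new))
    print("assigned class:", prediction)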

Relevance: 100.00%

Abstract:

Complex network analysis has turned out to be a very promising field of research, as testified by the many research projects and works spanning different fields. Such analyses have usually focused on characterizing a single aspect of the system, and a study that considers the many informative axes along which a network evolves is lacking. We propose a new multidimensional analysis that is able to inspect networks along the two most important dimensions, space and time. To achieve this goal, we studied them separately and investigated how the variation of the constituting parameters drives changes in the network as a whole. Focusing on the space dimension, we characterized spatial alteration in terms of abstraction levels. We proposed a novel algorithm that, by applying a fuzziness function, can reconstruct networks at different levels of detail. We verified that statistical indicators depend strongly on the granularity with which a system is described and on the class of networks. We then kept the space axis fixed and isolated the dynamics behind the network evolution process. We identified new mechanisms that trigger social network utilization and spread the adoption of novel communities. We formalized this enhanced social network evolution by adopting special nodes (called sirens) that, thanks to their ability to attract new links, are able to construct efficient connection patterns. We simulated the dynamics of the system by considering three well-known growth models. Applying this framework to real and synthetic networks, we showed that the sirens, even when used for a limited time span, effectively shrink the time needed to bring a network to a mature state. In order to provide a concrete context for our findings, we formalized the cost of setting up such an enhancement and provided the best combinations of the system's parameters, such as the number of sirens, their time span of utilization and their attractiveness.
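
A toy sketch of the siren idea (parameters and attachment rule are hypothetical, not the thesis's calibrated growth models): new nodes attach preferentially to well-connected nodes, and siren nodes carry an extra attractiveness for a limited time span, accelerating the growth of efficient connection patterns.

    # Toy sketch: preferential attachment with temporarily boosted "siren" nodes
    # (hypothetical parameters; not the thesis's calibrated growth models).
    import random

    def grow(n_nodes=500, m=2, sirens=(0, 1), siren_boost=50, siren_until=100, seed=1):
        random.seed(seed)
        degree = {0: 1, 1: 1}        # start from a single edge between nodes 0 and 1
        edges = [(0, 1)]
        for new in range(2, n_nodes):
            def weight(v):
                boost = siren_boost if v in sirens and new < siren_until else 0
                return degree[v] + boost
            candidates = list(degree)
            weights = [weight(v) for v in candidates]
            targets = set()
            while len(targets) < min(m, len(candidates)):
                targets.add(random.choices(candidates, weights=weights, k=1)[0])
            degree[new] = 0
            for t in targets:          # attach the new node to the chosen targets
                edges.append((new, t))
                degree[new] += 1
                degree[t] += 1
        return degree, edges

    degree, edges = grow()
    print("max degree:", max(degree.values()), "edges:", len(edges))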

Relevance: 100.00%

Abstract:

Chapter 1 presents a brief introduction to the state of the art of nanotechnologies and nanofabrication techniques, and to unconventional lithography as a technique to fabricate a novel electronic device, the resistive switch, also called the memristor. In Chapter 2 a detailed description of the main fabrication and characterization techniques employed in this work is reported. Chapter 3 describes parallel local oxidation lithography (pLOx) as the main technique used to obtain an accurate patterning process. All the relevant parameters have been studied, and the optimized conditions were found to be highly reproducible, yielding excellent patterned nanostructures. The effect of a negative bias, called local reduction (LR), was also studied. Moreover, the use of an AC bias yields a faster patterning process with respect to a DC bias. Chapter 4 (metal/e-SiO2/Si nanojunction) shows how the electrochemical oxide nanostructures obtained by pLOx can be used in the fabrication of a novel device called the memristor. We demonstrate a new concept, based on conventional materials, where the lifetime problem is resolved by introducing a “regeneration” step, which restores the nano-memristor to its pristine condition by applying an appropriate voltage cycle. In Chapter 5 (graphene/e-SiO2/Si), graphene is used as a building-block material and as an electrode to selectively oxidize the silicon substrate with the pLOx setup, for the fabrication of a novel resistive switching device. Chapter 6 (surface architecture) shows another application of pLOx, in biotechnology: surface functionalization combined with nano-patterning by pLOx is used to design new surfaces that accurately bind biomolecules, with the possibility of studying their properties and of further applications in nano-bio device fabrication. In this context, nano-patterned DNA is used as a scaffold to fabricate small functional nano-components for biochips and for electronic and optical/photonic devices.

Relevance: 100.00%

Abstract:

In this study, lubrication theory is used to model flow in geological fractures and to analyse the compound effect of medium heterogeneity and complex fluid rheology. Such studies are warranted because Newtonian rheology is adopted in most numerical models for its ease of use, despite non-Newtonian fluids being ubiquitous in subsurface applications. Past studies on Newtonian and non-Newtonian flow in single rock fractures are summarized in Chapter 1. Chapter 2 presents analytical and semi-analytical conceptual models for the flow of a shear-thinning fluid in rock fractures having a simplified geometry, providing a first insight into their permeability. In Chapter 3, a lubrication-based 2-D numerical model is implemented to solve the flow of an Ellis fluid in rough fractures; the finite-volume model developed is more computationally efficient than conducting full 3-D simulations, and introduces an acceptable approximation as long as the flow is laminar and the fracture walls are relatively smooth. The compound effect of the shear-thinning nature of the fluid and of fracture heterogeneity promotes flow localization, which in turn affects the performance of industrial activities and remediation techniques. In Chapter 4, a Monte Carlo framework is adopted to produce multiple realizations of synthetic fractures and to analyze their ensemble statistics pertaining to flow for a variety of real non-Newtonian fluids; the Newtonian case is used as a benchmark. In Chapters 5 and 6, a conceptual model of the hydro-mechanical aspects of backflow occurring in the last phase of hydraulic fracturing is proposed and experimentally validated, quantifying the effects of the relaxation induced by the flow.
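
For reference, a standard lubrication-theory result from the literature (not necessarily the exact formulation adopted in the thesis): for a Newtonian fluid the flux per unit width in a fracture of local aperture b obeys the cubic law, while for an Ellis fluid, whose viscosity is eta = eta_0 / [1 + (tau/tau_{1/2})^(alpha-1)], the parallel-plate flux acquires a shear-thinning correction factor:

    % Newtonian cubic law under the lubrication approximation (aperture b, viscosity mu)
    \mathbf{q} = -\frac{b^{3}}{12\mu}\,\nabla p, \qquad
    \nabla\cdot\!\left(\frac{b^{3}}{12\mu}\,\nabla p\right) = 0 ;

    % Ellis fluid between parallel walls (wall shear stress \tau_w = b\,|\nabla p|/2)
    q = \frac{b^{3}\,|\nabla p|}{12\,\eta_{0}}
        \left[\,1 + \frac{3}{\alpha + 2}\left(\frac{\tau_{w}}{\tau_{1/2}}\right)^{\alpha-1}\right].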

Relevance: 100.00%

Abstract:

The experimental projects discussed in this thesis are all related to the field of artificial molecular machines, specifically to systems composed of pseudorotaxane and rotaxane architectures. The characterization of the peculiar properties of these mechano-molecules is frequently associated with the analysis and elucidation of complex reaction networks; this latter aspect represents the main focus and the central thread tying together my thesis work. In each chapter, a specific project is described, as summarized below: the focus of the first chapter is the realization and characterization of a prototype model of a photoactivated molecular transporter based on a pseudorotaxane architecture; the second chapter reports the design, synthesis, and characterization of a [2]rotaxane endowed with a dibenzylammonium station and a novel photochromic unit that acts as a recognition site for a DB24C8 crown ether macrocycle; the last chapter describes the synthesis and characterization of a [3]rotaxane in which the relative number of rings and stations can be changed on command.

Relevance: 100.00%

Abstract:

Background: The frozen elephant trunk (FET) technique is one of the latest evolutions in the treatment of complex pathologies of the aortic arch and the descending thoracic aorta.
Materials and methods: Between January 2007 and March 2021, a total of 396 patients underwent total aortic arch replacement with the FET technique in our centre. The main indications were thoracic aortic aneurysm (n=104, 28.2%), chronic aortic dissection (n=224, 53.4%) and acute aortic dissection (n=68, 18.4%). We divided the population into two groups according to the position of the distal anastomosis (zone 2 vs zone 3) and the length of the stent graft (< 150 mm vs > 150 mm): a conservative group (zone 2 anastomosis + stent length < 150 mm, n=140 patients) and an aggressive group (zone 3 anastomosis + stent length > 150 mm, n=141).
Results: The overall 30-day mortality rate was 13% (48/369); the risk factor analysis showed that an aggressive approach was a risk factor neither for major complications (permanent dialysis, tracheostomy, bowel malperfusion and permanent paraplegia) nor for 30-day mortality. The survival rate at 1, 5, 10 and 15 years was 87.7%, 75%, 61.3% and 58.4% respectively. During the follow-up, an aortic reintervention was performed in 122 patients (38%), and 5 patients received non-aortic cardiac surgery. Freedom from aortic reintervention at 1, 5 and 10 years was 77%, 54% and 44% respectively. Freedom from aortic reintervention was higher in the ‘aggressive’ group (62.5% vs 40.0% at 5 years, log-rank = 0.056). An aggressive approach was not protective against aortic reintervention at follow-up or against death at follow-up.
Conclusions: The FET technique represents a feasible and efficient option in the treatment of complex thoracic aortic pathologies. Aortic reintervention after FET is very common, and the decision-making approach should consider and balance the higher risk of an aggressive approach in terms of post-operative complications against the higher risk of a second aortic reintervention at follow-up.

Relevance: 100.00%

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from the radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during these cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce within the re-acceleration scenario the observed occurrence and number of Radio Halos in the Universe and the observed correlations between the thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of the non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of the magnetic field and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. The processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are then computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with the available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last “geometrical” M_H-R_H correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (P_R-R_H, P_R-M_H, P_R-T, P_R-L_X, ...) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
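
As a back-of-the-envelope illustration of the spectral cut-off invoked above (standard order-of-magnitude relations, not the thesis's detailed calculation), the break Lorentz factor follows from balancing the systematic acceleration rate against synchrotron plus inverse-Compton losses, and this sets the maximum synchrotron frequency:

    % Balance between turbulent acceleration and radiative (synchrotron + IC) losses
    \frac{\gamma_b}{\tau_{\rm acc}} \simeq \frac{4}{3}\,\frac{\sigma_T}{m_e c}\,\gamma_b^{2}\,\bigl(U_B + U_{\rm CMB}\bigr)
    \;\;\Longrightarrow\;\;
    \gamma_b \simeq \frac{3\,m_e c}{4\,\sigma_T\,\tau_{\rm acc}\,\bigl(U_B + U_{\rm CMB}\bigr)} ,

    % corresponding synchrotron break frequency (B in microgauss)
    \nu_b \simeq 4.2\,\gamma_b^{2}\left(\frac{B}{\mu\mathrm{G}}\right)\ \mathrm{Hz}
    \;\propto\; \frac{B}{\bigl(B^{2} + B_{\rm CMB}^{2}\bigr)^{2}} \quad \text{at fixed } \tau_{\rm acc}.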

Relevance: 100.00%

Abstract:

Transcription is controlled by promoter-selective transcription factors (TFs), which bind to cis-regulatory enhancer elements, termed hormone response elements (HREs), in a specific subset of genes. Regulation by these factors involves the recruitment of either coactivators or corepressors and direct interaction with the basal transcriptional machinery (1). Hormone-activated nuclear receptors (NRs) are well-characterized transcription factors (2) that bind to the promoters of their target genes and recruit primary and secondary coactivator proteins which possess many of the enzymatic activities required for gene expression (1,3,4). In the present study, using single-cell high-resolution fluorescence microscopy and high-throughput microscopy (HTM) coupled to computational image analysis, we investigated transcriptional regulation controlled by the estrogen receptor alpha (ERalpha), in terms of large-scale chromatin remodeling and interaction with the associated coactivator SRC-3 (Steroid Receptor Coactivator-3), a member of the p160 family (28) of primary coactivators. ERalpha is a steroid-dependent transcription factor (16) that belongs to the NR superfamily (2,3) and, in response to the hormone 17-β estradiol (E2), regulates transcription of distinct target genes involved in development, puberty, and homeostasis (8,16). ERalpha spends most of its lifetime in the nucleus and undergoes a rapid (within minutes) intranuclear redistribution following the addition of either agonist or antagonist (17,18,19). We designed a HeLa cell line (PRL-HeLa), engineered with a chromosome-integrated reporter gene array (PRL-array) containing multicopy hormone response-binding elements for ERalpha that are derived from the physiological enhancer/promoter region of the prolactin gene. Following GFP-ER transfection of PRL-HeLa cells, we were able to observe in situ, in a ligand-dependent manner, (i) recruitment to the array of the receptor and associated coregulators, (ii) chromatin remodeling, and (iii) direct transcriptional readout of the reporter gene. Addition of E2 causes a visible opening (decondensation) of the PRL-array, colocalization of RNA Polymerase II, and transcriptional readout of the reporter gene, detected by mRNA FISH. On the contrary, when cells were treated with an ERalpha antagonist (Tamoxifen or ICI), we observed a dramatic condensation of the PRL-array, displacement of RNA Polymerase II, and a complete decrease in the transcriptional FISH signal. All p160 family coactivators (28) colocalize with ERalpha at the PRL-array. Steroid Receptor Coactivator-3 (SRC-3/AIB1/ACTR/pCIP/RAC3/TRAM1) is a p160 family member and a known oncogenic protein (4,34). SRC-3 is regulated by a variety of posttranslational modifications, including methylation, phosphorylation, acetylation, ubiquitination and sumoylation (4,35). These events have been shown to be important for its interaction with other coactivator proteins and NRs and for its oncogenic potential (37,39). A number of extracellular signaling molecules, such as steroid hormones, growth factors and cytokines, induce SRC-3 phosphorylation (40). These actions are mediated by a wide range of kinases, including extracellular signal-regulated kinases 1 and 2 (ERK1/2), c-Jun N-terminal kinase, p38 MAPK, and IkB kinases (IKKs) (41,42,43). Here, we report SRC-3 to be a nucleocytoplasmic shuttling protein whose cellular localization is regulated by phosphorylation and by interaction with ERalpha.
Using a combination of high-throughput and fluorescence microscopy, we show that both chemical inhibition (with U0126) and siRNA downregulation of the MAP/ERK1/2 kinase (MEK1/2) pathway induce a cytoplasmic shift in SRC-3 localization, whereas stimulation by EGF signaling enhances its nuclear localization by inducing phosphorylation at T24, S857, and S860, known participants in the regulation of SRC-3 activity (39). Accordingly, the cytoplasmic localization of a non-phosphorylatable SRC-3 mutant further supports these results. In the presence of ERalpha, U0126 also dramatically reduces: hormone-dependent colocalization of ERalpha and SRC-3 in the nucleus; formation of the ER-SRC-3 coimmunoprecipitation complex in cell lysates; localization of SRC-3 at the ER-targeted prolactin promoter array (PRL-array); and transcriptional activity. Finally, we show that SRC-3 can also function as a cotransporter, facilitating the nuclear-cytoplasmic shuttling of the estrogen receptor. While a wealth of studies has revealed the molecular functions of NRs and coregulators, there is a paucity of data on how these functions are spatiotemporally organized in the cellular context. Technically and conceptually, our findings have a new impact on the evaluation of gene transcriptional control and of the mechanisms of action of gene regulators.

Relevance: 100.00%

Abstract:

Introduction. Postnatal neurogenesis in the hippocampal dentate gyrus can be modulated by numerous determinants, such as hormones, transmitters and stress. Among the factors positively interfering with neurogenesis, the complexity of the environment appears to play a particularly striking role. Adult mice reared in an enriched environment produce more neurons and exhibit better performance in hippocampus-specific learning tasks. While the effects of complex environments on hippocampal neurogenesis are well documented, there is a lack of information on the effects of living under socio-sensory deprivation conditions. Because of the immaturity of rats and mice at birth, studies dealing with the effects of environmental enrichment on hippocampal neurogenesis have been carried out in adult animals, i.e. during a period with a relatively low rate of neurogenesis. The impact of the environment is likely to be more dramatic during the first postnatal weeks, because at this time granule cell production is remarkably higher than at later phases of development. The aim of the present research was to clarify whether and to what extent isolated or enriched rearing conditions affect hippocampal neurogenesis during the early postnatal period, a time window characterized by a high rate of precursor proliferation, and to elucidate the mechanisms underlying these effects. The experimental model chosen for this research was the guinea pig, a precocious rodent which, at 4-5 days of age, can be independent of maternal care.
Experimental design. Animals were assigned to a standard (control), an isolated, or an enriched environment a few days after birth (P5-P6). On P14-P17 animals received one daily bromodeoxyuridine (BrdU) injection, to label dividing cells, and were sacrificed either on P18, to evaluate cell proliferation, or on P45, to evaluate cell survival and differentiation.
Methods. Brain sections were processed for BrdU immunohistochemistry, to quantify the newborn and surviving cells. The phenotype of the surviving cells was examined by means of confocal microscopy and immunofluorescent double-labeling for BrdU and either a marker of neurons (NeuN) or a marker of astrocytes (GFAP). Apoptotic cell death was examined with the TUNEL method. Serial sections were processed for immunohistochemistry for i) vimentin, a marker of radial glial cells; ii) BDNF (brain-derived neurotrophic factor), a neurotrophin involved in neuron proliferation/survival; and iii) PSA-NCAM (the polysialylated form of the neural cell adhesion molecule), a molecule associated with neuronal migration. The total granule cell number in the dentate gyrus was evaluated by stereological methods in Nissl-stained sections.
Results. Effects of isolation. In P18 isolated animals we found reduced cell proliferation (-35%) compared to controls and a lower expression of BDNF. Though in absolute terms P45 isolated animals had fewer surviving cells than controls, they showed no differences in survival rate and phenotype percent distribution compared to controls. Evaluation of the absolute number of surviving cells of each phenotype showed that isolated animals had a reduced number of cells with a neuronal phenotype compared to controls. Looking at the location of the new neurons, we found that while in control animals 76% of them had migrated to the granule cell layer, in isolated animals only 55% of the new neurons had reached this layer. Examination of the radial glia cells of P18 and P45 animals by vimentin immunohistochemistry showed that in isolated animals radial glia cells were reduced in density and had fewer and shorter processes. Granule cell counts revealed that isolated animals had fewer granule cells than controls (-32% at P18 and -42% at P45). Effects of enrichment. In P18 enriched animals there was an increase in cell proliferation (+26%) compared to controls and a higher expression of BDNF. Though in both groups there was a decline in the number of BrdU-positive cells by P45, enriched animals had more surviving cells (+63%) and a higher survival rate than controls. No differences were found between control and enriched animals in phenotype percent distribution. Evaluation of the absolute number of cells of each phenotype showed that enriched animals had a larger number of cells of each phenotype than controls. Looking at the location of the cells of each phenotype, we found that enriched animals had more new neurons in the granule cell layer and more astrocytes and cells with undetermined phenotype in the hilus. Enriched animals had a higher expression of PSA-NCAM in the granule cell layer and hilus. Vimentin immunohistochemistry showed that in enriched animals radial glia cells were more numerous and had more processes. Granule cell counts revealed that enriched animals had more granule cells than controls (+37% at P18 and +31% at P45).
Discussion. The results show that isolated rearing reduces hippocampal cell proliferation but does not affect cell survival, while enriched rearing increases both cell proliferation and cell survival. Changes in the expression of BDNF are likely to contribute to the effects of the environment on precursor cell proliferation. The reduction and increase in the final number of granule neurons in isolated and enriched animals, respectively, are attributable to the effects of the environment on cell proliferation and survival and not to changes in the differentiation program. As radial glia cells play a pivotal role in guiding neurons to the granule cell layer, the reduced number of radial glia cells in isolated animals and the increased number in enriched animals suggest that the size of the radial glia population may change dynamically, in order to match changes in neuron production. The high PSA-NCAM expression in enriched animals may help favor the survival of the new neurons by facilitating their migration to the granule cell layer.
Conclusions. By using a precocious rodent we could demonstrate that isolated/enriched rearing conditions, applied during a time window in which intense granule cell proliferation takes place, lead to a notable decrease/increase in the total granule cell number. The time course and magnitude of postnatal granule cell production in guinea pigs are more similar to the human and non-human primate condition than those of rats and mice. Translation of the current data to humans would imply that exposure of children to environments poor or rich in stimuli may have a notably large impact on dentate neurogenesis and, very likely, on hippocampus-dependent memory functions.

Relevance: 100.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and the clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await an interpretation. Nevertheless, these data are the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated procedure for the transfer, through inheritance, of molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and on the coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of the structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
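
A minimal sketch of the quantities discussed above, assuming networkx and an illustrative 8 Å Cα-Cα cutoff (not necessarily the thesis's threshold): build a contact map from coordinates and compute the characteristic path length and clustering coefficient of the resulting contact network.

    # Minimal sketch: contact map from C-alpha coordinates and small-world
    # indicators of the contact network (the 8 A cutoff is illustrative only).
    import numpy as np
    import networkx as nx

    def contact_map(coords, cutoff=8.0):
        """coords: (N, 3) array of C-alpha positions; returns a boolean N x N map."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        return (d < cutoff) & ~np.eye(len(coords), dtype=bool)

    def small_world_indicators(cmap):
        g = nx.from_numpy_array(cmap.astype(int))
        giant = g.subgraph(max(nx.connected_components(g), key=len))
        return (nx.average_shortest_path_length(giant),   # characteristic path length
                nx.average_clustering(giant))             # clustering coefficient

    # toy "structure": a noisy helix-like curve instead of real PDB coordinates
    t = np.linspace(0, 12 * np.pi, 150)
    coords = np.stack([3 * np.cos(t), 3 * np.sin(t), 0.5 * t], axis=1)
    L, C = small_world_indicators(contact_map(coords))
    print(f"path length = {L:.2f}, clustering = {C:.2f}")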

Relevance: 100.00%

Abstract:

The application of Concurrency Theory to Systems Biology is at an early stage of progress. The metaphor of cells as computing systems, proposed by Regev and Shapiro, opened the way to the employment of concurrent languages for the modelling of biological systems. Their peculiar characteristics led to the design of many bio-inspired formalisms which achieve higher faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus representing a keystone in this respect, thanks to its expressiveness. The pi@ calculus is obtained by the addition of polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to be able to model consistently several phenomena, such as the formation of molecular complexes, the hierarchical subdivision of the system into compartments, inter-compartment reactions, and the dynamic reorganisation of the compartment structure consistent with volume variation. The pivotal role of pi@ is evidenced by its capability of encoding in a compositional way several bio-inspired formalisms, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems into pi@ are formalised. The conciseness of their translation into pi@ allows their indirect comparison by means of their encodings. Furthermore, it provides a ready-to-run implementation of minimal effort whose correctness is guaranteed by the correctness of the respective encoding functions. Further important results of general validity are stated on the expressive power of priority. Several impossibility results are described, which clearly state the superior expressiveness of prioritised languages and the problems arising in the attempt to provide their parallel implementation. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.
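
As a toy illustration only (this is a scheduling sketch, not the pi@ operational semantics), the flavour of the two added ingredients can be conveyed in a few lines: composite channel names that carry a compartment identifier mimic polyadic synchronisation, while always firing the highest-priority enabled action first is what makes multi-step operations appear atomic.

    # Toy flavour of pi@'s ingredients: composite channel names carrying a compartment
    # id (polyadic synchronisation) and priorities that make multi-step operations look
    # atomic. Purely illustrative; not the pi@ calculus semantics.
    import heapq

    class Scheduler:
        def __init__(self):
            self.queue = []   # (priority, seq, channel, action); lower number = higher priority
            self.seq = 0

        def offer(self, priority, channel, action):
            heapq.heappush(self.queue, (priority, self.seq, channel, action))
            self.seq += 1

        def run(self):
            while self.queue:
                _, _, channel, action = heapq.heappop(self.queue)
                action(channel)

    sched = Scheduler()
    # "polyadic" channel: (name, compartment) pairs keep interactions compartment-local
    sched.offer(1, ("bind", "membrane_1"), lambda ch: print("high priority:", ch))
    sched.offer(2, ("react", "cytosol"),   lambda ch: print("low priority: ", ch))
    sched.offer(1, ("merge", "membrane_1"), lambda ch: print("high priority:", ch))
    sched.run()   # all priority-1 steps complete before any priority-2 step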

Relevance: 100.00%

Abstract:

Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the higher demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. Another indication of this complexity derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact with each other in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The kind of facilities that designers can directly find on the market, in terms of software component libraries, in fact provides adequate support as regards the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually a very "unstructured" one. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to observe a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, in complex systems fault occurrences increase.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey of the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points in Chapter 5; Appendix B reports an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
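
As a minimal, purely illustrative sketch (states and events are hypothetical and not the thesis's Generalized Actuator interface), a discrete-event view of an actuator can be written as a small automaton in which unexpected or missing events are flagged as faults; objects of this kind are what DES-based formal verification and online diagnosis reason about.

    # Illustrative sketch: a discrete-event automaton for a generic actuator with a
    # simple fault-detection rule (hypothetical states/events, not the thesis's
    # Generalized Actuator).
    TRANSITIONS = {
        ("idle", "cmd_start"): "moving",
        ("moving", "sensor_done"): "idle",
        ("moving", "timeout"): "fault",
        ("fault", "reset"): "idle",
    }

    def run(events, state="idle"):
        trace = [state]
        for ev in events:
            nxt = TRANSITIONS.get((state, ev))
            if nxt is None:
                nxt = "fault"          # unexpected event in this state -> diagnose a fault
            state = nxt
            trace.append(state)
        return trace

    print(run(["cmd_start", "sensor_done", "cmd_start", "timeout", "reset"]))
    # ['idle', 'moving', 'idle', 'moving', 'fault', 'idle']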